[
  {
    "path": ".github/ISSUE_TEMPLATE.md",
    "content": "#### Actual behavior\n*Description of the actual behavior.*\n\n#### Expected behavior\n*Description of the expected behavior.*\n\n#### Steps to reproduce\n*Description of the various steps required to reproduce the error.*\n\n#### Solution proposal\n*Description of what you thingk would need to be done.*\n\n#### Installation type\n- [ ] console.aporeto.com\n- [ ] on-prem\n- [ ] dev\n\n> Version: ?\n> Customer: ?\n> ETA: ?\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "#### Description\n*Changes proposed in this pull request.*\n\n#### Test plan\n*Outline the test plan used to test this change before merging it.*\n\n> Fixes #.\n"
  },
  {
    "path": ".gitignore",
    "content": ".DS_Store\nexample/example\nvendor\ncoverage.txt\ncontroller/internal/processmon/testbinary/testbinary\nGopkg.lock\nprofile.out\n"
  },
  {
    "path": ".lint.windows.sh",
    "content": "#!/usr/bin/env bash\n\n# goimports and gofmt complain about cr-lf line endings, so don't run them on \n#\ta Windows machine where git is configured to auto-convert line endings\n\nOS=`uname -s`\nif [[ $OS == *\"NT-\"* ]]; then \n\tGOIMPORTS_OPTION=\n\tGOFMT_OPTION=\nelse\n\tGOIMPORTS_OPTION=--enable=goimports\n\tGOFMT_OPTION=--enable=gofmt\nfi\nCGO_ENABLED=0 GOOS=windows GOARCH=amd64 golangci-lint run --deadline=10m --disable-all --exclude-use-default=false --enable=errcheck --enable=ineffassign --enable=govet --enable=golint --enable=unused --enable=structcheck --enable=varcheck --enable=deadcode --enable=unconvert --enable=goconst --enable=gosimple --enable=misspell --enable=staticcheck --enable=unparam --enable=prealloc --enable=nakedret --enable=typecheck $GOIMPORTS_OPTION $GOFMT_OPTION --skip-dirs=vendor/github.com/iovisor ./...\n"
  },
  {
    "path": ".test.sh",
    "content": "#!/usr/bin/env bash\n\nset -e\necho \"\" > coverage.txt\n\n./mockgen.sh\n./fix_bpf\n\n## FIX ME. go1.14 automatically enables unsafe ptr checks when doing race checks,\n## and it is not clear if this is compatible (it is disabled on Windows)\n##\n## This needs to be revisited and maybe remove \"-gcflags=all=-d=checkptr=0\" below\n## for go1.14 once we determine if there is a real pointer issue in the tests.\n##\n## this is the file that fails when ptr checking is enabled:\n## go.aporeto.io/trireme-lib/controller/internal/enforcer/applicationproxy/markedconn_test.go\n##\n## to see the failure, test that package individually setting \"checkptr=1\"\n\ncase \"$(go version)\" in\n    *1.13*) CHECKPTR=\"\"  ;;\n    *)      CHECKPTR=\"-gcflags=all=-d=checkptr=0\" ;;\nesac\n\nfor d in $(go list ./... | grep -v 'mock|bpf'); do\n    go test ${CHECKPTR} -race -tags test -coverprofile=profile.out -covermode=atomic $d\n    if [ -f profile.out ]; then\n        cat profile.out >> coverage.txt\n        rm profile.out\n    fi\ndone\n"
  },
  {
    "path": ".test.windows.sh",
    "content": "#!/usr/bin/env bash\n\n# use wine to execute if not running tests on a Windows machine\nOS=`uname -s`\nif [[ $OS == *\"NT-\"* ]]; then\n\tWINE_EXEC=\nelse\n\tWINE_EXEC=\"-exec wine\"\nfi\n\nset -e\necho \"\" > coverage.windows.txt\n\nfor d in $(CGO_ENABLED=0 go list ./... | grep -v remoteenforcer | grep -v remoteapi | grep -v \"plugins/pam\"); do\n    CGO_ENABLED=0 GOOS=windows GOARCH=amd64 go test -tags test $WINE_EXEC -coverprofile=profile.windows.out -covermode=atomic $d\n    if [ -f profile.windows.out ]; then\n        cat profile.windows.out >> coverage.windows.txt\n        rm profile.windows.out\n    fi\ndone\n"
  },
  {
    "path": ".travis.yml",
    "content": "language: go\nsudo: required\ndist: bionic\n\ngo_import_path: go.aporeto.io/trireme-lib\n\nmatrix:\n  include:\n    - go: \"1.13.x\"\n    - go: \"1.14.x\"\n    - go: master\n  allow_failures:\n    - go: master\n  fast_finish: true\n\naddons:\n  apt:\n    packages:\n      - libnetfilter-queue-dev\n      - libnetfilter-log-dev\n      - iptables\n      - ipset\n      - wine-stable\n\nenv:\n  global:\n    - TOOLS_CMD=golang.org/x/tools/cmd\n    - PATH=$GOROOT/bin:$PATH\n    - SUDO_PERMITTED=1\n\nbefore_install:\n  - GO111MODULE=on go get github.com/golangci/golangci-lint/cmd/golangci-lint@v1.21.0\n  - GO111MODULE=off go get github.com/golang/dep/cmd/dep\n  - sudo apt-get -y install wine32\n\ninstall:\n  - dep ensure\n  - ./fix_bpf\n  - dep status || true\n\nscript:\n  - GO111MODULE=off ./.test.sh\n  - golangci-lint run --deadline=10m --disable-all --exclude-use-default=false --enable=errcheck --enable=goimports --enable=ineffassign --enable=govet --enable=golint --enable=unused --enable=structcheck --enable=varcheck --enable=deadcode --enable=unconvert --enable=goconst --enable=gosimple --enable=misspell --enable=staticcheck --enable=unparam --enable=prealloc --enable=nakedret --enable=gofmt --enable=typecheck --skip-dirs=vendor/github.com/iovisor ./...\n  - GO111MODULE=off ./.lint.windows.sh\n  - GO111MODULE=off ./.test.windows.sh\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "Contributing\n------------\n\nAs an open source project, your contributions are important to the future of Trireme. Whether you're looking to write code, add new documentation, or just report a bug, you'll be helping everyone who uses Trireme in the future.\n\n### Reporting Bugs\n\nFiling issues is a simple but very important part of contributing to Trireme. It provides a metric for measuring progress and allows the community to know what is being worked on. \"Issues\" in the context of the project refer to everything from broken aspects of the framework, to regressions and unimplemented features. Trireme uses [GitHub Issues](https://github.com/aporeto-inc/trireme-lib/issues) for tracking issues!\n\nWhen opening an issue or a pull request, labels will be applied to it. You can check the Issue and Pull Request Lifecycle [here](https://github.com/aporeto-inc/trireme-lib/wiki/Issue-and-Pull-Request-Lifecycle).\n\n### Contributing to the project\n\nThere are always bugs to be fixed and features to be implemented, some large, some small. Fixing even the smallest bug is enormously helpful! If you have something in mind, don't hesitate to create a pull request [here](https://github.com/aporeto-inc/trireme-lib/pulls)!\n\nWhen contributing to a project you must propose a pull request for each modifications you do.\nTo do a PR, please follow this instructions :\n\n```bash\ngit checkout master #make sure your base is the master\ngit checkout -b nameOfYourNewBranch #create a new branch and start to work locally on this branch\n\n... 
#do your change\n\n./.test.sh\n\ngit add yourFile* #add the files to the commit\ngit commit #create a beautiful messages for the commit, please follow the guideline below\ngit push origin nameOfYourNewBranch #push the branch on the remote origin\n```\n\nThen we advise to create your PR from the github website, make sure to read your changes again.\nMore documentation here : https://git-scm.com/documentation\n\nPull requests will not be accepted if the tests are not passing and if the coverage of the tests has dicreased.\n\n### Coding Guidelines\n\nGo Coding Style Guidelines\nYou must follow the golint, gofmt guide style when coding in Go. You must also know by heart Effective Go.\nInstall the linter on your system and add it to your pre-commit hook.\n\n### Do's and don'ts\n\nDO\n* Be consistent.\n* Use symmetry. If you provide a way to do something, provide a way to undo it.\n* All public methods and functions must be commented.\n* A comment has a space between // and the first letter.\n* A comment starts with a capitalized letter and ends with a final dot.\n* Be careful with the commenting. This could be released to GoDoc at anytime.\n* An interface or a struct Thing comment should be in the form of \"// A Thing is a thing\", not \"// Thing is a thing\"\n* Code must go straight to the point. Do not over engineer by thinking \"maybe one day\". If that day comes, update your code.\n* Use short explicit name for variables. \"objectThatMayBeCreatedIfEverythingIsFine\" is overkill. \"obj\" is enough. Use best judgement.\n* A function must do what its name says. \"catchPokemon(pokeball) bool\" should not get an int as parameters, and do an addition.\n* Organize your packages consistently.\n* Skip a line between the function declaration and the first line of code.\n* A constructor must use the \"return &Struct{a: 1}\" pattern, not \"newthing:= &Struct{}; thing.a = 1; return a\".\n* Always provide a way to understand what went wrong or what went well in a function/method. 
A log alone is not enough.\n* A function or method whose name starts with \"Is\" or \"Are\" must return a boolean.\n* Order of appearance of methods should be consistent:\n* Imports\n* Constants (if any)\n* Variables (if any)\n* Helper functions (if any)\n* Main structure\n* Constructor (if any)\n* Implemented interface methods (if any)\n* Public methods (if any)\n* Private methods (if any)\n\nDON'T\n\n* Do not write more than one main structure per file.\n* Do not ignore returned errors. Never. Handle them, or pass them back. You can use github.com/kisielk/errcheck to help you find them.\n* Do not over-log. The Go philosophy is \"no news, good news\". Carefully minimize all logs above the \"debug\" level.\n* Do not put logs in helper functions. Return an error and let the main program decide what to do.\n* Don't use new().\n* Do not pass an entire structure down because at some point you need one value. Just pass the value. (see \"maybe one day\" point)\n* Do not export methods if they are not used outside of the package (see \"maybe one day\" point).\n* Do not leave an unused function in the code; delete it. Use Go Oracle to find the referrers if needed. (if we need it later, git is our friend)\n* Do not leave TODOs hanging in the code forever. Fix them ASAP.\n* Do not overuse wrappers. If a library provides a structure that matches what you need, use it (see \"maybe one day\" point).\n* Do not store duplicate information: if a.B = b, then a.B.Name is enough; do not also set a.Name = b.Name.\n* Don't use strings. Strings are bad for anything other than giving information to a human. Use a constant or a structure.\n* Do not copy and paste business logic code. If you need the same code twice, write a function.\n* Do not use map[string]map[string]chan map[string]bool; create a type.\n* Do not pass information through maps. Map contents cannot be checked by the compiler (see \"don't use strings\" point).\n* Do not overuse goroutines. Remember that if callers want concurrency, they can use \"go\" themselves. Leave them the choice.\n* Do not keep stale names. If a structure no longer does what it did in the beginning, rename it. gorename is our friend here.\n* Do not paste StackOverflow code without understanding exactly what it does. You should also add a backlink.\n\n### Commit Messages\n\nThe style and format of your commit messages are very important to the health of the project. A good commit message helps not only users reading the release notes, but also your fellow developers as they review git log or git blame to figure out what you were doing.\n\nCommit messages should be in the following format:\n\n```\n<type>: <summary>\n<body>\n<footer>\n<transition-state>\n```\n\n### Types\n\nAllowed type values are:\n* New — A new feature has been implemented\n* Fixed — A bug has been fixed\n* Docs — Documentation has been added or tweaked\n* Formatting — Code has been reformatted to conform to style guidelines\n* Test — Test cases have been added\n* Task — A build task has been added or updated\n\n### Message summary\n\nThe summary is one of the most important parts of the commit message, because that is what we see when scanning through a list of commits, and it is also what we use to generate change logs.\nThe summary should be a concise description of the commit, preferably 72 characters or less (so we can see the entire description in GitHub), beginning with a capital letter and without a terminating period. It should describe only the core issue addressed by the commit. If you find that the summary needs to be very long, your commit is probably too big! Smaller commits are better.\n\nFor a New commit, the summary should answer the question, “What is new and where?” For a Fixed commit, the summary should answer the question, “What was fixed?”, for example “Wrong Python version in Kafka Dockerfile”. 
It should not answer the question, “What was done to fix it?” That belongs in the body.\n\nDo not simply reference another issue or pull request by number in the summary. First of all, we want to know what was actually changed and why, which may not be fully explained in the referenced issue. Second, GitHub will not create a link to the referenced issue in the commit summary.\n\n### Message body\n\nThe details of the commit go in the body. Specifically, the body should include the motivation for the change for New, Fixed and Task types. For Fixed commits, you should also contrast behavior before the commit with behavior after the commit.\nIf the summary can completely express everything, there is no need for a message body.\n\n### Message footer\n\nIf the commit closes an issue by fixing the bug, implementing a feature, or rendering it obsolete, or if it references an issue without closing it, that should be indicated in the message footer.\nIssues closed by a commit should be listed on a separate line in the footer with an appropriate prefix:\n\"Fixes\" for Fixed commit types\n\"Closes\" for all other commit types\nFor example:\n\n```\nFixes #1234\n```\n\nor in the case of multiple issues, like this:\n\n```\nFixes #1234, #2345\n```\n\nIssues that a commit references without closing them should be listed on a separate line in the footer with the prefix \"Refs\", like this:\n\n```\nRefs #1234\n```\n\nor in the case of multiple issues, like this:\n\n```\nRefs #1234, #2345\n```\n\nIf a commit changes the API or behavior in such a way that existing code may break, a description of the change, what might break, and how existing code should be modified must be noted in the footer like this:\n\n```\nBREAKING CHANGE:\nThe Kafka Dockerfile has been changed; you must generate a new Kafka image.\n```\n\n### Examples\n\n```\nFixed: Wrong Python version in Kafka Dockerfile\n\nPreviously, the Kafka Dockerfile installed Python 3.0. This version of Python\ndid not work with the configobj lib.\n\nWe now make sure to install Python 2.7 by specifying the version when installing\nit.\n\nFixes #1234\n```\n"
  },
  {
    "path": "Gopkg.toml",
    "content": "required = [\"github.com/docker/distribution\"]\n\n[[constraint]]\n  branch = \"master\"\n  name = \"github.com/aporeto-inc/go-ipset\"\n\n[[constraint]]\n  name = \"github.com/docker/docker\"\n  version = \"v17.05.0-ce-rc3\"\n\n[[constraint]]\n  name = \"github.com/docker/distribution\"\n  revision = \"b38e5838b7b2f2ad48e06ec4b500011976080621\"\n\n[[override]]\n  name = \"github.com/ti-mo/netfilter\"\n  version = \"=0.3.0\"\n\n# as per github.com/ti-mo/netfilter go.mod file\n[[constraint]]\n  name = \"github.com/mdlayher/netlink\"\n  revision = \"a1644773bc99cfd147417f646434a663b5892505\"\n#\n# The most significant dependency for the Kubernetes monitor: the controller-runtime\n# NOTE: change with care and always adjust the Kubernetes dependencies below\n#\n[[override]]\n  name = \"sigs.k8s.io/controller-runtime\"\n  version = \"v0.1.10\"\n\n#\n# Kubernetes dependencies\n# NOTE: always match exactly to what controller-runtime uses\n#\n[[constraint]]\n  name = \"k8s.io/api\"\n  version = \"kubernetes-1.13.1\"\n\n[[override]]\n  name = \"k8s.io/apiextensions-apiserver\"\n  version = \"kubernetes-1.13.1\"\n\n[[override]]\n  name = \"k8s.io/apimachinery\"\n  version = \"kubernetes-1.13.1\"\n\n[[override]]\n  name = \"k8s.io/apiserver\"\n  version = \"kubernetes-1.13.1\"\n\n[[constraint]]\n  name = \"k8s.io/client-go\"\n  version = \"kubernetes-1.13.1\"\n\n[[override]]\n  name = \"k8s.io/code-generator\"\n  version = \"kubernetes-1.13.1\"\n\n[[override]]\n  name = \"k8s.io/gengo\"\n  branch = \"master\"\n\n[[constraint]]\n  name = \"github.com/envoyproxy/go-control-plane\"\n  version = \"v0.9.0\"\n\n[[constraint]]\n  name = \"github.com/aporeto-inc/oxy\"\n  branch = \"sirupsen\"\n\n[[constraint]]\n  name = \"go.aporeto.io/netlink-go\"\n  branch = \"master\"\n\n[[constraint]]\n  name = \"go.aporeto.io/tg\"\n  branch = \"master\"\n\n[[constraint]]\n  name = \"github.com/hashicorp/go-version\"\n  version = \"v1.0.0\"\n\n[prune]\n  go-tests = true\n  
unused-packages = true\n"
  },
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "Makefile",
    "content": "\n## all: default is to show the help text\n.PHONY: all help\nall: help\n\nci: test lint\n\n## vet: run go vet on the source\nvet:\n\tgo vet ./...\n\n## test: test with race checks\ntest:\n\t@ scripts/test.sh\n\n## lint: run the linter\nlint:\n\t@ scripts/lint.sh\n\n#\n# help uses all the ## marks for help text\n#\n.PHONY: help\n## help: prints this help message\nhelp:\n\t@ echo \"Usage: \"\n\t@ echo\n\t@ echo \"Run 'make <target>' where <target> is one of:\"\n\t@ echo\n\t@ sed -n 's/^##//p' ${MAKEFILE_LIST} | column -t -s ':' |  sed -e 's/^/ /' | sort\n"
  },
  {
    "path": "NOTICE",
    "content": "\nTrirem Project -  \n\nCopyright (c) 2016 Aporeto, Inc. All Rights Reserved\n\nThis product includes software developed at Aporeto Inc. \n"
  },
  {
    "path": "README.md",
    "content": "# Trireme\n\n<img src=\"docs/trireme.png\" width=\"400\">\n\n[![Build Status](https://travis-ci.org/aporeto-inc/trireme-lib.svg?branch=go-mod)](https://travis-ci.org/aporeto-inc/trireme-lib) [![codecov](https://codecov.io/gh/aporeto-inc/trireme-lib/branch/go-mod/graph/badge.svg)](https://codecov.io/gh/aporeto-inc/trireme-lib) [![Twitter URL](https://img.shields.io/badge/twitter-follow-blue.svg)](https://twitter.com/aporeto_trireme) [![Slack URL](https://img.shields.io/badge/slack-join-green.svg)](https://triremehq.slack.com/messages/general/) [![License](https://img.shields.io/badge/license-GPL--2.0-blue.svg)](https://www.gnu.org/licenses/gpl-2.0.html) [![Documentation](https://img.shields.io/badge/docs-godoc-blue.svg)](https://godoc.org/go.aporeto.io/trireme-lib) [![Go Report Card](https://goreportcard.com/badge/go.aporeto.io/trireme-lib)](https://goreportcard.com/report/go.aporeto.io/trireme-lib)\n\n\nWelcome to Trireme, an open-source library curated by Aporeto to provide cryptographic isolation for cloud-native applications. Trireme-lib is a Zero-Trust networking\nlibrary that makes it possible to setup security policies and segment applications by enforcing end-to-end authentication and authorization without the need\nfor complex control planes or IP/port-centric ACLs and east-west firewalls.\n\nTrireme-lib supports both containers and Linux processes as well user-based activation, and it allows security policy enforcement between any of these entities.\n\n# TL;DR\n\nTrireme-lib is a library. The following projects use it:\n\n* [Trireme as a set of simple examples to get started](https://github.com/aporeto-inc/trireme-example)\n* [Trireme implementing NetworkPolicies on Kubernetes](https://github.com/aporeto-inc/trireme-kubernetes/tree/master/deployment)\n\n# Description\n\nIn the Trireme world, a processing unit (PU) end-point can be a container, Kubernetes POD, or a general Linux process. It can also be a user session\nto a particular server. 
We will be referring to processing units as PUs throughout this discussion.\n\nThe technology behind Trireme is streamlined, elegant, and simple. It is based on the concepts of Zero-Trust networking:\n\n1. The identity is the set of attributes and metadata that describes the container as key/value pairs. Trireme provides an extensible interface for defining these identities. Users can choose customized methods appropriate to their environment for establishing PU identity. For example, in a Kubernetes environment, the identity can be the set of labels identifying a POD.\n2. There is an authorization policy that defines when PUs with different types of identity attributes can interact or exchange traffic. The authorization policy implements an Attribute-Based Access Control (ABAC) mechanism (https://en.wikipedia.org/wiki/Attribute-Based_Access_Control), where the policy describes relationships between identity attributes.\n3. Every communication between two PUs is controlled through a cryptographic end-to-end authentication and authorization step, by overlaying an authorization function over the TCP negotiation. The authorization steps are performed during the `SYN`/`SYNACK`/`ACK` negotiation.\n\nThe result of this approach is the decoupling of network security from the underlying network infrastructure because this approach is centered on workload identity attributes and interactions between workloads. Network security can be achieved simply by managing application identity and authorization policy. Segmentation granularity can be adjusted based on the needs of the platform.\n\nTrireme is a node-centric library. Each node participating in the Trireme cluster must spawn one instance of a process that uses this library to transparently insert the authentication and authorization step. Trireme provides the data path functions but does not implement either the identity management or the policy resolution function. 
Function implementation depends on the particular operational environment. Users have to provide PolicyLogic (ABAC “rules”) to Trireme for well-defined PUs, such as containers.\n\n# Existing implementations using the Trireme library\n\n* [This example](https://github.com/aporeto-inc/trireme-example) is a straightforward implementation of the PolicyLogic for a simple use-case.\n\n* [Kubernetes-Integration](https://github.com/aporeto-inc/kubernetes-integration) is a full implementation of PolicyLogic that follows the Kubernetes Network Policies model.\n\n* [Bare-Metal-Integration](https://github.com/aporeto-inc/trireme-bare-metal) is an implementation of Trireme for Kubernetes on-prem, with a Cumulus agent that allows you to have a very simple networking model (routes are advertised by Cumulus) together with Trireme for policy enforcement.\n\n# Security Model\n\nTrireme is a Zero-Trust networking library. The security model behind Zero-Trust networking is:\n* The network is always untrusted. It doesn't matter if you are inside or outside your enterprise.\n* Every flow/connection needs to be authenticated and authorized by the endpoints.\n* The network information (IP/Port) is completely irrelevant to the authorization/authentication.\n\nWith Trireme, there is no need to define any security rules with IPs, port numbers, or ACLs. Everything is based on identity attributes; your IP and port allocation scheme is not relevant to Trireme and it is compatible with most underlying networking technologies. The end-to-end authentication and authorization approach is also compatible with NATs and IPv4/IPv6 translations.\n\nA PU is a logical unit of control to which you attach identity and authorization policies. It provides a simple mechanism where the identity is derived from the Docker manifest; however, other mechanisms are possible for more sophisticated identity definition. 
For instance, you may want to tag your 3-tier container application as \"frontend,\" \"backend,\" and \"database.\" By associating these labels with the containers, the labels become \"the identity.\" A policy for the “backend” containers can simply accept traffic only from “frontend” containers. Alternatively, an orchestration system might define a composite identity for each container and implement more sophisticated policies.\n\nPolicyLogic defines the set of authorization rules as a function of the identity attributes and loads these rules into Trireme when a container is instantiated. Authorization rules describe the set of identities with which a particular container is allowed to interact. We provide an example of this integration logic with Kubernetes [here](https://github.com/aporeto-inc/kubernetes-integration). Furthermore, we provide an example of a simple policy where two containers can only talk to each other if they have matching labels in [this example](https://github.com/aporeto-inc/trireme-lib/tree/master/example). Each rule defines a match based on the identity attributes. PolicyLogic assumes a whitelist model where everything is dropped unless explicitly allowed by the authorization policy.\n\nPU identities are cryptographically signed with a node-specific secret and sent as part of a TCP connection setup negotiation. Trireme supports both mutual and receiver-only authorization. Moreover, it supports two authentication and signing modes: (1) A pre-shared key and (2) a PKI mechanism based on ECDSA. In the case of ECDSA, public keys are either transmitted on the wire or pre-populated through an out-of-band mechanism to improve efficiency. 
Trireme also supports two identity encoding mechanisms: (1) A signed JSON Web Token (JWT) and (2) a custom binary mapping mechanism.\n\nWith these mechanisms, the Trireme run-time on each node will only allow communication after an end-to-end authentication and authorization step is performed between the containers.\n\n# Trireme Architecture\n\nTrireme-lib is built as a set of modules (Go packages) that provide a default implementation for each component. It is simple to swap the default implementation of each of those modules with custom-built ones for more complex and specific features.\n\nConceptually, Trireme acts on PU events. In the default implementation, the PU is a Docker container. Trireme can be easily extended to other PUs such as processes, files, sockets, and so forth.\nTrireme consists of two main packages:\n\n* The `Monitor` listens to a well-defined PU creation module. The built-in monitor listens to Docker events and generates a standard Trireme PU runtime representation. Additional monitors provided can listen to events on creation of Linux processes or user sessions from the Linux PAM module. The `Monitor` hands over the PU runtime to an external `Resolver`.\n  * The `Resolver` is implemented outside of Trireme and is not part of the library. The `Resolver` depends on the orchestration system used for managing identity and policy. 
If you plan to implement your own policy with Trireme, you will essentially need to implement a `Resolver`.\n* The `Controller` receives instructions from the `Resolver` and enforces the policy by analyzing the redirected packets and enforcing the identity and policy rules.\n\n# Defining Your Own Policy\n\nTrireme allows you to define any type of identity attribute or policy to associate with the PUs.\nIn order to define your own policies and identities, you need to implement a `Resolver` interface that will receive policy requests from Trireme whenever a policy resolution is required.\n\n# Resolver Implementation\n\n```go\n// A Resolver must be implemented by a policy engine that receives monitor events.\ntype Resolver interface {\n\n\t// HandlePUEvent is called by all monitors when a PU event is generated. The implementer\n\t// is responsible to update all components by explicitly adding a new PU.\n\tHandlePUEvent(ctx context.Context, puID string, event common.Event, runtime RuntimeReader) error\n}\n```\n\nEach container event generates a call to `HandlePUEvent`\n\nThe `Resolver` can then issue explicit calls to the `Controller` in order to implement the policy decision. 
The `Controller` interface is consumed by the `Resolver` and is described below:\n\n```go\n// TriremeController is the main API of the Trireme controller\ntype TriremeController interface {\n\t// Run initializes and runs the controller.\n\tRun(ctx context.Context) error\n\n\t// CleanUp cleans all the supervisors and ACLs for a clean exit\n\tCleanUp() error\n\n\t// Enforce asks the controller to enforce policy on a processing unit\n\tEnforce(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime) (err error)\n\n\t// UnEnforce asks the controller to un-enforce policy on a processing unit\n\tUnEnforce(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime) (err error)\n\n\t// UpdatePolicy updates the policy of the isolator for a container.\n\tUpdatePolicy(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime) error\n\n\t// UpdateSecrets updates the secrets of running enforcers managed by trireme. Remote enforcers will get the secret updates with the next policy push\n\tUpdateSecrets(secrets secrets.Secrets) error\n\n\t// UpdateConfiguration updates the configuration of the controller. Only specific configuration\n\t// parameters can be updated during run time.\n\tUpdateConfiguration(networks []string) error\n}\n```\n\n# Prerequisites\n\n* Trireme-lib requires iptables with access to the `mangle` table.\n* Trireme-lib requires access to the Docker event API socket (`/var/run/docker.sock` by default).\n* Trireme-lib requires privileged access.\n* Trireme-lib must run in the host PID namespace.\n\n[![Analytics](https://ga-beacon.appspot.com/UA-90317101-1/welcome-page)](https://github.com/igrigorik/ga-beacon)\n"
  },
  {
    "path": "buildflags/buildflags.go",
    "content": "// +build !rhel6\n\npackage buildflags\n\n// Distro constants\nconst (\n\tRhel6 = \"\"\n\tRhel5 = \"\"\n)\n\n// IsRHEL6 returns true if the build flag was set for rhel6\nfunc IsRHEL6() bool {\n\treturn false\n}\n\n// IsRHEL5 returns true if the build flag was set for rhel5\nfunc IsRHEL5() bool {\n\treturn false\n}\n\n// IsLegacyKernel returns true if the build flag was set for rhel5/rhel6\nfunc IsLegacyKernel() bool {\n\treturn false\n}\n"
  },
  {
    "path": "buildflags/buildflags_rhel.go",
    "content": "// +build rhel6\n\npackage buildflags\n\n// Distro constants\nconst (\n\tRhel6 = \"rhel6\"\n\tRhel5 = \"rhel5\"\n)\n\n// IsRHEL6 returns true if the build flag was set for rhel6\nfunc IsRHEL6() bool {\n\treturn true\n}\n\n// IsRHEL5 returns true if the build flag was set for rhel5\nfunc IsRHEL5() bool {\n\treturn false\n}\n\n// IsLegacyKernel returns true if the build flag was set for rhel5/rhel6\nfunc IsLegacyKernel() bool {\n\treturn true\n}\n"
  },
  {
    "path": "cmd/systemdutil/exec.go",
    "content": "// +build !windows\n\npackage systemdutil\n\nimport (\n\t\"syscall\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\nfunc execve(c *CLIRequest, env []string) error {\n\treturn syscall.Exec(c.Executable, append([]string{c.Executable}, c.Parameters...), env)\n}\n\nfunc getPUType() common.PUType {\n\treturn common.LinuxProcessPU\n}\n"
  },
  {
    "path": "cmd/systemdutil/exec_windows.go",
    "content": "package systemdutil\n\nimport (\n\t\"os\"\n\t\"os/exec\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\n// execve does not exist in Windows, so we do the best we can.\nfunc execve(c *CLIRequest, env []string) error {\n\tcmd := exec.Command(c.Executable, c.Parameters...)\n\tcmd.Env = env\n\tcmd.Stdin = os.Stdin\n\tcmd.Stdout = os.Stdout\n\tcmd.Stderr = os.Stderr\n\treturn cmd.Run()\n}\n\nfunc getPUType() common.PUType {\n\treturn common.WindowsProcessPU\n}\n"
  },
  {
    "path": "cmd/systemdutil/systemdutil.go",
    "content": "package systemdutil\n\nimport (\n\t\"encoding/hex\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path\"\n\t\"regexp\"\n\t\"strings\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/remoteapi/client\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\n// ExecuteCommandFromArguments processes the command from the arguments\nfunc ExecuteCommandFromArguments(arguments map[string]interface{}) error {\n\n\tp := NewRequestProcessor()\n\n\tc, err := p.ParseCommand(arguments)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn p.ExecuteRequest(c)\n}\n\n// RequestType is the type of the request\ntype RequestType int\n\nconst (\n\t// CreateRequest requests a create event\n\tCreateRequest RequestType = iota\n\t// DeleteCgroupRequest requests deletion based on the cgroup - issued by the kernel\n\tDeleteCgroupRequest\n\t// DeleteServiceRequest requests deletion by the service ID\n\tDeleteServiceRequest\n)\n\n// CLIRequest captures all CLI parameters\ntype CLIRequest struct {\n\t// Request is the type of the request\n\tRequest RequestType\n\t// Cgroup is only provided for delete cgroup requests\n\tCgroup string\n\t// Executable is the path to the executable\n\tExecutable string\n\t// Parameters are the parameters of the executable\n\tParameters []string\n\t// Labels are the user labels attached to the request\n\tLabels []string\n\t// ServiceName is a user defined service name\n\tServiceName string\n\t// Services are the user defined services (protocol, port)\n\tServices []common.Service\n\t// HostPolicy indicates that this is a host policy\n\tHostPolicy bool\n\t// NetworkOnly indicates that the request is only for traffic coming from the network\n\tNetworkOnly bool\n\t// AutoPort indicates that auto port feature is enabled for the PU\n\tAutoPort bool\n}\n\n// 
RequestProcessor is an instance of the processor\ntype RequestProcessor struct {\n\taddress string\n}\n\n// NewRequestProcessor creates a default request processor\nfunc NewRequestProcessor() *RequestProcessor {\n\treturn &RequestProcessor{\n\t\taddress: common.TriremeSocket,\n\t}\n}\n\n// NewCustomRequestProcessor creates a new request processor\nfunc NewCustomRequestProcessor(address string) *RequestProcessor {\n\tr := NewRequestProcessor()\n\n\tif address != \"\" {\n\t\tr.address = address\n\t}\n\n\treturn r\n}\n\n// Generic command line arguments\n// Assumes a command like that:\n// usage = `Trireme Client Command\n//\n// Usage: enforcerd -h | --help\n// \t\t trireme -v | --version\n// \t\t trireme run\n// \t\t\t[--service-name=<sname>]\n// \t\t\t[[--ports=<ports>]...]\n// \t\t\t[[--label=<keyvalue>]...]\n// \t\t\t[--networkonly]\n// \t\t\t[--hostpolicy]\n//          [--uidpolicy]\n// \t\t\t[<command> [--] [<params>...]]\n// \t\t trireme rm\n//          [--service-name=<sname>]\n// \t\t\t[--hostpolicy]\n//          [--uidpolicy]\n// \t\t trireme <cgroup>\n//\n// Run Client Options:\n// \t--service-name=<sname>              Service name for the executed command [default ].\n// \t--ports=<ports>                     Ports that the executed service is listening to [default ].\n// \t--label=<keyvalue>                  Label (key/value pair) attached to the service [default ].\n// \t--networkonly                       Control traffic from the network only and not from applications [default false].\n// \t--hostpolicy                        Default control of the base namespace [default false].\n// \t--uidpolicy                         Default control of the base namespace [default false].\n//\n// `\n\n// ParseCommand parses a command based on the above specification\n// This is a helper function for CLIs like in Trireme Example.\n// Proper use is through the CLIRequest structure\nfunc (r *RequestProcessor) ParseCommand(arguments map[string]interface{}) (*CLIRequest, 
error) {\n\n\tc := &CLIRequest{}\n\n\t// First parse a command that only provides the cgroup\n\t// The kernel will only send us a command with one argument\n\tif value, ok := arguments[\"<cgroup>\"]; ok && value != nil {\n\t\tc.Cgroup = value.(string)\n\t\tc.Request = DeleteCgroupRequest\n\t\treturn c, nil\n\t}\n\n\tif value, ok := arguments[\"--service-name\"]; ok && value != nil {\n\t\tc.ServiceName = value.(string)\n\t}\n\n\tif value, ok := arguments[\"--hostpolicy\"]; ok && value != nil {\n\t\tc.HostPolicy = value.(bool)\n\t}\n\n\t// If the command is remove use hostpolicy and service-id\n\tif arguments[\"rm\"].(bool) {\n\t\tc.Request = DeleteServiceRequest\n\t\treturn c, nil\n\t}\n\n\t// Process the rest of the arguments of the run command\n\tif value, ok := arguments[\"run\"]; !ok || value == nil {\n\t\treturn nil, errors.New(\"invalid command\")\n\t}\n\n\t// This is a create request - proceed\n\tc.Request = CreateRequest\n\n\tif value, ok := arguments[\"<command>\"]; ok && value != nil {\n\t\tc.Executable = value.(string)\n\t}\n\n\tif value, ok := arguments[\"--label\"]; ok && value != nil {\n\t\tc.Labels = value.([]string)\n\t}\n\n\tif value, ok := arguments[\"<params>\"]; ok && value != nil {\n\t\tc.Parameters = value.([]string)\n\t}\n\n\tif value, ok := arguments[\"--ports\"]; ok && value != nil {\n\t\tservices, err := ParseServices(value.([]string))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tc.Services = services\n\t}\n\n\tif value, ok := arguments[\"--autoport\"]; ok && value != nil {\n\t\tc.AutoPort = value.(bool)\n\t}\n\n\tif value, ok := arguments[\"--networkonly\"]; ok && value != nil {\n\t\tc.NetworkOnly = value.(bool)\n\t}\n\n\treturn c, nil\n}\n\n// CreateAndRun creates a processing unit\nfunc (r *RequestProcessor) CreateAndRun(c *CLIRequest) error {\n\tvar err error\n\n\t// If its not hostPolicy and the command doesn't exist we return an error\n\tif !c.HostPolicy {\n\t\tif c.Executable == \"\" {\n\t\t\treturn errors.New(\"command 
must be provided\")\n\t\t}\n\t\tif !path.IsAbs(c.Executable) {\n\t\t\tc.Executable, err = exec.LookPath(c.Executable)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\tif c.ServiceName == \"\" {\n\t\t\tc.ServiceName = c.Executable\n\t\t}\n\t}\n\n\tpuType := getPUType()\n\tif c.NetworkOnly {\n\t\tpuType = common.HostNetworkPU\n\t} else if c.HostPolicy {\n\t\tpuType = common.HostPU\n\t}\n\n\texeTags := executableTags(c)\n\tc.Labels = append(c.Labels, exeTags...)\n\n\t// This is added since the release_notification comes in this format\n\t// Easier to massage it while creation rather than change at the receiving end depending on event\n\trequest := &common.EventInfo{\n\t\tPUType:             puType,\n\t\tName:               c.ServiceName,\n\t\tExecutable:         c.Executable,\n\t\tTags:               c.Labels,\n\t\tPID:                int32(os.Getpid()),\n\t\tEventType:          common.EventStart,\n\t\tServices:           c.Services,\n\t\tNetworkOnlyTraffic: c.NetworkOnly,\n\t\tHostService:        c.HostPolicy,\n\t\tAutoPort:           c.AutoPort,\n\t}\n\n\tif err := sendRequest(r.address, request); err != nil {\n\t\treturn err\n\t}\n\n\tif c.HostPolicy {\n\t\treturn nil\n\t}\n\n\tenv := os.Environ()\n\tenv = append(env, \"APORETO_WRAP=1\")\n\treturn execve(c, env)\n}\n\n// DeleteService will issue a delete command\nfunc (r *RequestProcessor) DeleteService(c *CLIRequest) error {\n\n\trequest := &common.EventInfo{\n\t\tPUType:      getPUType(),\n\t\tPUID:        c.ServiceName,\n\t\tEventType:   common.EventStop,\n\t\tHostService: c.HostPolicy,\n\t}\n\n\t// Send Stop request\n\tif err := sendRequest(r.address, request); err != nil {\n\t\treturn err\n\t}\n\n\t// Send destroy request\n\trequest.EventType = common.EventDestroy\n\n\treturn sendRequest(r.address, request)\n}\n\n// DeleteCgroup will issue a delete command based on the cgroup\n// This is used mainly by the cleaner.\nfunc (r *RequestProcessor) DeleteCgroup(c *CLIRequest) error {\n\tregexCgroup := 
regexp.MustCompile(`^/trireme/(ssh-)?[a-zA-Z0-9_\\-:.$%]{1,64}$`)\n\n\tif !regexCgroup.Match([]byte(c.Cgroup)) {\n\t\treturn fmt.Errorf(\"invalid cgroup: %s\", c.Cgroup)\n\t}\n\n\tvar eventPUID string\n\tvar eventType common.PUType\n\n\tif strings.HasPrefix(c.Cgroup, common.TriremeCgroupPath) {\n\t\teventType = getPUType()\n\t\teventPUID = c.Cgroup[len(common.TriremeCgroupPath):]\n\t} else {\n\t\t// Not our Cgroup\n\t\treturn nil\n\t}\n\n\trequest := &common.EventInfo{\n\t\tPUType:    eventType,\n\t\tPUID:      eventPUID,\n\t\tEventType: common.EventStop,\n\t}\n\n\t// Send Stop request\n\tif err := sendRequest(r.address, request); err != nil {\n\t\treturn err\n\t}\n\n\t// Send destroy request\n\trequest.EventType = common.EventDestroy\n\n\treturn sendRequest(r.address, request)\n}\n\n// ExecuteRequest executes the command with an RPC request\nfunc (r *RequestProcessor) ExecuteRequest(c *CLIRequest) error {\n\n\tswitch c.Request {\n\tcase CreateRequest:\n\t\treturn r.CreateAndRun(c)\n\tcase DeleteCgroupRequest:\n\t\treturn r.DeleteCgroup(c)\n\tcase DeleteServiceRequest:\n\t\treturn r.DeleteService(c)\n\tdefault:\n\t\treturn fmt.Errorf(\"unknown request: %d\", c.Request)\n\t}\n}\n\n// sendRequest sends an RPC request to the provided address\nfunc sendRequest(address string, event *common.EventInfo) error {\n\n\tclient, err := client.NewClient(address)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn client.SendRequest(event)\n}\n\nfunc executableTags(c *CLIRequest) []string {\n\n\ttags := []string{}\n\n\tif fileMd5, err := extractors.ComputeFileMd5(c.Executable); err == nil {\n\t\ttags = append(tags, fmt.Sprintf(\"@app:%s:filechecksum=%s\", extractors.OSHostString, hex.EncodeToString(fileMd5)))\n\t}\n\n\tdepends := extractors.Libs(c.ServiceName)\n\tfor _, lib := range depends {\n\t\ttags = append(tags, fmt.Sprintf(\"@app:%s:lib:%s=true\", extractors.OSHostString, lib))\n\t}\n\n\treturn tags\n}\n\n// ParseServices parses strings with the services and returns them 
in a\n// validated slice\nfunc ParseServices(ports []string) ([]common.Service, error) {\n\n\t// If no ports are provided, we add the default 0 port\n\tif len(ports) == 0 {\n\t\tports = append(ports, \"0\")\n\t}\n\n\t// Parse the ports and create the services. Cleanup any bad ports\n\tservices := []common.Service{}\n\n\tfor _, p := range ports {\n\t\t// Default to TCP unless an explicit protocol suffix overrides it below.\n\t\t// Declared inside the loop so a previous entry's protocol cannot leak\n\t\t// into the next one.\n\t\tprotocol := packet.IPProtocolTCP\n\n\t\t// check for port string of form port#/udp eg 8085/udp\n\t\tportProtocolPair := strings.Split(p, \"/\")\n\t\tif len(portProtocolPair) > 2 || len(portProtocolPair) <= 0 {\n\t\t\treturn nil, fmt.Errorf(\"Invalid port format. Expected format is of form 80 or 8085/udp\")\n\t\t}\n\n\t\tif len(portProtocolPair) == 2 {\n\t\t\tif portProtocolPair[1] == \"tcp\" {\n\t\t\t\tprotocol = packet.IPProtocolTCP\n\t\t\t} else if portProtocolPair[1] == \"udp\" {\n\t\t\t\tprotocol = packet.IPProtocolUDP\n\t\t\t} else {\n\t\t\t\treturn nil, fmt.Errorf(\"Invalid protocol specified. Only tcp/udp accepted\")\n\t\t\t}\n\t\t}\n\n\t\ts, err := portspec.NewPortSpecFromString(portProtocolPair[0], nil)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"Invalid port spec: %s \", err)\n\t\t}\n\n\t\tservices = append(services, common.Service{\n\t\t\tProtocol: uint8(protocol),\n\t\t\tPorts:    s,\n\t\t})\n\t}\n\n\treturn services, nil\n}\n"
  },
  {
    "path": "collector/default.go",
    "content": "package collector\n\nimport (\n\t\"encoding/binary\"\n\n\t\"github.com/cespare/xxhash\"\n\t\"go.aporeto.io/underwater/core/policy/services\"\n)\n\n// DefaultCollector implements a default collector infrastructure to syslog\ntype DefaultCollector struct{}\n\n// NewDefaultCollector returns a default implementation of an EventCollector\nfunc NewDefaultCollector() EventCollector {\n\treturn &DefaultCollector{}\n}\n\n// CollectFlowEvent is part of the EventCollector interface.\nfunc (d *DefaultCollector) CollectFlowEvent(record *FlowRecord) {}\n\n// CollectContainerEvent is part of the EventCollector interface.\nfunc (d *DefaultCollector) CollectContainerEvent(record *ContainerRecord) {}\n\n// CollectUserEvent is part of the EventCollector interface.\nfunc (d *DefaultCollector) CollectUserEvent(record *UserRecord) {}\n\n// CollectTraceEvent collects iptables trace events\nfunc (d *DefaultCollector) CollectTraceEvent(records []string) {}\n\n// CollectPacketEvent collects packet events from the datapath\nfunc (d *DefaultCollector) CollectPacketEvent(report *PacketReport) {}\n\n// CollectCounterEvent collect counters from the datapath\nfunc (d *DefaultCollector) CollectCounterEvent(report *CounterReport) {}\n\n// CollectDNSRequests collect counters from the datapath\nfunc (d *DefaultCollector) CollectDNSRequests(report *DNSRequestReport) {}\n\n// CollectPingEvent collects ping events from the datapath\nfunc (d *DefaultCollector) CollectPingEvent(report *PingReport) {}\n\n// CollectConnectionExceptionReport collects the connection exception report\nfunc (d *DefaultCollector) CollectConnectionExceptionReport(report *ConnectionExceptionReport) {}\n\n// StatsFlowHash is a hash function to hash flows. Ignores source ports. 
Returns two hashes\n// flowhash - minimal with SIP/DIP/Dport\n// contenthash - hash with all contents to compare quickly and report when changes are observed\nfunc StatsFlowHash(r *FlowRecord) (flowhash, contenthash uint64) {\n\n\thash := xxhash.New()\n\thash.Write([]byte(r.Source.ID))       // nolint errcheck\n\thash.Write([]byte(r.Destination.ID))  // nolint errcheck\n\thash.Write([]byte(r.Destination.URI)) // nolint errcheck\n\thash.Write([]byte(r.Source.IP))       // nolint errcheck\n\thash.Write([]byte(r.Destination.IP))  // nolint errcheck\n\tport := make([]byte, 2)\n\tbinary.BigEndian.PutUint16(port, r.Destination.Port)\n\thash.Write(port) // nolint errcheck\n\tflowhash = hash.Sum64()\n\n\thash.Write([]byte(r.Action.String()))         // nolint errcheck\n\thash.Write([]byte(r.ObservedAction.String())) // nolint errcheck\n\thash.Write([]byte(r.DropReason))              // nolint errcheck\n\thash.Write([]byte(r.PolicyID))                // nolint errcheck\n\treturn flowhash, hash.Sum64()\n}\n\n// StatsFlowContentHash is a hash function to hash flows. Ignores source ports. 
Returns\n// contenthash - hash with all contents to compare quickly and report when changes are observed\nfunc StatsFlowContentHash(r *FlowRecord) (contenthash uint64) {\n\n\t_, contenthash = StatsFlowHash(r)\n\treturn contenthash\n}\n\n// StatsUserHash is a hash function to hash user records.\nfunc StatsUserHash(r *UserRecord) error {\n\thash, err := services.HashClaims(r.Claims, r.Namespace)\n\tif err != nil {\n\t\treturn err\n\t}\n\tr.ID = hash\n\treturn nil\n}\n\n// ConnectionExceptionReportHash is a hash function to hash connection exception reports.\nfunc ConnectionExceptionReportHash(r *ConnectionExceptionReport) uint64 {\n\n\thash := xxhash.New()\n\thash.Write([]byte(r.PUID))          // nolint errcheck\n\thash.Write([]byte(r.SourceIP))      // nolint errcheck\n\thash.Write([]byte(r.DestinationIP)) // nolint errcheck\n\thash.Write([]byte(r.Reason))        // nolint errcheck\n\thash.Write([]byte(r.State))         // nolint errcheck\n\tport := make([]byte, 2)\n\tbinary.BigEndian.PutUint16(port, r.DestinationPort)\n\thash.Write(port) // nolint errcheck\n\n\treturn hash.Sum64()\n}\n"
  },
  {
    "path": "collector/default_test.go",
    "content": "package collector\n\nimport (\n\t\"testing\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nfunc TestStatsUserHash(t *testing.T) {\n\ttype args struct {\n\t\tuserRecord *UserRecord\n\t\thash       string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"Test_StatsUserHash1\",\n\t\t\targs: args{\n\t\t\t\tuserRecord: &UserRecord{\n\t\t\t\t\tNamespace: \"/_apotests\",\n\t\t\t\t\tClaims: []string{\n\t\t\t\t\t\t\"CN=b01a6042-437-allow\",\n\t\t\t\t\t\t\"O=b01a6042-437-allow\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\thash: \"14815208496714115169\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Test_StatsUserHash2\",\n\t\t\targs: args{\n\t\t\t\tuserRecord: &UserRecord{\n\t\t\t\t\tNamespace: \"/_apotests\",\n\t\t\t\t\tClaims: []string{\n\t\t\t\t\t\t\"CN=apotests-master-staging2 Root CA\",\n\t\t\t\t\t\t\"O=_apotests/b01a6042-437c-44ab-a17f-e14f6f915b87\",\n\t\t\t\t\t\t\"OU=aporeto-enforcerd\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\thash: \"3750309273572959404\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif err := StatsUserHash(tt.args.userRecord); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"StatsUserHash() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tif tt.args.hash != tt.args.userRecord.ID {\n\t\t\t\tt.Errorf(\"Wanted %s but got %s\", tt.args.hash, tt.args.userRecord.ID)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestStatsFlowHash(t *testing.T) {\n\ttype args struct {\n\t\tr *FlowRecord\n\t}\n\ttests := []struct {\n\t\tname            string\n\t\targs            args\n\t\twantFlowhash    uint64\n\t\twantContenthash uint64\n\t}{\n\t\t{\n\t\t\tname: \"basic hash\",\n\t\t\targs: args{\n\t\t\t\tr: &FlowRecord{\n\t\t\t\t\tContextID:             \"context\",\n\t\t\t\t\tNamespace:             \"ns\",\n\t\t\t\t\tSource:                EndPoint{},\n\t\t\t\t\tDestination:           
EndPoint{},\n\t\t\t\t\tTags:                  []string{\"tag=val\"},\n\t\t\t\t\tDropReason:            \"none\",\n\t\t\t\t\tPolicyID:              \"default\",\n\t\t\t\t\tObservedPolicyID:      \"default\",\n\t\t\t\t\tServiceType:           policy.ServiceL3,\n\t\t\t\t\tServiceID:             \"svc\",\n\t\t\t\t\tCount:                 1,\n\t\t\t\t\tAction:                policy.Accept,\n\t\t\t\t\tObservedAction:        policy.Accept,\n\t\t\t\t\tObservedActionType:    policy.ObserveContinue,\n\t\t\t\t\tL4Protocol:            7,\n\t\t\t\t\tSourceController:      \"src-controller\",\n\t\t\t\t\tDestinationController: \"dst-controller\",\n\t\t\t\t\tRuleName:              \"1\",\n\t\t\t\t},\n\t\t\t},\n\t\t\twantFlowhash:    11145182160106660097,\n\t\t\twantContenthash: 5951126184511352450,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgotFlowhash, gotContenthash := StatsFlowHash(tt.args.r)\n\t\t\tif gotFlowhash != tt.wantFlowhash {\n\t\t\t\tt.Errorf(\"StatsFlowHash() gotFlowhash = %v, want %v\", gotFlowhash, tt.wantFlowhash)\n\t\t\t}\n\t\t\tif gotContenthash != tt.wantContenthash {\n\t\t\t\tt.Errorf(\"StatsFlowHash() gotContenthash = %v, want %v\", gotContenthash, tt.wantContenthash)\n\t\t\t}\n\n\t\t\tgothash := StatsFlowContentHash(tt.args.r)\n\t\t\tif gothash != tt.wantContenthash {\n\t\t\t\tt.Errorf(\"StatsFlowHash() gothash = %v, want %v\", gothash, tt.wantContenthash)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "collector/interfaces.go",
    "content": "package collector\n\nimport (\n\t\"fmt\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packettracing\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/gaia\"\n)\n\n// Flow event description\nconst (\n\t// FlowReject indicates that a flow was rejected\n\tFlowReject = \"reject\"\n\t// FlowAccept logs that a flow is accepted\n\tFlowAccept = \"accept\"\n\t// MissingToken indicates that the token was missing\n\tMissingToken = \"missingtoken\"\n\t// InvalidToken indicates that the token was invalid\n\tInvalidToken = \"token\"\n\t// InvalidFormat indicates that the packet metadata were not correct\n\tInvalidFormat = \"format\"\n\t// InvalidHeader indicates that the TCP header was not there.\n\tInvalidHeader = \"header\"\n\t// InvalidPayload indicates that the TCP payload was not there or bad.\n\tInvalidPayload = \"payload\"\n\t// InvalidContext indicates that there was no context in the metadata\n\tInvalidContext = \"context\"\n\t// InvalidConnection indicates that there was no connection found\n\tInvalidConnection = \"connection\"\n\t// InvalidState indicates that a packet was received without proper state information\n\tInvalidState = \"state\"\n\t// InvalidNonse indicates that the nonse check failed\n\tInvalidNonse = \"nonse\"\n\t// PolicyDrop indicates that the flow is rejected because of the policy decision\n\tPolicyDrop = \"policy\"\n\t// APIPolicyDrop indicates that the request was dropped because of failed API validation.\n\tAPIPolicyDrop = \"api\"\n\t// UnableToDial indicates that the proxy cannot dial out the connection\n\tUnableToDial = \"dial\"\n\t// CompressedTagMismatch indicates that the compressed tag version is dissimilar\n\tCompressedTagMismatch = \"compressedtagmismatch\"\n\t// EncryptionMismatch indicates that the policy encryption varies between client and server enforcer\n\tEncryptionMismatch = \"encryptionmismatch\"\n\t// DatapathVersionMismatch indicates that the datapath version is 
dissimilar\n\tDatapathVersionMismatch = \"datapathversionmismatch\"\n\t// PacketDrop indicates a single packet drop\n\tPacketDrop = \"packetdrop\"\n)\n\n// Container event description\nconst (\n\t// ContainerStart indicates a container start event\n\tContainerStart = \"start\"\n\t// ContainerStop indicates a container stop event\n\tContainerStop = \"stop\"\n\t// ContainerCreate indicates a container create event\n\tContainerCreate = \"create\"\n\t// ContainerDelete indicates a container delete event\n\tContainerDelete = \"delete\"\n\t// ContainerUpdate indicates a container policy update event\n\tContainerUpdate = \"update\"\n\t// ContainerFailed indicates an event that a container was stopped because of policy issues\n\tContainerFailed = \"forcestop\"\n\t// ContainerIgnored indicates that the container will be ignored by Trireme\n\tContainerIgnored = \"ignore\"\n\t// ContainerDeleteUnknown indicates that policy for an unknown container was deleted\n\tContainerDeleteUnknown = \"unknowncontainer\"\n)\n\nconst (\n\t// PolicyValid indicates a normal flow accept\n\tPolicyValid = \"V\"\n\t// DefaultEndPoint provides a string for unknown container sources\n\tDefaultEndPoint = \"default\"\n\t// SomeClaimsSource provides a string for some claims flow source.\n\tSomeClaimsSource = \"some-claims\"\n)\n\n// EventCollector is the interface for collecting events.\ntype EventCollector interface {\n\n\t// CollectFlowEvent collects a flow event.\n\tCollectFlowEvent(record *FlowRecord)\n\n\t// CollectContainerEvent collects a container event\n\tCollectContainerEvent(record *ContainerRecord)\n\n\t// CollectUserEvent collects a user event\n\tCollectUserEvent(record *UserRecord)\n\n\t// CollectTraceEvent collects a set of trace messages generated with the iptables trace command\n\tCollectTraceEvent(records []string)\n\n\t// CollectPacketEvent collects a packet event from nfqdatapath\n\tCollectPacketEvent(report *PacketReport)\n\n\t// CollectCounterEvent collects the counters 
from the datapath\n\tCollectCounterEvent(counterReport *CounterReport)\n\n\t// CollectDNSRequests collects the DNS requests\n\tCollectDNSRequests(request *DNSRequestReport)\n\n\t// CollectPingEvent collects the ping events\n\tCollectPingEvent(report *PingReport)\n\n\t// CollectConnectionExceptionReport collects the connection exception report\n\tCollectConnectionExceptionReport(report *ConnectionExceptionReport)\n}\n\n// EndPointType is the type of an endpoint (PU or an external IP address)\ntype EndPointType byte\n\nconst (\n\t// EndPointTypeExternalIP indicates that the endpoint is an external IP address\n\tEndPointTypeExternalIP EndPointType = iota\n\t// EndPointTypePU indicates that the endpoint is a PU.\n\tEndPointTypePU\n\t// EndPointTypeClaims indicates that the endpoint is of type claims.\n\tEndPointTypeClaims\n)\n\nfunc (e *EndPointType) String() string {\n\n\tswitch *e {\n\tcase EndPointTypeExternalIP:\n\t\treturn \"ext\"\n\tcase EndPointTypePU:\n\t\treturn \"pu\"\n\tcase EndPointTypeClaims:\n\t\treturn \"claims\"\n\t}\n\n\treturn \"pu\" // backward compatibility (CS: 04/24/2018)\n}\n\n// EndPoint is a structure that holds all the endpoint information\ntype EndPoint struct {\n\tID         string\n\tIP         string\n\tURI        string\n\tHTTPMethod string\n\tUserID     string\n\tType       EndPointType\n\tPort       uint16\n}\n\n// FlowRecord describes a flow record for statistics\ntype FlowRecord struct {\n\tContextID             string\n\tNamespace             string\n\tSource                EndPoint\n\tDestination           EndPoint\n\tTags                  []string\n\tDropReason            string\n\tPolicyID              string\n\tObservedPolicyID      string\n\tServiceType           policy.ServiceType\n\tServiceID             string\n\tCount                 int\n\tAction                policy.ActionType\n\tObservedAction        policy.ActionType\n\tObservedActionType    policy.ObserveActionType\n\tL4Protocol            uint8\n\tSourceController      
string\n\tDestinationController string\n\tRuleName              string\n}\n\nfunc (f *FlowRecord) String() string {\n\treturn fmt.Sprintf(\"<flowrecord contextID:%s namespace:%s count:%d sourceID:%s destinationID:%s sourceIP: %s destinationIP:%s destinationPort:%d action:%s mode:%s>\",\n\t\tf.ContextID,\n\t\tf.Namespace,\n\t\tf.Count,\n\t\tf.Source.ID,\n\t\tf.Destination.ID,\n\t\tf.Source.IP,\n\t\tf.Destination.IP,\n\t\tf.Destination.Port,\n\t\tf.Action.String(),\n\t\tf.DropReason,\n\t)\n}\n\n// ContainerRecord is a statistics record for a container\ntype ContainerRecord struct {\n\tContextID string\n\tIPAddress policy.ExtendedMap\n\tTags      *policy.TagStore\n\tEvent     string\n}\n\n// UserRecord reports a new user access. These will be reported\n// periodically.\ntype UserRecord struct {\n\tID        string\n\tNamespace string\n\tClaims    []string\n}\n\n// PacketReport is the struct which is used to report packets captured in the datapath\ntype PacketReport struct {\n\tTCPFlags        int\n\tClaims          []string\n\tDestinationIP   string\n\tDestinationPort int\n\tDropReason      string\n\tEncrypt         bool\n\tEvent           packettracing.PacketEvent\n\tLength          int\n\tMark            int\n\tNamespace       string\n\tPacketID        int\n\tProtocol        int\n\tPUID            string\n\tSourceIP        string\n\tSourcePort      int\n\tTriremePacket   bool\n\tTimestamp       int64\n\tPayload         []byte\n}\n\n// DNSRequestReport is used to report DNS requests made by PUs\ntype DNSRequestReport struct {\n\tContextID   string\n\tNamespace   string\n\tSource      *EndPoint\n\tDestination *EndPoint\n\tNameLookup  string\n\tError       string\n\tCount       int\n\tTs          time.Time\n\tIPs         []string\n}\n\n// Counters represents a single entry with a name and current value\ntype Counters uint32\n\n// CounterReport is sent from the PU to report Counters from the datapath\ntype CounterReport struct {\n\tNamespace string\n\tPUID    
  string\n\tTimestamp int64\n\tCounters  []Counters\n}\n\n// PingReport represents a single ping report from datapath.\ntype PingReport struct {\n\tPingID               string\n\tIterationID          int\n\tType                 gaia.PingProbeTypeValue\n\tPUID                 string\n\tNamespace            string\n\tFourTuple            string\n\tRTT                  string\n\tProtocol             int\n\tServiceType          string\n\tPayloadSize          int\n\tPayloadSizeType      gaia.PingProbePayloadSizeTypeValue\n\tPolicyID             string\n\tPolicyAction         policy.ActionType\n\tAgentVersion         string\n\tApplicationListening bool\n\tSeqNum               uint32\n\tTargetTCPNetworks    bool\n\tExcludedNetworks     bool\n\tError                string\n\tClaims               []string\n\tClaimsType           gaia.PingProbeClaimsTypeValue\n\tACLPolicyID          string\n\tACLPolicyAction      policy.ActionType\n\tPeerCertIssuer       string\n\tPeerCertSubject      string\n\tPeerCertExpiry       time.Time\n\tIsServer             bool\n\tServiceID            string\n\n\t// Remote pu fields.\n\tRemoteController    string\n\tRemotePUID          string\n\tRemoteEndpointType  EndPointType\n\tRemoteNamespace     string\n\tRemoteNamespaceType gaia.PingProbeRemoteNamespaceTypeValue\n}\n\n// IPTablesTrace is a bundle of iptables trace records\ntype IPTablesTrace struct {\n\tNamespace string\n\tTimestamp int64\n\tRecords   []*IPTablesTraceRecord\n}\n\n// IPTablesTraceRecord is the info parsed out from a trace event message\ntype IPTablesTraceRecord struct {\n\tTTL                  int\n\tChain                string\n\tDestinationIP        string\n\tDestinationInterface string\n\tDestinationPort      int\n\tLength               int\n\tPacketID             int\n\tProtocol             int\n\tRuleID               int\n\tSourceIP             string\n\tSourceInterface      string\n\tSourcePort           int\n\tTableName            string\n}\n\n// 
ConnectionExceptionReport represents a single connection exception report from datapath.\ntype ConnectionExceptionReport struct {\n\tTimestamp       time.Time\n\tPUID            string\n\tNamespace       string\n\tProtocol        int\n\tSourceIP        string\n\tDestinationIP   string\n\tDestinationPort uint16\n\tState           string\n\tReason          string\n\tValue           uint32\n}\n"
  },
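The `EndPointType.String` method in `collector/interfaces.go` above returns a short label per endpoint type and falls back to `"pu"` for any unrecognized value (kept for backward compatibility). A minimal, self-contained sketch of that enum-with-fallback pattern — the value receiver and the trimmed constant set are simplifications for illustration, not the repo's exact code:

```go
package main

import "fmt"

// EndPointType mirrors the byte-based enum in collector/interfaces.go.
type EndPointType byte

const (
	EndPointTypeExternalIP EndPointType = iota
	EndPointTypePU
	EndPointTypeClaims
)

// String maps each known type to its label; any value not explicitly
// handled falls back to "pu", matching the original's default return.
func (e EndPointType) String() string {
	switch e {
	case EndPointTypeExternalIP:
		return "ext"
	case EndPointTypeClaims:
		return "claims"
	default:
		return "pu"
	}
}

func main() {
	// Known values print their labels; an out-of-range value degrades to "pu".
	fmt.Println(EndPointTypeExternalIP, EndPointTypeClaims, EndPointType(42))
}
```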
  {
    "path": "collector/mockcollector/mockcollector.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: collector/interfaces.go\n\n// Package mockcollector is a generated GoMock package.\npackage mockcollector\n\nimport (\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tcollector \"go.aporeto.io/enforcerd/trireme-lib/collector\"\n)\n\n// MockEventCollector is a mock of EventCollector interface\n// nolint\ntype MockEventCollector struct {\n\tctrl     *gomock.Controller\n\trecorder *MockEventCollectorMockRecorder\n}\n\n// MockEventCollectorMockRecorder is the mock recorder for MockEventCollector\n// nolint\ntype MockEventCollectorMockRecorder struct {\n\tmock *MockEventCollector\n}\n\n// NewMockEventCollector creates a new mock instance\n// nolint\nfunc NewMockEventCollector(ctrl *gomock.Controller) *MockEventCollector {\n\tmock := &MockEventCollector{ctrl: ctrl}\n\tmock.recorder = &MockEventCollectorMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockEventCollector) EXPECT() *MockEventCollectorMockRecorder {\n\treturn m.recorder\n}\n\n// CollectFlowEvent mocks base method\n// nolint\nfunc (m *MockEventCollector) CollectFlowEvent(record *collector.FlowRecord) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectFlowEvent\", record)\n}\n\n// CollectFlowEvent indicates an expected call of CollectFlowEvent\n// nolint\nfunc (mr *MockEventCollectorMockRecorder) CollectFlowEvent(record interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectFlowEvent\", reflect.TypeOf((*MockEventCollector)(nil).CollectFlowEvent), record)\n}\n\n// CollectContainerEvent mocks base method\n// nolint\nfunc (m *MockEventCollector) CollectContainerEvent(record *collector.ContainerRecord) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectContainerEvent\", record)\n}\n\n// CollectContainerEvent indicates an expected call of CollectContainerEvent\n// 
nolint\nfunc (mr *MockEventCollectorMockRecorder) CollectContainerEvent(record interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectContainerEvent\", reflect.TypeOf((*MockEventCollector)(nil).CollectContainerEvent), record)\n}\n\n// CollectUserEvent mocks base method\n// nolint\nfunc (m *MockEventCollector) CollectUserEvent(record *collector.UserRecord) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectUserEvent\", record)\n}\n\n// CollectUserEvent indicates an expected call of CollectUserEvent\n// nolint\nfunc (mr *MockEventCollectorMockRecorder) CollectUserEvent(record interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectUserEvent\", reflect.TypeOf((*MockEventCollector)(nil).CollectUserEvent), record)\n}\n\n// CollectTraceEvent mocks base method\n// nolint\nfunc (m *MockEventCollector) CollectTraceEvent(records []string) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectTraceEvent\", records)\n}\n\n// CollectTraceEvent indicates an expected call of CollectTraceEvent\n// nolint\nfunc (mr *MockEventCollectorMockRecorder) CollectTraceEvent(records interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectTraceEvent\", reflect.TypeOf((*MockEventCollector)(nil).CollectTraceEvent), records)\n}\n\n// CollectPacketEvent mocks base method\n// nolint\nfunc (m *MockEventCollector) CollectPacketEvent(report *collector.PacketReport) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectPacketEvent\", report)\n}\n\n// CollectPacketEvent indicates an expected call of CollectPacketEvent\n// nolint\nfunc (mr *MockEventCollectorMockRecorder) CollectPacketEvent(report interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectPacketEvent\", reflect.TypeOf((*MockEventCollector)(nil).CollectPacketEvent), report)\n}\n\n// 
CollectCounterEvent mocks base method\n// nolint\nfunc (m *MockEventCollector) CollectCounterEvent(counterReport *collector.CounterReport) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectCounterEvent\", counterReport)\n}\n\n// CollectCounterEvent indicates an expected call of CollectCounterEvent\n// nolint\nfunc (mr *MockEventCollectorMockRecorder) CollectCounterEvent(counterReport interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectCounterEvent\", reflect.TypeOf((*MockEventCollector)(nil).CollectCounterEvent), counterReport)\n}\n\n// CollectDNSRequests mocks base method\n// nolint\nfunc (m *MockEventCollector) CollectDNSRequests(request *collector.DNSRequestReport) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectDNSRequests\", request)\n}\n\n// CollectDNSRequests indicates an expected call of CollectDNSRequests\n// nolint\nfunc (mr *MockEventCollectorMockRecorder) CollectDNSRequests(request interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectDNSRequests\", reflect.TypeOf((*MockEventCollector)(nil).CollectDNSRequests), request)\n}\n\n// CollectPingEvent mocks base method\n// nolint\nfunc (m *MockEventCollector) CollectPingEvent(report *collector.PingReport) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectPingEvent\", report)\n}\n\n// CollectPingEvent indicates an expected call of CollectPingEvent\n// nolint\nfunc (mr *MockEventCollectorMockRecorder) CollectPingEvent(report interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectPingEvent\", reflect.TypeOf((*MockEventCollector)(nil).CollectPingEvent), report)\n}\n\n// CollectConnectionExceptionReport mocks base method\n// nolint\nfunc (m *MockEventCollector) CollectConnectionExceptionReport(report *collector.ConnectionExceptionReport) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, 
\"CollectConnectionExceptionReport\", report)\n}\n\n// CollectConnectionExceptionReport indicates an expected call of CollectConnectionExceptionReport\n// nolint\nfunc (mr *MockEventCollectorMockRecorder) CollectConnectionExceptionReport(report interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectConnectionExceptionReport\", reflect.TypeOf((*MockEventCollector)(nil).CollectConnectionExceptionReport), report)\n}\n"
  },
  {
    "path": "common/events.go",
    "content": "package common\n\nimport (\n\t\"context\"\n)\n\n// TriremeSocket is the standard API server Trireme socket path\n// it is set via ConfigureTriremeSocketPath() and canonicalized with\n// utils.GetPathOnHostViaProcRoot() at point of use\nvar TriremeSocket = \"/var/run/trireme.sock\"\n\n// ConfigureTriremeSocketPath updates the TriremeSocket path\nfunc ConfigureTriremeSocketPath(path string) {\n\tTriremeSocket = path\n}\n\n// PUType defines the PU type\ntype PUType int\n\nconst (\n\t// ContainerPU indicates that this PU is a container\n\tContainerPU PUType = iota\n\t// LinuxProcessPU indicates that this is Linux process\n\tLinuxProcessPU\n\t// WindowsProcessPU indicates that this is Windows process\n\tWindowsProcessPU\n\t// HostPU is a host wrapping PU\n\tHostPU\n\t// HostNetworkPU is a PU for a network service in a host\n\tHostNetworkPU\n\t// KubernetesPU indicates that this is KubernetesPod\n\tKubernetesPU\n\t// TransientPU PU -- placeholder to run processing. This should not\n\t// be inserted in any cache. This is valid only for processing a packet\n\tTransientPU\n)\n\nconst (\n\t// TriremeCgroupPath is the standard Trireme cgroup path\n\tTriremeCgroupPath = \"/trireme/\"\n\n\t// TriremeDockerHostNetwork is the path for Docker HostNetwork container based activations\n\tTriremeDockerHostNetwork = \"/trireme_docker_hostnet/\"\n)\n\n// EventInfo is a generic structure that defines all the information related to a PU event.\n// EventInfo should be used as a normalized struct container that\ntype EventInfo struct {\n\n\t// EventType refers to one of the standard events that Trireme handles.\n\tEventType Event `json:\"eventtype,omitempty\"`\n\n\t// PUType is the the type of the PU\n\tPUType PUType `json:\"putype,omitempty\"`\n\n\t// The PUID is a unique value for the Processing Unit. 
Ideally this should be the UUID.\n\tPUID string `json:\"puid,omitempty\"`\n\n\t// The Name is a user-friendly name for the Processing Unit.\n\tName string `json:\"name,omitempty\"`\n\n\t// The Executable is the executable name for the Processing Unit.\n\tExecutable string `json:\"executable,omitempty\"`\n\n\t// Tags represents the set of MetadataTags associated with this PUID.\n\tTags []string `json:\"tags,omitempty\"`\n\n\t// The path for the Network Namespace.\n\tNS string `json:\"namespace,omitempty\"`\n\n\t// Cgroup is the path to the cgroup - used for deletes\n\tCgroup string `json:\"cgroup,omitempty\"`\n\n\t// IPs is a map of all the IPs that fully belong to this Processing Unit.\n\tIPs map[string]string `json:\"ipaddressesutype,omitempty\"`\n\n\t// Services is a list of services of interest - for host control\n\tServices []Service `json:\"services,omitempty\"`\n\n\t// The PID is the PID on the system where this Processing Unit is running.\n\tPID int32 `json:\"pid,omitempty\"`\n\n\t// HostService indicates that the request is for the root namespace\n\tHostService bool `json:\"hostservice,omitempty\"`\n\n\t// AutoPort indicates that the PU will have the auto port feature enabled\n\tAutoPort bool `json:\"autoport,omitempty\"`\n\n\t// NetworkOnlyTraffic indicates that traffic towards the applications must be controlled.\n\tNetworkOnlyTraffic bool `json:\"networktrafficonly,omitempty\"`\n\n\t// Root indicates that this request is coming from a root user. 
It is overwritten by the enforcer.\n\tRoot bool `json:\"root,omitempty\"`\n}\n\n// Event represents the event picked up by the monitor.\ntype Event string\n\n// Values of the events\nconst (\n\tEventStart   Event = \"start\"\n\tEventStop    Event = \"stop\"\n\tEventUpdate  Event = \"update\"\n\tEventCreate  Event = \"create\"\n\tEventDestroy Event = \"destroy\"\n\tEventPause   Event = \"pause\"\n\tEventUnpause Event = \"unpause\"\n\tEventResync  Event = \"resync\"\n)\n\nvar (\n\t// EventMap used for validations\n\tEventMap = map[Event]*struct{}{\n\t\t\"start\":   nil,\n\t\t\"stop\":    nil,\n\t\t\"update\":  nil,\n\t\t\"create\":  nil,\n\t\t\"destroy\": nil,\n\t\t\"pause\":   nil,\n\t\t\"unpause\": nil,\n\t\t\"resync\":  nil,\n\t}\n)\n\n// EventResponse encapsulates the error response, if any.\ntype EventResponse struct {\n\tError string\n}\n\n// An EventHandler is the type of event handler functions.\ntype EventHandler func(ctx context.Context, event *EventInfo) error\n\n// A State describes the state of the PU.\ntype State int\n\nconst (\n\t// StateStarted is the state of a started PU.\n\tStateStarted State = iota + 1\n\n\t// StateStopped is the state of a stopped PU.\n\tStateStopped\n\n\t// StatePaused is the state of a paused PU.\n\tStatePaused\n\n\t// StateDestroyed is the state of a destroyed PU.\n\tStateDestroyed\n\n\t// StateUnknwown is the state of a PU in an unknown state.\n\tStateUnknwown\n)\n"
  },
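`EventMap` in `common/events.go` is a set implemented as `map[Event]*struct{}`, used to validate incoming event strings: only presence in the map matters, the stored value is always `nil`. A self-contained sketch of how such a map is typically consulted — the `IsValidEvent` helper is hypothetical and not part of the package:

```go
package main

import "fmt"

// Event mirrors common.Event.
type Event string

// EventMap mirrors the validation set in common/events.go.
var EventMap = map[Event]*struct{}{
	"start": nil, "stop": nil, "update": nil, "create": nil,
	"destroy": nil, "pause": nil, "unpause": nil, "resync": nil,
}

// IsValidEvent is a hypothetical helper: the comma-ok lookup checks
// membership, ignoring the (always-nil) stored value.
func IsValidEvent(e Event) bool {
	_, ok := EventMap[e]
	return ok
}

func main() {
	fmt.Println(IsValidEvent("start"), IsValidEvent("reboot"))
}
```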
  {
    "path": "common/hooks.go",
    "content": "package common\n\n// Values for hook methods\nconst (\n\tMetadataHookPolicy      = \"metadata:policy\"\n\tMetadataHookHealth      = \"metadata:health\"\n\tMetadataHookCertificate = \"metadata:certificate\"\n\tMetadataHookKey         = \"metadata:key\"\n\tMetadataHookToken       = \"metadata:token\"\n\tAWSHookInfo             = \"aws:info\"\n\tAWSHookRole             = \"aws:role\"\n)\n\n// AWSRole reserved prefix\nconst (\n\tAWSRoleARNPrefix = \"@awsrole=arn:aws:iam::\"\n\n\tAWSRolePrefix = \"@awsrole=\"\n)\n\n// Metadata API constants\nconst (\n\tMetadataKey   = \"X-Aporeto-Metadata\"\n\tMetadataValue = \"secrets\"\n)\n"
  },
  {
    "path": "common/oauthtokens.go",
    "content": "package common\n\nimport (\n\t\"context\"\n\t\"time\"\n)\n\n// ServiceTokenType is the type of the token.\ntype ServiceTokenType string\n\n// Values of ServiceTokenType\nconst (\n\tServiceTokenTypeOAUTH ServiceTokenType = \"oauth\"\n\n\tServiceTokenTypeAWS ServiceTokenType = \"aws\"\n)\n\n// ServiceTokenIssuer is an interface of an implementation that can issue service tokens on behalf\n// of a PU. The user of the library must provide the implementation. ServiceTokens can be OAUTH\n// tokens or cloud provider specific tokens such AWS Role credentials.\ntype ServiceTokenIssuer interface {\n\tIssue(ctx context.Context, contextID string, stype ServiceTokenType, audience string, validity time.Duration) (string, error)\n}\n"
  },
  {
    "path": "common/service.go",
    "content": "package common\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\n// Service is a protocol/port service of interest - used to pass user requests\ntype Service struct {\n\t// Ports are the corresponding ports\n\tPorts *portspec.PortSpec `json:\"ports,omitempty\"`\n\n\t// Port is the service port. This has been deprecated and will be removed in later releases 01/13/2018\n\tPort uint16\n\n\t// Protocol is the protocol number\n\tProtocol uint8 `json:\"protocol,omitempty\"`\n\n\t// Addresses are the IP addresses. An empty list means 0.0.0.0/0\n\tAddresses map[string]struct{} `json:\"addresses,omitempty\"`\n\n\t// FQDNs is the list of FQDNs for the service.\n\tFQDNs []string `json:\"fqdns,omitempty\"`\n}\n\n// ConvertServicesToPortList converts an array of services to a port list\nfunc ConvertServicesToPortList(services []Service) string {\n\n\tportlist := \"\"\n\tfor _, s := range services {\n\t\tportlist = portlist + s.Ports.String() + \",\"\n\t}\n\n\tif len(portlist) == 0 {\n\t\tportlist = \"0\"\n\t} else {\n\t\tportlist = portlist[:len(portlist)-1]\n\t}\n\n\treturn portlist\n}\n\n// ConvertServicesToProtocolPortList converts an array of services to tcp/udp port list\nfunc ConvertServicesToProtocolPortList(services []Service) (string, string) {\n\n\ttcpPortlist := \"\"\n\tudpPortlist := \"\"\n\tfor _, s := range services {\n\t\tif s.Protocol == packet.IPProtocolTCP {\n\t\t\ttcpPortlist = tcpPortlist + s.Ports.String() + \",\"\n\t\t} else {\n\t\t\tudpPortlist = udpPortlist + s.Ports.String() + \",\"\n\t\t}\n\t}\n\n\tif len(tcpPortlist) == 0 {\n\t\ttcpPortlist = \"0\"\n\t} else {\n\t\ttcpPortlist = tcpPortlist[:len(tcpPortlist)-1]\n\t}\n\n\tif len(udpPortlist) == 0 {\n\t\tudpPortlist = \"0\"\n\t} else {\n\t\tudpPortlist = udpPortlist[:len(udpPortlist)-1]\n\t}\n\n\treturn tcpPortlist, udpPortlist\n}\n"
  },
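`ConvertServicesToPortList` and `ConvertServicesToProtocolPortList` above build comma-separated port lists by appending a trailing comma per entry, trimming it at the end, and falling back to `"0"` when the list is empty. A self-contained sketch of the same join-or-default logic using `strings.Join` — the `joinPorts` helper and plain-string port specs are illustrative assumptions; the real code iterates over `Service` values and calls `(*portspec.PortSpec).String()`:

```go
package main

import (
	"fmt"
	"strings"
)

// joinPorts joins port specs with commas and defaults to "0" when empty,
// mirroring the trailing-comma-trim behavior of ConvertServicesToPortList.
func joinPorts(specs []string) string {
	if len(specs) == 0 {
		return "0"
	}
	return strings.Join(specs, ",")
}

func main() {
	fmt.Println(joinPorts([]string{"80", "443:8443"}))
	fmt.Println(joinPorts(nil))
}
```

`strings.Join` sidesteps the append-then-trim dance entirely while producing the same output shape.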
  {
    "path": "controller/config.go",
    "content": "package controller\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/blang/semver\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer\"\n\tenforcerproxy \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/proxy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor\"\n\tsupervisornoop \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor/noop\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/env\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packetprocessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\n// config specifies all configurations accepted by trireme to start.\ntype config struct {\n\t// Required Parameters.\n\tserverID string\n\n\t// External Interface implementations that we allow to plugin to components.\n\tcollector collector.EventCollector\n\tservice   packetprocessor.PacketProcessor\n\tsecret    secrets.Secrets\n\n\t// Configurations for fine tuning internal components.\n\tmode                   constants.ModeType\n\tfq                     fqconfig.FilterQueue\n\tisBPFEnabled           bool\n\tlinuxProcess           bool\n\tmutualAuth             bool\n\tpacketLogs             bool\n\tvalidity               time.Duration\n\tprocMountPoint         string\n\texternalIPcacheTimeout time.Duration\n\truntimeCfg             *runtime.Configuration\n\truntimeErrorChannel    chan *policy.RuntimeError\n\tremoteParameters       
*env.RemoteParameters\n\ttokenIssuer            common.ServiceTokenIssuer\n\tipv6Enabled            bool\n\tagentVersion           semver.Version\n\tiptablesLockfile       string\n}\n\n// Option is provided using functional arguments.\ntype Option func(*config)\n\n// OptionBPFEnabled is an option to enable or disable BPF support.\nfunc OptionBPFEnabled(bpfEnabled bool) Option {\n\treturn func(cfg *config) {\n\t\tcfg.isBPFEnabled = bpfEnabled\n\t}\n}\n\n// OptionIptablesLockfile is a string option to set the path to the iptables lockfile\nfunc OptionIptablesLockfile(iptablesLockfile string) Option {\n\treturn func(cfg *config) {\n\t\tcfg.iptablesLockfile = iptablesLockfile\n\t}\n}\n\n// OptionIPv6Enable is an option to enable IPv6\nfunc OptionIPv6Enable(ipv6Enabled bool) Option {\n\treturn func(cfg *config) {\n\t\tcfg.ipv6Enabled = ipv6Enabled\n\t}\n}\n\n// OptionCollector is an option to provide an external collector implementation.\nfunc OptionCollector(c collector.EventCollector) Option {\n\treturn func(cfg *config) {\n\t\tcfg.collector = c\n\t}\n}\n\n// OptionDatapathService is an option to provide an external datapath service implementation.\nfunc OptionDatapathService(s packetprocessor.PacketProcessor) Option {\n\treturn func(cfg *config) {\n\t\tcfg.service = s\n\t}\n}\n\n// OptionSecret is an option to provide an external secrets implementation.\nfunc OptionSecret(s secrets.Secrets) Option {\n\treturn func(cfg *config) {\n\t\tcfg.secret = s\n\t}\n}\n\n// OptionEnforceLinuxProcess is an option to enable Linux process support.\nfunc OptionEnforceLinuxProcess() Option {\n\treturn func(cfg *config) {\n\t\tcfg.linuxProcess = true\n\t}\n}\n\n// OptionEnforceFqConfig is an option to override filter queues.\nfunc OptionEnforceFqConfig(f fqconfig.FilterQueue) Option {\n\treturn func(cfg *config) {\n\t\tcfg.fq = f\n\t}\n}\n\n// OptionDisableMutualAuth is an option to disable MutualAuth (enabled by default)\nfunc OptionDisableMutualAuth() Option {\n\treturn func(cfg *config) 
{\n\t\tcfg.mutualAuth = false\n\t}\n}\n\n// OptionRuntimeConfiguration is an option to provide target network configuration.\nfunc OptionRuntimeConfiguration(c *runtime.Configuration) Option {\n\treturn func(cfg *config) {\n\t\tcfg.runtimeCfg = c\n\t}\n}\n\n// OptionProcMountPoint is an option to provide proc mount point.\nfunc OptionProcMountPoint(p string) Option {\n\treturn func(cfg *config) {\n\t\tcfg.procMountPoint = p\n\t}\n}\n\n// OptionRuntimeErrorChannel configures the error channel for the policy engine.\nfunc OptionRuntimeErrorChannel(errorChannel chan *policy.RuntimeError) Option {\n\treturn func(cfg *config) {\n\t\tcfg.runtimeErrorChannel = errorChannel\n\t}\n\n}\n\n// OptionPacketLogs is an option to enable packet level logging.\nfunc OptionPacketLogs() Option {\n\treturn func(cfg *config) {\n\t\tcfg.packetLogs = true\n\t}\n}\n\n// OptionRemoteParameters is an option to set the parameters for the remote\nfunc OptionRemoteParameters(p *env.RemoteParameters) Option {\n\treturn func(cfg *config) {\n\t\tcfg.remoteParameters = p\n\t}\n}\n\n// OptionTokenIssuer provides the token issuer.\nfunc OptionTokenIssuer(t common.ServiceTokenIssuer) Option {\n\treturn func(cfg *config) {\n\t\tcfg.tokenIssuer = t\n\t}\n}\n\n// OptionAgentVersion is an option to set agent version.\nfunc OptionAgentVersion(v semver.Version) Option {\n\treturn func(cfg *config) {\n\t\tcfg.agentVersion = v\n\t}\n}\n\nfunc (t *trireme) newEnforcers(ctx context.Context) error {\n\tzap.L().Debug(\"LinuxProcessSupport\", zap.Bool(\"Status\", t.config.linuxProcess))\n\tvar err error\n\tif t.config.linuxProcess {\n\t\tt.enforcers[constants.LocalServer], err = 
enforcer.New(\n\t\t\tt.config.mutualAuth,\n\t\t\tt.config.fq,\n\t\t\tt.config.collector,\n\t\t\tt.config.secret,\n\t\t\tt.config.serverID,\n\t\t\tt.config.validity,\n\t\t\tconstants.LocalServer,\n\t\t\tt.config.procMountPoint,\n\t\t\tt.config.externalIPcacheTimeout,\n\t\t\tt.config.packetLogs,\n\t\t\tt.config.runtimeCfg,\n\t\t\tt.config.tokenIssuer,\n\t\t\tt.config.isBPFEnabled,\n\t\t\tt.config.agentVersion,\n\t\t\tpolicy.None,\n\t\t)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"Failed to initialize LocalServer enforcer: %s \", err)\n\t\t}\n\t\terr = t.setupEnvoyAuthorizer()\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"Failed to initialize LocalEnvoyAuthorizer enforcer: %s \", err)\n\t\t}\n\t}\n\n\tif t.config.mode == constants.RemoteContainer {\n\t\tenforcerProxy := enforcerproxy.NewProxyEnforcer(\n\t\t\tctx,\n\t\t\tt.config.mutualAuth,\n\t\t\tt.config.fq,\n\t\t\tt.config.collector,\n\t\t\tt.config.secret,\n\t\t\tt.config.serverID,\n\t\t\tt.config.validity,\n\t\t\t\"enforce\",\n\t\t\tt.config.procMountPoint,\n\t\t\tt.config.externalIPcacheTimeout,\n\t\t\tt.config.packetLogs,\n\t\t\tt.config.runtimeCfg,\n\t\t\tt.config.runtimeErrorChannel,\n\t\t\tt.config.remoteParameters,\n\t\t\tt.config.tokenIssuer,\n\t\t\tt.config.isBPFEnabled,\n\t\t\tt.config.ipv6Enabled,\n\t\t\tt.config.iptablesLockfile,\n\t\t\trpcwrapper.NewRPCServer(),\n\t\t)\n\t\tt.enforcers[constants.RemoteContainer] = enforcerProxy\n\t\tt.enforcers[constants.RemoteContainerEnvoyAuthorizer] = enforcerProxy\n\t}\n\n\tzap.L().Debug(\"TriremeMode\", zap.Int(\"Status\", int(t.config.mode)))\n\treturn nil\n}\n\nfunc (t *trireme) newSupervisors() error {\n\n\tnoopSup := supervisornoop.NewNoopSupervisor()\n\n\tif t.config.linuxProcess {\n\t\tsup, err := supervisor.NewSupervisor(\n\t\t\tt.config.collector,\n\t\t\tt.enforcers[constants.LocalServer],\n\t\t\tconstants.LocalServer,\n\t\t\tt.config.runtimeCfg,\n\t\t\tt.config.ipv6Enabled,\n\t\t\tt.config.iptablesLockfile,\n\t\t)\n\t\tif err != nil {\n\t\t\treturn 
fmt.Errorf(\"Could Not create process supervisor :: received error %v\", err)\n\t\t}\n\n\t\tt.supervisors[constants.LocalServer] = sup\n\t\terr = t.setupEnvoySupervisor(noopSup)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"Could Not create envoy supervisor :: received error %v\", err)\n\t\t}\n\t}\n\n\tif t.config.mode == constants.RemoteContainer {\n\t\tt.supervisors[constants.RemoteContainer] = noopSup\n\t\tt.supervisors[constants.RemoteContainerEnvoyAuthorizer] = noopSup\n\t}\n\n\treturn nil\n}\n\n// newTrireme returns a reference to the trireme object based on the parameter subelements.\nfunc newTrireme(ctx context.Context, c *config) TriremeController {\n\n\tvar err error\n\n\tt := &trireme{\n\t\tconfig:               c,\n\t\tenforcers:            map[constants.ModeType]enforcer.Enforcer{},\n\t\tsupervisors:          map[constants.ModeType]supervisor.Supervisor{},\n\t\tpuTypeToEnforcerType: map[common.PUType]constants.ModeType{},\n\t\tlocks:                sync.Map{},\n\t\tenablingTrace:        make(chan *traceTrigger, 10),\n\t}\n\n\tzap.L().Debug(\"Creating Enforcers\")\n\tif err = t.newEnforcers(ctx); err != nil {\n\t\tzap.L().Error(\"Unable to create datapath enforcers\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\tzap.L().Debug(\"Creating Supervisors\")\n\tif err = t.newSupervisors(); err != nil {\n\t\tzap.L().Error(\"Unable to start datapath supervisor\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\tif c.linuxProcess {\n\t\tt.puTypeToEnforcerType[common.LinuxProcessPU] = constants.LocalServer\n\t\tt.puTypeToEnforcerType[common.WindowsProcessPU] = constants.LocalServer\n\t\tt.puTypeToEnforcerType[common.HostPU] = constants.LocalServer\n\t\tt.puTypeToEnforcerType[common.HostNetworkPU] = constants.LocalServer\n\t}\n\n\tif t.config.mode == constants.RemoteContainer {\n\t\tt.puTypeToEnforcerType[common.ContainerPU] = constants.RemoteContainer\n\t\tt.puTypeToEnforcerType[common.KubernetesPU] = constants.RemoteContainer\n\t}\n\n\treturn t\n}\n"
  },
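`controller/config.go` uses the functional-options pattern: each `Option` is a closure that mutates a `config`, and the constructor applies them in order over the defaults. A minimal self-contained sketch of the pattern — the `config` fields, option names, and `newConfig` constructor here are illustrative stand-ins, not the repo's full set:

```go
package main

import "fmt"

// config is a trimmed stand-in for the controller's private config struct.
type config struct {
	serverID   string
	packetLogs bool
	ipv6       bool
}

// Option mirrors the functional-argument type in controller/config.go.
type Option func(*config)

// OptionPacketLogs enables packet-level logging, like its namesake.
func OptionPacketLogs() Option {
	return func(cfg *config) { cfg.packetLogs = true }
}

// OptionIPv6 is a hypothetical option taking an explicit bool.
func OptionIPv6(enabled bool) Option {
	return func(cfg *config) { cfg.ipv6 = enabled }
}

// newConfig builds the defaults and then applies each option in order,
// the same shape a controller constructor would use with variadic opts.
func newConfig(serverID string, opts ...Option) *config {
	cfg := &config{serverID: serverID}
	for _, o := range opts {
		o(cfg)
	}
	return cfg
}

func main() {
	cfg := newConfig("node-1", OptionPacketLogs(), OptionIPv6(true))
	fmt.Printf("%+v\n", *cfg)
}
```

The pattern keeps the zero-value defaults in one place and lets callers opt in to individual settings without long positional parameter lists.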
  {
    "path": "controller/config_nonwindows.go",
    "content": "// +build !windows\n\npackage controller\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nfunc (t *trireme) setupEnvoyAuthorizer() error {\n\n\tvar err error\n\tt.enforcers[constants.LocalEnvoyAuthorizer], err = enforcer.New(\n\t\tt.config.mutualAuth,\n\t\tt.config.fq,\n\t\tt.config.collector,\n\t\tt.config.secret,\n\t\tt.config.serverID,\n\t\tt.config.validity,\n\t\tconstants.LocalEnvoyAuthorizer,\n\t\tt.config.procMountPoint,\n\t\tt.config.externalIPcacheTimeout,\n\t\tt.config.packetLogs,\n\t\tt.config.runtimeCfg,\n\t\tt.config.tokenIssuer,\n\t\tt.config.isBPFEnabled,\n\t\tt.config.agentVersion,\n\t\tpolicy.None,\n\t)\n\treturn err\n}\n\nfunc (t *trireme) setupEnvoySupervisor(sup supervisor.Supervisor) error {\n\n\tt.supervisors[constants.LocalEnvoyAuthorizer] = sup\n\treturn nil\n}\n"
  },
  {
    "path": "controller/config_windows.go",
    "content": "// +build windows\n\npackage controller\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor\"\n)\n\nfunc (t *trireme) setupEnvoyAuthorizer() error {\n\treturn nil\n}\n\nfunc (t *trireme) setupEnvoySupervisor(sup supervisor.Supervisor) error {\n\treturn nil\n}\n"
  },
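The `config_nonwindows.go`/`config_windows.go` pair above selects an implementation at compile time via build tags, with the Windows variants reduced to no-ops. A minimal runnable sketch of that stub pattern (hypothetical names, with a boolean standing in for the build-tag split so it fits in one file):

```go
package main

import "fmt"

// supervisor is a stand-in for the repo's supervisor.Supervisor interface.
type supervisor interface{ Run() error }

// noopSupervisor plays the role of the no-op supervisor registered where no
// real supervision is needed.
type noopSupervisor struct{}

func (noopSupervisor) Run() error { return nil }

// setupEnvoySupervisor registers the supervisor only on platforms that
// support it; the "windows" variant does nothing, as in config_windows.go.
func setupEnvoySupervisor(supervisors map[string]supervisor, sup supervisor, isWindows bool) {
	if isWindows {
		return // stub: nothing to register
	}
	supervisors["envoy"] = sup
}

func main() {
	linux := map[string]supervisor{}
	setupEnvoySupervisor(linux, noopSupervisor{}, false)
	windows := map[string]supervisor{}
	setupEnvoySupervisor(windows, noopSupervisor{}, true)
	fmt.Println(len(linux), len(windows)) // 1 0
}
```

The benefit of this shape is that shared code can call `setupEnvoySupervisor` unconditionally; the platform decides at build time whether it does anything.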
  {
    "path": "controller/constants/constants.go",
    "content": "package constants\n\nimport (\n\t\"path/filepath\"\n\t\"time\"\n)\n\nconst (\n\t// DefaultProcMountPoint The default proc mountpoint\n\tDefaultProcMountPoint = \"/proc\"\n\t// DefaultAporetoProcMountPoint The aporeto proc mountpoint just in case we are launched with some specific docker config\n\tDefaultAporetoProcMountPoint = \"/aporetoproc\"\n\t// DefaultSecretsPath is the default path for the secrets proxy.\n\tDefaultSecretsPath = \"@secrets\"\n\n\t// EnforcerdCleanerName is the name of the cleaner script.\n\tEnforcerdCleanerName = \"cleaner\"\n\n\t// DefaultEnforcerdCleanerPath is the default path of the cleaner.  For now set this to\n\t// /sbin/cleaner.  Note that the cleaner path is set via the container master enforcer but\n\t// it is ultimately run in the host and not in the container. Prior to defender integrations\n\t// we used the same path /sbin/cleaner in the host and /sbin/cleaner in the container and it\n\t// worked.  Now, with the defender integration we use a tarball to install the enforcer and\n\t// all the binaries, and it may install anywhere on the host, as specified by the defender\n\t// installer.  The container does not know where it was installed. Furthermore, we no longer\n\t// install via .deb or .rpm files so there are no system files installed, and cleaner will not\n\t// be found in /sbin/cleaner when installed via the defender bundle installer.  So a new\n\t// startup flag is defined now, `--cleaner-path` and environment variable\n\t// `ENFORCED_CLEANER_PATH` that defender can use to tell us where it installed the\n\t// cleaner. With this, the container enforcer can properly set the cleaner path in the\n\t// cgroups v1 release_agent.\n\t//\n\t// TL;DR: We now use /enforcerd-tools/cleaner in the container, and it will be installed in\n\t// /path/to/installation/dir/enforcerd-tools/cleaner in the host.  
container enforcer will\n\t// tell cgroup sub-system to use /enforcerd-tools/cleaner but cgroups executes it on the\n\t// host and can't find it here. Ergo, it won't work on a defender install this way.\n\t//\n\t// TODO - fix this so cleaner can be installed by defender and the installation path can be\n\t// discovered by container enforcer\n\tDefaultEnforcerdCleanerPath = \"/sbin/cleaner\"\n\n\t// RemoteEnforcerBuildName is the name of the remote enforcer binary we will build and deploy\n\tRemoteEnforcerBuildName = \"remoteenforcerd\"\n\n\t// RemoteEnforcerSrcName is the name of the original copy of the remote enforcer binary\n\tRemoteEnforcerSrcName = \"remoteenforcer\"\n)\n\nconst (\n\t// DefaultRemoteArg is the default arguments for a remote enforcer\n\tDefaultRemoteArg = \"enforce\"\n)\n\nconst (\n\n\t// EnvMountPoint is an environment variable which will contain the mount point\n\tEnvMountPoint = \"TRIREME_ENV_PROC_MOUNTPOINT\"\n\n\t// EnvEnforcerType is an environment variable which will indicate what enforcer type we want to use\n\tEnvEnforcerType = \"TRIREME_ENV_ENFORCER_TYPE\"\n\n\t// EnvContextSocket stores the path to the context specific socket\n\tEnvContextSocket = \"TRIREME_ENV_SOCKET_PATH\"\n\n\t// EnvStatsChannel stores the path to the stats channel\n\tEnvStatsChannel = \"TRIREME_ENV_STATS_CHANNEL_PATH\"\n\n\t// EnvDebugChannel stores the path to the debug channel\n\tEnvDebugChannel = \"TRIREME_ENV_DEBUG_CHANNEL_PATH\"\n\n\t// EnvRPCClientSecret is the secret used between RPC client/server\n\tEnvRPCClientSecret = \"TRIREME_ENV_SECRET\"\n\n\t// EnvStatsSecret is the secret to be used for the stats channel\n\tEnvStatsSecret = \"TRIREME_ENV_STATS_SECRET\"\n\n\t// EnvContainerPID is the PID of the container\n\tEnvContainerPID = \"TRIREME_ENV_CONTAINER_PID\"\n\n\t// EnvNSPath is the path of the network namespace\n\tEnvNSPath = \"TRIREME_ENV_NS_PATH\"\n\n\t// EnvNsenterErrorState stores the error state as reported by remote 
enforcer\n\tEnvNsenterErrorState = \"TRIREME_ENV_NSENTER_ERROR_STATE\"\n\n\t// EnvNsenterLogs stores the logs as reported by remote enforcer\n\tEnvNsenterLogs = \"TRIREME_ENV_NSENTER_LOGS\"\n\n\t// EnvLogLevel store the log level to be used.\n\tEnvLogLevel = \"TRIREME_ENV_LOG_LEVEL\"\n\n\t// EnvLogFormat store the log format to be used.\n\tEnvLogFormat = \"TRIREME_ENV_LOG_FORMAT\"\n\n\t// EnvLogID store the context Id for the log file to be used.\n\tEnvLogID = \"TRIREME_ENV_LOG_ID\"\n\n\t// EnvCompressedTags stores whether we should be using compressed tags.\n\tEnvCompressedTags = \"TRIREME_ENV_COMPRESSED_TAGS\"\n\n\t// EnvEnforcerdToolsDir is the path to the /enforcerd-tools directory so remote enforcerd can find tools.\n\tEnvEnforcerdToolsDir = \"TRIREME_ENV_ENFORCERD_TOOLS_DIR\"\n\n\t// EnvEnforcerdNFQueues exports the number of nfqueues to remote enforcer\n\tEnvEnforcerdNFQueues = \"TRIREME_ENV_NUM_NFQUEUES\"\n)\n\n// ModeType defines the mode of the enforcement and supervisor.\ntype ModeType int\n\nconst (\n\t// RemoteContainer indicates that the Supervisor is implemented in the\n\t// container namespace\n\tRemoteContainer ModeType = iota\n\t// LocalServer indicates that the Supervisor applies to Linux processes\n\tLocalServer\n\t// LocalEnvoyAuthorizer indicates to use a local envoyproxy as enforcer/authorizer\n\tLocalEnvoyAuthorizer\n\t// RemoteContainerEnvoyAuthorizer indicates to use the envoyproxy enforcer/authorizer for containers\n\tRemoteContainerEnvoyAuthorizer\n)\n\n// LogLevel corresponds to log level of any logger. 
eg: zap.\ntype LogLevel string\n\n// LogOptions\nconst (\n\t// OptionLogLevel represents the log-level\n\tOptionLogLevel = \"log-level\"\n\t// OptionLogFormat represents the log-format\n\tOptionLogFormat = \"log-format\"\n\t// OptionLogFilePath represents the log location path\n\tOptionLogFilePath = \"log-file-path\"\n)\n\n// Various log levels.\nconst (\n\tInfo  LogLevel = \"Info\"\n\tDebug LogLevel = \"Debug\"\n\tTrace LogLevel = \"Trace\"\n\tError LogLevel = \"Error\"\n\tWarn  LogLevel = \"Warn\"\n)\n\n// API service related constants\nconst (\n\tCallbackURIExtension = \"/aporeto/oidc/callback\"\n)\n\n// Protocol constants\nconst (\n\tTCPProtoNum    = \"6\"\n\tUDPProtoNum    = \"17\"\n\tTCPProtoString = \"TCP\"\n\tUDPProtoString = \"UDP\"\n\tAllProtoString = \"ALL\"\n)\n\n//MaxICMPCodes constant puts the maximum number of codes that can be put in a single string\nconst MaxICMPCodes = 25\n\n// Channel variables\nvar (\n\tStatsChannel string\n\tDebugChannel string\n)\n\n// PortNumberLabelString is the label to use for port numbers\nconst (\n\tPortNumberLabelString = \"@sys:port\"\n)\n\n// ControllerLabelString is the label to use for control planes\nconst (\n\tControllerLabelString = \"$controller\"\n)\n\n// Token and cache default validities. These have performance implications.\n// The faster the datapath issues new tokens it affects performance. However,\n// making it too slow can potentially allow reuse of the tokens. 
The\n// token issuance rate must be always faster than the expiration rate.\nconst (\n\t// SynTokenRefreshTime determines how often the data path creates new tokens.\n\tSynTokenRefreshTime = 5 * time.Minute\n\t// SynTokenValidity determines how long after the tokens are considered valid.\n\tSynTokenValidity = 10 * time.Minute\n)\n\n// SocketsPath is used to find the socket file corresponding to the container\nvar SocketsPath string\n\n// RemoteEnforcerPath sets the path of the remote enforcer\nvar RemoteEnforcerPath string\n\n// ConfigureRemoteEnforcerPath updates the remote enforcer path\nfunc ConfigureRemoteEnforcerPath(path string) {\n\tRemoteEnforcerPath = filepath.Join(path, RemoteEnforcerBuildName)\n}\n\n// ConfigureSocketsPath updates the sockets path\nfunc ConfigureSocketsPath(sockPath string) {\n\tSocketsPath = sockPath\n\tStatsChannel = filepath.Join(sockPath, \"statschannel.sock\")\n\tDebugChannel = filepath.Join(sockPath, \"debugchannel.sock\")\n}\n\n// Mark used by the proxies/ping to bypass trap rules.\nconst (\n\tProxyMarkInt = 0x40\n\tProxyMark    = \"0x40\"\n)\n\nconst (\n\t// ChainPrefix represents trireme chain prefix.\n\tChainPrefix = \"TRI-\"\n)\n"
  },
  {
    "path": "controller/constants/constants_nonrhel6.go",
    "content": "// +build !rhel6\n\npackage constants\n\n// IpsetBinaryName is the ipset binary name\nconst IpsetBinaryName = \"aporeto-ipset\"\n"
  },
  {
    "path": "controller/constants/constants_rhel6.go",
    "content": "// +build rhel6\n\npackage constants\n\n// IpsetBinaryName is the (system) ipset binary name on RHEL6\nconst IpsetBinaryName = \"ipset\"\n"
  },
  {
    "path": "controller/constants/constants_test.go",
    "content": "package constants\n\nimport (\n\t\"path/filepath\"\n\t\"testing\"\n)\n\nfunc TestConfigureRemoteEnforcerPath(t *testing.T) {\n\targ := \"foo\"\n\twant := filepath.Join(\"foo\", \"remoteenforcerd\")\n\n\tt.Run(\"Test with one path\", func(t *testing.T) {\n\n\t\tConfigureRemoteEnforcerPath(arg)\n\t\tgot := RemoteEnforcerPath\n\t\tif got != want {\n\t\t\tt.Errorf(\"RemoteEnforcerPath was wrong, got: %s, want: %s.\", got, want)\n\t\t}\n\t})\n\n}\n\nfunc TestConfigureSocketsPath(t *testing.T) {\n\tpath := \"the/path\"\n\twant1 := filepath.Join(path, \"statschannel.sock\")\n\twant2 := filepath.Join(path, \"debugchannel.sock\")\n\n\tt.Run(\"Test with one path\", func(t *testing.T) {\n\n\t\tConfigureSocketsPath(path)\n\t\tgot := SocketsPath\n\t\tgot1 := StatsChannel\n\t\tgot2 := DebugChannel\n\t\tif got != path {\n\t\t\tt.Errorf(\"SocketsPath was wrong, got: %s, want: %s.\", got, path)\n\t\t}\n\t\tif got1 != want1 {\n\t\t\tt.Errorf(\"StatsChannel was wrong, got: %s, want: %s.\", got1, want1)\n\t\t}\n\t\tif got2 != want2 {\n\t\t\tt.Errorf(\"DebugChannel was wrong, got: %s, want: %s.\", got2, want2)\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "controller/controller.go",
    "content": "package controller\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os/exec\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/dmesgparser\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/env\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packettracing\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\ntype traceTrigger struct {\n\tduration time.Duration\n\texpiry   time.Time\n}\n\n// trireme contains references to all the different components of the controller.\n// Depending on the configuration we might have multiple supervisor and enforcer types.\n// The initialization process must provide the mode that Trireme will run in.\ntype trireme struct {\n\tconfig               *config\n\tsupervisors          map[constants.ModeType]supervisor.Supervisor\n\tenforcers            map[constants.ModeType]enforcer.Enforcer\n\tpuTypeToEnforcerType map[common.PUType]constants.ModeType\n\tenablingTrace        chan *traceTrigger\n\tlocks                sync.Map\n}\n\n// New returns a trireme interface implementation based on configuration provided.\nfunc New(ctx context.Context, serverID string, mode constants.ModeType, opts ...Option) TriremeController {\n\n\tc := &config{\n\t\tserverID:               serverID,\n\t\tcollector:              collector.NewDefaultCollector(),\n\t\tmode:                   mode,\n\t\tmutualAuth:             true,\n\t\tvalidity:               
constants.SynTokenValidity,\n\t\tprocMountPoint:         constants.DefaultProcMountPoint,\n\t\texternalIPcacheTimeout: -1,\n\t\tremoteParameters: &env.RemoteParameters{\n\t\t\tLogFormat:      \"console\",\n\t\t\tLogWithID:      false,\n\t\t\tCompressedTags: claimsheader.CompressionTypeV1,\n\t\t},\n\t}\n\n\tfor _, opt := range opts {\n\t\topt(c)\n\t}\n\n\tzap.L().Debug(\"Trireme configuration\", zap.String(\"configuration\", fmt.Sprintf(\"%+v\", c)))\n\n\treturn newTrireme(ctx, c)\n}\n\n// Run starts the supervisor and the enforcer and go routines. It doesn't try to clean\n// up if something went wrong. It will be up to the caller to decide what to do.\nfunc (t *trireme) Run(ctx context.Context) error {\n\n\t// Start all the supervisors.\n\tfor _, s := range t.supervisors {\n\n\t\tif err := s.Run(ctx); err != nil {\n\t\t\tzap.L().Error(\"Error when starting the supervisor\", zap.Error(err))\n\t\t\treturn fmt.Errorf(\"Error while starting supervisor %v\", err)\n\t\t}\n\t}\n\n\t// Start all the enforcers.\n\tfor _, e := range t.enforcers {\n\t\tif err := e.Run(ctx); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to start the enforcer: %s\", err)\n\t\t}\n\t}\n\tgo t.runIPTraceCollector(ctx)\n\treturn nil\n}\n\n// CleanUp cleans all the acls and all the remote supervisors\nfunc (t *trireme) CleanUp() error {\n\tfor _, s := range t.supervisors {\n\t\ts.CleanUp() // nolint\n\t}\n\n\tfor _, e := range t.enforcers {\n\t\te.CleanUp() // nolint\n\t}\n\treturn nil\n}\n\n// Enforce asks the controller to enforce policy to a processing unit\nfunc (t *trireme) Enforce(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime) error {\n\tlock, _ := t.locks.LoadOrStore(puID, &sync.Mutex{})\n\tlock.(*sync.Mutex).Lock()\n\tdefer lock.(*sync.Mutex).Unlock()\n\treturn t.doHandleCreate(ctx, puID, policy, runtime)\n}\n\n// Enforce asks the controller to enforce policy to a processing unit\nfunc (t *trireme) UnEnforce(ctx context.Context, puID string, policy 
*policy.PUPolicy, runtime *policy.PURuntime) error {\n\tlock, _ := t.locks.LoadOrStore(puID, &sync.Mutex{})\n\tlock.(*sync.Mutex).Lock()\n\tdefer func() {\n\t\tt.locks.Delete(puID)\n\t\tlock.(*sync.Mutex).Unlock()\n\t}()\n\treturn t.doHandleDelete(ctx, puID, policy, runtime)\n}\n\n// UpdatePolicy updates a policy for an already activated PU. The PU is identified by the contextID\nfunc (t *trireme) UpdatePolicy(ctx context.Context, puID string, plc *policy.PUPolicy, runtime *policy.PURuntime) error {\n\tlock, _ := t.locks.LoadOrStore(puID, &sync.Mutex{})\n\tlock.(*sync.Mutex).Lock()\n\tdefer lock.(*sync.Mutex).Unlock()\n\treturn t.doUpdatePolicy(ctx, puID, plc, runtime)\n}\n\nfunc (t *trireme) EnableDatapathPacketTracing(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, direction packettracing.TracingDirection, interval time.Duration) error {\n\tlock, _ := t.locks.LoadOrStore(puID, &sync.Mutex{})\n\tlock.(*sync.Mutex).Lock()\n\tdefer lock.(*sync.Mutex).Unlock()\n\treturn t.doHandleEnableDatapathPacketTracing(ctx, puID, policy, runtime, direction, interval)\n}\n\nfunc (t *trireme) EnableIPTablesPacketTracing(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, interval time.Duration) error {\n\tlock, _ := t.locks.LoadOrStore(puID, &sync.Mutex{})\n\tlock.(*sync.Mutex).Lock()\n\tdefer lock.(*sync.Mutex).Unlock()\n\treturn t.doHandleEnableIPTablesPacketTracing(ctx, puID, policy, runtime, interval)\n}\n\n// Ping runs ping based on the given config.\nfunc (t *trireme) Ping(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, pingConfig *policy.PingConfig) error {\n\tlock, _ := t.locks.LoadOrStore(puID, &sync.Mutex{})\n\tlock.(*sync.Mutex).Lock()\n\tdefer lock.(*sync.Mutex).Unlock()\n\treturn t.enforcers[t.modeTypeFromPolicy(policy, runtime)].Ping(ctx, puID, pingConfig)\n}\n\nfunc (t *trireme) DebugCollect(ctx context.Context, puID string, policy *policy.PUPolicy, runtime 
*policy.PURuntime, debugConfig *policy.DebugConfig) error {\n\tlock, _ := t.locks.LoadOrStore(puID, &sync.Mutex{})\n\tlock.(*sync.Mutex).Lock()\n\tdefer lock.(*sync.Mutex).Unlock()\n\treturn t.enforcers[t.modeTypeFromPolicy(policy, runtime)].DebugCollect(ctx, puID, debugConfig)\n}\n\n// UpdateSecrets updates the secrets of the controllers.\nfunc (t *trireme) UpdateSecrets(secrets secrets.Secrets) error {\n\tfor _, enforcer := range t.enforcers {\n\t\tif err := enforcer.UpdateSecrets(secrets); err != nil {\n\t\t\tzap.L().Error(\"unable to update secrets\", zap.Error(err))\n\t\t}\n\t}\n\treturn nil\n}\n\n// UpdateConfiguration updates the configuration of the controller. Only\n// a limited number of parameters can be updated at run time.\nfunc (t *trireme) UpdateConfiguration(cfg *runtime.Configuration) error {\n\n\tfailure := false\n\n\tfor _, s := range t.supervisors {\n\t\terr := s.SetTargetNetworks(cfg)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"Failed to update target networks in supervisor\", zap.Error(err))\n\t\t\tfailure = true\n\t\t}\n\t}\n\n\tfor _, e := range t.enforcers {\n\t\tif cfg.LogLevel != \"\" {\n\t\t\tif err := e.SetLogLevel(cfg.LogLevel); err != nil {\n\t\t\t\tzap.L().Error(\"unable to set log level\", zap.Error(err))\n\t\t\t}\n\t\t}\n\n\t\terr := e.SetTargetNetworks(cfg)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"Failed to update target networks in controller\", zap.Error(err))\n\t\t\tfailure = true\n\t\t}\n\t}\n\n\tif failure {\n\t\treturn fmt.Errorf(\"configuration update failed\")\n\t}\n\n\treturn nil\n}\n\n// doHandleCreate is the detailed implementation of the create event.\nfunc (t *trireme) doHandleCreate(ctx context.Context, contextID string, policyInfo *policy.PUPolicy, runtimeInfo *policy.PURuntime) error {\n\n\tcontainerInfo := policy.PUInfoFromPolicyAndRuntime(contextID, policyInfo, runtimeInfo)\n\n\tlogEvent := &collector.ContainerRecord{\n\t\tContextID: contextID,\n\t\tIPAddress: policyInfo.IPAddresses(),\n\t\tTags:      
policyInfo.Annotations(),\n\t\tEvent:     collector.ContainerStart,\n\t}\n\n\tdefer func() {\n\t\tt.config.collector.CollectContainerEvent(logEvent)\n\t}()\n\n\taddTransmitterLabel(contextID, containerInfo)\n\tif !mustEnforce(contextID, containerInfo) {\n\t\tlogEvent.Event = collector.ContainerIgnored\n\t\treturn nil\n\t}\n\n\tmodeType := t.modeTypeFromPolicy(containerInfo.Policy, containerInfo.Runtime)\n\n\tif err := t.enforcers[modeType].Enforce(ctx, contextID, containerInfo); err != nil {\n\t\tlogEvent.Event = collector.ContainerFailed\n\t\treturn fmt.Errorf(\"unable to setup enforcer: %s\", err)\n\t}\n\n\tif err := t.supervisors[modeType].Supervise(contextID, containerInfo); err != nil {\n\t\tif werr := t.enforcers[modeType].Unenforce(ctx, contextID); werr != nil {\n\t\t\tzap.L().Warn(\"Failed to clean up state after failures\",\n\t\t\t\tzap.String(\"contextID\", contextID),\n\t\t\t\tzap.Error(werr),\n\t\t\t)\n\t\t}\n\n\t\tlogEvent.Event = collector.ContainerFailed\n\t\treturn fmt.Errorf(\"unable to setup supervisor: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// doHandleDelete is the detailed implementation of the delete event.\nfunc (t *trireme) doHandleDelete(ctx context.Context, contextID string, policyInfo *policy.PUPolicy, runtime *policy.PURuntime) error {\n\n\tmodeType := t.modeTypeFromPolicy(policyInfo, runtime)\n\n\terrS := t.supervisors[modeType].Unsupervise(contextID)\n\terrE := t.enforcers[modeType].Unenforce(ctx, contextID)\n\n\tt.config.collector.CollectContainerEvent(&collector.ContainerRecord{\n\t\tContextID: contextID,\n\t\tIPAddress: runtime.IPAddresses(),\n\t\tTags:      nil,\n\t\tEvent:     collector.ContainerDelete,\n\t})\n\n\tif errS != nil || errE != nil {\n\t\treturn fmt.Errorf(\"unable to delete context id %s, supervisor %s, enforcer %s\", contextID, errS, errE)\n\t}\n\n\treturn nil\n}\n\n// doUpdatePolicy is the detailed implementation of the update policy event.\nfunc (t *trireme) doUpdatePolicy(ctx context.Context, contextID string, 
newPolicy *policy.PUPolicy, runtime *policy.PURuntime) error {\n\n\tcontainerInfo := policy.PUInfoFromPolicyAndRuntime(contextID, newPolicy, runtime)\n\n\taddTransmitterLabel(contextID, containerInfo)\n\n\tif !mustEnforce(contextID, containerInfo) {\n\t\treturn nil\n\t}\n\n\tmodeType := t.modeTypeFromPolicy(containerInfo.Policy, containerInfo.Runtime)\n\n\tif err := t.enforcers[modeType].Enforce(ctx, contextID, containerInfo); err != nil {\n\t\t// We lost communication with the remote enforcer and killed it; restart it here by feeding a create event into the request channel.\n\t\tif werr := t.supervisors[modeType].Unsupervise(contextID); werr != nil {\n\t\t\tzap.L().Warn(\"Failed to clean up after enforcement failures\",\n\t\t\t\tzap.String(\"contextID\", contextID),\n\t\t\t\tzap.Error(werr),\n\t\t\t)\n\t\t}\n\t\treturn fmt.Errorf(\"unable to update policy for pu %s: %s\", contextID, err)\n\t}\n\n\tif err := t.supervisors[modeType].Supervise(contextID, containerInfo); err != nil {\n\t\tif werr := t.enforcers[modeType].Unenforce(ctx, contextID); werr != nil {\n\t\t\tzap.L().Warn(\"Failed to clean up after enforcement failures\",\n\t\t\t\tzap.String(\"contextID\", contextID),\n\t\t\t\tzap.Error(werr),\n\t\t\t)\n\t\t}\n\t\treturn fmt.Errorf(\"supervisor failed to update policy for pu %s: %s\", contextID, err)\n\t}\n\n\tt.config.collector.CollectContainerEvent(&collector.ContainerRecord{\n\t\tContextID: contextID,\n\t\tIPAddress: runtime.IPAddresses(),\n\t\tTags:      containerInfo.Runtime.Tags(),\n\t\tEvent:     collector.ContainerUpdate,\n\t})\n\n\treturn nil\n}\n\n// Debug handlers\nfunc (t *trireme) doHandleEnableDatapathPacketTracing(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, direction packettracing.TracingDirection, interval time.Duration) error {\n\n\treturn t.enforcers[t.modeTypeFromPolicy(policy, runtime)].EnableDatapathPacketTracing(ctx, puID, direction, interval)\n}\n\nfunc (t *trireme) 
doHandleEnableIPTablesPacketTracing(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, interval time.Duration) error {\n\n\tmodeType := t.modeTypeFromPolicy(policy, runtime)\n\n\tsysctlCmd, err := exec.LookPath(\"sysctl\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"sysctl command not found\")\n\t}\n\n\tcmd := exec.Command(sysctlCmd, \"-w\", \"net.netfilter.nf_log_all_netns=1\")\n\tif err := cmd.Run(); err != nil {\n\t\treturn fmt.Errorf(\"remote container iptables tracing will not work %s\", err)\n\t}\n\n\tt.enablingTrace <- &traceTrigger{\n\t\tduration: interval,\n\t\texpiry:   time.Now().Add(interval),\n\t}\n\n\tif err := t.supervisors[modeType].EnableIPTablesPacketTracing(ctx, puID, interval); err != nil {\n\t\treturn err\n\t}\n\n\treturn t.enforcers[modeType].EnableIPTablesPacketTracing(ctx, puID, interval)\n}\n\nfunc (t *trireme) runIPTraceCollector(ctx context.Context) {\n\t//Run dmesg once to establish baseline\n\texpiry := time.Now()\n\thdl := dmesgparser.New()\n\tfor {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn\n\t\tcase traceparams := <-t.enablingTrace:\n\t\t\tif !traceparams.expiry.After(expiry) {\n\t\t\t\t//if we already have a request expiring later drop this\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\texpiry = traceparams.expiry\n\t\tcase <-time.After(1 * time.Second):\n\t\t\tif !time.Now().After(expiry) {\n\t\t\t\tmessages, err := hdl.RunDmesgCommand()\n\t\t\t\tif err != nil {\n\t\t\t\t\tzap.L().Warn(\"Unable to run dmesg\", zap.Error(err))\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tt.config.collector.CollectTraceEvent(messages)\n\n\t\t\t}\n\t\t}\n\t}\n\n}\n\nfunc (t *trireme) modeTypeFromPolicy(policyInfo *policy.PUPolicy, runtime *policy.PURuntime) constants.ModeType {\n\tif policyInfo == nil {\n\t\t// there are edge cases when policyInfo really can be nil - and it is fine\n\t\t// let's just fall back to the normal enforcertype mapping if this is the case\n\t\t//\n\t\t// Here is an example: when a PU Create event 
failed, but the PU gets destroyed afterwards, there is a stop\n\t\t// event generated which will call UnEnforce. However, in this case there is no guarantee that PUPolicy has\n\t\t// actually ever been set.\n\t\tzap.L().Debug(\"modeTypeFromPolicy received no PU policy\", zap.String(\"name\", runtime.Name()))\n\t\treturn t.puTypeToEnforcerType[runtime.PUType()]\n\t}\n\n\tswitch policyInfo.EnforcerType() {\n\tcase policy.EnforcerMapping:\n\t\treturn t.puTypeToEnforcerType[runtime.PUType()]\n\tcase policy.EnvoyAuthorizerEnforcer:\n\t\tswitch runtime.PUType() {\n\t\tcase common.KubernetesPU:\n\t\t\tfallthrough\n\t\tcase common.ContainerPU:\n\t\t\treturn constants.RemoteContainerEnvoyAuthorizer\n\t\tcase common.HostPU:\n\t\t\tfallthrough\n\t\tcase common.HostNetworkPU:\n\t\t\tfallthrough\n\t\tcase common.LinuxProcessPU, common.WindowsProcessPU:\n\t\t\treturn constants.LocalEnvoyAuthorizer\n\t\tdefault:\n\t\t\treturn t.puTypeToEnforcerType[runtime.PUType()]\n\t\t}\n\tdefault:\n\t\treturn t.puTypeToEnforcerType[runtime.PUType()]\n\t}\n}\n"
  },
  {
    "path": "controller/helpers.go",
    "content": "package controller\n\nimport (\n\t\"context\"\n\n\t\"github.com/blang/semver\"\n\tenforcerconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\n// LaunchRemoteEnforcer launches a remote enforcer instance.\nfunc LaunchRemoteEnforcer(ctx context.Context, logLevel string, logFormat string, logID string, numQueues int, agentVersion semver.Version) error {\n\n\treturn remoteenforcer.LaunchRemoteEnforcer(ctx, logLevel, logFormat, logID, numQueues, agentVersion)\n}\n\n// addTransmitterLabel adds the enforcerconstants.TransmitterLabel as a fixed label in the policy.\n// The ManagementID part of the policy is used as the enforcerconstants.TransmitterLabel.\n// If the Policy didn't set the ManagementID, we use the Local contextID as the\n// default enforcerconstants.TransmitterLabel.\nfunc addTransmitterLabel(contextID string, containerInfo *policy.PUInfo) {\n\n\tif containerInfo.Policy.ManagementID() == \"\" {\n\t\tcontainerInfo.Policy.AddIdentityTag(enforcerconstants.TransmitterLabel, contextID)\n\t} else {\n\t\tcontainerInfo.Policy.AddIdentityTag(enforcerconstants.TransmitterLabel, containerInfo.Policy.ManagementID())\n\t}\n}\n\n// MustEnforce returns true if the Policy should go Through the Enforcer/internal/supervisor.\n// Return false if:\n//   - PU is in host namespace.\n//   - Policy got the AllowAll tag.\nfunc mustEnforce(contextID string, containerInfo *policy.PUInfo) bool {\n\n\tif containerInfo.Policy.TriremeAction() == policy.AllowAll {\n\t\tzap.L().Debug(\"PUPolicy with AllowAll Action. Not policing\", zap.String(\"contextID\", contextID))\n\t\treturn false\n\t}\n\n\treturn true\n}\n"
  },
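`addTransmitterLabel` in helpers.go picks the value for the transmitter label: the policy's `ManagementID` when one is set, otherwise the local `contextID`. A sketch of just that choice as a pure function (plain strings replace the `policy.PUInfo` type; the example IDs are hypothetical):

```go
package main

import "fmt"

// transmitterLabel mirrors addTransmitterLabel's fallback: use the policy's
// ManagementID when set, otherwise fall back to the local contextID.
func transmitterLabel(contextID, managementID string) string {
	if managementID == "" {
		return contextID
	}
	return managementID
}

func main() {
	fmt.Println(transmitterLabel("ctx-1", ""))        // ctx-1
	fmt.Println(transmitterLabel("ctx-1", "mgmt-42")) // mgmt-42
}
```

Isolating the choice like this makes the precedence rule (management ID wins, context ID is the default) easy to test independently of the policy machinery.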
  {
    "path": "controller/interfaces.go",
    "content": "package controller\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packettracing\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// TriremeController is the main API of the Trireme controller\ntype TriremeController interface {\n\t// Run initializes and runs the controller.\n\tRun(ctx context.Context) error\n\n\t// CleanUp cleans all the supervisors and ACLs for a clean exit\n\tCleanUp() error\n\n\t// Enforce asks the controller to enforce policy on a processing unit\n\tEnforce(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime) (err error)\n\n\t// UnEnforce asks the controller to un-enforce policy on a processing unit\n\tUnEnforce(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime) (err error)\n\n\t// UpdatePolicy updates the policy of the isolator for a container.\n\tUpdatePolicy(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime) error\n\n\t// UpdateSecrets updates the secrets of running enforcers managed by trireme. Remote enforcers will get the secret updates with the next policy push\n\tUpdateSecrets(secrets secrets.Secrets) error\n\n\t// UpdateConfiguration updates the configuration of the controller. Only specific configuration\n\t// parameters can be updated during run time.\n\tUpdateConfiguration(cfg *runtime.Configuration) error\n\tDebugInfo\n}\n\n// DebugInfo is the interface implemented by controllers to support configuring debug options\ntype DebugInfo interface {\n\t// EnableReceivedPacketTracing will enable tracing of packets received by the datapath for a particular PU. 
Setting Disabled as tracing direction will stop tracing for the contextID\n\tEnableDatapathPacketTracing(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, direction packettracing.TracingDirection, interval time.Duration) error\n\t// EnablePacketTracing enable iptables -j trace for the particular pu and is much wider packet stream.\n\tEnableIPTablesPacketTracing(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, interval time.Duration) error\n\t// Ping runs ping based on the given config.\n\tPing(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, pingConfig *policy.PingConfig) error\n\t// DebugCollect collects debug information, such as packet capture\n\tDebugCollect(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, debugConfig *policy.DebugConfig) error\n}\n"
  },
  {
    "path": "controller/internal/enforcer/acls/acl.go",
    "content": "package acls\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"reflect\"\n\t\"strings\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/ipprefix\"\n\t\"go.aporeto.io/gaia/protocols\"\n)\n\n// acl holds all the ACLS in an internal DB\n\ntype acl struct {\n\ttcpCache  ipprefix.IPcache\n\tudpCache  ipprefix.IPcache\n\ticmpCache ipprefix.IPcache\n}\n\nfunc newACL() *acl {\n\treturn &acl{\n\t\ttcpCache:  ipprefix.NewIPCache(),\n\t\tudpCache:  ipprefix.NewIPCache(),\n\t\ticmpCache: ipprefix.NewIPCache(),\n\t}\n}\n\n// errNoMatchFromRule must stop the LPM check\nvar errNoMatchFromRule = errors.New(\"No Match\")\nvar errNotFound = errors.New(\"No Match\")\n\nfunc (a *acl) addICMPToCache(ip net.IP, mask int, baseRule string, listOfDisjunctives []string, policy *policy.FlowPolicy) {\n\tvar icmpRuleList []*icmpRule\n\n\tval, exists := a.icmpCache.Get(ip, mask)\n\tif !exists {\n\t\ticmpRuleList = []*icmpRule{}\n\t} else {\n\t\ticmpRuleList = val.([]*icmpRule)\n\t}\n\n\tnewRule := &icmpRule{baseRule, listOfDisjunctives, policy}\n\ticmpRuleList = append(icmpRuleList, newRule)\n\ta.icmpCache.Put(ip, mask, icmpRuleList)\n}\n\nfunc (a *acl) removeICMPFromCache(ip net.IP, mask int, baseRule string, listOfDisjunctives []string, policy *policy.FlowPolicy) error {\n\tvar icmpRuleList []*icmpRule\n\tval, exists := a.icmpCache.Get(ip, mask)\n\tif !exists {\n\t\t// nothing to remove\n\t\treturn nil\n\t}\n\ticmpRuleList = val.([]*icmpRule)\n\n\tsearchRule := icmpRule{baseRule, listOfDisjunctives, policy}\n\tnewIcmpRuleList := make([]*icmpRule, 0, len(icmpRuleList))\n\tfor _, rule := range icmpRuleList {\n\t\tif reflect.DeepEqual(searchRule, *rule) {\n\t\t\t// this is a full match, skip\n\t\t\tcontinue\n\t\t}\n\t\t// TODO: partial matches aren't handled. 
Should they?\n\t\tnewIcmpRuleList = append(newIcmpRuleList, rule)\n\t}\n\n\ta.icmpCache.Put(ip, mask, newIcmpRuleList)\n\n\treturn nil\n}\n\nfunc (a *acl) removeFromCache(ip net.IP, mask int, nomatch bool, proto string, ports []string, policy *policy.FlowPolicy) error {\n\n\tremoveICMPCache := func(baseRule string, listOfDisjunctives []string) error {\n\t\treturn a.removeICMPFromCache(ip, mask, baseRule, listOfDisjunctives, policy)\n\t}\n\n\t// the TCP or UDP cases use this part\n\tremoveCache := func(lookupCache ipprefix.IPcache, port string) error {\n\t\tval, exists := lookupCache.Get(ip, mask)\n\t\tif !exists {\n\t\t\t// nothing to remove\n\t\t\treturn nil\n\t\t}\n\t\tportList := val.(portActionList)\n\n\t\tnewPortList := make(portActionList, 0, len(portList))\n\t\tr, err := newPortAction(port, policy, nomatch)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"unable to create port action: %s\", err)\n\t\t}\n\n\t\tfor _, portAction := range portList {\n\t\t\tif reflect.DeepEqual(*r, *portAction) {\n\t\t\t\t// this is a full match, skip\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\t// TODO: partial matches aren't handled. 
Should they be?\n\t\t\tnewPortList = append(newPortList, portAction)\n\t\t}\n\n\t\tlookupCache.Put(ip, mask, newPortList)\n\n\t\treturn nil\n\t}\n\n\tswitch strings.ToLower(proto) {\n\tcase constants.TCPProtoNum:\n\t\tfor _, port := range ports {\n\t\t\tif err := removeCache(a.tcpCache, port); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\treturn nil\n\n\tcase constants.UDPProtoNum:\n\t\tfor _, port := range ports {\n\t\t\tif err := removeCache(a.udpCache, port); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\treturn nil\n\n\tdefault:\n\t\t// ICMP protocol\n\t\tif splits := strings.Split(proto, \"/\"); strings.ToUpper(splits[0]) == protocols.L4ProtocolICMP || strings.ToUpper(splits[0]) == protocols.L4ProtocolICMP6 {\n\t\t\treturn removeICMPCache(proto, ports)\n\t\t}\n\n\t\t// unknown protocol - nothing to do\n\t\treturn nil\n\t}\n}\n\nfunc (a *acl) addToCache(ip net.IP, mask int, port string, proto string, policy *policy.FlowPolicy, nomatch bool) error {\n\tvar portList portActionList\n\tvar lookupCache ipprefix.IPcache\n\tswitch strings.ToLower(proto) {\n\tcase constants.TCPProtoNum:\n\t\tlookupCache = a.tcpCache\n\tcase constants.UDPProtoNum:\n\t\tlookupCache = a.udpCache\n\tdefault:\n\t\treturn nil\n\t}\n\tr, err := newPortAction(port, policy, nomatch)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to create port action: %s\", err)\n\t}\n\tval, exists := lookupCache.Get(ip, mask)\n\tif !exists {\n\t\tportList = portActionList{}\n\t} else {\n\t\tportList = val.(portActionList)\n\t}\n\n\t// skip duplicate entries\n\tfor _, portAction := range portList {\n\t\tif *r == *portAction {\n\t\t\treturn nil\n\t\t}\n\t}\n\n\tportList = append(portList, r)\n\tlookupCache.Put(ip, mask, portList)\n\n\treturn nil\n}\n\nfunc (a *acl) removeIPMask(ip net.IP, mask int) {\n\ta.tcpCache.Put(ip, mask, nil)\n\ta.udpCache.Put(ip, mask, nil)\n\ta.icmpCache.Put(ip, mask, nil)\n}\n\nfunc (a *acl) matchRule(ip 
net.IP, port uint16, proto uint8, preReport *policy.FlowPolicy) (report *policy.FlowPolicy, packetPolicy *policy.FlowPolicy, err error) {\n\treport = preReport\n\n\terr = errNotFound\n\n\tlookup := func(val interface{}) bool {\n\t\tif val != nil {\n\t\t\tportList := val.(portActionList)\n\n\t\t\treport, packetPolicy, err = portList.lookup(port, report)\n\t\t\tif err == nil || err == errNoMatchFromRule {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t\treturn false\n\t}\n\tif proto == packet.IPProtocolTCP {\n\t\ta.tcpCache.RunFuncOnLpmIP(ip, lookup)\n\t} else if proto == packet.IPProtocolUDP {\n\t\ta.udpCache.RunFuncOnLpmIP(ip, lookup)\n\t}\n\n\treturn report, packetPolicy, err\n}\n\nfunc (a *acl) addRule(rule policy.IPRule) (err error) {\n\n\taddCache := func(address, port, proto string) error {\n\t\taddr, err := ParseAddress(address)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tif err := a.addToCache(addr.IP, addr.Mask, port, proto, rule.Policy, addr.NoMatch); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn nil\n\t}\n\n\taddICMPCache := func(address, baseRule string, listOfDisjunctives []string) error {\n\t\taddr, err := ParseAddress(address)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\ta.addICMPToCache(addr.IP, addr.Mask, baseRule, listOfDisjunctives, rule.Policy)\n\n\t\treturn nil\n\t}\n\n\tfor _, proto := range rule.Protocols {\n\t\tswitch strings.ToLower(proto) {\n\t\tcase constants.TCPProtoNum, constants.UDPProtoNum:\n\t\t\tfor _, address := range rule.Addresses {\n\t\t\t\tfor _, port := range rule.Ports {\n\t\t\t\t\tif err := addCache(address, port, proto); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif splits := strings.Split(proto, \"/\"); strings.ToUpper(splits[0]) == protocols.L4ProtocolICMP || strings.ToUpper(splits[0]) == protocols.L4ProtocolICMP6 {\n\t\t\tfor _, address := range rule.Addresses {\n\t\t\t\tif err := addICMPCache(address, proto, rule.Ports); err != nil {\n\t\t\t\t\treturn 
err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// getMatchingAction does lookup in acl in a common way for accept/reject rules.\nfunc (a *acl) getMatchingAction(ip net.IP, port uint16, proto uint8, preReport *policy.FlowPolicy) (report *policy.FlowPolicy, packet *policy.FlowPolicy, err error) {\n\n\treturn a.matchRule(ip, port, proto, preReport)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/acls/acl_test.go",
    "content": "package acls\n\nimport (\n\t\"net\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nvar (\n\trules = policy.IPRuleList{\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"172.0.0.0/8\"},\n\t\t\tPorts:     []string{\"400:500\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"tcp172/8\"},\n\t\t},\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"172.17.0.0/16\"},\n\t\t\tPorts:     []string{\"400:500\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"tcp172.17/16\"},\n\t\t},\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"192.168.100.0/24\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPorts:     []string{\"80\"},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"tcp192.168.100/24\"},\n\t\t},\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"10.1.1.1\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPorts:     []string{\"80\"},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"tcp10.1.1.1\"}},\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"0.0.0.0/0\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPorts:     []string{\"443\"},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"tcp0/0\"}},\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"0.0.0.0/0\"},\n\t\t\tProtocols: []string{constants.UDPProtoNum},\n\t\t\tPorts:     []string{\"443\"},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"udp0/0\"}},\n\t}\n)\n\nfunc TestLookup(t *testing.T) {\n\n\tConvey(\"Given a good DB\", t, 
func() {\n\n\t\ta := newACL()\n\t\tSo(a, ShouldNotBeNil)\n\t\tfor _, r := range rules {\n\t\t\terr := a.addRule(r)\n\t\t\tSo(err, ShouldBeNil)\n\t\t}\n\n\t\tConvey(\"When I lookup for a matching address and a port range, I should get the right action\", func() {\n\t\t\tip := net.ParseIP(\"172.17.0.1\")\n\t\t\tport := uint16(401)\n\t\t\tr, p, err := a.getMatchingAction(ip.To4(), port, packet.IPProtocolTCP, nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"tcp172.17/16\")\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"tcp172.17/16\")\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching address with less specific match and a port range, I should get the right action\", func() {\n\t\t\tip := net.ParseIP(\"172.16.0.1\")\n\t\t\tport := uint16(401)\n\t\t\tr, p, err := a.getMatchingAction(ip.To4(), port, packet.IPProtocolTCP, nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"tcp172/8\")\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"tcp172/8\")\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching address and an exact port, I should get the right action\", func() {\n\t\t\tip := net.ParseIP(\"192.168.100.1\")\n\t\t\tport := uint16(80)\n\t\t\tr, p, err := a.getMatchingAction(ip.To4(), port, packet.IPProtocolTCP, nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"tcp192.168.100/24\")\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"tcp192.168.100/24\")\n\t\t})\n\n\t\tConvey(\"When I lookup for a non-matching address, 
I should get reject\", func() {\n\t\t\tip := net.ParseIP(\"192.168.200.1\")\n\t\t\tport := uint16(80)\n\t\t\tr, p, err := a.getMatchingAction(ip.To4(), port, packet.IPProtocolTCP, nil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(p, ShouldBeNil)\n\t\t\tSo(r, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching address but failed port, I should get reject\", func() {\n\t\t\tip := net.ParseIP(\"192.168.100.1\")\n\t\t\tport := uint16(600)\n\t\t\tr, p, err := a.getMatchingAction(ip.To4(), port, packet.IPProtocolTCP, nil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(p, ShouldBeNil)\n\t\t\tSo(r, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching exact address exact port, I should get the right action\", func() {\n\t\t\tip := net.ParseIP(\"10.1.1.1\")\n\t\t\tport := uint16(80)\n\t\t\tr, p, err := a.getMatchingAction(ip.To4(), port, packet.IPProtocolTCP, nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"tcp10.1.1.1\")\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"tcp10.1.1.1\")\n\t\t})\n\n\t})\n}\n\nfunc TestICMPMatch(t *testing.T) {\n\n\tvar icmpRules = policy.IPRuleList{\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"0.0.0.0/0\"},\n\t\t\tProtocols: []string{\"ICMP/8/1:3\"},\n\t\t\tPorts:     []string{},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"icmp0/0-8/1:3\",\n\t\t\t},\n\t\t},\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"684D:1111:222:3333:4444:5555:6:77\"},\n\t\t\tProtocols: []string{\"ICMP6\"},\n\t\t\tPorts:     []string{},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"icmp6\",\n\t\t\t},\n\t\t},\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"192.0.2.1\"},\n\t\t\tProtocols: []string{\"icmp\"},\n\t\t\tPorts:     []string{},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: 
\"removeme\",\n\t\t\t},\n\t\t},\n\t}\n\n\tConvey(\"Given a good DB\", t, func() {\n\n\t\ta := newACL()\n\t\tSo(a, ShouldNotBeNil)\n\t\tfor _, r := range icmpRules {\n\t\t\terr := a.addRule(r)\n\t\t\tSo(err, ShouldBeNil)\n\t\t}\n\n\t\tConvey(\"When I lookup for a matching address for icmp but wrong type or code, I should get the right action\", func() {\n\t\t\tip := net.ParseIP(\"172.17.0.1\")\n\t\t\tr, p, err := a.matchICMPRule(ip.To4(), 8, 2)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"icmp0/0-8/1:3\")\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"icmp0/0-8/1:3\")\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching address for icmp, I should not get a match\", func() {\n\t\t\tip := net.ParseIP(\"172.17.0.1\")\n\t\t\tr, p, err := a.matchICMPRule(ip.To4(), 8, 4)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(p, ShouldBeNil)\n\t\t\tSo(r, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching address for icmp6, I should get the right action\", func() {\n\t\t\tip := net.ParseIP(\"684D:1111:222:3333:4444:5555:6:77\")\n\t\t\tr, p, err := a.matchICMPRule(ip, 8, 1)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"icmp6\")\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"icmp6\")\n\t\t})\n\n\t\tConvey(\"When I lookup for a non-matching address for icmp6, I should not get a match\", func() {\n\t\t\tip := net.ParseIP(\"684D:1111:222:3333:4444:5555:6:77\")\n\t\t\tr, p, err := a.matchICMPRule(ip, 8, 1)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"icmp6\")\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"icmp6\")\n\t\t})\n\t})\n}\n\nfunc TestICMPRemove(t *testing.T) {\n\n\tvar icmpRules = policy.IPRuleList{\n\t\tpolicy.IPRule{\n\t\t\tAddresses: 
[]string{\"192.0.2.1\"},\n\t\t\tProtocols: []string{\"icmp\"},\n\t\t\tPorts:     []string{},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"removeme\",\n\t\t\t},\n\t\t},\n\t}\n\n\tremoveMePolicy := &policy.FlowPolicy{\n\t\tAction:   policy.Accept,\n\t\tPolicyID: \"removeme\",\n\t}\n\n\tConvey(\"Given a good DB\", t, func() {\n\n\t\ta := newACL()\n\t\tSo(a, ShouldNotBeNil)\n\t\tfor _, r := range icmpRules {\n\t\t\terr := a.addRule(r)\n\t\t\tSo(err, ShouldBeNil)\n\t\t}\n\n\t\tConvey(\"When I try to remove a rule which does not exist, then it should not error\", func() {\n\t\t\tip := net.ParseIP(\"192.0.2.2\")\n\t\t\terr := a.removeFromCache(ip, 32, false, \"icmp\", nil, removeMePolicy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to remove a rule which does not match, then nothing should change\", func() {\n\t\t\tip := net.ParseIP(\"192.0.2.1\")\n\t\t\toldVal, ok := a.icmpCache.Get(ip, 32)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\told := oldVal.([]*icmpRule)\n\t\t\tSo(old, ShouldNotBeEmpty)\n\t\t\terr := a.removeFromCache(ip, 32, false, \"icmp\", nil, removeMePolicy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tnewVal, ok := a.icmpCache.Get(ip, 32)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tnew := newVal.([]*icmpRule)\n\t\t\tSo(new, ShouldNotBeEmpty)\n\t\t\tSo(old, ShouldResemble, new)\n\t\t})\n\n\t\tConvey(\"When I try to remove a rule which matches, then it should get removed\", func() {\n\t\t\tip := net.ParseIP(\"192.0.2.1\")\n\t\t\toldVal, ok := a.icmpCache.Get(ip, 32)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\told := oldVal.([]*icmpRule)\n\t\t\tSo(old, ShouldNotBeEmpty)\n\t\t\terr := a.removeFromCache(ip, 32, false, \"icmp\", []string{}, removeMePolicy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tnewVal, ok := a.icmpCache.Get(ip, 32)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tnew := newVal.([]*icmpRule)\n\t\t\tSo(new, ShouldBeEmpty)\n\t\t})\n\n\t})\n}\n\nfunc TestRemove(t *testing.T) {\n\n\t// keep one policy here for a direct pointer 
comparison\n\tpolicyOne := &policy.FlowPolicy{\n\t\tAction:   policy.Accept,\n\t\tPolicyID: \"1\",\n\t}\n\n\tremoveRules := policy.IPRuleList{\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"192.0.2.1\"},\n\t\t\tPorts:     []string{\"80\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPolicy:    policyOne,\n\t\t},\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"192.0.2.1\"},\n\t\t\tPorts:     []string{\"80\"},\n\t\t\tProtocols: []string{constants.UDPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"2\",\n\t\t\t},\n\t\t},\n\t}\n\n\t// and one here for a content comparison\n\tpolicyTwo := &policy.FlowPolicy{\n\t\tAction:   policy.Accept,\n\t\tPolicyID: \"2\",\n\t}\n\n\tConvey(\"Given a good DB\", t, func() {\n\n\t\ta := newACL()\n\t\tSo(a, ShouldNotBeNil)\n\t\tfor _, r := range removeRules {\n\t\t\terr := a.addRule(r)\n\t\t\tSo(err, ShouldBeNil)\n\t\t}\n\n\t\tConvey(\"When I try to remove a rule with an unsupported protocol, then it should not error\", func() {\n\t\t\tip := net.ParseIP(\"192.0.2.1\")\n\t\t\terr := a.removeFromCache(ip, 32, false, \"unsupported\", nil, nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to remove a TCP rule which does not exist, then it should not error\", func() {\n\t\t\tip := net.ParseIP(\"192.0.2.2\")\n\t\t\terr := a.removeFromCache(ip, 32, false, constants.TCPProtoNum, []string{\"42\"}, nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to remove a TCP rule which cannot be parsed correctly, then it should error\", func() {\n\t\t\tip := net.ParseIP(\"192.0.2.1\")\n\t\t\terr := a.removeFromCache(ip, 32, false, constants.TCPProtoNum, []string{\"invalid port\"}, nil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to remove a UDP rule which does not exist, then it should not error\", func() {\n\t\t\tip := net.ParseIP(\"192.0.2.2\")\n\t\t\terr := a.removeFromCache(ip, 32, false, constants.UDPProtoNum, []string{\"43\"}, 
nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to remove a UDP rule which cannot be parsed correctly, then it should error\", func() {\n\t\t\tip := net.ParseIP(\"192.0.2.1\")\n\t\t\terr := a.removeFromCache(ip, 32, false, constants.UDPProtoNum, []string{\"another invalid port\"}, nil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to remove a TCP rule which does not match, then nothing should change\", func() {\n\t\t\tip := net.ParseIP(\"192.0.2.1\")\n\t\t\toldVal, ok := a.tcpCache.Get(ip, 32)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\told := oldVal.(portActionList)\n\t\t\tSo(old, ShouldNotBeEmpty)\n\t\t\terr := a.removeFromCache(ip, 32, false, constants.TCPProtoNum, []string{\"44\"}, policyOne)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tnewVal, ok := a.tcpCache.Get(ip, 32)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tnew := newVal.(portActionList)\n\t\t\tSo(new, ShouldNotBeEmpty)\n\t\t\tSo(old, ShouldResemble, new)\n\t\t})\n\n\t\tConvey(\"When I try to remove a TCP rule which matches, then it should get removed\", func() {\n\t\t\tip := net.ParseIP(\"192.0.2.1\")\n\t\t\toldVal, ok := a.tcpCache.Get(ip, 32)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\told := oldVal.(portActionList)\n\t\t\tSo(old, ShouldNotBeEmpty)\n\t\t\toldLength := len(old)\n\t\t\terr := a.removeFromCache(ip, 32, false, constants.TCPProtoNum, []string{\"80\"}, policyOne)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tnewVal, ok := a.tcpCache.Get(ip, 32)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tnew := newVal.(portActionList)\n\t\t\tSo(new, ShouldHaveLength, oldLength-1)\n\t\t})\n\n\t\tConvey(\"When I try to remove a UDP rule which does not match, then nothing should change\", func() {\n\t\t\tip := net.ParseIP(\"192.0.2.1\")\n\t\t\toldVal, ok := a.udpCache.Get(ip, 32)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\told := oldVal.(portActionList)\n\t\t\tSo(old, ShouldNotBeEmpty)\n\t\t\terr := a.removeFromCache(ip, 32, false, constants.UDPProtoNum, []string{\"45\"}, policyTwo)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tnewVal, ok := 
a.udpCache.Get(ip, 32)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tnew := newVal.(portActionList)\n\t\t\tSo(new, ShouldNotBeEmpty)\n\t\t\tSo(old, ShouldResemble, new)\n\t\t})\n\n\t\tConvey(\"When I try to remove a UDP rule which matches, then it should get removed\", func() {\n\t\t\tip := net.ParseIP(\"192.0.2.1\")\n\t\t\toldVal, ok := a.udpCache.Get(ip, 32)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\told := oldVal.(portActionList)\n\t\t\tSo(old, ShouldNotBeEmpty)\n\t\t\toldLength := len(old)\n\t\t\terr := a.removeFromCache(ip, 32, false, constants.UDPProtoNum, []string{\"80\"}, policyTwo)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tnewVal, ok := a.udpCache.Get(ip, 32)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tnew := newVal.(portActionList)\n\t\t\tSo(new, ShouldHaveLength, oldLength-1)\n\t\t})\n\t})\n}\n\nfunc TestObservedLookup(t *testing.T) {\n\n\tip1 := \"200.17.0.0/17\"\n\tip2 := \"200.18.0.0/17\"\n\tip3 := \"200.0.0.0/9\"\n\tvar (\n\t\trulesWithObservation = policy.IPRuleList{\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{ip1},\n\t\t\t\tPorts:     []string{\"401\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:        policy.Accept,\n\t\t\t\t\tObserveAction: policy.ObserveContinue,\n\t\t\t\t\tPolicyID:      \"observed-continue-tcp200.17/17\"},\n\t\t\t},\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{ip2},\n\t\t\t\tPorts:     []string{\"401\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:        policy.Accept,\n\t\t\t\t\tObserveAction: policy.ObserveApply,\n\t\t\t\t\tPolicyID:      \"observed-applied-tcp200.18/17\"},\n\t\t\t},\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{ip3},\n\t\t\t\tPorts:     []string{\"401\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"tcp200/9\"},\n\t\t\t},\n\t\t}\n\t)\n\n\tConvey(\"Given a good DB\", t, func() {\n\t\ta := 
newACL()\n\t\tSo(a, ShouldNotBeNil)\n\t\tfor _, r := range rulesWithObservation {\n\t\t\terr := a.addRule(r)\n\t\t\tSo(err, ShouldBeNil)\n\t\t}\n\n\t\t// Ensure all the elements are there in the cache\n\t\tcidrs := []string{ip1, ip2, ip3}\n\n\t\tfor _, cidr := range cidrs {\n\t\t\tip, ipnet, _ := net.ParseCIDR(cidr)\n\t\t\tsize, _ := ipnet.Mask.Size()\n\t\t\t_, ok := a.tcpCache.Get(ip, size)\n\t\t\tSo(ok, ShouldEqual, true)\n\t\t}\n\n\t\tConvey(\"When I lookup for a matching address and a port range, I should get the right action and observed action\", func() {\n\t\t\tip := net.ParseIP(\"200.17.0.1\")\n\t\t\tport := uint16(401)\n\t\t\tr, p, err := a.getMatchingAction(ip.To4(), port, packet.IPProtocolTCP, nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"tcp200/9\")\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"observed-continue-tcp200.17/17\")\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching address and a port range, I should get the observed action as applied\", func() {\n\t\t\tip := net.ParseIP(\"200.18.0.1\")\n\t\t\tport := uint16(401)\n\t\t\tr, p, err := a.getMatchingAction(ip.To4(), port, packet.IPProtocolTCP, nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"observed-applied-tcp200.18/17\")\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"observed-applied-tcp200.18/17\")\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching address and a port range with an already reported action of reject, I should get the observed action as applied\", func() {\n\t\t\tip := net.ParseIP(\"200.18.0.1\")\n\t\t\tport := uint16(401)\n\t\t\tpreReported := &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Reject,\n\t\t\t\tPolicyID: \"preReportedPolicyID\",\n\t\t\t}\n\t\t\tr, p, err := a.getMatchingAction(ip.To4(), port, packet.IPProtocolTCP, preReported)\n\t\t\tSo(err, 
ShouldBeNil)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"observed-applied-tcp200.18/17\")\n\t\t\tSo(r.Action, ShouldEqual, policy.Reject)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"preReportedPolicyID\")\n\t\t})\n\t})\n}\n\nfunc TestNomatchLookup(t *testing.T) {\n\n\tip1 := \"200.17.0.0/16\"\n\tip2 := \"200.18.0.0/16\"\n\tip3 := \"200.0.0.0/8\"\n\tvar (\n\t\trulesWithNomatch = policy.IPRuleList{\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"!\" + ip1},\n\t\t\t\tPorts:     []string{\"401\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"nomatch-tcp200.17/16\"},\n\t\t\t},\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"!\" + ip2},\n\t\t\t\tPorts:     []string{\"401\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"nomatch-tcp200.18/16\"},\n\t\t\t},\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{ip3},\n\t\t\t\tPorts:     []string{\"401\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"tcp200/8\"},\n\t\t\t},\n\t\t}\n\t)\n\n\tConvey(\"Given a good DB\", t, func() {\n\t\ta := newACL()\n\t\tSo(a, ShouldNotBeNil)\n\t\tfor _, r := range rulesWithNomatch {\n\t\t\terr := a.addRule(r)\n\t\t\tSo(err, ShouldBeNil)\n\t\t}\n\n\t\t// Ensure all the elements are there in the cache\n\t\tcidrs := []string{ip1, ip2, ip3}\n\n\t\tfor _, cidr := range cidrs {\n\t\t\tip, ipnet, _ := net.ParseCIDR(cidr)\n\t\t\tsize, _ := ipnet.Mask.Size()\n\t\t\t_, ok := a.tcpCache.Get(ip, size)\n\t\t\tSo(ok, ShouldEqual, true)\n\t\t}\n\n\t\tConvey(\"When I lookup for a nomatch address and a port range, I should get nomatch\", func() {\n\t\t\tip := net.ParseIP(\"200.17.0.1\")\n\t\t\tport := uint16(401)\n\t\t\t_, _, err := a.getMatchingAction(ip.To4(), 
port, packet.IPProtocolTCP, nil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for another nomatch address and a port range, I should get nomatch\", func() {\n\t\t\tip := net.ParseIP(\"200.18.0.1\")\n\t\t\tport := uint16(401)\n\t\t\t_, _, err := a.getMatchingAction(ip.To4(), port, packet.IPProtocolTCP, nil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching address and a port range, I should get the accept action\", func() {\n\t\t\tip := net.ParseIP(\"200.19.0.1\")\n\t\t\tport := uint16(401)\n\t\t\tr, p, err := a.getMatchingAction(ip.To4(), port, packet.IPProtocolTCP, nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"tcp200/8\")\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"tcp200/8\")\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/acls/aclcache.go",
    "content": "package acls\n\nimport (\n\t\"errors\"\n\t\"net\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// ACLCache holds all the ACLS in an internal DB\n// map[prefixes][subnets] -> list of ports with their actions\ntype ACLCache struct {\n\treject  *acl\n\taccept  *acl\n\tobserve *acl\n}\n\n// NewACLCache a new ACL cache\nfunc NewACLCache() *ACLCache {\n\treturn &ACLCache{\n\t\treject:  newACL(),\n\t\taccept:  newACL(),\n\t\tobserve: newACL(),\n\t}\n}\n\n// AddRule adds a single rule to the ACL Cache\nfunc (c *ACLCache) AddRule(rule policy.IPRule) (err error) {\n\n\tif rule.Policy.ObserveAction.ObserveApply() {\n\t\treturn c.observe.addRule(rule)\n\t}\n\n\tif rule.Policy.Action.Accepted() {\n\t\treturn c.accept.addRule(rule)\n\t}\n\n\treturn c.reject.addRule(rule)\n}\n\n// AddRuleList adds a list of rules to the cache\nfunc (c *ACLCache) AddRuleList(rules policy.IPRuleList) (err error) {\n\n\tfor _, rule := range rules {\n\t\tif err = c.AddRule(rule); err != nil {\n\t\t\treturn\n\t\t}\n\t}\n\n\treturn\n}\n\n// RemoveRulesForAddress is going to remove all rules for the provided address, protocol and ports.\nfunc (c *ACLCache) RemoveRulesForAddress(address *Address, protocol string, ports []string, policy *policy.FlowPolicy) error {\n\n\tif err := c.reject.removeFromCache(address.IP, address.Mask, address.NoMatch, protocol, ports, policy); err != nil {\n\t\treturn err\n\t}\n\tif err := c.accept.removeFromCache(address.IP, address.Mask, address.NoMatch, protocol, ports, policy); err != nil {\n\t\treturn err\n\t}\n\tif err := c.observe.removeFromCache(address.IP, address.Mask, address.NoMatch, protocol, ports, policy); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// RemoveIPMask removes the entries indexed with (ip, mask). 
This is an idempotent operation\n// and thus does not return an error\nfunc (c *ACLCache) RemoveIPMask(ip net.IP, mask int) {\n\n\tc.reject.removeIPMask(ip, mask)\n\tc.accept.removeIPMask(ip, mask)\n\tc.observe.removeIPMask(ip, mask)\n}\n\n// GetMatchingAction gets the action from the acl cache\nfunc (c *ACLCache) GetMatchingAction(ip net.IP, port uint16, proto uint8, defaultFlowPolicy *policy.FlowPolicy) (report *policy.FlowPolicy, packet *policy.FlowPolicy, err error) {\n\n\treport, packet, err = c.reject.getMatchingAction(ip, port, proto, report)\n\tif err == nil {\n\t\treturn\n\t}\n\n\treport, packet, err = c.accept.getMatchingAction(ip, port, proto, report)\n\tif err == nil {\n\t\treturn\n\t}\n\n\treport, packet, err = c.observe.getMatchingAction(ip, port, proto, report)\n\tif err == nil {\n\t\treturn\n\t}\n\n\tif report == nil {\n\t\treport = defaultFlowPolicy\n\t}\n\n\tif packet == nil {\n\t\tpacket = defaultFlowPolicy\n\t}\n\n\tif defaultFlowPolicy.Action.Accepted() {\n\t\treturn report, packet, nil\n\t}\n\n\treturn report, packet, errors.New(\"no match\")\n}\n\n// GetMatchingICMPAction gets the action based on icmp policy\nfunc (c *ACLCache) GetMatchingICMPAction(ip net.IP, icmpType, icmpCode int8, defaultFlowPolicy *policy.FlowPolicy) (report *policy.FlowPolicy, packet *policy.FlowPolicy, err error) {\n\n\treport, packet, err = c.reject.matchICMPRule(ip, icmpType, icmpCode)\n\tif err == nil {\n\t\treturn\n\t}\n\n\treport, packet, err = c.accept.matchICMPRule(ip, icmpType, icmpCode)\n\tif err == nil {\n\t\treturn\n\t}\n\n\treport, packet, err = c.observe.matchICMPRule(ip, icmpType, icmpCode)\n\tif err == nil {\n\t\treturn\n\t}\n\n\tif report == nil {\n\t\treport = defaultFlowPolicy\n\t}\n\n\tif packet == nil {\n\t\tpacket = defaultFlowPolicy\n\t}\n\n\tif defaultFlowPolicy.Action.Accepted() {\n\t\treturn report, packet, nil\n\t}\n\n\treturn report, packet, errors.New(\"no match\")\n}\n"
  },
  {
    "path": "controller/internal/enforcer/acls/aclcache_test.go",
    "content": "// +build !windows\n\npackage acls\n\nimport (\n\t\"net\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nvar catchAllPolicy = &policy.FlowPolicy{Action: policy.Reject | policy.Log, PolicyID: \"default\", ServiceID: \"default\"}\n\nfunc TestEmptyACLCacheLookup(t *testing.T) {\n\n\tConvey(\"Given an empty ACL Cache\", t, func() {\n\t\tc := NewACLCache()\n\t\tConvey(\"When I lookup for a matching address but failed port, I should get reject\", func() {\n\t\t\tip := net.ParseIP(\"192.168.100.1\")\n\t\t\tport := uint16(600)\n\t\t\ta, p, err := c.GetMatchingAction(ip.To4(), port, packet.IPProtocolTCP, catchAllPolicy)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(a.Action&policy.Reject, ShouldEqual, policy.Reject)\n\t\t\tSo(a.PolicyID, ShouldEqual, \"default\")\n\t\t\tSo(p.Action&policy.Reject, ShouldEqual, policy.Reject)\n\t\t\tSo(p.ServiceID, ShouldEqual, \"default\")\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching address but failed port, I should get accept\", func() {\n\t\t\tip := net.ParseIP(\"192.168.100.1\")\n\t\t\tport := uint16(600)\n\t\t\tdefaultFlowPolcy := &policy.FlowPolicy{Action: policy.Accept | policy.Log, PolicyID: \"default\", ServiceID: \"default\"}\n\t\t\ta, p, err := c.GetMatchingAction(ip.To4(), port, packet.IPProtocolTCP, defaultFlowPolcy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(a.Action&policy.Accept, ShouldEqual, policy.Accept)\n\t\t\tSo(a.PolicyID, ShouldEqual, \"default\")\n\t\t\tSo(p.Action&policy.Accept, ShouldEqual, policy.Accept)\n\t\t\tSo(p.ServiceID, ShouldEqual, \"default\")\n\t\t})\n\t})\n}\n\nfunc TestRejectPrioritizedOverAcceptCacheLookup(t *testing.T) {\n\n\trules = policy.IPRuleList{\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"172.0.0.0/8\"},\n\t\t\tPorts:     []string{\"1\"},\n\t\t\tProtocols: 
[]string{constants.TCPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"tcp172/8\"},\n\t\t},\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"0.0.0.0/0\"},\n\t\t\tPorts:     []string{\"1\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Reject,\n\t\t\t\tPolicyID: \"catchAllDrop\"},\n\t\t},\n\t}\n\n\tConvey(\"Given an ACL Cache with accept and reject rules\", t, func() {\n\t\tc := NewACLCache()\n\t\tSo(c, ShouldNotBeNil)\n\t\terr := c.AddRuleList(rules)\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"When I lookup for a matching address to both accept and reject rule, I should get reject\", func() {\n\t\t\tip := net.ParseIP(\"172.1.1.1\")\n\t\t\tport := uint16(1)\n\t\t\ta, p, err := c.GetMatchingAction(ip.To4(), port, packet.IPProtocolTCP, catchAllPolicy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(a.Action, ShouldEqual, policy.Reject)\n\t\t\tSo(a.PolicyID, ShouldEqual, \"catchAllDrop\")\n\t\t\tSo(p.Action, ShouldEqual, policy.Reject)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"catchAllDrop\")\n\t\t})\n\t})\n}\n\nfunc TestEmptyACLWithObserveContinueCacheLookup(t *testing.T) {\n\n\trules = policy.IPRuleList{\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"0.0.0.0/0\"},\n\t\t\tPorts:     []string{\"1\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:        policy.Accept,\n\t\t\t\tObserveAction: policy.ObserveContinue,\n\t\t\t\tPolicyID:      \"ObserveAcceptContinue\"},\n\t\t},\n\t}\n\n\tConvey(\"Given an empty ACL Cache\", t, func() {\n\t\tc := NewACLCache()\n\t\tSo(c, ShouldNotBeNil)\n\t\terr := c.AddRuleList(rules)\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"When I lookup for a matching address, I should get accept\", func() {\n\t\t\tip := net.ParseIP(\"192.168.100.1\")\n\t\t\tport := uint16(1)\n\t\t\ta, p, err := c.GetMatchingAction(ip.To4(), port, packet.IPProtocolTCP, catchAllPolicy)\n\t\t\tSo(err, 
ShouldNotBeNil)\n\t\t\tSo(a.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(a.PolicyID, ShouldEqual, \"ObserveAcceptContinue\")\n\t\t\tSo(p.Action&policy.Reject, ShouldEqual, policy.Reject)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"default\")\n\t\t})\n\t})\n}\n\nfunc TestEmptyACLWithObserveApplyCacheLookup(t *testing.T) {\n\n\trules = policy.IPRuleList{\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"0.0.0.0/0\"},\n\t\t\tPorts:     []string{\"1\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:        policy.Accept,\n\t\t\t\tObserveAction: policy.ObserveApply,\n\t\t\t\tPolicyID:      \"observeAcceptApply\"},\n\t\t},\n\t}\n\n\tConvey(\"Given an empty ACL Cache\", t, func() {\n\t\tc := NewACLCache()\n\t\tSo(c, ShouldNotBeNil)\n\t\terr := c.AddRuleList(rules)\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"When I lookup for a matching address, I should get accept\", func() {\n\t\t\tip := net.ParseIP(\"192.168.100.1\")\n\t\t\tport := uint16(1)\n\t\t\ta, p, err := c.GetMatchingAction(ip.To4(), port, packet.IPProtocolTCP, catchAllPolicy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(a.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(a.PolicyID, ShouldEqual, \"observeAcceptApply\")\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"observeAcceptApply\")\n\t\t})\n\t})\n}\n\nfunc TestObserveContinueApplyCacheLookup(t *testing.T) {\n\n\trules = policy.IPRuleList{\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"172.1.0.0/16\"},\n\t\t\tPorts:     []string{\"1\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:        policy.Reject,\n\t\t\t\tObserveAction: policy.ObserveContinue,\n\t\t\t\tPolicyID:      \"observeRejectContinue-172.1/16\"},\n\t\t},\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"172.0.0.0/8\"},\n\t\t\tPorts:     []string{\"1\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   
policy.Accept,\n\t\t\t\tPolicyID: \"tcp172/8\"},\n\t\t},\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"172.0.0.0/8\"},\n\t\t\tPorts:     []string{\"1\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:        policy.Accept,\n\t\t\t\tObserveAction: policy.ObserveApply,\n\t\t\t\tPolicyID:      \"observeRejectApply\"},\n\t\t},\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"172.0.0.0/8\"},\n\t\t\tPorts:     []string{\"1\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:        policy.Reject,\n\t\t\t\tObserveAction: policy.ObserveContinue,\n\t\t\t\tPolicyID:      \"observeRejectContinue\"},\n\t\t},\n\t}\n\n\tConvey(\"Given an ACL Cache with accept observe-apply and observe-continue rules for same prefix\", t, func() {\n\t\tc := NewACLCache()\n\t\tSo(c, ShouldNotBeNil)\n\t\terr := c.AddRuleList(rules)\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"When I lookup for a matching address to /16, I should get report reject and packet accept and ignore observe-apply rule\", func() {\n\t\t\tip := net.ParseIP(\"172.1.1.1\")\n\t\t\tport := uint16(1)\n\t\t\ta, p, err := c.GetMatchingAction(ip.To4(), port, packet.IPProtocolTCP, catchAllPolicy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(a.Action, ShouldEqual, policy.Reject)\n\t\t\tSo(a.PolicyID, ShouldEqual, \"observeRejectContinue-172.1/16\")\n\t\t\t// So(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"tcp172/8\")\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching address to /8, I should get report reject and packet accept and ignore observe-apply rule\", func() {\n\t\t\tip := net.ParseIP(\"172.2.1.1\")\n\t\t\tport := uint16(1)\n\t\t\ta, p, err := c.GetMatchingAction(ip.To4(), port, packet.IPProtocolTCP, catchAllPolicy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(a.Action, ShouldEqual, policy.Reject)\n\t\t\tSo(a.PolicyID, ShouldEqual, \"observeRejectContinue\")\n\t\t\tSo(p.Action, ShouldEqual, 
policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"tcp172/8\")\n\t\t})\n\t})\n}\n\nfunc TestAcceptWithNomatchCacheLookup(t *testing.T) {\n\n\trules = policy.IPRuleList{\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"0.0.0.0/1\", \"!10.10.10.0/24\", \"128.0.0.0/1\", \"!10.0.0.0/8\", \"10.10.0.0/16\"},\n\t\t\tPorts:     []string{\"0:65535\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction: policy.Accept,\n\t\t\t},\n\t\t},\n\t}\n\n\tConvey(\"Given an ACL Cache with accept policy with some nomatch addresses\", t, func() {\n\t\tc := NewACLCache()\n\t\tSo(c, ShouldNotBeNil)\n\t\terr := c.AddRuleList(rules)\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"When I lookup address within nomatch outer but also within match inner, I should get accept\", func() {\n\t\t\tip := net.ParseIP(\"10.10.2.100\")\n\t\t\tport := uint16(443)\n\t\t\ta, p, err := c.GetMatchingAction(ip.To4(), port, packet.IPProtocolTCP, catchAllPolicy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(a.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t})\n\n\t\tConvey(\"When I lookup address within nomatch, I should get no match\", func() {\n\t\t\tip := net.ParseIP(\"10.10.10.100\")\n\t\t\tport := uint16(443)\n\t\t\t_, _, err := c.GetMatchingAction(ip.To4(), port, packet.IPProtocolTCP, catchAllPolicy)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup address within nomatch outer and not also within match inner, I should get no match\", func() {\n\t\t\tip := net.ParseIP(\"10.4.10.100\")\n\t\t\tport := uint16(443)\n\t\t\t_, _, err := c.GetMatchingAction(ip.To4(), port, packet.IPProtocolTCP, catchAllPolicy)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup address within match outer and not also within match inner, I should get accept\", func() {\n\t\t\tip := net.ParseIP(\"192.168.10.100\")\n\t\t\tport := uint16(443)\n\t\t\ta, p, err := c.GetMatchingAction(ip.To4(), port, packet.IPProtocolTCP, 
catchAllPolicy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(a.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t})\n\t})\n}\n\nfunc TestRemoveRules(t *testing.T) {\n\n\tConvey(\"Given an ACL Cache with some rules\", t, func() {\n\t\tip := net.ParseIP(\"172.1.0.0\")\n\t\tSo(ip, ShouldNotBeNil)\n\t\tc := NewACLCache()\n\t\tSo(c, ShouldNotBeNil)\n\t\terr := c.AddRuleList(policy.IPRuleList{\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"172.1.0.0/16\"},\n\t\t\t\tPorts:     []string{\"1\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Reject,\n\t\t\t\t\tPolicyID: \"reject\",\n\t\t\t\t},\n\t\t\t},\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"172.1.0.0/16\"},\n\t\t\t\tPorts:     []string{\"1\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tObserveAction: policy.ObserveApply,\n\t\t\t\t\tPolicyID:      \"observeApply\",\n\t\t\t\t},\n\t\t\t},\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"172.1.0.0/16\"},\n\t\t\t\tPorts:     []string{\"1\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"accept\",\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tSo(err, ShouldBeNil)\n\t\tval, ok := c.reject.tcpCache.Get(ip, 16)\n\t\tSo(ok, ShouldBeTrue)\n\t\tSo(val.(portActionList), ShouldNotBeEmpty)\n\t\tval, ok = c.observe.tcpCache.Get(ip, 16)\n\t\tSo(ok, ShouldBeTrue)\n\t\tSo(val.(portActionList), ShouldNotBeEmpty)\n\t\tval, ok = c.accept.tcpCache.Get(ip, 16)\n\t\tSo(ok, ShouldBeTrue)\n\t\tSo(val.(portActionList), ShouldNotBeEmpty)\n\n\t\tConvey(\"Then I should error if I pass unparseable rules\", func() {\n\t\t\terr := c.RemoveRulesForAddress(\n\t\t\t\t&Address{IP: ip, Mask: 16, NoMatch: false},\n\t\t\t\tconstants.TCPProtoNum,\n\t\t\t\t[]string{\"invalid\"},\n\t\t\t\t&policy.FlowPolicy{\n\t\t\t\t\tAction:   
policy.Reject,\n\t\t\t\t\tPolicyID: \"reject\",\n\t\t\t\t},\n\t\t\t)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Then I should be able to remove the rules\", func() {\n\t\t\terr := c.RemoveRulesForAddress(\n\t\t\t\t&Address{IP: ip, Mask: 16, NoMatch: false},\n\t\t\t\tconstants.TCPProtoNum,\n\t\t\t\t[]string{\"1\"},\n\t\t\t\t&policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Reject,\n\t\t\t\t\tPolicyID: \"reject\",\n\t\t\t\t},\n\t\t\t)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = c.RemoveRulesForAddress(\n\t\t\t\t&Address{IP: ip, Mask: 16, NoMatch: false},\n\t\t\t\tconstants.TCPProtoNum,\n\t\t\t\t[]string{\"1\"},\n\t\t\t\t&policy.FlowPolicy{\n\t\t\t\t\tObserveAction: policy.ObserveApply,\n\t\t\t\t\tPolicyID:      \"observeApply\",\n\t\t\t\t},\n\t\t\t)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = c.RemoveRulesForAddress(\n\t\t\t\t&Address{IP: ip, Mask: 16, NoMatch: false},\n\t\t\t\tconstants.TCPProtoNum,\n\t\t\t\t[]string{\"1\"},\n\t\t\t\t&policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"accept\",\n\t\t\t\t},\n\t\t\t)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tval, ok := c.reject.tcpCache.Get(ip, 16)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(val.(portActionList), ShouldBeEmpty)\n\t\t\tval, ok = c.observe.tcpCache.Get(ip, 16)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(val.(portActionList), ShouldBeEmpty)\n\t\t\tval, ok = c.accept.tcpCache.Get(ip, 16)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(val.(portActionList), ShouldBeEmpty)\n\t\t})\n\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/acls/icmpacl.go",
    "content": "package acls\n\nimport (\n\t\"net\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\ntype icmpRule struct {\n\tbaseRule           string\n\tlistOfDisjunctives []string\n\tpolicy             *policy.FlowPolicy\n}\n\nfunc (rule *icmpRule) match(icmpType, icmpCode int8) (*policy.FlowPolicy, bool) {\n\n\ttype evaluator func(int8) bool\n\n\tprocessList := func(vs []string, f func(string) evaluator) []evaluator {\n\t\tvals := make([]evaluator, len(vs))\n\n\t\tfor i, v := range vs {\n\t\t\tvals[i] = f(v)\n\t\t}\n\n\t\treturn vals\n\t}\n\n\tgenCodes := func(val string) evaluator {\n\n\t\tgenCode := func(v string) evaluator {\n\n\t\t\tswitch splits := strings.Split(v, \":\"); len(splits) {\n\t\t\tcase 1:\n\t\t\t\tnumVal, _ := strconv.Atoi(v)\n\t\t\t\treturn func(input int8) bool { return input == int8(numVal) }\n\t\t\tdefault:\n\t\t\t\tmin := splits[0]\n\t\t\t\tmax := splits[1]\n\n\t\t\t\tminVal, _ := strconv.Atoi(min)\n\t\t\t\tmaxVal, _ := strconv.Atoi(max)\n\n\t\t\t\treturn func(input int8) bool { return input >= int8(minVal) && input <= int8(maxVal) }\n\t\t\t}\n\t\t}\n\n\t\tsplits := strings.Split(val, \",\")\n\t\tvals := processList(splits, genCode)\n\n\t\treturn func(input int8) bool {\n\t\t\tresult := false\n\t\t\tfor _, v := range vals {\n\t\t\t\tresult = result || v(input)\n\t\t\t}\n\n\t\t\treturn result\n\t\t}\n\t}\n\n\tprocessSingleTypeCode := func(icmpTypeCode string) (evaluator, evaluator) {\n\t\tsplits := strings.Split(icmpTypeCode, \"/\")\n\n\t\tvar typeEval evaluator\n\t\tvar codeEval evaluator\n\n\t\ttypeEval = func(val int8) bool { return true }\n\t\tcodeEval = func(val int8) bool { return true }\n\n\t\tfor i, val := range splits {\n\t\t\tswitch i {\n\t\t\tcase 0:\n\t\t\tcase 1:\n\t\t\t\tcodeVal, _ := strconv.Atoi(val)\n\t\t\t\ttypeEval = func(input int8) bool { return input == int8(codeVal) }\n\t\t\tcase 2:\n\t\t\t\tcodeEval = genCodes(val)\n\t\t\t}\n\t\t}\n\n\t\treturn typeEval, 
codeEval\n\t}\n\n\tmatches := func(icmpType, icmpCode int8, icmpTypeCode string) bool {\n\t\ttypeMatch, codeMatch := processSingleTypeCode(icmpTypeCode)\n\t\treturn typeMatch(icmpType) && codeMatch(icmpCode)\n\t}\n\n\tif !matches(icmpType, icmpCode, rule.baseRule) {\n\t\treturn rule.policy, false\n\t}\n\n\taction := true\n\n\tfor _, r := range rule.listOfDisjunctives {\n\t\taction = false\n\t\tif matches(icmpType, icmpCode, r) {\n\t\t\treturn rule.policy, true\n\t\t}\n\t}\n\n\treturn rule.policy, action\n}\n\nfunc (a *acl) matchICMPRule(ip net.IP, icmpType int8, icmpCode int8) (*policy.FlowPolicy, *policy.FlowPolicy, error) {\n\n\tvar report *policy.FlowPolicy\n\tvar match bool\n\n\tlookup := func(val interface{}) bool {\n\t\tif val != nil {\n\t\t\ticmpRules := val.([]*icmpRule)\n\t\t\tfor _, icmpRule := range icmpRules {\n\t\t\t\treport, match = icmpRule.match(icmpType, icmpCode)\n\t\t\t\tif match {\n\t\t\t\t\treturn true\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn false\n\t\t}\n\n\t\treturn false\n\t}\n\n\ta.icmpCache.RunFuncOnLpmIP(ip, lookup)\n\n\tif !match {\n\t\treturn nil, nil, errNotFound\n\t}\n\n\treturn report, report, nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/acls/ports.go",
    "content": "package acls\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// ErrNoMatch is error returned when no match is found.\nvar ErrNoMatch = errors.New(\"No Match\")\n\n// portAction captures the minimum and maximum ports for an action\ntype portAction struct {\n\tmin     uint16\n\tmax     uint16\n\tpolicy  *policy.FlowPolicy\n\tnomatch bool\n}\n\n// portActionList is a list of Port Actions\ntype portActionList []*portAction\n\n// newPortAction parses a port spec and creates the action\nfunc newPortAction(tcpport string, policy *policy.FlowPolicy, nomatch bool) (*portAction, error) {\n\n\tp := &portAction{}\n\tif strings.Contains(tcpport, \":\") {\n\t\tparts := strings.Split(tcpport, \":\")\n\t\tif len(parts) != 2 {\n\t\t\treturn nil, fmt.Errorf(\"invalid port: %s\", tcpport)\n\t\t}\n\n\t\tport, err := strconv.Atoi(parts[0])\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tp.min = uint16(port)\n\n\t\tport, err = strconv.Atoi(parts[1])\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tp.max = uint16(port)\n\n\t} else {\n\t\tport, err := strconv.Atoi(tcpport)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tp.min = uint16(port)\n\t\tp.max = p.min\n\t}\n\n\tif p.min > p.max {\n\t\treturn nil, errors.New(\"min port is greater than max port\")\n\t}\n\n\tp.policy = policy\n\tp.nomatch = nomatch\n\n\treturn p, nil\n}\n\nfunc (p *portActionList) lookup(port uint16, preReported *policy.FlowPolicy) (report *policy.FlowPolicy, packet *policy.FlowPolicy, err error) {\n\n\treport = preReported\n\n\t// Scan the ports - TODO: better algorithm needed here\n\tfor _, pa := range *p {\n\t\tif port >= pa.min && port <= pa.max {\n\n\t\t\tif pa.nomatch {\n\t\t\t\treturn report, packet, errNoMatchFromRule\n\t\t\t}\n\n\t\t\t// Check observed policies.\n\t\t\tif pa.policy.ObserveAction.Observed() {\n\t\t\t\tif report == nil {\n\t\t\t\t\treport = pa.policy\n\t\t\t\t}\n\t\t\t\tif 
pa.policy.ObserveAction.ObserveContinue() {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tpacket = pa.policy\n\t\t\t\treturn report, packet, nil\n\t\t\t}\n\n\t\t\tpacket = pa.policy\n\t\t\tif report == nil {\n\t\t\t\treport = packet\n\t\t\t}\n\t\t\treturn report, packet, nil\n\t\t}\n\t}\n\n\treturn report, packet, ErrNoMatch\n}\n"
  },
  {
    "path": "controller/internal/enforcer/acls/ports_test.go",
    "content": "// +build !windows\n\npackage acls\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nfunc TestEmptyPortListLookup(t *testing.T) {\n\n\tConvey(\"Given an empty port action list\", t, func() {\n\t\tpl := &portActionList{}\n\n\t\tConvey(\"When I lookup for a matching port, I should not get any result\", func() {\n\t\t\tr, p, err := pl.lookup(10, nil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(r, ShouldBeNil)\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\t})\n}\n\nfunc TestPortListLookup(t *testing.T) {\n\n\trule := policy.IPRule{\n\t\tAddresses: []string{\"172.0.0.0/8\"},\n\t\tPorts:     []string{\"1:999\"},\n\t\tProtocols: []string{\"tcp\"},\n\t\tPolicy: &policy.FlowPolicy{\n\t\t\tAction:   policy.Accept,\n\t\t\tPolicyID: \"portMatch\",\n\t\t},\n\t}\n\n\tConvey(\"Given a non-empty port action list\", t, func() {\n\t\tvar pl portActionList\n\t\tfor _, port := range rule.Ports {\n\t\t\tpa, err := newPortAction(port, rule.Policy, false)\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(pa, ShouldNotBeNil)\n\n\t\t\tpl = append(pl, pa)\n\t\t}\n\n\t\tConvey(\"When I lookup for a non matching port, I should get error\", func() {\n\t\t\tr, p, err := pl.lookup(0, nil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(r, ShouldBeNil)\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a non matching port, I should get error but get the unmodified reported flow input\", func() {\n\t\t\tr, p, err := pl.lookup(0, &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"portPreMatch\"},\n\t\t\t)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(r, ShouldNotBeNil)\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"portPreMatch\")\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching port, I should get accept\", func() {\n\t\t\tr, p, err := pl.lookup(10, nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(r.Action, ShouldEqual, 
policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"portMatch\")\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"portMatch\")\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching port, and a report action, packet must be reported with no error\", func() {\n\t\t\tr, p, err := pl.lookup(10, &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"portPreMatch\"},\n\t\t\t)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(r, ShouldNotBeNil)\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"portPreMatch\")\n\t\t\tSo(p, ShouldNotBeNil)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"portMatch\")\n\t\t})\n\t})\n}\n\nfunc TestPortListLookupObservedPolicyContinue(t *testing.T) {\n\n\trule := policy.IPRule{\n\t\tAddresses: []string{\"172.0.0.0/8\"},\n\t\tPorts:     []string{\"1:999\"},\n\t\tProtocols: []string{\"tcp\"},\n\t\tPolicy: &policy.FlowPolicy{\n\t\t\tObserveAction: policy.ObserveContinue,\n\t\t\tAction:        policy.Accept,\n\t\t\tPolicyID:      \"portMatch\",\n\t\t},\n\t}\n\n\tConvey(\"Given a non-empty port action list\", t, func() {\n\t\tvar pl portActionList\n\t\tfor _, port := range rule.Ports {\n\t\t\tpa, err := newPortAction(port, rule.Policy, false)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(pa, ShouldNotBeNil)\n\n\t\t\tpl = append(pl, pa)\n\t\t}\n\n\t\tConvey(\"When I lookup for a non matching port, I should get error\", func() {\n\t\t\tr, p, err := pl.lookup(0, nil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(r, ShouldBeNil)\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a non matching port, I should get error but get the unmodified reported flow input\", func() {\n\t\t\tr, p, err := pl.lookup(0, &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"portPreMatch\"},\n\t\t\t)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(r, ShouldNotBeNil)\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, 
ShouldEqual, \"portPreMatch\")\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching port with observed policy, I should get report but no packet action and error \", func() {\n\t\t\tr, p, err := pl.lookup(10, nil)\n\t\t\tSo(err, ShouldEqual, ErrNoMatch)\n\t\t\tSo(r.ObserveAction, ShouldEqual, policy.ObserveContinue)\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"portMatch\")\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching port with observed policy and pre-existing report, I should get unmodified report but no packet action and error \", func() {\n\t\t\tr, p, err := pl.lookup(10, &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"portPreMatch\",\n\t\t\t})\n\t\t\tSo(err, ShouldEqual, ErrNoMatch)\n\t\t\tSo(r.ObserveAction, ShouldEqual, policy.ObserveNone)\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"portPreMatch\")\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\t})\n}\n\nfunc TestPortListLookupObservedPolicyApply(t *testing.T) {\n\n\trule := policy.IPRule{\n\t\tAddresses: []string{\"172.0.0.0/8\"},\n\t\tPorts:     []string{\"1:999\"},\n\t\tProtocols: []string{\"tcp\"},\n\t\tPolicy: &policy.FlowPolicy{\n\t\t\tObserveAction: policy.ObserveApply,\n\t\t\tAction:        policy.Accept,\n\t\t\tPolicyID:      \"portMatch\",\n\t\t},\n\t}\n\n\tConvey(\"Given a non-empty port action list\", t, func() {\n\t\tvar pl portActionList\n\t\tfor _, port := range rule.Ports {\n\t\t\tpa, err := newPortAction(port, rule.Policy, false)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(pa, ShouldNotBeNil)\n\n\t\t\tpl = append(pl, pa)\n\t\t}\n\n\t\tConvey(\"When I lookup for a non matching port, I should get error\", func() {\n\t\t\tr, p, err := pl.lookup(0, nil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(r, ShouldBeNil)\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a non matching port, I should get error but get the unmodified reported 
flow input\", func() {\n\t\t\tr, p, err := pl.lookup(0, &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"portPreMatch\"},\n\t\t\t)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(r, ShouldNotBeNil)\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"portPreMatch\")\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching port with observed policy apply, I should get report and packet action and no error \", func() {\n\t\t\tr, p, err := pl.lookup(10, nil)\n\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tSo(r.ObserveAction, ShouldEqual, policy.ObserveApply)\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"portMatch\")\n\n\t\t\tSo(p.ObserveAction, ShouldEqual, policy.ObserveApply)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"portMatch\")\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching port with observed policy and pre-existing report, I should get unmodified report, packet action and no error \", func() {\n\t\t\tr, p, err := pl.lookup(10, &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"portPreMatch\",\n\t\t\t})\n\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tSo(r.ObserveAction, ShouldEqual, policy.ObserveNone)\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"portPreMatch\")\n\n\t\t\tSo(p.ObserveAction, ShouldEqual, policy.ObserveApply)\n\t\t\tSo(p.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(p.PolicyID, ShouldEqual, \"portMatch\")\n\t\t})\n\t})\n}\n\nfunc TestPortListWithNomatchLookup(t *testing.T) {\n\n\trule := policy.IPRule{\n\t\tAddresses: []string{\"0.0.0.0/1\", \"128.0.0.0/1\", \"!172.0.0.0/8\"},\n\t\tPorts:     []string{\"1:999\"},\n\t\tProtocols: []string{\"tcp\"},\n\t\tPolicy: &policy.FlowPolicy{\n\t\t\tAction:   policy.Accept,\n\t\t\tPolicyID: \"portMatch\",\n\t\t},\n\t}\n\n\tConvey(\"Given a non-empty port action list\", t, func() {\n\t\tvar pl 
portActionList\n\t\tfor _, port := range rule.Ports {\n\t\t\tpa, err := newPortAction(port, rule.Policy, true)\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(pa, ShouldNotBeNil)\n\n\t\t\tpl = append(pl, pa)\n\t\t}\n\n\t\tConvey(\"When I lookup for a non matching port, I should get error\", func() {\n\t\t\tr, p, err := pl.lookup(0, nil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(r, ShouldBeNil)\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a non matching port, I should get error but get the unmodified reported flow input\", func() {\n\t\t\tr, p, err := pl.lookup(0, &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"portPreMatch\"},\n\t\t\t)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(r, ShouldNotBeNil)\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"portPreMatch\")\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching port, I should get no match\", func() {\n\t\t\tr, p, err := pl.lookup(10, nil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(r, ShouldBeNil)\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I lookup for a matching port, I should get no match but the unmodified reported flow input\", func() {\n\t\t\tr, p, err := pl.lookup(10, &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"portPreMatch\"},\n\t\t\t)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(r, ShouldNotBeNil)\n\t\t\tSo(r.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(r.PolicyID, ShouldEqual, \"portPreMatch\")\n\t\t\tSo(p, ShouldBeNil)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/acls/utils.go",
    "content": "package acls\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"strconv\"\n\t\"strings\"\n)\n\n// Address is a parsed IP address or CIDR\ntype Address struct {\n\tIP      net.IP\n\tMask    int\n\tNoMatch bool\n}\n\n// ParseAddress parses `address` as an IP or CIDR address - based on the notation that we allow in our backend.\n// If the address is prefixed with a \"!\"\", then the NoMatch attribute will be true.\n// If the Address is of the format \"IP/BitMask\" (e.g. 192.0.2.0/24), then the mask will be set to 24.\n// If the address is of the form \"IP\" (e.g. 192.0.2.1), then the mask will be added automatically.\nfunc ParseAddress(address string) (*Address, error) {\n\tvar mask int\n\tvar err error\n\tparts := strings.Split(address, \"/\")\n\tnomatch := strings.HasPrefix(parts[0], \"!\")\n\tif nomatch {\n\t\tparts[0] = parts[0][1:]\n\t}\n\tip := net.ParseIP(parts[0])\n\tif ip == nil {\n\t\treturn nil, fmt.Errorf(\"invalid ip address: %s\", parts[0])\n\t}\n\n\tif len(parts) == 1 {\n\t\tif ip.To4() != nil {\n\t\t\tmask = 32\n\t\t} else {\n\t\t\tmask = 128\n\t\t}\n\t} else {\n\t\tmask, err = strconv.Atoi(parts[1])\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"invalid mask '%s': %w\", parts[1], err)\n\t\t}\n\t}\n\n\treturn &Address{IP: ip, Mask: mask, NoMatch: nomatch}, nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/acls/utils_test.go",
    "content": "package acls\n\nimport (\n\t\"net\"\n\t\"reflect\"\n\t\"testing\"\n)\n\nfunc TestParseAddress(t *testing.T) {\n\tipv4 := net.ParseIP(\"192.0.2.1\")\n\tif ipv4 == nil {\n\t\tpanic(\"ipv4 address invalid at test prerequisite\")\n\t}\n\tipv6 := net.ParseIP(\"2001:db8::1\")\n\tif ipv6 == nil {\n\t\tpanic(\"ipv6 address invalid at test prerequisite\")\n\t}\n\ttype args struct {\n\t\taddress string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twant    *Address\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"invalid IP address\",\n\t\t\targs: args{\n\t\t\t\taddress: \"invalid IP address\",\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"invalid network mask\",\n\t\t\targs: args{\n\t\t\t\taddress: \"192.0.2.0/invalid\",\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"IPv4 address without mask\",\n\t\t\targs: args{\n\t\t\t\taddress: \"192.0.2.1\",\n\t\t\t},\n\t\t\twant: &Address{\n\t\t\t\tIP:      ipv4,\n\t\t\t\tMask:    32,\n\t\t\t\tNoMatch: false,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"IPv6 address without mask\",\n\t\t\targs: args{\n\t\t\t\taddress: \"2001:db8::1\",\n\t\t\t},\n\t\t\twant: &Address{\n\t\t\t\tIP:      ipv6,\n\t\t\t\tMask:    128,\n\t\t\t\tNoMatch: false,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"IPv4 address with mask\",\n\t\t\targs: args{\n\t\t\t\taddress: \"192.0.2.1/24\",\n\t\t\t},\n\t\t\twant: &Address{\n\t\t\t\tIP:      ipv4,\n\t\t\t\tMask:    24,\n\t\t\t\tNoMatch: false,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"IPv6 address with mask\",\n\t\t\targs: args{\n\t\t\t\taddress: \"2001:db8::1/64\",\n\t\t\t},\n\t\t\twant: &Address{\n\t\t\t\tIP:      ipv6,\n\t\t\t\tMask:    64,\n\t\t\t\tNoMatch: false,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"IPv4 address with mask and nomatch\",\n\t\t\targs: args{\n\t\t\t\taddress: \"!192.0.2.1/24\",\n\t\t\t},\n\t\t\twant: &Address{\n\t\t\t\tIP:      ipv4,\n\t\t\t\tMask:    24,\n\t\t\t\tNoMatch: 
true,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"IPv6 address with mask and nomatch\",\n\t\t\targs: args{\n\t\t\t\taddress: \"!2001:db8::1/64\",\n\t\t\t},\n\t\t\twant: &Address{\n\t\t\t\tIP:      ipv6,\n\t\t\t\tMask:    64,\n\t\t\t\tNoMatch: true,\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot, err := ParseAddress(tt.args.address)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"ParseAddress() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"ParseAddress() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/apiauth/apiauth.go",
    "content": "package apiauth\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/serviceregistry\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/servicetokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\t// DefaultValidity is default service token validity.\n\tDefaultValidity = 60 * time.Second\n\n\t// TriremeOIDCCallbackURI is the callback URI that must be presented by\n\t// any OIDC provider.\n\tTriremeOIDCCallbackURI = \"/aporeto/oidc/callback\"\n)\n\n// Processor is an API Authorization processor.\ntype Processor struct {\n\tpuContext string\n\n\tissuer  string // the issuer ID .. need to get rid of that part with the new tokens\n\tsecrets secrets.Secrets\n\tsync.RWMutex\n}\n\n// New will create a new authorization processor.\nfunc New(contextID string, s secrets.Secrets) *Processor {\n\treturn &Processor{\n\t\tpuContext: contextID,\n\t\tsecrets:   s,\n\t}\n}\n\nfunc (p *Processor) retrieveNetworkContext(originalIP *net.TCPAddr) (*serviceregistry.PortContext, error) {\n\n\treturn serviceregistry.Instance().RetrieveExposedServiceContext(originalIP.IP, originalIP.Port, \"\")\n}\n\nfunc (p *Processor) retrieveApplicationContext(address *net.TCPAddr) (*serviceregistry.ServiceContext, *serviceregistry.DependentServiceData, error) {\n\n\treturn serviceregistry.Instance().RetrieveDependentServiceDataByIDAndNetwork(p.puContext, address.IP, address.Port, \"\")\n}\n\n// UpdateSecrets is called to update the authorizer secrets.\nfunc (p *Processor) UpdateSecrets(s secrets.Secrets) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.secrets = s\n}\n\n// ApplicationRequest processes an application side 
request and returns\n// the token that is associated with this application, together with an\n// error if the request must be rejected.\nfunc (p *Processor) ApplicationRequest(r *Request) (*AppAuthResponse, error) {\n\n\td := &AppAuthResponse{\n\t\tTLSListener: true,\n\t}\n\n\t// Derive the service context for this request. This is another PU\n\t// or some external service. Context is derived based on the original\n\t// destination of the request.\n\tsctx, serviceData, err := p.retrieveApplicationContext(r.OriginalDestination)\n\tif err != nil {\n\t\treturn d, &AuthError{\n\t\t\tstatus:  http.StatusBadGateway,\n\t\t\tmessage: fmt.Sprintf(\"Cannot identify application context: %s\", err),\n\t\t}\n\t}\n\td.PUContext = sctx.PUContext\n\td.ServiceID = serviceData.APICache.ID\n\n\t// First we process network type rules (L3 based decision)\n\t_, netaction, noNetAccessPolicy := sctx.PUContext.ApplicationACLPolicyFromAddr(r.OriginalDestination.IP, uint16(r.OriginalDestination.Port), uint8(packet.IPProtocolTCP))\n\td.NetworkPolicyID = netaction.PolicyID\n\td.NetworkServiceID = netaction.ServiceID\n\tif noNetAccessPolicy == nil && netaction.Action.Rejected() {\n\t\treturn d, &AuthError{\n\t\t\tstatus:  http.StatusNetworkAuthenticationRequired,\n\t\t\tmessage: \"Unauthorized Service - Rejected Outgoing Request by Network Policies\",\n\t\t}\n\t}\n\n\t// For external services we validate policy at the ingress.\n\tif serviceData.APICache.External {\n\t\td.External = true\n\n\t\t// Get the corresponding scopes\n\t\tfound, rule := serviceData.APICache.FindRule(r.Method, r.URL.Path)\n\t\tif !found {\n\t\t\treturn d, &AuthError{\n\t\t\t\tstatus:  http.StatusForbidden,\n\t\t\t\tmessage: \"Unknown or unauthorized service: policy not found\",\n\t\t\t}\n\t\t}\n\t\td.HookMethod = rule.HookMethod\n\t\t// If there is an authorization policy attached to the rule, we must validate\n\t\t// against the identity of the PU.\n\t\tif !rule.Public {\n\t\t\t// Validate the policy based on the scopes of 
the PU.\n\t\t\t// TODO: Add user scopes\n\t\t\tif !serviceData.APICache.MatchClaims(rule.ClaimMatchingRules, append(sctx.PUContext.Identity().GetSlice(), sctx.PUContext.Scopes()...)) {\n\t\t\t\treturn d, &AuthError{\n\t\t\t\t\tstatus:  http.StatusForbidden,\n\t\t\t\t\tmessage: \"Unauthorized service: rejected by policy\",\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\td.Action = policy.Accept | policy.Log\n\t\tif !serviceData.ServiceObject.NoTLSExternalService {\n\t\t\td.Action = d.Action | policy.Encrypt\n\t\t}\n\t\td.TLSListener = !serviceData.ServiceObject.NoTLSExternalService\n\n\t\treturn d, nil\n\t}\n\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\tsecret := p.secrets\n\n\ttoken, err := servicetokens.CreateAndSign(\n\t\tp.issuer,\n\t\tsctx.PUContext.Identity().GetSlice(),\n\t\tsctx.PUContext.Scopes(),\n\t\tsctx.PUContext.ManagementID(),\n\t\tDefaultValidity,\n\t\tsecret.EncodingKey(),\n\t\tnil,\n\t)\n\tif err != nil {\n\t\treturn d, &AuthError{\n\t\t\tstatus:  http.StatusInternalServerError,\n\t\t\tmessage: \"Unable to issue service token\",\n\t\t\terr:     err,\n\t\t}\n\t}\n\n\td.Token = token\n\n\treturn d, nil\n}\n\n// NetworkRequest authorizes a network request and either accepts the request\n// or potentially issues a redirect.\nfunc (p *Processor) NetworkRequest(ctx context.Context, r *Request) (*NetworkAuthResponse, error) {\n\n\t// First retrieve the context and policy for this request. Network\n\t// requests are indexed based on the original destination and port.\n\tpctx, err := p.retrieveNetworkContext(r.OriginalDestination)\n\tif err != nil {\n\t\treturn nil, &AuthError{\n\t\t\tstatus:  http.StatusInternalServerError,\n\t\t\tmessage: \"Internal server error - cannot identify destination policy\",\n\t\t\terr:     err,\n\t\t}\n\t}\n\n\t// Create a basic response. 
We will update this response with information\n\t// as we continue processing.\n\td := &NetworkAuthResponse{\n\t\tPUContext:   pctx.PUContext,\n\t\tServiceID:   pctx.Service.ID,\n\t\tAction:      policy.Reject,\n\t\tSourceType:  collector.EndPointTypeExternalIP,\n\t\tTLSListener: pctx.Service.PrivateTLSListener,\n\t\tNamespace:   pctx.PUContext.ManagementNamespace(),\n\t}\n\n\t// We process first OIDC callbacks. These are the redirects after a user\n\t// has been authorized. We do not apply any network rule checks in this\n\t// case. If the callback is authorized we return the cookie and JWT\n\t// for the user.\n\tif strings.HasPrefix(r.RequestURI, TriremeOIDCCallbackURI) {\n\t\tcallbackResponse, err := pctx.Authorizer.Callback(ctx, r.URL)\n\t\tif err == nil {\n\t\t\td.Action = policy.Accept | policy.Encrypt | policy.Log\n\t\t\td.Redirect = true\n\t\t\td.RedirectURI = callbackResponse.OriginURL\n\t\t\td.Cookie = callbackResponse.Cookie\n\t\t\td.Data = callbackResponse.Data\n\t\t\td.SourceType = collector.EndPointTypeClaims\n\t\t\td.NetworkPolicyID = \"default\"\n\t\t\td.NetworkServiceID = \"default\"\n\t\t}\n\t\treturn d, &AuthError{\n\t\t\tmessage: callbackResponse.Message,\n\t\t\tstatus:  callbackResponse.Status,\n\t\t}\n\t}\n\n\t// We first process the network access rules based on external networks or\n\t// incoming IP addresses. We cannot process yet the Aporeto authorization\n\t// rules until after we decode the claims. The aclPolicy holds the matched\n\t// rules. If the method returns no error we store it in the noNetAccessPolicy\n\t// variable. This indicates that we have found no external network rule that\n\t// allows the request and we must validate the PU to PU rules. 
We will not\n\t// know what to do until after we decode all the incoming claims.\n\t// We perform this function early so that we don't waste CPU cycles with\n\t// processing tokens if the network policy does not allow the connection.\n\taclReportPolicy, aclActualPolicy, noNetAccessPolicy := pctx.PUContext.NetworkACLPolicyFromAddr(\n\t\tr.SourceAddress.IP,\n\t\tuint16(r.OriginalDestination.Port),\n\t\tuint8(packet.IPProtocolTCP),\n\t)\n\td.NetworkPolicyID = aclActualPolicy.PolicyID\n\td.NetworkServiceID = aclActualPolicy.ServiceID\n\n\tif aclActualPolicy.Action.Logged() {\n\t\td.Action = d.Action | policy.Log\n\t}\n\n\tif aclReportPolicy.ObserveAction.Observed() {\n\t\td.ObservedPolicyID = aclReportPolicy.PolicyID\n\t\td.ObservedAction = aclReportPolicy.Action\n\t}\n\n\tif noNetAccessPolicy == nil && aclActualPolicy.Action.Rejected() {\n\t\td.DropReason = collector.PolicyDrop\n\t\td.SourceType = collector.EndPointTypeExternalIP\n\t\treturn d, &AuthError{\n\t\t\tmessage: \"Access denied by network policy\",\n\t\t\tstatus:  http.StatusNetworkAuthenticationRequired,\n\t\t}\n\t}\n\n\t// Retrieve the headers with the key and auth parameters. If the parameters do not\n\t// exist, we will end up with empty values, but processing can continue. The authorizer\n\t// will validate if they are needed or not.\n\ttoken, key := processHeaders(r)\n\n\t// Calculate the user attributes. User attributes can be derived either from a\n\t// token or from a certificate. The authorizer library will parse them. We don't\n\t// care if there are no user credentials. It might be a request from a PU,\n\t// or it might be a request to a public interface. Only if the service mandates\n\t// user credentials, we get the redirect directive.\n\tuserCredentials(ctx, pctx, r, d)\n\n\t// Calculate the Aporeto PU claims by parsing the token if it exists. 
If the token\n\t// is empty the DecodeAporetoClaims method will return no error.\n\tvar aporetoClaims []string\n\tvar pingPayload *policy.PingPayload\n\td.SourcePUID, aporetoClaims, pingPayload, err = pctx.Authorizer.DecodeAporetoClaims(token, key)\n\tif err != nil {\n\t\td.DropReason = collector.PolicyDrop\n\t\treturn d, &AuthError{\n\t\t\tmessage: fmt.Sprintf(\"Invalid Authorization Token: %s\", err),\n\t\t\tstatus:  http.StatusForbidden,\n\t\t}\n\t}\n\n\tif pingPayload != nil && pingPayload.PingID != \"\" {\n\t\td.PingConfig = &PingConfig{\n\t\t\tPingID:      pingPayload.PingID,\n\t\t\tIterationID: pingPayload.IterationID,\n\t\t\tClaims:      aporetoClaims,\n\t\t\tPayloadSize: len(token) + len(key),\n\t\t}\n\t}\n\n\t// If the other side is a PU we will always put the source type as PU.\n\tisPUSource := false\n\tif len(aporetoClaims) > 0 {\n\t\tisPUSource = true\n\t\td.SourceType = collector.EndPointTypePU\n\t}\n\n\t// We need to verify network policy, before validating the API policy. 
If a network\n\t// policy has given us an accept because of IP address based ACLs we proceed anyway.\n\t// This is rather convoluted, but a user might choose to implement network\n\t// policies with ACLs only, and we have to cover this case.\n\tif noNetAccessPolicy != nil || aclReportPolicy.ObserveAction.ObserveApply() {\n\n\t\t// If we have not found an IP based access policy and the other side\n\t\t// is a PU we can visit the network rules based on tag authorization.\n\t\tif isPUSource {\n\t\t\tnetReportPolicy, netActualPolicy := pctx.PUContext.SearchRcvRules(policy.NewTagStoreFromSlice(aporetoClaims))\n\n\t\t\td.NetworkPolicyID = netActualPolicy.PolicyID\n\t\t\td.NetworkServiceID = aclActualPolicy.ServiceID\n\t\t\td.ObservedPolicyID = \"\"\n\t\t\td.ObservedAction = policy.ActionType(0)\n\n\t\t\tif netReportPolicy.ObserveAction.Observed() {\n\t\t\t\td.ObservedPolicyID = netReportPolicy.PolicyID\n\t\t\t\td.ObservedAction = netReportPolicy.Action\n\t\t\t}\n\n\t\t\tif netActualPolicy.Action.Rejected() {\n\t\t\t\td.DropReason = collector.PolicyDrop\n\t\t\t\treturn d, &AuthError{\n\t\t\t\t\tmessage: \"Access not authorized by network policy\",\n\t\t\t\t\tstatus:  http.StatusNetworkAuthenticationRequired,\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\t// If no network access policy and no PU claims, this request\n\t\t\t// is dropped.\n\t\t\td.DropReason = collector.PolicyDrop\n\t\t\treturn d, &AuthError{\n\t\t\t\tmessage: \"Access denied by network policy: no policy found\",\n\t\t\t\tstatus:  http.StatusNetworkAuthenticationRequired,\n\t\t\t}\n\t\t}\n\t} else {\n\t\tif aclActualPolicy.Action.Accepted() {\n\t\t\taporetoClaims = append(aporetoClaims, aclActualPolicy.Labels...)\n\t\t}\n\t}\n\n\t// We can now validate the API authorization. 
This is the final step\n\t// before forwarding.\n\tallClaims := append(aporetoClaims, d.UserAttributes...)\n\taccept, public := pctx.Authorizer.Check(r.Method, r.URL.Path, allClaims)\n\tif !accept && !public {\n\t\t// The authorization check rejected the request and the API is not\n\t\t// public, so the request cannot be accepted.\n\t\td.DropReason = collector.APIPolicyDrop\n\n\t\t// We need to process the redirects here. The reject might be forcing\n\t\t// us to issue a redirect. Redirects are valid only if the source\n\t\t// is a user. It doesn't make sense to redirect a PU.\n\t\t// If the source is not a PU, then ping cannot be enabled.\n\t\tif !isPUSource {\n\t\t\tauthError := &AuthError{\n\t\t\t\tmessage: \"No token presented or invalid token: Please authenticate first\",\n\t\t\t\tstatus:  http.StatusTemporaryRedirect,\n\t\t\t}\n\t\t\tif d.Redirect {\n\t\t\t\td.RedirectURI = pctx.Authorizer.RedirectURI(r.URL.String())\n\t\t\t\treturn d, authError\n\t\t\t} else if len(pctx.Service.UserRedirectOnAuthorizationFail) > 0 {\n\t\t\t\td.RedirectURI = pctx.Service.UserRedirectOnAuthorizationFail + \"?failure_message=authorization\"\n\t\t\t\treturn d, authError\n\t\t\t}\n\t\t}\n\n\t\tzap.L().Debug(\"No match found for the request or authorization Error\",\n\t\t\tzap.String(\"Request\", r.Method+\" \"+r.RequestURI),\n\t\t\tzap.Strings(\"User Attributes\", d.UserAttributes),\n\t\t\tzap.Strings(\"Aporeto Claims\", aporetoClaims),\n\t\t)\n\n\t\treturn d, &AuthError{\n\t\t\tmessage: fmt.Sprintf(\"Unauthorized Access to %s\", r.URL),\n\t\t\tstatus:  http.StatusUnauthorized,\n\t\t}\n\t}\n\n\td.Action = policy.Accept\n\tif r.TLS != nil {\n\t\td.Action = d.Action | policy.Encrypt\n\t}\n\n\tif aclActualPolicy.Action.Logged() {\n\t\td.Action = d.Action | policy.Log\n\t}\n\n\t// We update the request headers with the claims and pass back\n\t// the information.\n\tpctx.Authorizer.UpdateRequestHeaders(r.Header, d.UserAttributes)\n\td.Header = r.Header\n\n\treturn d, 
nil\n\n}\n\n// userCredentials will find all the user credentials in the http request.\n// TODO: In addition to looking at the headers, we need to look at the parameters\n// in case authorization is provided there.\n// It populates the response with the user attributes and with a flag indicating\n// whether a redirect must be performed. If no user credentials are found, it\n// allows processing to proceed, since the request might come from a PU or\n// target a public interface.\nfunc userCredentials(ctx context.Context, pctx *serviceregistry.PortContext, r *Request, d *NetworkAuthResponse) {\n\tif r.TLS == nil {\n\t\treturn\n\t}\n\n\tuserCerts := r.TLS.PeerCertificates\n\n\tvar userToken string\n\tauthToken := r.Header.Get(\"Authorization\")\n\tif len(authToken) < 7 {\n\t\tif r.Cookie != nil {\n\t\t\tuserToken = r.Cookie.Value\n\t\t}\n\t} else {\n\t\tuserToken = strings.TrimPrefix(authToken, \"Bearer \")\n\t}\n\n\tuserAttributes, redirect, refreshedToken, err := pctx.Authorizer.DecodeUserClaims(ctx, pctx.Service.ID, userToken, userCerts)\n\tif err != nil {\n\t\tzap.L().Warn(\"Partially failed to extract and decode user claims\", zap.Error(err))\n\t}\n\n\tif len(userAttributes) > 0 {\n\t\td.SourceType = collector.EndPointTypeClaims\n\t}\n\n\tif refreshedToken != userToken {\n\t\td.Cookie = &http.Cookie{\n\t\t\tName:     \"X-APORETO-AUTH\",\n\t\t\tValue:    refreshedToken,\n\t\t\tHttpOnly: true,\n\t\t\tSecure:   true,\n\t\t\tPath:     \"/\",\n\t\t}\n\t}\n\n\td.UserAttributes = userAttributes\n\td.Redirect = redirect\n}\n\nfunc processHeaders(r *Request) (string, string) {\n\ttoken := r.Header.Get(\"X-APORETO-AUTH\")\n\tif token != \"\" {\n\t\tr.Header.Del(\"X-APORETO-AUTH\")\n\t}\n\tkey := r.Header.Get(\"X-APORETO-KEY\")\n\tif key != \"\" {\n\t\tr.Header.Del(\"X-APORETO-KEY\")\n\t}\n\treturn token, key\n}\n"
  },
  {
    "path": "controller/internal/enforcer/apiauth/apiauth_test.go",
    "content": "// +build !windows\n\npackage apiauth\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\ttriremecommon \"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/serviceregistry\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/testhelper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/servicetokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/usertokens/mockusertokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\nconst (\n\tpolicyID        = \"somepolicy\"\n\trejectPolicyID  = \"somerejectepolicy\"\n\tserviceID       = \"someservice\"\n\trejectServiceID = \"somerejectservice\"\n\tnamespace       = \"somenamespace\"\n\tappLabel        = \"app=web\"\n)\n\nfunc newBaseApplicationServices(ctrl *gomock.Controller, id string, ipAddr string, exposedPortValue, publicPortValue, privatePortValue uint16, external bool) *policy.ApplicationService {\n\n\texposedPort, err := portspec.NewPortSpec(exposedPortValue, exposedPortValue, nil)\n\tSo(err, ShouldBeNil)\n\tpublicPort, err := portspec.NewPortSpec(publicPortValue, publicPortValue, nil)\n\tSo(err, ShouldBeNil)\n\tprivatePort, err := portspec.NewPortSpec(privatePortValue, privatePortValue, nil)\n\tSo(err, ShouldBeNil)\n\n\treturn &policy.ApplicationService{\n\t\tID: id,\n\t\tNetworkInfo: &triremecommon.Service{\n\t\t\tPorts:     exposedPort,\n\t\t\tProtocol:  6,\n\t\t\tAddresses: map[string]struct{}{ipAddr: struct{}{}},\n\t\t},\n\t\tPublicNetworkInfo: 
&triremecommon.Service{\n\t\t\tPorts:     publicPort,\n\t\t\tProtocol:  6,\n\t\t\tAddresses: map[string]struct{}{ipAddr: struct{}{}},\n\t\t},\n\t\tPrivateNetworkInfo: &triremecommon.Service{\n\t\t\tPorts:     privatePort,\n\t\t\tProtocol:  6,\n\t\t\tAddresses: map[string]struct{}{},\n\t\t},\n\t\tType:                 policy.ServiceHTTP,\n\t\tPublicServiceTLSType: policy.ServiceTLSTypeAporeto,\n\t\tExternal:             external,\n\t\tHTTPRules: []*policy.HTTPRule{\n\t\t\t{\n\t\t\t\tURIs:    []string{\"/admin\"},\n\t\t\t\tMethods: []string{\"GET\"},\n\t\t\t\tClaimMatchingRules: [][]string{\n\t\t\t\t\t{appLabel},\n\t\t\t\t},\n\t\t\t\tPublic: false,\n\t\t\t},\n\t\t\t{\n\t\t\t\tURIs:    []string{\"/public\"},\n\t\t\t\tMethods: []string{\"GET\"},\n\t\t\t\tPublic:  true,\n\t\t\t},\n\t\t\t{\n\t\t\t\tURIs:    []string{\"/forbidden\"},\n\t\t\t\tMethods: []string{\"GET\"},\n\t\t\t\tClaimMatchingRules: [][]string{\n\t\t\t\t\t{\"Nobody\"},\n\t\t\t\t},\n\t\t\t\tPublic: false,\n\t\t\t},\n\t\t},\n\t\tUserAuthorizationType:    policy.UserAuthorizationOIDC,\n\t\tUserAuthorizationHandler: mockusertokens.NewMockVerifier(ctrl),\n\t}\n}\n\nfunc newAPIAuthProcessor(ctrl *gomock.Controller) (*serviceregistry.Registry, *pucontext.PUContext, secrets.Secrets) {\n\n\tcontextID := \"test\"\n\tbaseService := newBaseApplicationServices(ctrl, \"base\", \"10.1.1.0/24\", uint16(80), uint16(443), uint16(80), false)\n\texternalService := newBaseApplicationServices(ctrl, \"external\", \"45.0.0.0/8\", uint16(80), uint16(443), uint16(80), true)\n\texternalBadService := newBaseApplicationServices(ctrl, \"external\", \"100.0.0.0/8\", uint16(80), uint16(443), uint16(80), true)\n\n\texposedServices := policy.ApplicationServicesList{baseService}\n\tdependentServices := policy.ApplicationServicesList{baseService, externalService, externalBadService}\n\n\tnetworkACLs := policy.IPRuleList{\n\t\t{\n\t\t\tAddresses: []string{\"10.1.1.0/24\"},\n\t\t\tPorts:     []string{\"80\"},\n\t\t\tProtocols: 
[]string{\"6\"},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:    policy.Accept,\n\t\t\t\tPolicyID:  policyID,\n\t\t\t\tServiceID: serviceID,\n\t\t\t\tLabels:    []string{\"service=external\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tAddresses: []string{\"45.0.0.0/8\"},\n\t\t\tPorts:     []string{\"80\"},\n\t\t\tProtocols: []string{\"6\"},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:    policy.Accept,\n\t\t\t\tPolicyID:  policyID,\n\t\t\t\tServiceID: serviceID,\n\t\t\t\tLabels:    []string{\"service=external\"},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tAddresses: []string{\"100.0.0.0/8\"},\n\t\t\tPorts:     []string{\"80\"},\n\t\t\tProtocols: []string{\"6\"},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:        policy.Reject,\n\t\t\t\tPolicyID:      rejectPolicyID,\n\t\t\t\tServiceID:     rejectServiceID,\n\t\t\t\tObserveAction: policy.ObserveApply,\n\t\t\t\tLabels:        []string{\"service=external\"},\n\t\t\t},\n\t\t},\n\t}\n\n\tapplicationACLs := policy.IPRuleList{\n\t\t{\n\t\t\tAddresses: []string{\"100.0.0.0/8\"},\n\t\t\tPorts:     []string{\"80\"},\n\t\t\tProtocols: []string{\"6\"},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:    policy.Reject,\n\t\t\t\tPolicyID:  rejectPolicyID,\n\t\t\t\tServiceID: rejectServiceID,\n\t\t\t\tLabels:    []string{\"service=external\"},\n\t\t\t},\n\t\t},\n\t}\n\n\tplc := policy.NewPUPolicy(\n\t\tcontextID,\n\t\tnamespace,\n\t\tpolicy.Police,\n\t\tapplicationACLs,\n\t\tnetworkACLs,\n\t\tpolicy.DNSRuleList{},\n\t\tpolicy.TagSelectorList{},\n\t\tpolicy.TagSelectorList{\n\t\t\tpolicy.TagSelector{\n\t\t\t\tClause: []policy.KeyValueOperator{\n\t\t\t\t\t{\n\t\t\t\t\t\tKey:      \"app\",\n\t\t\t\t\t\tValue:    []string{\"web\"},\n\t\t\t\t\t\tOperator: policy.Equal,\n\t\t\t\t\t\tID:       \"somepolicy\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:        policy.Accept,\n\t\t\t\t\tServiceID:     \"pu\" + serviceID,\n\t\t\t\t\tPolicyID:      \"pu\" + policyID,\n\t\t\t\t\tObserveAction: 
policy.ObserveApply,\n\t\t\t\t},\n\t\t\t},\n\t\t\tpolicy.TagSelector{\n\t\t\t\tClause: []policy.KeyValueOperator{\n\t\t\t\t\t{\n\t\t\t\t\t\tKey:      \"app\",\n\t\t\t\t\t\tValue:    []string{\"bad\"},\n\t\t\t\t\t\tOperator: policy.Equal,\n\t\t\t\t\t\tID:       \"rejectpolicy\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Reject,\n\t\t\t\t\tPolicyID: \"reject\" + policyID,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\tpolicy.NewTagStore(),\n\t\tpolicy.NewTagStoreFromSlice([]string{appLabel, \"type=aporeto\"}),\n\t\tnil,\n\t\tnil,\n\t\t0,\n\t\t0,\n\t\texposedServices,\n\t\tdependentServices,\n\t\t[]string{appLabel},\n\t\tpolicy.EnforcerMapping,\n\t\tpolicy.Reject|policy.Log,\n\t\tpolicy.Reject|policy.Log,\n\t)\n\n\tpuInfo := policy.NewPUInfo(contextID, namespace, triremecommon.ContainerPU)\n\tpuInfo.Policy = plc\n\tpctx, err := pucontext.NewPU(contextID, puInfo, nil, time.Second*1000)\n\tSo(err, ShouldBeNil)\n\t_, s, _ := testhelper.NewTestCompactPKISecrets()\n\n\tr := serviceregistry.Instance()\n\t_, err = r.Register(contextID, puInfo, pctx, s)\n\tSo(err, ShouldBeNil)\n\n\treturn r, pctx, s\n}\n\nfunc Test_New(t *testing.T) {\n\tConvey(\"When I create a new processor it should be correctly populated\", t, func() {\n\t\tctrl := gomock.NewController(t)\n\t\t_, _, s := newAPIAuthProcessor(ctrl)\n\t\tp := New(\"test\", s)\n\n\t\tSo(p.puContext, ShouldEqual, \"test\")\n\t\tSo(p.secrets, ShouldEqual, s)\n\t})\n}\n\nfunc Test_ApplicationRequest(t *testing.T) {\n\tConvey(\"Given a valid authorization processor\", t, func() {\n\t\tctrl := gomock.NewController(t)\n\t\t_, pctx, s := newAPIAuthProcessor(ctrl)\n\t\tp := New(\"test\", s)\n\n\t\tConvey(\"Given a request without context, it should error\", func() {\n\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/public\") // nolint\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 
1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"20.1.1.1\"),\n\t\t\t\t\tPort: 8080,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/public\",\n\t\t\t\tHeader:     http.Header{},\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\t_, err := p.ApplicationRequest(r)\n\t\t\tSo(err, ShouldNotBeNil)\n\n\t\t\tauthErr, ok := err.(*AuthError)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(authErr.Status(), ShouldEqual, http.StatusBadGateway)\n\t\t})\n\n\t\tConvey(\"Given a request with valid context that is not external, I should get a token\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/public\") // nolint\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.2\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/public\",\n\t\t\t\tHeader:     http.Header{},\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\tresponse, err := p.ApplicationRequest(r)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(response, ShouldNotBeNil)\n\t\t\tSo(len(response.Token), ShouldBeGreaterThan, 0)\n\t\t\tSo(response.PUContext, ShouldEqual, pctx)\n\t\t\tSo(response.TLSListener, ShouldBeTrue)\n\t\t})\n\n\t\tConvey(\"Given a request for a public external service, I should accept it\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/public\") // nolint\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"45.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/public\",\n\t\t\t\tHeader:     http.Header{},\n\t\t\t\tCookie:     
nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\tresponse, err := p.ApplicationRequest(r)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(response, ShouldNotBeNil)\n\t\t\tSo(len(response.Token), ShouldEqual, 0)\n\t\t\tSo(response.PUContext, ShouldEqual, pctx)\n\t\t\tSo(response.TLSListener, ShouldBeTrue)\n\t\t})\n\n\t\tConvey(\"Given a request for a controlled external service with valid policy, I should accept it\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/admin\") // nolint\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"45.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/admin\",\n\t\t\t\tHeader:     http.Header{},\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\tresponse, err := p.ApplicationRequest(r)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(response, ShouldNotBeNil)\n\t\t\tSo(len(response.Token), ShouldEqual, 0)\n\t\t\tSo(response.PUContext, ShouldEqual, pctx)\n\t\t\tSo(response.TLSListener, ShouldBeTrue)\n\t\t})\n\n\t\tConvey(\"Given a request for a controlled external service with forbidden policy, I should reject it\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/forbidden\") // nolint\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"45.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/forbidden\",\n\t\t\t\tHeader:     http.Header{},\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\t_, err := p.ApplicationRequest(r)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tauthErr, ok := err.(*AuthError)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(authErr.Status(), 
ShouldEqual, http.StatusForbidden)\n\t\t})\n\n\t\tConvey(\"Given a request for a controlled external service with an unknown URI, I should reject it\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/random\") // nolint\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"45.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/random\",\n\t\t\t\tHeader:     http.Header{},\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\t_, err := p.ApplicationRequest(r)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tauthErr, ok := err.(*AuthError)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(authErr.Status(), ShouldEqual, http.StatusForbidden)\n\t\t})\n\n\t\tConvey(\"Given a request for an external service dropped by network rules, it should be rejected\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/random\") // nolint\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"100.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/public\",\n\t\t\t\tHeader:     http.Header{},\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\t_, err := p.ApplicationRequest(r)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tauthErr, ok := err.(*AuthError)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(authErr.Status(), ShouldEqual, http.StatusNetworkAuthenticationRequired)\n\t\t})\n\t})\n}\n\nfunc Test_NetworkRequest(t *testing.T) {\n\tConvey(\"Given a valid authorization processor\", t, func() {\n\t\tctx, cancel := context.WithCancel(context.Background())\n\t\tdefer cancel()\n\n\t\tctrl := gomock.NewController(t)\n\t\t_, pctx, s := 
newAPIAuthProcessor(ctrl)\n\t\tp := New(\"test\", s)\n\n\t\tConvey(\"Requests for bad context should return errors\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/public\") // nolint\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"20.1.1.1\"),\n\t\t\t\t\tPort: 8080,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/public\",\n\t\t\t\tHeader:     http.Header{},\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\t_, err := p.NetworkRequest(ctx, r)\n\t\t\tSo(err, ShouldNotBeNil)\n\n\t\t\tauthErr, ok := err.(*AuthError)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(authErr.Status(), ShouldEqual, http.StatusInternalServerError)\n\t\t})\n\n\t\tConvey(\"Requests a valid context with a drop network policy must be rejected\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/public\") // nolint\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"100.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/public\",\n\t\t\t\tHeader:     http.Header{},\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\tresponse, err := p.NetworkRequest(ctx, r)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(response, ShouldNotBeNil)\n\n\t\t\tauthErr, ok := err.(*AuthError)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(authErr.Status(), ShouldEqual, http.StatusNetworkAuthenticationRequired)\n\t\t\tSo(response.NetworkPolicyID, ShouldEqual, rejectPolicyID)\n\t\t\tSo(response.NetworkServiceID, ShouldEqual, rejectServiceID)\n\t\t\tSo(response.DropReason, ShouldEqual, collector.PolicyDrop)\n\t\t\tSo(response.SourceType, ShouldEqual, 
collector.EndPointTypeExternalIP)\n\t\t})\n\n\t\tConvey(\"Requests a valid context with an invalid token, I should get forbidden\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/public\") // nolint\n\t\t\th := http.Header{}\n\t\t\th.Add(\"X-APORETO-AUTH\", \"badvalue\")\n\t\t\th.Add(\"X-APORETO-KEY\", \"badvalue\")\n\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"45.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/public\",\n\t\t\t\tHeader:     h,\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\tresponse, err := p.NetworkRequest(ctx, r)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(response, ShouldNotBeNil)\n\n\t\t\tauthErr, ok := err.(*AuthError)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(authErr.Status(), ShouldEqual, http.StatusForbidden)\n\t\t\tSo(authErr.Message(), ShouldContainSubstring, \"Invalid Authorization Token:\")\n\t\t\tSo(response.NetworkPolicyID, ShouldEqual, policyID)\n\t\t\tSo(response.NetworkServiceID, ShouldEqual, serviceID)\n\t\t\tSo(response.DropReason, ShouldEqual, collector.PolicyDrop)\n\t\t\tSo(response.SourceType, ShouldEqual, collector.EndPointTypeExternalIP)\n\t\t})\n\n\t\tConvey(\"Requests a valid context with a valid Aporeto token to a public URL from a valid network it should succeed\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/public\") // nolint\n\t\t\ttoken, err := servicetokens.CreateAndSign(\n\t\t\t\t\"somenode\",\n\t\t\t\tpctx.Identity().GetSlice(),\n\t\t\t\tpctx.Scopes(),\n\t\t\t\tpctx.ManagementID(),\n\t\t\t\tDefaultValidity,\n\t\t\t\ts.EncodingKey(),\n\t\t\t\tnil,\n\t\t\t)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\th := http.Header{}\n\t\t\th.Add(\"X-APORETO-AUTH\", token)\n\t\t\th.Add(\"X-APORETO-KEY\", string(s.TransmittedKey()))\n\n\t\t\tr := 
&Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"45.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/public\",\n\t\t\t\tHeader:     h,\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\tresponse, err := p.NetworkRequest(ctx, r)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(response.NetworkPolicyID, ShouldEqual, policyID)\n\t\t\tSo(response.NetworkServiceID, ShouldEqual, serviceID)\n\t\t\tSo(response.SourceType, ShouldEqual, collector.EndPointTypePU)\n\t\t})\n\n\t\tConvey(\"Requests a valid context with a valid Aporeto token based on PU network policy it should succeed\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/public\") // nolint\n\t\t\ttoken, err := servicetokens.CreateAndSign(\n\t\t\t\t\"somenode\",\n\t\t\t\tpctx.Identity().GetSlice(),\n\t\t\t\tpctx.Scopes(),\n\t\t\t\tpctx.ManagementID(),\n\t\t\t\tDefaultValidity,\n\t\t\t\ts.EncodingKey(),\n\t\t\t\tnil,\n\t\t\t)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\th := http.Header{}\n\t\t\th.Add(\"X-APORETO-AUTH\", token)\n\t\t\th.Add(\"X-APORETO-KEY\", string(s.TransmittedKey()))\n\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"60.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/public\",\n\t\t\t\tHeader:     h,\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\tresponse, err := p.NetworkRequest(ctx, r)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(response.ObservedPolicyID, ShouldEqual, \"pu\"+policyID)\n\t\t\tSo(response.ObservedAction, ShouldEqual, policy.Accept)\n\t\t\tSo(response.NetworkPolicyID, ShouldEqual, 
\"pu\"+policyID)\n\t\t\tSo(response.SourceType, ShouldEqual, collector.EndPointTypePU)\n\t\t})\n\n\t\tConvey(\"Requests a valid context with no Aporeto claims and no network policy, it should be dropped\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/public\") // nolint\n\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"60.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/public\",\n\t\t\t\tHeader:     http.Header{},\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\tresponse, err := p.NetworkRequest(ctx, r)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tauthErr, ok := err.(*AuthError)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(authErr.Status(), ShouldEqual, http.StatusNetworkAuthenticationRequired)\n\t\t\tSo(response.NetworkPolicyID, ShouldEqual, collector.DefaultEndPoint)\n\t\t\tSo(response.SourceType, ShouldEqual, collector.EndPointTypeExternalIP)\n\t\t})\n\n\t\tConvey(\"Requests a valid context with a valid Aporeto token but network reject, it should be rejected\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/public\") // nolint\n\t\t\tbadTags := append(pctx.Identity().GetSlice(), \"app=bad\")\n\t\t\ttoken, err := servicetokens.CreateAndSign(\n\t\t\t\t\"badnode\",\n\t\t\t\tbadTags,\n\t\t\t\tpctx.Scopes(),\n\t\t\t\t\"badnodeID\",\n\t\t\t\tDefaultValidity,\n\t\t\t\ts.EncodingKey(),\n\t\t\t\tnil,\n\t\t\t)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\th := http.Header{}\n\t\t\th.Add(\"X-APORETO-AUTH\", token)\n\t\t\th.Add(\"X-APORETO-KEY\", string(s.TransmittedKey()))\n\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"60.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 
80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/public\",\n\t\t\t\tHeader:     h,\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\tresponse, err := p.NetworkRequest(ctx, r)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(response.NetworkPolicyID, ShouldEqual, \"reject\"+policyID)\n\t\t\tSo(response.SourceType, ShouldEqual, collector.EndPointTypePU)\n\t\t})\n\n\t\tConvey(\"Requests a valid context with a valid Aporeto token to a private URL it should succeed\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/admin\") // nolint\n\t\t\ttoken, err := servicetokens.CreateAndSign(\n\t\t\t\t\"somenode\",\n\t\t\t\tpctx.Identity().GetSlice(),\n\t\t\t\tpctx.Scopes(),\n\t\t\t\tpctx.ManagementID(),\n\t\t\t\tDefaultValidity,\n\t\t\t\ts.EncodingKey(),\n\t\t\t\tnil,\n\t\t\t)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\th := http.Header{}\n\t\t\th.Add(\"X-APORETO-AUTH\", token)\n\t\t\th.Add(\"X-APORETO-KEY\", string(s.TransmittedKey()))\n\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"45.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/admin\",\n\t\t\t\tHeader:     h,\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\tresponse, err := p.NetworkRequest(ctx, r)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(response.NetworkPolicyID, ShouldEqual, policyID)\n\t\t\tSo(response.NetworkServiceID, ShouldEqual, serviceID)\n\t\t\tSo(response.SourceType, ShouldEqual, collector.EndPointTypePU)\n\t\t})\n\n\t\tConvey(\"Requests a valid context with a valid Aporeto token to a forbidden URL it should return error\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/forbidden\") // nolint\n\t\t\ttoken, err := 
servicetokens.CreateAndSign(\n\t\t\t\t\"somenode\",\n\t\t\t\tpctx.Identity().GetSlice(),\n\t\t\t\tpctx.Scopes(),\n\t\t\t\t\"forbiddennode\",\n\t\t\t\tDefaultValidity,\n\t\t\t\ts.EncodingKey(),\n\t\t\t\tnil,\n\t\t\t)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\th := http.Header{}\n\t\t\th.Add(\"X-APORETO-AUTH\", token)\n\t\t\th.Add(\"X-APORETO-KEY\", string(s.TransmittedKey()))\n\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"45.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/forbidden\",\n\t\t\t\tHeader:     h,\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\tresponse, err := p.NetworkRequest(ctx, r)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tauthError, ok := err.(*AuthError)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(authError.Status(), ShouldEqual, http.StatusUnauthorized)\n\t\t\tSo(response.NetworkPolicyID, ShouldEqual, policyID)\n\t\t\tSo(response.NetworkServiceID, ShouldEqual, serviceID)\n\t\t\tSo(response.SourceType, ShouldEqual, collector.EndPointTypePU)\n\t\t})\n\t})\n}\n\nfunc Test_UserCredentials(t *testing.T) {\n\n\tConvey(\"Given a valid authorizer\", t, func() {\n\t\tctx, cancel := context.WithCancel(context.Background())\n\t\tdefer cancel()\n\n\t\tctrl := gomock.NewController(t)\n\t\tserviceRegistry, _, s := newAPIAuthProcessor(ctrl)\n\t\tp := New(\"test\", s)\n\t\tSo(p, ShouldNotBeNil)\n\n\t\tportContext, err := serviceRegistry.RetrieveExposedServiceContext(net.ParseIP(\"10.1.1.1\"), 80, \"\")\n\t\tSo(err, ShouldBeNil)\n\t\tSo(portContext, ShouldNotBeNil)\n\n\t\tverifier, ok := portContext.Service.UserAuthorizationHandler.(*mockusertokens.MockVerifier)\n\t\tSo(ok, ShouldBeTrue)\n\n\t\tConvey(\"When the request is not TLS, there is no user data\", func() {\n\t\t\tu, _ := 
url.Parse(\"http://www.foo.com/admin\")\n\t\t\td := &NetworkAuthResponse{}\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"45.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/admin\",\n\t\t\t\tHeader:     http.Header{},\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        nil,\n\t\t\t}\n\t\t\tuserCredentials(ctx, portContext, r, d)\n\t\t\tSo(len(d.UserAttributes), ShouldEqual, 0)\n\t\t})\n\n\t\tConvey(\"When the request is TLS and a user is identified, the claims are correct\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/admin\")\n\t\t\td := &NetworkAuthResponse{}\n\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"45.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/admin\",\n\t\t\t\tHeader:     http.Header{},\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        &tls.ConnectionState{},\n\t\t\t}\n\t\t\tverifier.EXPECT().Validate(ctx, gomock.Any()).Return([]string{\"user=flash\"}, false, \"\", nil)\n\t\t\tuserCredentials(ctx, portContext, r, d)\n\t\t\tSo(len(d.UserAttributes), ShouldEqual, 1)\n\t\t\tSo(d.UserAttributes[0], ShouldEqual, \"user=flash\")\n\t\t\tSo(d.SourceType, ShouldEqual, collector.EndPointTypeClaims)\n\t\t\tSo(d.Redirect, ShouldBeFalse)\n\t\t})\n\n\t\tConvey(\"When the request is TLS and user authorization fails with a redirect, the redirect should be set\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/admin\")\n\t\t\td := &NetworkAuthResponse{}\n\n\t\t\th := http.Header{}\n\t\t\th.Add(\"Authorization\", \"Bearer MockJWTToken\")\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: 
&net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"45.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/admin\",\n\t\t\t\tHeader:     h,\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        &tls.ConnectionState{},\n\t\t\t}\n\t\t\tverifier.EXPECT().Validate(ctx, gomock.Any()).Return(nil, true, \"MockJWTToken\", fmt.Errorf(\"auth failed\"))\n\t\t\tuserCredentials(ctx, portContext, r, d)\n\t\t\tSo(len(d.UserAttributes), ShouldEqual, 0)\n\t\t\tSo(d.Redirect, ShouldBeTrue)\n\t\t})\n\n\t\tConvey(\"When the request is TLS and user authorization succeeds with a refresh token, the cookie must be set\", func() {\n\t\t\tu, _ := url.Parse(\"http://www.foo.com/admin\")\n\t\t\td := &NetworkAuthResponse{}\n\n\t\t\th := http.Header{}\n\t\t\th.Add(\"Authorization\", \"Bearer MockJWTToken\")\n\t\t\tr := &Request{\n\t\t\t\tSourceAddress: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"45.1.1.1\"),\n\t\t\t\t\tPort: 1000,\n\t\t\t\t},\n\t\t\t\tOriginalDestination: &net.TCPAddr{\n\t\t\t\t\tIP:   net.ParseIP(\"10.1.1.1\"),\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tMethod:     \"GET\",\n\t\t\t\tURL:        u,\n\t\t\t\tRequestURI: \"/admin\",\n\t\t\t\tHeader:     h,\n\t\t\t\tCookie:     nil,\n\t\t\t\tTLS:        &tls.ConnectionState{},\n\t\t\t}\n\t\t\tverifier.EXPECT().Validate(ctx, gomock.Any()).Return(nil, true, \"NewToken\", fmt.Errorf(\"auth failed\"))\n\t\t\tuserCredentials(ctx, portContext, r, d)\n\t\t\tSo(len(d.UserAttributes), ShouldEqual, 0)\n\t\t\tSo(d.Redirect, ShouldBeTrue)\n\t\t\tSo(d.Cookie, ShouldNotBeNil)\n\t\t\tSo(d.Cookie.Name, ShouldEqual, \"X-APORETO-AUTH\")\n\t\t\tSo(d.Cookie.Value, ShouldEqual, \"NewToken\")\n\t\t\tSo(d.Cookie.HttpOnly, ShouldBeTrue)\n\t\t\tSo(d.Cookie.Secure, ShouldBeTrue)\n\t\t\tSo(d.Cookie.Path, ShouldEqual, \"/\")\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/apiauth/types.go",
    "content": "package apiauth\n\nimport (\n\t\"crypto/tls\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// Request captures all the important items of a request that are needed\n// for processing the authorization decision.\ntype Request struct {\n\n\t// SourceAddress, only required for network authorization requests.\n\tSourceAddress *net.TCPAddr\n\n\t// OriginalDestination required for all requests.\n\tOriginalDestination *net.TCPAddr\n\n\t// HTTP header information.\n\tMethod     string\n\tURL        *url.URL\n\tRequestURI string\n\tHeader     http.Header\n\tCookie     *http.Cookie\n\n\t// TLS information. This is optional if mutual TLS based authorization\n\t// must be supported.\n\tTLS *tls.ConnectionState\n}\n\n// NetworkAuthResponse is the decision of the authorization process.\ntype NetworkAuthResponse struct {\n\n\t// Discovered service context and associated information.\n\tPUContext *pucontext.PUContext\n\tServiceID string\n\tNamespace string\n\n\t// Network policy ID and service that affect the call.\n\tNetworkPolicyID  string\n\tNetworkServiceID string\n\tObservedPolicyID string\n\tObservedAction   policy.ActionType\n\n\t// Definition of the source.\n\tSourceType collector.EndPointType\n\tSourcePUID string\n\n\t// Action associated with the response and DropReason if dropped.\n\tAction     policy.ActionType\n\tDropReason string\n\n\t// Redirect information that should be used by the responder.\n\tRedirect    bool\n\tRedirectURI string\n\tCookie      *http.Cookie\n\tData        string\n\tHeader      http.Header\n\n\t// UserAttributes discovered from the tokens.\n\tUserAttributes []string\n\n\t// TLSListener determines that TLS must be re-initiated towards\n\t// the listener.\n\tTLSListener bool\n\n\t// Fields used when ping is enabled.\n\tPingConfig *PingConfig\n}\n\n// PingConfig 
holds config specific for ping traffic.\ntype PingConfig struct {\n\tPingID      string\n\tIterationID int\n\tClaims      []string\n\tPayloadSize int\n}\n\n// AppAuthResponse is the decision of the authorization process.\ntype AppAuthResponse struct {\n\t// Discovered context and service information\n\tPUContext *pucontext.PUContext\n\tServiceID string\n\tExternal  bool\n\n\t// Network policy ID and service ID that affect the response.\n\tNetworkPolicyID  string\n\tNetworkServiceID string\n\n\t// Action of the response and DropReason if the call must be dropped.\n\tAction     policy.ActionType\n\tDropReason string\n\n\t// Resolved token\n\tToken string\n\n\t// HookMethod is the corresponding HTTP rule hook method\n\tHookMethod string\n\n\t// TLSListener indicates that the external entity is a TLS listener,\n\t// and we must start a TLS session. Only applies to External connections.\n\tTLSListener bool\n}\n\n// AuthError implements the error interface, but provides additional information\n// for the types of errors discovered.\ntype AuthError struct {\n\tstatus  int\n\tmessage string\n\terr     error\n}\n\n// Error implements the error interface.\nfunc (a *AuthError) Error() string {\n\treturn a.message\n}\n\n// Message returns the message of the error.\nfunc (a *AuthError) Message() string {\n\treturn a.message\n}\n\n// Status returns the status code of the error.\nfunc (a *AuthError) Status() int {\n\treturn a.status\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/applicationproxy.go",
    "content": "package applicationproxy\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"strconv\"\n\t\"sync\"\n\n\t\"github.com/blang/semver\"\n\t\"github.com/opentracing/opentracing-go\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\ttcommon \"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/common\"\n\thttpproxy \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/http\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/markedconn\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/protomux\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/serviceregistry\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/tcp\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/tokenaccessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n)\n\n// ServerInterface describes the methods required by an application processor.\ntype ServerInterface interface {\n\tUpdateSecrets(cert *tls.Certificate, ca *x509.CertPool, secrets secrets.Secrets, certPEM, keyPEM string)\n\tShutDown() error\n}\n\ntype clientData struct {\n\tprotomux  *protomux.MultiplexedListener\n\tnetserver map[common.ListenerType]ServerInterface\n}\n\n// AppProxy 
maintains state for proxying connections from listeners to backends.\ntype AppProxy struct {\n\tcert *tls.Certificate\n\n\ttokenaccessor   tokenaccessor.TokenAccessor\n\tcollector       collector.EventCollector\n\tpuFromID        cache.DataStore\n\tsecrets         secrets.Secrets\n\tdatapathKeyPair ephemeralkeys.KeyAccessor\n\tagentVersion    semver.Version\n\n\tclients     cache.DataStore\n\ttokenIssuer tcommon.ServiceTokenIssuer\n\tsync.RWMutex\n}\n\n// NewAppProxy creates a new instance of the application proxy.\nfunc NewAppProxy(\n\ttp tokenaccessor.TokenAccessor,\n\tc collector.EventCollector,\n\tpuFromID cache.DataStore,\n\ts secrets.Secrets,\n\tt tcommon.ServiceTokenIssuer,\n\tdatapathKeyPair ephemeralkeys.KeyAccessor,\n\tagentVersion semver.Version,\n) (*AppProxy, error) {\n\n\treturn &AppProxy{\n\t\tcollector:       c,\n\t\ttokenaccessor:   tp,\n\t\tsecrets:         s,\n\t\tpuFromID:        puFromID,\n\t\tcert:            nil,\n\t\tclients:         cache.NewCache(\"clients\"),\n\t\ttokenIssuer:     t,\n\t\tdatapathKeyPair: datapathKeyPair,\n\t\tagentVersion:    agentVersion,\n\t}, nil\n}\n\n// Run starts all the network side proxies. Application side proxies will\n// have to start during enforce in order to support multiple Linux processes.\nfunc (p *AppProxy) Run(ctx context.Context) error {\n\n\treturn nil\n}\n\n// Enforce implements enforcer.Enforcer interface. It will create the necessary\n// proxies for the particular PU. 
Enforce can be called multiple times, once\n// for every policy update.\nfunc (p *AppProxy) Enforce(ctx context.Context, puID string, puInfo *policy.PUInfo) error {\n\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tspan, tctx := opentracing.StartSpanFromContext(ctx, \"applicationproxy.enforce\")\n\tdefer span.Finish()\n\n\tif puInfo.Policy.ServicesListeningPort() == \"0\" {\n\t\tzap.L().Warn(\"Services listening port not specified - not activating proxy\")\n\t\treturn nil\n\t}\n\n\tdata, err := p.puFromID.Get(puID)\n\tif err != nil || data == nil {\n\t\treturn fmt.Errorf(\"undefined PU - Context not found: %s\", puID)\n\t}\n\n\tpuContext, ok := data.(*pucontext.PUContext)\n\tif !ok {\n\t\treturn fmt.Errorf(\"bad data types for puContext\")\n\t}\n\n\tsctx, err := serviceregistry.Instance().Register(puID, puInfo, puContext, p.secrets)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"policy conflicts detected: %s\", err)\n\t}\n\n\tcaPool, err := p.expandCAPool(sctx.RootCA)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// For updates we need to update the certificates if we have new ones. Otherwise\n\t// we return. 
There is nothing else to do in case of policy update.\n\tif c, cerr := p.clients.Get(puID); cerr == nil {\n\t\tif _, perr := p.processCertificateUpdates(puInfo, c.(*clientData), caPool); perr != nil {\n\t\t\tzap.L().Error(\"unable to update certificates and services\", zap.Error(perr))\n\t\t\treturn perr\n\t\t}\n\t\treturn nil\n\t}\n\n\t// Create the network listener and cache it so that we can terminate it later.\n\tl, err := p.createNetworkListener(tctx, \":\"+puInfo.Policy.ServicesListeningPort())\n\tif err != nil {\n\t\tzap.L().Error(\"Failed to create network listener\", zap.Error(err))\n\t\treturn fmt.Errorf(\"Cannot create listener on port %s: %s\", puInfo.Policy.ServicesListeningPort(), err)\n\t}\n\n\t// Create a new client entry and start the servers.\n\tclient := &clientData{\n\t\tnetserver: map[common.ListenerType]ServerInterface{},\n\t}\n\tclient.protomux = protomux.NewMultiplexedListener(l, constants.ProxyMarkInt, puID)\n\n\t// Listen to HTTP requests from the clients\n\tclient.netserver[common.HTTPApplication], err = p.registerAndRun(tctx, puID, common.HTTPApplication, client.protomux, caPool, true)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Cannot create listener type %d: %s\", common.HTTPApplication, err)\n\t}\n\n\t// Listen to HTTPS requests on the network side.\n\tclient.netserver[common.HTTPSNetwork], err = p.registerAndRun(tctx, puID, common.HTTPSNetwork, client.protomux, caPool, false)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Cannot create listener type %d: %s\", common.HTTPSNetwork, err)\n\t}\n\n\t// Listen to HTTP requests on the network side - mainly used for health probes - completely insecure for\n\t// anything else.\n\tclient.netserver[common.HTTPNetwork], err = p.registerAndRun(tctx, puID, common.HTTPNetwork, client.protomux, caPool, false)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Cannot create listener type %d: %s\", common.HTTPNetwork, err)\n\t}\n\n\t// TCP Requests for clients\n\tclient.netserver[common.TCPApplication], err = 
p.registerAndRun(tctx, puID, common.TCPApplication, client.protomux, caPool, true)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Cannot create listener type %d: %s\", common.TCPApplication, err)\n\t}\n\n\t// TCP Requests from the network side\n\tclient.netserver[common.TCPNetwork], err = p.registerAndRun(tctx, puID, common.TCPNetwork, client.protomux, caPool, false)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Cannot create listener type %d: %s\", common.TCPNetwork, err)\n\t}\n\n\tif _, err = p.processCertificateUpdates(puInfo, client, caPool); err != nil {\n\t\tzap.L().Error(\"Failed to update certificates\", zap.Error(err))\n\t\treturn fmt.Errorf(\"Certificates not updated: %s\", err)\n\t}\n\n\t// Add the client to the cache\n\tp.clients.AddOrUpdate(puID, client)\n\n\t// Start the connection multiplexer\n\tgo client.protomux.Serve(tctx) // nolint\n\n\treturn nil\n}\n\n// Unenforce implements enforcer.Enforcer interface. It will shutdown the app side\n// of the proxy.\nfunc (p *AppProxy) Unenforce(ctx context.Context, puID string) error {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\t// Remove pu from registry\n\tif err := serviceregistry.Instance().Unregister(puID); err != nil {\n\t\treturn err\n\t}\n\n\t// Find the correct client.\n\tc, err := p.clients.Get(puID)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Unable to find client\")\n\t}\n\tclient := c.(*clientData)\n\n\t// Terminate the connection multiplexer.\n\t// Do it before shutting down servers below to avoid Accept() errors.\n\tclient.protomux.Close()\n\n\t// Shutdown all the servers and unregister listeners.\n\tfor t, server := range client.netserver {\n\t\tif err := client.protomux.UnregisterListener(t); err != nil {\n\t\t\tzap.L().Error(\"Unable to unregister client\", zap.Int(\"type\", int(t)), zap.Error(err))\n\t\t}\n\t\tif err := server.ShutDown(); err != nil {\n\t\t\tzap.L().Debug(\"Unable to shutdown client server\", zap.Error(err))\n\t\t}\n\t}\n\n\t// Remove the client from the cache.\n\treturn 
p.clients.Remove(puID)\n}\n\n// GetFilterQueue is a stub for TCP proxy\nfunc (p *AppProxy) GetFilterQueue() *fqconfig.FilterQueue {\n\treturn nil\n}\n\n// Ping runs ping to the given config based on the service type. Returns error on invalid types.\nfunc (p *AppProxy) Ping(ctx context.Context, contextID string, sctx *serviceregistry.ServiceContext, sdata *serviceregistry.DependentServiceData, pingConfig *policy.PingConfig) error {\n\n\tif pingConfig == nil || sctx == nil || sdata == nil {\n\t\tzap.L().Debug(\"unable to run ping\",\n\t\t\tzap.Reflect(\"pingconfig\", pingConfig),\n\t\t\tzap.Reflect(\"serviceCtx\", sctx),\n\t\t\tzap.Reflect(\"serviceData\", sdata),\n\t\t)\n\n\t\treturn nil\n\t}\n\n\tc, err := p.clients.Get(contextID)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to find client with contextID: %s\", contextID)\n\t}\n\tclient := c.(*clientData)\n\n\tswitch sdata.ServiceObject.Type {\n\tcase policy.ServiceTCP:\n\t\treturn client.netserver[common.TCPApplication].(*tcp.Proxy).InitiatePing(ctx, sctx, sdata, pingConfig)\n\tcase policy.ServiceHTTP:\n\t\treturn client.netserver[common.HTTPApplication].(*httpproxy.Config).InitiatePing(ctx, sctx, sdata, pingConfig)\n\tdefault:\n\t\treturn fmt.Errorf(\"unknown service type: %d\", sdata.ServiceObject.Type)\n\t}\n}\n\n// UpdateSecrets updates the secrets of running enforcers managed by trireme. 
Remote enforcers will\n// get the secret updates with the next policy push.\nfunc (p *AppProxy) UpdateSecrets(secret secrets.Secrets) error {\n\tp.Lock()\n\tdefer p.Unlock()\n\tp.secrets = secret\n\treturn nil\n}\n\n// registerAndRun registers a new listener of the given type and runs the corresponding server\nfunc (p *AppProxy) registerAndRun(ctx context.Context, puID string, ltype common.ListenerType, mux *protomux.MultiplexedListener, caPool *x509.CertPool, appproxy bool) (ServerInterface, error) {\n\n\t// Create a new subordinate listener and register it for the requested type.\n\tlistener, err := mux.RegisterListener(ltype)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"Cannot register listener: %s\", err)\n\t}\n\n\t// Start the corresponding proxy\n\tswitch ltype {\n\tcase common.HTTPApplication, common.HTTPSApplication, common.HTTPNetwork, common.HTTPSNetwork:\n\n\t\t// If the protocol is encrypted, wrap it with TLS.\n\t\tencrypted := false\n\t\tif ltype == common.HTTPSNetwork {\n\t\t\tencrypted = true\n\t\t}\n\n\t\tc := httpproxy.NewHTTPProxy(p.collector, puID, caPool, appproxy, constants.ProxyMarkInt, p.secrets, p.tokenIssuer, p.datapathKeyPair, p.agentVersion)\n\t\treturn c, c.RunNetworkServer(ctx, listener, encrypted)\n\n\tdefault:\n\t\tc := tcp.NewTCPProxy(p.collector, puID, p.cert, caPool, p.agentVersion, constants.ProxyMarkInt)\n\t\treturn c, c.RunNetworkServer(ctx, listener)\n\t}\n}\n\n// createNetworkListener starts a network listener (traffic from network to PUs)\nfunc (p *AppProxy) createNetworkListener(ctx context.Context, port string) (net.Listener, error) {\n\treturn markedconn.NewSocketListener(ctx, port, constants.ProxyMarkInt)\n}\n\n// processCertificateUpdates processes the certificate information and updates\n// the servers.\nfunc (p *AppProxy) processCertificateUpdates(puInfo *policy.PUInfo, client *clientData, caPool *x509.CertPool) (bool, error) { // nolint:unparam\n\n\t// If there are certificates provided, we will need to update 
them for the\n\t// services. If the certificates are nil, we ignore them.\n\tcertPEM, keyPEM, caPEM := puInfo.Policy.ServiceCertificates()\n\tif certPEM == \"\" || keyPEM == \"\" {\n\t\treturn false, nil\n\t}\n\n\t// Process any updates on the cert pool\n\tif caPEM != \"\" {\n\t\tif !caPool.AppendCertsFromPEM([]byte(caPEM)) {\n\t\t\tzap.L().Warn(\"Failed to add Services CA\")\n\t\t}\n\t}\n\n\t// Create the TLS certificate\n\ttlsCert, err := tls.X509KeyPair([]byte(certPEM), []byte(keyPEM))\n\tif err != nil {\n\t\treturn false, fmt.Errorf(\"Invalid certificates: %s\", err)\n\t}\n\n\tfor _, server := range client.netserver {\n\t\tserver.UpdateSecrets(&tlsCert, caPool, p.secrets, certPEM, keyPEM)\n\t}\n\treturn true, nil\n}\n\nfunc (p *AppProxy) expandCAPool(externalCAs [][]byte) (*x509.CertPool, error) {\n\n\tcaPool := x509.NewCertPool()\n\n\tif ok := caPool.AppendCertsFromPEM(p.secrets.CertAuthority()); !ok {\n\t\treturn nil, fmt.Errorf(\"cannot append secrets CA %s\", string(p.secrets.CertAuthority()))\n\t}\n\n\tfor _, ca := range externalCAs {\n\t\tif ok := caPool.AppendCertsFromPEM(ca); !ok {\n\t\t\treturn nil, fmt.Errorf(\"cannot append external service ca %s \", string(ca))\n\t\t}\n\t}\n\n\treturn caPool, nil\n}\n\n// ServiceData returns the servicectx and dependentservice for the given ip:port.\nfunc (p *AppProxy) ServiceData(\n\tcontextID string,\n\tip net.IP,\n\tport int,\n\tserviceAddresses map[string][]string) (*serviceregistry.ServiceContext, *serviceregistry.DependentServiceData, error) {\n\n\tif sctx, sdata, err := serviceregistry.Instance().RetrieveDependentServiceDataByIDAndNetwork(contextID, ip, port, \"\"); err == nil {\n\t\treturn sctx, sdata, nil\n\t}\n\n\tif len(serviceAddresses) == 0 {\n\t\treturn nil, nil, errors.New(\"no service context found\")\n\t}\n\n\tsctx, err := serviceregistry.Instance().RetrieveServiceByID(contextID)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tupdate := false\n\tfor _, svc := range 
sctx.PU.Policy.DependentServices() {\n\n\t\taddrs, ok := serviceAddresses[svc.ID]\n\t\tif !ok {\n\t\t\tcontinue\n\t\t}\n\n\t\tmin, max := svc.NetworkInfo.Ports.Range()\n\n\t\tfor _, addr := range addrs {\n\n\t\t\tif ip := net.ParseIP(addr); ip.To4() == nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tif _, exists := svc.NetworkInfo.Addresses[addr+\"/32\"]; exists {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t_, ipNet, _ := net.ParseCIDR(addr + \"/32\")\n\t\t\tfor i := int(min); i <= int(max); i++ {\n\t\t\t\tif err := ipsetmanager.V4().AddIPPortToDependentService(contextID, ipNet, strconv.Itoa(i)); err != nil {\n\t\t\t\t\tzap.L().Debug(\"Error adding dependent service ip port to ipset\", zap.Error(err))\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tupdate = true\n\t\t\tsvc.NetworkInfo.Addresses[ipNet.String()] = struct{}{}\n\t\t}\n\t}\n\n\tif update {\n\t\tif err := serviceregistry.Instance().UpdateDependentServicesByID(contextID); err != nil {\n\t\t\tzap.L().Error(\"Error updating dependent services\", zap.Error(err))\n\t\t}\n\t}\n\n\treturn serviceregistry.Instance().RetrieveDependentServiceDataByIDAndNetwork(contextID, ip, port, \"\")\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/common/common.go",
    "content": "package common\n\nimport (\n\t\"crypto/x509/pkix\"\n\t\"encoding/asn1\"\n\t\"net\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// ListenerType are the types of listeners that can be used.\ntype ListenerType int\n\n// Values of ListenerType\nconst (\n\tTCPApplication ListenerType = iota\n\tTCPNetwork\n\tHTTPApplication\n\tHTTPNetwork\n\tHTTPSApplication\n\tHTTPSNetwork\n)\n\n// ExtractExtension returns true and the value of the given oid If any.\nfunc ExtractExtension(oid asn1.ObjectIdentifier, extensions []pkix.Extension) (bool, []byte) {\n\n\tfor _, ext := range extensions {\n\t\tif !ext.Id.Equal(oid) {\n\t\t\tcontinue\n\t\t}\n\n\t\treturn true, ext.Value\n\t}\n\n\treturn false, nil\n}\n\n// GetTLSServerName provides the server name to use in TLS config based on service configuration and destination IP.\nfunc GetTLSServerName(\n\taddrAndPort string,\n\tservice *policy.ApplicationService,\n) (name string, err error) {\n\n\tif service != nil && service.NetworkInfo != nil && len(service.NetworkInfo.FQDNs) != 0 {\n\t\tname = service.NetworkInfo.FQDNs[0]\n\t\treturn name, nil\n\t}\n\n\tname, _, err = net.SplitHostPort(addrAndPort)\n\treturn name, err\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/http/error_handler.go",
    "content": "package httpproxy\n\nimport (\n\t\"context\"\n\t\"io\"\n\t\"net\"\n\t\"net/http\"\n)\n\nconst (\n\t// TriremeBadGatewayText is the message to send when downstream fails.\n\tTriremeBadGatewayText = \":The downstream port cannot be accessed. Please validate your service ports and address/hosts configuration\"\n\n\t// TriremeGatewayTimeout is the message to send when downstream times-out.\n\tTriremeGatewayTimeout = \":The downstream node timed-out.\"\n\n\t// StatusClientClosedRequest non-standard HTTP status code for client disconnection\n\tStatusClientClosedRequest = 499\n\n\t// StatusClientClosedRequestText non-standard HTTP status for client disconnection\n\tStatusClientClosedRequestText = \"Client Closed Request\"\n)\n\n// TriremeHTTPErrHandler Standard error handler\ntype TriremeHTTPErrHandler struct{}\n\nfunc (e TriremeHTTPErrHandler) ServeHTTP(w http.ResponseWriter, req *http.Request, err error) {\n\tstatusCode := http.StatusInternalServerError\n\n\tif e, ok := err.(net.Error); ok {\n\t\tif e.Timeout() {\n\t\t\tstatusCode = http.StatusGatewayTimeout\n\t\t} else {\n\t\t\tstatusCode = http.StatusBadGateway\n\t\t}\n\t} else if err == io.EOF {\n\t\tstatusCode = http.StatusBadGateway\n\t} else if err == context.Canceled {\n\t\tstatusCode = StatusClientClosedRequest\n\t}\n\n\tw.WriteHeader(statusCode)\n\tw.Write([]byte(statusText(statusCode))) // nolint errcheck\n}\n\nfunc statusText(statusCode int) string {\n\n\tprefix := http.StatusText(statusCode)\n\n\tswitch statusCode {\n\tcase http.StatusGatewayTimeout:\n\t\treturn prefix + TriremeGatewayTimeout\n\tcase http.StatusBadGateway:\n\t\treturn prefix + TriremeBadGatewayText\n\tcase StatusClientClosedRequest:\n\t\treturn StatusClientClosedRequestText\n\t}\n\treturn prefix\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/http/http.go",
    "content": "package httpproxy\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/json\"\n\t\"encoding/pem\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/blang/semver\"\n\tjwt \"github.com/dgrijalva/jwt-go\"\n\t\"github.com/vulcand/oxy/forward\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/apiauth\"\n\tpcommon \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/markedconn\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/protomux\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/serviceregistry\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/tlshelper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/flowstats\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/metadata\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/bufferpool\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/gaia\"\n\t\"go.aporeto.io/gaia/x509extensions\"\n\t\"go.uber.org/zap\"\n)\n\ntype statsContextKeyType string\n\nconst (\n\tstatsContextKey = statsContextKeyType(\"statsContext\")\n\n\t// TriremeOIDCCallbackURI is the callback URI that must be presented by\n\t// any OIDC provider.\n\tTriremeOIDCCallbackURI = \"/aporeto/oidc/callback\"\n\ttypeCertificate        = \"CERTIFICATE\"\n)\n\n// JWTClaims is the structure of the claims we are sending on the wire.\ntype JWTClaims struct 
{\n\tjwt.StandardClaims\n\tSourceID string\n\tScopes   []string\n\tProfile  []string\n}\n\ntype hookFunc func(w http.ResponseWriter, r *http.Request) (bool, error)\n\n// Config maintains state for proxying connections from listeners to backends.\ntype Config struct {\n\tcert             *tls.Certificate\n\tca               *x509.CertPool\n\tkeyPEM           string\n\tcertPEM          string\n\tsecrets          secrets.Secrets\n\tdatapathKeyPair  ephemeralkeys.KeyAccessor\n\tcollector        collector.EventCollector\n\tpuContext        string\n\tlocalIPs         map[string]struct{}\n\tapplicationProxy bool\n\tmark             int\n\tserver           *http.Server\n\tfwd              *forward.Forwarder\n\tfwdTLS           *forward.Forwarder\n\ttlsClientConfig  *tls.Config\n\tauth             *apiauth.Processor\n\tmetadata         *metadata.Client\n\ttokenIssuer      common.ServiceTokenIssuer\n\thooks            map[string]hookFunc\n\tagentVersion     semver.Version\n\n\tsync.RWMutex\n}\n\n// NewHTTPProxy creates a new instance of the HTTP proxy.\nfunc NewHTTPProxy(\n\tc collector.EventCollector,\n\tpuContext string,\n\tcaPool *x509.CertPool,\n\tapplicationProxy bool,\n\tmark int,\n\tsecrets secrets.Secrets,\n\ttokenIssuer common.ServiceTokenIssuer,\n\tdatapathKeyPair ephemeralkeys.KeyAccessor,\n\tagentVersion semver.Version,\n) *Config {\n\n\th := &Config{\n\t\tcollector:        c,\n\t\tpuContext:        puContext,\n\t\tca:               caPool,\n\t\tapplicationProxy: applicationProxy,\n\t\tmark:             mark,\n\t\tsecrets:          secrets,\n\t\tlocalIPs:         markedconn.GetInterfaces(),\n\t\ttlsClientConfig: &tls.Config{\n\t\t\tRootCAs: caPool,\n\t\t},\n\t\tauth:            apiauth.New(puContext, secrets),\n\t\tmetadata:        metadata.NewClient(puContext, tokenIssuer),\n\t\ttokenIssuer:     tokenIssuer,\n\t\tdatapathKeyPair: datapathKeyPair,\n\t\tagentVersion:    agentVersion,\n\t}\n\n\thooks := 
map[string]hookFunc{\n\t\tcommon.MetadataHookPolicy:      h.policyHook,\n\t\tcommon.MetadataHookHealth:      h.healthHook,\n\t\tcommon.MetadataHookCertificate: h.certificateHook,\n\t\tcommon.MetadataHookKey:         h.keyHook,\n\t\tcommon.MetadataHookToken:       h.tokenHook,\n\t\tcommon.AWSHookInfo:             h.awsInfoHook,\n\t\tcommon.AWSHookRole:             h.awsTokenHook,\n\t}\n\n\th.hooks = hooks\n\n\treturn h\n}\n\n// clientTLSConfiguration selects the certificates to present to, and request from, clients.\nfunc (p *Config) clientTLSConfiguration(conn net.Conn, originalConfig *tls.Config) (*tls.Config, error) {\n\tif mconn, ok := conn.(*markedconn.ProxiedConnection); ok {\n\t\tip, port := mconn.GetOriginalDestination()\n\t\tportContext, err := serviceregistry.Instance().RetrieveExposedServiceContext(ip, port, \"\")\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"Unknown service: %s\", err)\n\t\t}\n\t\tif portContext.Service.UserAuthorizationType == policy.UserAuthorizationMutualTLS || portContext.Service.UserAuthorizationType == policy.UserAuthorizationJWT {\n\t\t\tclientCAs := p.ca\n\t\t\t// now append the user-provided CA cert pool\n\t\t\tif portContext.ClientTrustedRoots != nil {\n\t\t\t\t// append only when a cert pool is given\n\t\t\t\tif len(portContext.Service.MutualTLSTrustedRoots) > 0 {\n\t\t\t\t\tif !clientCAs.AppendCertsFromPEM(portContext.Service.MutualTLSTrustedRoots) {\n\t\t\t\t\t\treturn nil, fmt.Errorf(\"Unable to process client CAs\")\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tconfig := p.newBaseTLSConfig()\n\t\t\tconfig.ClientAuth = tls.VerifyClientCertIfGiven\n\t\t\tconfig.ClientCAs = clientCAs\n\t\t\treturn config, nil\n\t\t}\n\t\treturn originalConfig, nil\n\t}\n\treturn nil, fmt.Errorf(\"Invalid connection\")\n}\n\n// newBaseTLSConfig creates the new basic TLS configuration for the server.\nfunc (p *Config) newBaseTLSConfig() *tls.Config {\n\tc := tlshelper.NewBaseTLSServerConfig()\n\tc.NextProtos = []string{\"h2\"}\n\tc.GetCertificate = 
p.GetCertificateFunc\n\tc.ClientCAs = p.ca\n\treturn c\n}\n\n// newBaseTLSClientConfig creates the new basic TLS configuration for the client.\nfunc (p *Config) newBaseTLSClientConfig() *tls.Config {\n\tc := tlshelper.NewBaseTLSClientConfig()\n\tc.NextProtos = []string{\"h2\"}\n\tc.GetCertificate = p.GetCertificateFunc\n\tc.GetClientCertificate = p.GetClientCertificateFunc\n\treturn c\n}\n\n// GetClientCertificateFunc returns the certificate that will be used by the proxy as a client during the TLS handshake.\nfunc (p *Config) GetClientCertificateFunc(*tls.CertificateRequestInfo) (*tls.Certificate, error) {\n\tp.RLock()\n\tdefer p.RUnlock()\n\tif p.cert != nil {\n\t\tcert, err := x509.ParseCertificate(p.cert.Certificate[0])\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"http: Cannot parse the leaf certificate\")\n\t\t}\n\t\tif cert != nil {\n\t\t\tby, _ := x509CertToPem(cert)\n\t\t\tpemCert, err := buildCertChain(by, p.secrets.CertAuthority())\n\t\t\tif err != nil {\n\t\t\t\tzap.L().Error(\"http: Cannot build the cert chain\")\n\t\t\t}\n\t\t\tvar certChain tls.Certificate\n\t\t\tvar certDERBlock *pem.Block\n\t\t\tfor {\n\t\t\t\tcertDERBlock, pemCert = pem.Decode(pemCert)\n\t\t\t\tif certDERBlock == nil {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tif certDERBlock.Type == typeCertificate {\n\t\t\t\t\tcertChain.Certificate = append(certChain.Certificate, certDERBlock.Bytes)\n\t\t\t\t}\n\t\t\t}\n\t\t\tcertChain.PrivateKey = p.cert.PrivateKey\n\t\t\treturn &certChain, nil\n\t\t}\n\t\treturn p.cert, nil\n\t}\n\treturn nil, nil\n}\n\n// RunNetworkServer runs an HTTP network server. 
If TLS is needed, the\n// listener should already be a TLS listener.\nfunc (p *Config) RunNetworkServer(ctx context.Context, l net.Listener, encrypted bool) error {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tif p.server != nil {\n\t\treturn fmt.Errorf(\"Server already running\")\n\t}\n\n\t// for usage by callbacks below\n\tprotoListener, _ := l.(*protomux.ProtoListener)\n\n\t// If it is encrypted, wrap the listener in a TLS context. This is activated\n\t// for the listener from the network, but not for the listener from a PU.\n\tif encrypted {\n\t\tconfig := p.newBaseTLSConfig()\n\t\tconfig.GetConfigForClient = func(helloMsg *tls.ClientHelloInfo) (*tls.Config, error) {\n\t\t\tp.RLock()\n\t\t\tdefer p.RUnlock()\n\t\t\treturn p.clientTLSConfiguration(helloMsg.Conn, config)\n\t\t}\n\t\tconfig.GetClientCertificate = func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {\n\t\t\tp.RLock()\n\t\t\tdefer p.RUnlock()\n\t\t\treturn p.cert, nil\n\t\t}\n\t\tl = tls.NewListener(l, config)\n\t}\n\t// Now create a client config; this is required when Aporeto acts as a client.\n\tp.tlsClientConfig = p.newBaseTLSClientConfig()\n\n\treportStats := func(ctx context.Context) {\n\t\tif state := ctx.Value(statsContextKey); state != nil {\n\t\t\tif r, ok := state.(*flowstats.ConnectionState); ok {\n\t\t\t\tr.Stats.Action = policy.Reject | policy.Log\n\t\t\t\tr.Stats.DropReason = collector.UnableToDial\n\t\t\t\tr.Stats.PolicyID = collector.DefaultEndPoint\n\t\t\t\tp.collector.CollectFlowEvent(r.Stats)\n\t\t\t}\n\t\t}\n\t}\n\n\tnetworkDialerWithContext := func(ctx context.Context, network, _ string) (net.Conn, error) {\n\t\traddr, ok := ctx.Value(http.LocalAddrContextKey).(*net.TCPAddr)\n\t\tif !ok {\n\t\t\treportStats(ctx)\n\t\t\treturn nil, fmt.Errorf(\"invalid destination address\")\n\t\t}\n\t\tvar platformData *markedconn.PlatformData\n\t\tif protoListener != nil {\n\t\t\tplatformData = markedconn.TakePlatformData(protoListener.Listener, raddr.IP, raddr.Port)\n\t\t}\n\t\tconn, err := 
markedconn.DialMarkedWithContext(ctx, \"tcp\", raddr.String(), platformData, p.mark)\n\t\tif err != nil {\n\t\t\treportStats(ctx)\n\t\t\treturn nil, fmt.Errorf(\"Failed to dial remote: %s\", err)\n\t\t}\n\t\treturn conn, nil\n\t}\n\n\tappDialerWithContext := func(ctx context.Context, network, _ string) (net.Conn, error) {\n\t\traddr, ok := ctx.Value(http.LocalAddrContextKey).(*net.TCPAddr)\n\t\tif !ok {\n\t\t\treportStats(ctx)\n\t\t\treturn nil, fmt.Errorf(\"invalid destination address\")\n\t\t}\n\t\tpctx, err := serviceregistry.Instance().RetrieveExposedServiceContext(raddr.IP, raddr.Port, \"\")\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\traddr.Port = pctx.TargetPort\n\t\tvar platformData *markedconn.PlatformData\n\t\tif protoListener != nil {\n\t\t\tplatformData = markedconn.TakePlatformData(protoListener.Listener, raddr.IP, raddr.Port)\n\t\t}\n\t\tconn, err := markedconn.DialMarkedWithContext(ctx, \"tcp\", raddr.String(), platformData, p.mark)\n\t\tif err != nil {\n\t\t\treportStats(ctx)\n\t\t\treturn nil, fmt.Errorf(\"Failed to dial remote: %s\", err)\n\t\t}\n\t\treturn conn, nil\n\t}\n\n\t// Dial functions for the websockets.\n\tnetDial := func(network, addr string) (net.Conn, error) {\n\t\traddr, err := net.ResolveTCPAddr(network, addr)\n\t\tif err != nil {\n\t\t\treportStats(ctx)\n\t\t\treturn nil, err\n\t\t}\n\t\tvar platformData *markedconn.PlatformData\n\t\tif protoListener != nil {\n\t\t\tplatformData = markedconn.TakePlatformData(protoListener.Listener, raddr.IP, raddr.Port)\n\t\t}\n\t\tconn, err := markedconn.DialMarkedWithContext(ctx, \"tcp\", raddr.String(), platformData, p.mark)\n\t\tif err != nil {\n\t\t\treportStats(ctx)\n\t\t\treturn nil, fmt.Errorf(\"Failed to dial remote: %s\", err)\n\t\t}\n\t\treturn conn, nil\n\t}\n\n\tappDial := func(network, addr string) (net.Conn, error) {\n\t\traddr, err := net.ResolveTCPAddr(network, addr)\n\t\tif err != nil {\n\t\t\treportStats(ctx)\n\t\t\treturn nil, err\n\t\t}\n\t\tpctx, err := 
serviceregistry.Instance().RetrieveExposedServiceContext(raddr.IP, raddr.Port, \"\")\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\traddr.Port = pctx.TargetPort\n\t\tvar platformData *markedconn.PlatformData\n\t\tif protoListener != nil {\n\t\t\tplatformData = markedconn.TakePlatformData(protoListener.Listener, raddr.IP, raddr.Port)\n\t\t}\n\t\tconn, err := markedconn.DialMarkedWithContext(ctx, \"tcp\", raddr.String(), platformData, p.mark)\n\t\tif err != nil {\n\t\t\treportStats(ctx)\n\t\t\treturn nil, fmt.Errorf(\"Failed to dial remote: %s\", err)\n\t\t}\n\t\treturn conn, nil\n\t}\n\n\t// Create an encrypted downstream transport. We will mark the downstream connection\n\t// to let the iptables rule capture it.\n\tencryptedTransport := &http.Transport{\n\t\tTLSClientConfig:     p.tlsClientConfig,\n\t\tDialContext:         networkDialerWithContext,\n\t\tMaxIdleConnsPerHost: 2000,\n\t\tMaxIdleConns:        2000,\n\t\tForceAttemptHTTP2:   true,\n\t}\n\n\t// Create an unencrypted transport for talking to the application. If encryption\n\t// is selected do not verify the certificates. This is supposed to be inside the\n\t// same system. 
TODO: use pinned certificates.\n\ttransport := &http.Transport{\n\t\tTLSClientConfig: &tls.Config{\n\t\t\tInsecureSkipVerify: true,\n\t\t\tGetClientCertificate: func(*tls.CertificateRequestInfo) (*tls.Certificate, error) { // nolint\n\t\t\t\tp.RLock()\n\t\t\t\tdefer p.RUnlock()\n\t\t\t\treturn p.cert, nil\n\t\t\t},\n\t\t},\n\t\tDialContext:         appDialerWithContext,\n\t\tMaxIdleConns:        2000,\n\t\tMaxIdleConnsPerHost: 2000,\n\t}\n\n\t// Create the forwarding proxies toward the network and the application.\n\tvar err error\n\tp.fwdTLS, err = forward.New(\n\t\tforward.RoundTripper(encryptedTransport),\n\t\tforward.WebsocketTLSClientConfig(&tls.Config{RootCAs: p.ca}),\n\t\tforward.WebSocketNetDial(netDial),\n\t\tforward.BufferPool(bufferpool.NewPool(32*1024)),\n\t\tforward.ErrorHandler(TriremeHTTPErrHandler{}),\n\t)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Cannot initialize encrypted transport: %s\", err)\n\t}\n\n\tp.fwd, err = forward.New(\n\t\tforward.RoundTripper(NewTriremeRoundTripper(transport)),\n\t\tforward.WebsocketTLSClientConfig(&tls.Config{InsecureSkipVerify: true}),\n\t\tforward.WebSocketNetDial(appDial),\n\t\tforward.BufferPool(bufferpool.NewPool(32*1024)),\n\t\tforward.ErrorHandler(TriremeHTTPErrHandler{}),\n\t)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Cannot initialize unencrypted transport: %s\", err)\n\t}\n\n\tprocessor := p.processAppRequest\n\tif !p.applicationProxy {\n\t\tprocessor = p.processNetRequest\n\t}\n\n\tp.server = &http.Server{\n\t\tHandler: http.HandlerFunc(processor),\n\t}\n\n\tgo func() {\n\t\t<-ctx.Done()\n\t\tp.server.Close() // nolint\n\t}()\n\tgo p.server.Serve(l) // nolint\n\n\treturn nil\n}\n\n// ShutDown terminates the server.\nfunc (p *Config) ShutDown() error {\n\treturn p.server.Close()\n}\n\n// UpdateSecrets updates the secrets of the proxy.\nfunc (p *Config) UpdateSecrets(cert *tls.Certificate, caPool *x509.CertPool, s secrets.Secrets, certPEM, keyPEM string) {\n\tp.Lock()\n\tp.cert = cert\n\tp.ca = caPool\n\tp.secrets = 
s\n\tp.certPEM = certPEM\n\tp.keyPEM = keyPEM\n\tp.tlsClientConfig.RootCAs = caPool\n\tp.Unlock()\n\n\tp.metadata.UpdateSecrets([]byte(certPEM), []byte(keyPEM))\n\tp.auth.UpdateSecrets(s)\n}\n\n// GetCertificateFunc implements the TLS interface for getting the certificate. This\n// allows us to update the certificates of the connection on the fly.\nfunc (p *Config) GetCertificateFunc(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {\n\tp.RLock()\n\tdefer p.RUnlock()\n\t// First we check if this is a direct access to the public port. In this case\n\t// we will use the service public certificate. Otherwise, we will return the\n\t// enforcer certificate since this is internal access.\n\tif mconn, ok := clientHello.Conn.(*markedconn.ProxiedConnection); ok {\n\t\tip, port := mconn.GetOriginalDestination()\n\t\tportContext, err := serviceregistry.Instance().RetrieveExposedServiceContext(ip, port, \"\")\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"service not available: %s %d\", ip.String(), port)\n\t\t}\n\t\tservice := portContext.Service\n\t\tif service.PublicNetworkInfo != nil && service.PublicNetworkInfo.Ports.Min == uint16(port) && len(service.PublicServiceCertificate) > 0 {\n\t\t\ttlsCert, err := tls.X509KeyPair(service.PublicServiceCertificate, service.PublicServiceCertificateKey)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"failed to parse server certificate: %s\", err)\n\t\t\t}\n\t\t\treturn &tlsCert, nil\n\t\t}\n\t\tif p.cert != nil {\n\n\t\t\tcert, err := x509.ParseCertificate(p.cert.Certificate[0])\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"Leaf cert is missing\")\n\t\t\t}\n\t\t\tif cert != nil {\n\t\t\t\tby, _ := x509CertToPem(cert)\n\t\t\t\tpemCert, err := buildCertChain(by, p.secrets.CertAuthority())\n\t\t\t\tif err != nil {\n\t\t\t\t\tzap.L().Error(\"http: Cannot build the cert chain\")\n\t\t\t\t\treturn nil, fmt.Errorf(\"Cannot build the cert chain\")\n\t\t\t\t}\n\t\t\t\tvar certChain 
tls.Certificate\n\t\t\t\tvar certDERBlock *pem.Block\n\t\t\t\tfor {\n\t\t\t\t\tcertDERBlock, pemCert = pem.Decode(pemCert)\n\t\t\t\t\tif certDERBlock == nil {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t\tif certDERBlock.Type == typeCertificate {\n\t\t\t\t\t\tcertChain.Certificate = append(certChain.Certificate, certDERBlock.Bytes)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tcertChain.PrivateKey = p.cert.PrivateKey\n\t\t\t\treturn &certChain, nil\n\t\t\t}\n\t\t\treturn p.cert, nil\n\t\t}\n\t\treturn nil, fmt.Errorf(\"no cert available - cert is nil\")\n\t}\n\tif p.cert != nil {\n\t\treturn p.cert, nil\n\t}\n\treturn nil, fmt.Errorf(\"no cert available - cert is nil\")\n}\n\nfunc buildCertChain(certPEM, caPEM []byte) ([]byte, error) {\n\tzap.L().Debug(\"http: buildCertChain input\", zap.String(\"certPEM\", string(certPEM)), zap.String(\"caPEM\", string(caPEM)))\n\tcertChain := []*x509.Certificate{}\n\tclientPEMBlock := certPEM\n\n\tderBlock, _ := pem.Decode(clientPEMBlock)\n\tif derBlock != nil {\n\t\tif derBlock.Type == typeCertificate {\n\t\t\tcert, err := x509.ParseCertificate(derBlock.Bytes)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tcertChain = append(certChain, cert)\n\t\t} else {\n\t\t\treturn nil, fmt.Errorf(\"invalid pem block type: %s\", derBlock.Type)\n\t\t}\n\t}\n\tvar certDERBlock *pem.Block\n\tfor {\n\t\tcertDERBlock, caPEM = pem.Decode(caPEM)\n\t\tif certDERBlock == nil {\n\t\t\tbreak\n\t\t}\n\t\tif certDERBlock.Type == typeCertificate {\n\t\t\tcert, err := x509.ParseCertificate(certDERBlock.Bytes)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tcertChain = append(certChain, cert)\n\t\t} else {\n\t\t\treturn nil, fmt.Errorf(\"invalid pem block type: %s\", certDERBlock.Type)\n\t\t}\n\t}\n\tby, _ := x509CertChainToPem(certChain)\n\tzap.L().Debug(\"http: After building the cert chain\", zap.String(\"certChain\", string(by)))\n\treturn 
x509CertChainToPem(certChain)\n}\n\n// x509CertChainToPem converts a chain of x509 certificates to PEM bytes.\nfunc x509CertChainToPem(certChain []*x509.Certificate) ([]byte, error) {\n\tvar pemBytes bytes.Buffer\n\tfor _, cert := range certChain {\n\t\tif err := pem.Encode(&pemBytes, &pem.Block{Type: typeCertificate, Bytes: cert.Raw}); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn pemBytes.Bytes(), nil\n}\n\n// x509CertToPem converts an x509 certificate to PEM bytes.\nfunc x509CertToPem(cert *x509.Certificate) ([]byte, error) {\n\tvar pemBytes bytes.Buffer\n\tif err := pem.Encode(&pemBytes, &pem.Block{Type: typeCertificate, Bytes: cert.Raw}); err != nil {\n\t\treturn nil, err\n\t}\n\treturn pemBytes.Bytes(), nil\n}\n\nfunc (p *Config) processAppRequest(w http.ResponseWriter, r *http.Request) {\n\n\tzap.L().Debug(\"Processing Application Request\", zap.String(\"URI\", r.RequestURI), zap.String(\"Host\", r.Host))\n\toriginalDestination := r.Context().Value(http.LocalAddrContextKey).(*net.TCPAddr)\n\n\t// Authorize the request by calling the authorizer library.\n\tauthRequest := &apiauth.Request{\n\t\tOriginalDestination: originalDestination,\n\t\tMethod:              r.Method,\n\t\tURL:                 r.URL,\n\t\tRequestURI:          r.RequestURI,\n\t}\n\n\tresp, err := p.auth.ApplicationRequest(authRequest)\n\tif err != nil {\n\t\tif resp.PUContext != nil {\n\t\t\tstate := flowstats.NewAppConnectionState(p.puContext, r, authRequest, resp)\n\t\t\tstate.Stats.Action = resp.Action\n\t\t\tstate.Stats.PolicyID = resp.NetworkPolicyID\n\t\t\tp.collector.CollectFlowEvent(state.Stats)\n\t\t}\n\t\thttp.Error(w, err.Error(), err.(*apiauth.AuthError).Status())\n\t\treturn\n\t}\n\n\tstate := flowstats.NewAppConnectionState(p.puContext, r, authRequest, resp)\n\tif resp.External {\n\t\tdefer p.collector.CollectFlowEvent(state.Stats)\n\t}\n\n\tif resp.HookMethod != \"\" {\n\t\tif hook, ok := p.hooks[resp.HookMethod]; ok {\n\t\t\tif isHook, err := hook(w, r); err != nil || isHook {\n\t\t\t\tif err != nil 
{\n\t\t\t\t\tstate.Stats.Action = policy.Reject\n\t\t\t\t\tstate.Stats.DropReason = collector.PolicyDrop\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t} else {\n\t\t\thttp.Error(w, \"Invalid hook configuration\", http.StatusInternalServerError)\n\t\t\treturn\n\t\t}\n\t}\n\n\thttpScheme := \"http://\"\n\tif resp.TLSListener {\n\t\thttpScheme = \"https://\"\n\t}\n\n\t// Create the new target URL based on the Host parameter that we had.\n\tr.URL, err = url.ParseRequestURI(httpScheme + r.Host)\n\tif err != nil {\n\t\thttp.Error(w, \"Invalid destination host name\", http.StatusUnprocessableEntity)\n\t\treturn\n\t}\n\n\t// Add the headers with the authorization parameters and public key. The other side\n\t// must validate our public key.\n\tp.RLock()\n\tr.Header.Add(\"X-APORETO-KEY\", string(p.secrets.TransmittedKey()))\n\tp.RUnlock()\n\tr.Header.Add(\"X-APORETO-AUTH\", resp.Token)\n\n\tcontextWithStats := context.WithValue(r.Context(), statsContextKey, state)\n\t// Forward the request.\n\tp.fwdTLS.ServeHTTP(w, r.WithContext(contextWithStats))\n}\n\nfunc (p *Config) processNetRequest(w http.ResponseWriter, r *http.Request) {\n\n\tzap.L().Debug(\"Processing Network Request\", zap.String(\"URI\", r.RequestURI), zap.String(\"Host\", r.Host))\n\toriginalDestination := r.Context().Value(http.LocalAddrContextKey).(*net.TCPAddr)\n\n\tsourceAddress, err := net.ResolveTCPAddr(\"tcp\", r.RemoteAddr)\n\tif err != nil {\n\t\tzap.L().Error(\"Internal server error - cannot determine source address information\", zap.Error(err))\n\t\thttp.Error(w, \"Invalid network information\", http.StatusForbidden)\n\t\treturn\n\t}\n\n\trequestCookie, _ := r.Cookie(\"X-APORETO-AUTH\") // nolint errcheck\n\n\tpr := &collector.PingReport{}\n\n\trequest := &apiauth.Request{\n\t\tOriginalDestination: originalDestination,\n\t\tSourceAddress:       sourceAddress,\n\t\tHeader:              r.Header,\n\t\tURL:                 r.URL,\n\t\tMethod:              r.Method,\n\t\tRequestURI:          
r.RequestURI,\n\t\tCookie:              requestCookie,\n\t\tTLS:                 r.TLS,\n\t}\n\n\tresponse, err := p.auth.NetworkRequest(r.Context(), request)\n\n\tvar userID string\n\tif response != nil && len(response.UserAttributes) > 0 {\n\t\tuserData := &collector.UserRecord{\n\t\t\tNamespace: response.Namespace,\n\t\t\tClaims:    response.UserAttributes,\n\t\t}\n\t\tp.collector.CollectUserEvent(userData)\n\t\tuserID = userData.ID\n\t}\n\n\tstate := flowstats.NewNetworkConnectionState(p.puContext, userID, request, response)\n\tdefer func() {\n\t\tif response != nil && response.PingConfig != nil {\n\t\t\tpr.PingID = response.PingConfig.PingID\n\t\t\tpr.IterationID = response.PingConfig.IterationID\n\t\t\tpr.Type = gaia.PingProbeTypeRequest\n\t\t\tpr.RemotePUID = response.SourcePUID\n\t\t\tpr.PUID = response.PUContext.ManagementID()\n\t\t\tpr.Namespace = response.Namespace\n\t\t\tpr.PayloadSize = response.PingConfig.PayloadSize\n\t\t\tpr.PayloadSizeType = gaia.PingProbePayloadSizeTypeReceived\n\t\t\tpr.Protocol = 6\n\t\t\tpr.ServiceType = \"L7\"\n\t\t\tpr.FourTuple = fmt.Sprintf(\"%s:%s:%d:%d\",\n\t\t\t\tsourceAddress.IP.String(),\n\t\t\t\toriginalDestination.IP.String(),\n\t\t\t\tsourceAddress.Port,\n\t\t\t\toriginalDestination.Port)\n\t\t\tpr.PolicyID = response.NetworkPolicyID\n\t\t\tpr.PolicyAction = response.Action\n\t\t\tpr.ServiceID = response.ServiceID\n\t\t\tpr.AgentVersion = p.agentVersion.String()\n\t\t\tpr.RemoteEndpointType = collector.EndPointTypePU\n\t\t\tpr.IsServer = true\n\t\t\tpr.Claims = response.PingConfig.Claims\n\t\t\tpr.ClaimsType = gaia.PingProbeClaimsTypeReceived\n\t\t\tpr.RemoteNamespaceType = gaia.PingProbeRemoteNamespaceTypePlain\n\t\t\tpr.TargetTCPNetworks = true\n\t\t\tpr.ExcludedNetworks = false\n\n\t\t\tif len(r.TLS.PeerCertificates) > 0 {\n\t\t\t\tif len(r.TLS.PeerCertificates[0].Subject.Organization) > 0 {\n\t\t\t\t\tpr.RemoteNamespace = r.TLS.PeerCertificates[0].Subject.Organization[0]\n\t\t\t\t}\n\t\t\t\tpr.PeerCertIssuer = 
r.TLS.PeerCertificates[0].Issuer.String()\n\t\t\t\tpr.PeerCertSubject = r.TLS.PeerCertificates[0].Subject.String()\n\t\t\t\tpr.PeerCertExpiry = r.TLS.PeerCertificates[0].NotAfter\n\n\t\t\t\tif found, controller := pcommon.ExtractExtension(x509extensions.Controller(), r.TLS.PeerCertificates[0].Extensions); found {\n\t\t\t\t\tpr.RemoteController = string(controller)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tp.collector.CollectPingEvent(pr)\n\t\t} else {\n\t\t\tp.collector.CollectFlowEvent(state.Stats)\n\t\t}\n\t}()\n\n\tif err != nil {\n\n\t\tzap.L().Debug(\"Authorization error\",\n\t\t\tzap.Error(err),\n\t\t\tzap.String(\"URI\", r.RequestURI),\n\t\t\tzap.String(\"Host\", r.Host),\n\t\t)\n\t\tauthError, ok := err.(*apiauth.AuthError)\n\t\tif !ok {\n\t\t\thttp.Error(w, \"Internal type error\", http.StatusInternalServerError)\n\t\t\treturn\n\t\t}\n\n\t\tif response == nil {\n\t\t\t// Basic errors are captured here.\n\t\t\thttp.Error(w, authError.Message(), authError.Status())\n\t\t\treturn\n\t\t}\n\n\t\tif response.PingConfig != nil {\n\t\t\tpr.Error = response.DropReason\n\t\t}\n\n\t\tif !response.Redirect {\n\t\t\t// If there is no redirect, we also return an error.\n\t\t\thttp.Error(w, authError.Message(), authError.Status())\n\t\t\treturn\n\t\t}\n\n\t\t// Redirect logic. Populate information here. This is forcing a\n\t\t// redirect rather than an error.\n\t\tif response.Cookie != nil {\n\t\t\thttp.SetCookie(w, response.Cookie)\n\t\t}\n\t\tw.Header().Add(\"Location\", response.RedirectURI)\n\t\thttp.Error(w, response.Data, authError.Status())\n\n\t\treturn\n\t}\n\n\t// Select as http or https for communication with listening service.\n\thttpPrefix := \"http://\"\n\tif response.TLSListener {\n\t\thttpPrefix = \"https://\"\n\t}\n\n\t// Create the target URI. Websocket Gorilla proxy takes it from the URL. 
For normal\n\t// connections we don't want that.\n\tif forward.IsWebsocketRequest(r) {\n\t\tr.URL, err = url.ParseRequestURI(httpPrefix + originalDestination.String())\n\t} else {\n\t\tr.URL, err = url.ParseRequestURI(httpPrefix + r.Host)\n\t}\n\tif err != nil {\n\t\tstate.Stats.DropReason = collector.InvalidFormat\n\t\thttp.Error(w, fmt.Sprintf(\"Invalid HTTP Host parameter: %s\", err), http.StatusBadRequest)\n\t\treturn\n\t}\n\n\t// Update the request headers with the user attributes as defined by the mappings\n\tr.Header = response.Header\n\n\t// Update the statistics and forward the request. We always encrypt downstream\n\tstate.Stats.Action = policy.Accept | policy.Encrypt | policy.Log\n\n\t// // Treat the remote proxy scenario where the destination IPs are in a remote\n\t// // host. Check of network rules that allow this transfer and report the corresponding\n\t// // flows.\n\t// if _, ok := p.localIPs[originalDestination.IP.String()]; !ok {\n\t// \t_, action, err := pctx.PUContext.ApplicationACLPolicyFromAddr(originalDestination.IP, uint16(originalDestination.Port))\n\t// \tif err != nil || action.Action.Rejected() {\n\t// \t\tdefer p.collector.CollectFlowEvent(reportDownStream(state.stats, action))\n\t// \t\thttp.Error(w, fmt.Sprintf(\"Access denied by network policy to downstream IP: %s\", originalDestination.IP.String()), http.StatusNetworkAuthenticationRequired)\n\t// \t\treturn\n\t// \t}\n\t// \tif action.Action.Accepted() {\n\t// \t\tdefer p.collector.CollectFlowEvent(reportDownStream(state.stats, action))\n\t// \t}\n\t// }\n\n\tcontextWithStats := context.WithValue(r.Context(), statsContextKey, state)\n\tp.fwd.ServeHTTP(w, r.WithContext(contextWithStats))\n\tzap.L().Debug(\"Forwarding Request\", zap.String(\"URI\", r.RequestURI), zap.String(\"Host\", r.Host))\n}\n\nfunc (p *Config) policyHook(w http.ResponseWriter, r *http.Request) (bool, error) {\n\tif r.Header.Get(common.MetadataKey) != common.MetadataValue {\n\t\thttp.Error(w, \"unauthorized 
request for policy\", http.StatusForbidden)\n\t\treturn true, fmt.Errorf(\"unauthorized\")\n\t}\n\n\tdata, _, err := p.metadata.GetCurrentPolicy()\n\tif err != nil {\n\t\thttp.Error(w, \"Unable to retrieve current policy\", http.StatusInternalServerError)\n\t\treturn true, err\n\t}\n\tif _, err := w.Write(data); err != nil {\n\t\tzap.L().Error(\"Unable to write policy response\")\n\t}\n\n\treturn true, nil\n}\n\nfunc (p *Config) certificateHook(w http.ResponseWriter, r *http.Request) (bool, error) {\n\tif r.Header.Get(common.MetadataKey) != common.MetadataValue {\n\t\thttp.Error(w, \"unauthorized request for certificate\", http.StatusForbidden)\n\t\treturn true, fmt.Errorf(\"unauthorized\")\n\t}\n\n\tif _, err := w.Write(p.metadata.GetCertificate()); err != nil {\n\t\tzap.L().Error(\"Unable to write response\")\n\t}\n\n\treturn true, nil\n}\n\nfunc (p *Config) keyHook(w http.ResponseWriter, r *http.Request) (bool, error) {\n\tif r.Header.Get(common.MetadataKey) != common.MetadataValue {\n\t\thttp.Error(w, \"unauthorized request for private key\", http.StatusForbidden)\n\t\treturn true, fmt.Errorf(\"unauthorized\")\n\t}\n\n\tif _, err := w.Write(p.metadata.GetPrivateKey()); err != nil {\n\t\tzap.L().Error(\"Unable to write response\")\n\t}\n\n\treturn true, nil\n}\n\nfunc (p *Config) healthHook(w http.ResponseWriter, r *http.Request) (bool, error) {\n\n\t// Health hook will only return ok if the current policy is already populated.\n\tplc, _, err := p.metadata.GetCurrentPolicy()\n\tif err != nil || plc == nil {\n\t\thttp.Error(w, \"Unable to retrieve current policy\", http.StatusInternalServerError)\n\t\treturn true, err\n\t}\n\n\tif _, err := w.Write([]byte(\"OK\\n\")); err != nil {\n\t\tzap.L().Error(\"Unable to write response to health API\")\n\t}\n\treturn true, nil\n}\n\nfunc (p *Config) tokenHook(w http.ResponseWriter, r *http.Request) (bool, error) {\n\n\tif r.Header.Get(common.MetadataKey) != common.MetadataValue {\n\t\thttp.Error(w, \"unauthorized request 
for token\", http.StatusForbidden)\n\t\treturn true, fmt.Errorf(\"unauthorized\")\n\t}\n\n\taudience := r.URL.Query().Get(\"audience\")\n\tvalidityString := r.URL.Query().Get(\"validity\")\n\n\tvalidity := time.Minute * 60\n\tvar err error\n\tif validityString != \"\" {\n\t\tvalidity, err = time.ParseDuration(validityString)\n\t\tif err != nil {\n\t\t\thttp.Error(w, \"Invalid validity time requested. Please use notation of number+unit. Example: `10m`\", http.StatusUnprocessableEntity)\n\t\t\treturn true, nil\n\t\t}\n\t}\n\n\ttoken, err := p.tokenIssuer.Issue(r.Context(), p.puContext, common.ServiceTokenTypeOAUTH, audience, validity)\n\tif err != nil {\n\t\thttp.Error(w, fmt.Sprintf(\"Unable to issue token: %s\", err), http.StatusBadRequest)\n\t\treturn true, nil\n\t}\n\n\tif _, err := w.Write([]byte(token)); err != nil {\n\t\tzap.L().Error(\"Unable to write response on token API\")\n\t}\n\treturn true, nil\n}\n\nfunc (p *Config) awsInfoHook(w http.ResponseWriter, r *http.Request) (bool, error) {\n\n\tif err := validateAWSHeaders(r); err != nil {\n\t\thttp.Error(w, fmt.Sprintf(\"invalid user agent: %s\", err), http.StatusForbidden)\n\t\treturn true, err\n\t}\n\n\tawsRole, id, err := p.awsRole()\n\tif err != nil {\n\t\treturn true, err\n\t}\n\n\ttype info struct {\n\t\tCode               string    `json:\"Code,omitempty\"`\n\t\tLastUpdated        time.Time `json:\"LastUpdated,omitempty\"`\n\t\tInstanceProfileArn string    `json:\"InstanceProfileArn,omitempty\"`\n\t\tInstanceProfileID  string    `json:\"InstanceProfileId,omitempty\"`\n\t}\n\n\tout := &info{\n\t\tCode:               \"Success\",\n\t\tLastUpdated:        time.Now(),\n\t\tInstanceProfileArn: awsRole,\n\t\tInstanceProfileID:  id,\n\t}\n\n\tdata, err := json.MarshalIndent(out, \" \", \" \")\n\tif err != nil {\n\t\treturn true, fmt.Errorf(\"error in marshall of info: %s\", err)\n\t}\n\n\tif _, err = w.Write(data); err != nil {\n\t\treturn true, fmt.Errorf(\"unable to write data response: %s\", 
err)\n\t}\n\n\treturn true, nil\n}\n\nfunc (p *Config) awsTokenHook(w http.ResponseWriter, r *http.Request) (bool, error) {\n\n\tif err := validateAWSHeaders(r); err != nil {\n\t\thttp.Error(w, fmt.Sprintf(\"invalid user agent: %s\", err), http.StatusForbidden)\n\t\treturn true, err\n\t}\n\n\tawsRole, id, err := p.awsRole()\n\tif err != nil {\n\t\treturn true, err\n\t}\n\n\tawsRoleParts := strings.Split(awsRole, \"/\")\n\tif len(awsRoleParts) == 0 {\n\t\thttp.Error(w, fmt.Sprintf(\"invalid role: %s\", awsRole), http.StatusNotFound)\n\t\treturn true, fmt.Errorf(\"invalid role: %s\", awsRole)\n\t}\n\n\tawsRoleName := awsRoleParts[len(awsRoleParts)-1]\n\n\tif strings.HasSuffix(r.RequestURI, \"security-credentials/\") {\n\t\tif _, err := w.Write([]byte(awsRoleName)); err != nil {\n\t\t\treturn true, err\n\t\t}\n\t\treturn true, nil\n\t}\n\n\tif !strings.HasSuffix(r.RequestURI, \"security-credentials/\"+awsRoleName) {\n\t\thttp.Error(w, \"not found\", http.StatusNotFound)\n\t\treturn true, fmt.Errorf(\"not found\")\n\t}\n\n\ttoken, err := p.tokenIssuer.Issue(r.Context(), id, common.ServiceTokenTypeAWS, awsRole, time.Hour)\n\tif err != nil {\n\t\thttp.Error(w, fmt.Sprintf(\"Unable to issue token: %s\", err), http.StatusBadRequest)\n\t\treturn true, nil\n\t}\n\n\tif _, err := w.Write([]byte(token)); err != nil {\n\t\tzap.L().Error(\"Unable to write response on token API\")\n\t}\n\treturn true, nil\n}\n\nfunc (p *Config) awsRole() (string, string, error) {\n\n\t_, plc, err := p.metadata.GetCurrentPolicy()\n\tif err != nil {\n\t\treturn \"\", \"\", err\n\t}\n\n\tawsRole := \"\"\n\tfor _, scope := range plc.Scopes {\n\t\tif strings.HasPrefix(scope, common.AWSRoleARNPrefix) {\n\t\t\tif awsRole != \"\" && awsRole != scope[len(common.AWSRolePrefix):] {\n\t\t\t\treturn \"\", \"\", fmt.Errorf(\"overlapping roles detected\")\n\t\t\t}\n\t\t\tawsRole = scope[len(common.AWSRolePrefix):]\n\t\t}\n\t}\n\n\tif awsRole == \"\" {\n\t\treturn \"\", \"\", fmt.Errorf(\"role not 
found\")\n\t}\n\n\treturn awsRole, plc.ManagementID, nil\n}\n\nvar (\n\tallowedAgents = []string{\"aws-cli/\", \"aws-chalice/\", \"Boto3/\", \"Botocore/\", \"aws-sdk-\"}\n)\n\nfunc validateAWSHeaders(r *http.Request) error {\n\n\tuserAgent, ok := r.Header[\"User-Agent\"]\n\tif !ok {\n\t\treturn fmt.Errorf(\"no user-agent provided\")\n\t}\n\n\tfor _, u := range userAgent {\n\t\tfor _, t := range allowedAgents {\n\t\t\tif strings.HasPrefix(u, t) {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\t}\n\n\treturn fmt.Errorf(\"invalid user agent: %v\", userAgent)\n}\n\n// func reportDownStream(record *collector.FlowRecord, action *policy.FlowPolicy) *collector.FlowRecord {\n// \treturn &collector.FlowRecord{\n// \t\tContextID: record.ContextID,\n// \t\tDestination: &collector.EndPoint{\n// \t\t\tURI:        record.Destination.URI,\n// \t\t\tHTTPMethod: record.Destination.HTTPMethod,\n// \t\t\tType:       collector.EndPointTypeExternalIP,\n// \t\t\tPort:       record.Destination.Port,\n// \t\t\tIP:         record.Destination.IP,\n// \t\t\tID:         action.ServiceID,\n// \t\t},\n// \t\tSource: &collector.EndPoint{\n// \t\t\tType: record.Destination.Type,\n// \t\t\tID:   record.Destination.ID,\n// \t\t\tIP:   \"0.0.0.0\",\n// \t\t},\n// \t\tAction:      action.Action,\n// \t\tL4Protocol:  record.L4Protocol,\n// \t\tServiceType: record.ServiceType,\n// \t\tServiceID:   record.ServiceID,\n// \t\tTags:        record.Tags,\n// \t\tPolicyID:    action.PolicyID,\n// \t\tCount:       1,\n// \t}\n// }\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/http/ping_http.go",
    "content": "package httpproxy\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/apiauth\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/markedconn\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/serviceregistry\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/servicetokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/gaia\"\n\t\"go.aporeto.io/gaia/x509extensions\"\n\t\"go.uber.org/zap\"\n)\n\nconst fourTupleKey = \"fourTuple\"\n\ntype fourTuple struct {\n\tsourceAddress      net.IP\n\tdestinationAddress net.IP\n\tsourcePort         int\n\tdestinationPort    int\n}\n\n// InitiatePing starts an encrypted connection to the given config.\nfunc (p *Config) InitiatePing(ctx context.Context, sctx *serviceregistry.ServiceContext, sdata *serviceregistry.DependentServiceData, pingConfig *policy.PingConfig) error {\n\n\tzap.L().Debug(\"Initiating L7 ping\")\n\n\tfor i := 0; i < pingConfig.Iterations; i++ {\n\t\tif err := p.sendPingRequest(ctx, pingConfig, sctx, sdata, i); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (p *Config) sendPingRequest(\n\tctx context.Context,\n\tpingConfig *policy.PingConfig,\n\tsctx *serviceregistry.ServiceContext,\n\tsdata *serviceregistry.DependentServiceData,\n\titerationID int) error {\n\n\tpingID := pingConfig.ID\n\tdestIP := pingConfig.IP\n\tdestPort := pingConfig.Port\n\n\t_, netaction, _ := sctx.PUContext.ApplicationACLPolicyFromAddr(destIP, destPort, packet.IPProtocolTCP)\n\n\tpingErr := \"dial\"\n\tif e := pingConfig.Error(); e != \"\" {\n\t\tpingErr = e\n\t}\n\n\tpr := 
&collector.PingReport{\n\t\tPingID:               pingID,\n\t\tIterationID:          iterationID,\n\t\tServiceID:            sdata.APICache.ID,\n\t\tPUID:                 sctx.PUContext.ManagementID(),\n\t\tNamespace:            sctx.PUContext.ManagementNamespace(),\n\t\tProtocol:             6,\n\t\tServiceType:          \"L7\",\n\t\tAgentVersion:         p.agentVersion.String(),\n\t\tApplicationListening: false,\n\t\tACLPolicyID:          netaction.PolicyID,\n\t\tACLPolicyAction:      netaction.Action,\n\t\tError:                pingErr,\n\t\tTargetTCPNetworks:    pingConfig.TargetTCPNetworks,\n\t\tExcludedNetworks:     pingConfig.ExcludedNetworks,\n\t\tType:                 gaia.PingProbeTypeRequest,\n\t\tRemoteEndpointType:   collector.EndPointTypeExternalIP,\n\t\tClaims:               sctx.PUContext.Identity().GetSlice(),\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypePlain,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeTransmitted,\n\t}\n\n\tft := &fourTuple{}\n\n\tp.RLock()\n\tencodingKey := p.secrets.EncodingKey()\n\tpubKey := p.secrets.TransmittedKey()\n\tp.RUnlock()\n\n\tpingPayload := &policy.PingPayload{\n\t\tPingID:      pingID,\n\t\tIterationID: iterationID,\n\t}\n\n\ttoken, err := servicetokens.CreateAndSign(\n\t\t\"\",\n\t\tsctx.PUContext.Identity().GetSlice(),\n\t\tsctx.PUContext.Scopes(),\n\t\tsctx.PUContext.ManagementID(),\n\t\tapiauth.DefaultValidity,\n\t\tencodingKey,\n\t\tpingPayload,\n\t)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tnetworkDialerWithContext := func(ctx context.Context, _, addr string) (net.Conn, error) {\n\n\t\tconn, err := dial(ctx, addr, p.mark)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to dial remote: %s\", err)\n\t\t}\n\n\t\tif v := ctx.Value(fourTupleKey); v != nil {\n\t\t\tif r, ok := v.(*fourTuple); ok {\n\t\t\t\tladdr := conn.LocalAddr().(*net.TCPAddr)\n\t\t\t\traddr := 
conn.RemoteAddr().(*net.TCPAddr)\n\t\t\t\tr.sourceAddress = laddr.IP\n\t\t\t\tr.sourcePort = laddr.Port\n\t\t\t\tr.destinationAddress = raddr.IP\n\t\t\t\tr.destinationPort = raddr.Port\n\t\t\t}\n\t\t}\n\n\t\treturn conn, nil\n\t}\n\n\traddr := &net.TCPAddr{\n\t\tIP:   destIP,\n\t\tPort: int(destPort),\n\t}\n\n\t// ServerName: Use first configured FQDN or the destination IP\n\tserverName, err := common.GetTLSServerName(raddr.String(), sdata.ServiceObject)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to get the server name: %s\", err)\n\t}\n\n\t// Used to validate the hostname in the returned server certs.\n\t// TODO: Maybe we should elevate this as first class citizen ?\n\tp.tlsClientConfig.ServerName = serverName\n\n\tencryptedTransport := &http.Transport{\n\t\tTLSClientConfig:     p.tlsClientConfig,\n\t\tDialContext:         networkDialerWithContext,\n\t\tMaxIdleConnsPerHost: 2000,\n\t\tMaxIdleConns:        2000,\n\t\tForceAttemptHTTP2:   true,\n\t}\n\n\tclient := &http.Client{\n\t\tTransport: encryptedTransport,\n\t\tTimeout:   5 * time.Second,\n\t}\n\n\thost := fmt.Sprintf(\"https://%s:%d\", destIP, destPort)\n\tctxWithReport := context.WithValue(ctx, fourTupleKey, ft) // nolint: golint,staticcheck\n\treq, err := http.NewRequestWithContext(ctxWithReport, \"GET\", host, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tdefer p.collector.CollectPingEvent(pr)\n\n\tpr.PayloadSize = len(pubKey) + len(token)\n\n\treq.Header.Add(\"X-APORETO-KEY\", string(pubKey))\n\treq.Header.Add(\"X-APORETO-AUTH\", token)\n\n\tstartTime := time.Now()\n\tres, err := client.Do(req)\n\tif err != nil {\n\t\tpr.Error = err.Error()\n\t\tpr.FourTuple = fmt.Sprintf(\n\t\t\t\"%s:%s:%d:%d\",\n\t\t\tft.sourceAddress.String(),\n\t\t\tft.destinationAddress.String(),\n\t\t\tft.sourcePort,\n\t\t\tft.destinationPort,\n\t\t)\n\t\treturn err\n\t}\n\n\tres.Body.Close() // nolint: errcheck\n\n\tpr.Error = \"\"\n\tpr.RTT = time.Since(startTime).String()\n\tpr.ApplicationListening = true\n\tpr.Type 
= gaia.PingProbeTypeResponse\n\tpr.FourTuple = fmt.Sprintf(\n\t\t\"%s:%s:%d:%d\",\n\t\tft.destinationAddress.String(),\n\t\tft.sourceAddress.String(),\n\t\tft.destinationPort,\n\t\tft.sourcePort,\n\t)\n\n\tif len(res.TLS.PeerCertificates) > 0 {\n\t\tpr.RemotePUID = res.TLS.PeerCertificates[0].Subject.CommonName\n\t\tpr.RemoteEndpointType = collector.EndPointTypePU\n\t\tif len(res.TLS.PeerCertificates[0].Subject.Organization) > 0 {\n\t\t\tpr.RemoteNamespace = res.TLS.PeerCertificates[0].Subject.Organization[0]\n\t\t}\n\t\tpr.PeerCertIssuer = res.TLS.PeerCertificates[0].Issuer.String()\n\t\tpr.PeerCertSubject = res.TLS.PeerCertificates[0].Subject.String()\n\t\tpr.PeerCertExpiry = res.TLS.PeerCertificates[0].NotAfter\n\n\t\tif found, controller := common.ExtractExtension(x509extensions.Controller(), res.TLS.PeerCertificates[0].Extensions); found {\n\t\t\tpr.RemoteController = string(controller)\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc dial(ctx context.Context, addr string, mark int) (net.Conn, error) {\n\n\td := net.Dialer{\n\t\tTimeout: 5 * time.Second,\n\t\tControl: markedconn.ControlFunc(mark, false, nil),\n\t}\n\treturn d.DialContext(ctx, \"tcp\", addr)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/http/transport.go",
    "content": "package httpproxy\n\nimport (\n\t\"net/http\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/flowstats\"\n)\n\n// TriremeRoundTripper is the Trireme RoundTripper that will handle\n// responses.\ntype TriremeRoundTripper struct {\n\thttp.RoundTripper\n}\n\n// NewTriremeRoundTripper creates a new RoundTripper that handles the\n// responses.\nfunc NewTriremeRoundTripper(r http.RoundTripper) *TriremeRoundTripper {\n\treturn &TriremeRoundTripper{\n\t\tRoundTripper: r,\n\t}\n}\n\n// RoundTrip implements the RoundTripper interface. It will add a cookie\n// in the response in case of OIDC requests with refresh tokens.\nfunc (t *TriremeRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {\n\n\tres, err := t.RoundTripper.RoundTrip(req)\n\tif err != nil || res == nil {\n\t\treturn res, err\n\t}\n\n\tdata := req.Context().Value(statsContextKey)\n\tif data == nil {\n\t\treturn res, nil\n\t}\n\n\tstate, ok := data.(*flowstats.ConnectionState)\n\tif ok && state.Cookie == nil {\n\t\treturn res, nil\n\t}\n\n\tif v := state.Cookie.String(); v != \"\" {\n\t\tres.Header.Add(\"Set-Cookie\", v)\n\t}\n\n\treturn res, nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/markedconn/mark_linux.go",
    "content": "// +build linux\n\npackage markedconn\n\nimport (\n\t\"net\"\n\t\"syscall\"\n\n\t\"go.uber.org/zap\"\n)\n\nfunc makeListenerConfig(mark int) net.ListenConfig {\n\treturn net.ListenConfig{\n\t\tControl: func(_, _ string, c syscall.RawConn) error {\n\t\t\treturn c.Control(func(fd uintptr) {\n\t\t\t\tif err := syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_MARK, mark); err != nil {\n\t\t\t\t\tzap.L().Error(\"Failed to mark connection\", zap.Error(err))\n\t\t\t\t}\n\t\t\t\tif err := syscall.SetsockoptInt(int(fd), syscall.SOL_TCP, 23, 16*1024); err != nil {\n\t\t\t\t\tzap.L().Error(\"Cannot set tcp fast open options\", zap.Error(err))\n\t\t\t\t}\n\t\t\t})\n\t\t},\n\t}\n}\n\n// ControlFunc used in the dialer.\nfunc ControlFunc(mark int, block bool, platformData *PlatformData) Control {\n\n\treturn func(_, _ string, c syscall.RawConn) error {\n\t\treturn c.Control(func(fd uintptr) {\n\n\t\t\tif err := syscall.SetNonblock(int(fd), !block); err != nil {\n\t\t\t\tzap.L().Error(\"unable to set socket options\", zap.Error(err))\n\t\t\t}\n\t\t\tif err := syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_MARK, mark); err != nil {\n\t\t\t\tzap.L().Error(\"Failed to assing mark to socket\", zap.Error(err))\n\t\t\t}\n\t\t\tif err := syscall.SetsockoptInt(int(fd), syscall.SOL_TCP, 30, 1); err != nil {\n\t\t\t\tzap.L().Debug(\"Failed to set fast open socket option\", zap.Error(err))\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/markedconn/mark_windows.go",
    "content": "// +build windows\n\npackage markedconn\n\nimport (\n\t\"net\"\n\t\"syscall\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n\t\"go.uber.org/zap\"\n)\n\nfunc makeListenerConfig(mark int) net.ListenConfig {\n\treturn net.ListenConfig{}\n}\n\n// ControlFunc used in the dialer.\nfunc ControlFunc(mark int, block bool, platformData *PlatformData) Control {\n\n\treturn func(_, _ string, c syscall.RawConn) error {\n\t\treturn c.Control(func(fd uintptr) {\n\n\t\t\tif platformData == nil {\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\t// call FrontmanApplyDestHandle to update WFP redirect data before the connect() call on the new socket\n\t\t\tif err := frontman.Wrapper.ApplyDestHandle(fd, platformData.handle); err != nil {\n\t\t\t\tzap.L().Error(\"could not update proxy redirect\", zap.Error(err))\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/markedconn/markedconn.go",
    "content": "// +build linux windows\n\npackage markedconn\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/netinterfaces\"\n\t\"go.uber.org/zap\"\n)\n\n// Control represents the dial control used to manipulate the raw connection.\ntype Control func(network, address string, c syscall.RawConn) error\n\nfunc makeDialer(mark int, platformData *PlatformData) net.Dialer {\n\t// platformData is the destHandle\n\treturn net.Dialer{\n\t\tControl: ControlFunc(mark, true, platformData),\n\t}\n}\n\n// DialMarkedWithContext will dial a TCP connection to the provide address and mark the socket\n// with the provided mark.\nfunc DialMarkedWithContext(ctx context.Context, network string, addr string, platformData *PlatformData, mark int) (net.Conn, error) {\n\t// platformData is for Windows\n\tif platformData != nil && platformData.postConnectFunc != nil {\n\t\tdefer platformData.postConnectFunc(platformData.handle)\n\t}\n\td := makeDialer(mark, platformData)\n\n\tconn, err := d.DialContext(ctx, network, addr)\n\tif err != nil {\n\t\tzap.L().Error(\"Failed to dial to downstream node\",\n\t\t\tzap.Error(err),\n\t\t\tzap.String(\"Address\", addr),\n\t\t\tzap.String(\"Network type\", network),\n\t\t)\n\t}\n\treturn conn, err\n}\n\n// NewSocketListener will create a listener and mark the socket with the provided mark.\nfunc NewSocketListener(ctx context.Context, port string, mark int) (net.Listener, error) {\n\tlistenerCfg := makeListenerConfig(mark)\n\n\tlistener, err := listenerCfg.Listen(ctx, \"tcp\", port)\n\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"Failed to create listener: %s\", err)\n\t}\n\n\treturn ProxiedListener{\n\t\tnetListener:      listener,\n\t\tmark:             mark,\n\t\tplatformDataCtrl: NewPlatformDataControl(),\n\t}, nil\n}\n\n// ProxiedConnection is a proxied connection where we can recover the\n// original destination.\ntype ProxiedConnection struct {\n\toriginalIP            
net.IP\n\toriginalPort          int\n\toriginalTCPConnection *net.TCPConn\n\tplatformData          *PlatformData\n}\n\n// PlatformData is proxy/socket data (platform-specific)\ntype PlatformData struct {\n\thandle          uintptr\n\tpostConnectFunc func(fd uintptr)\n}\n\n// GetOriginalDestination sets the original destination of the connection.\nfunc (p *ProxiedConnection) GetOriginalDestination() (net.IP, int) {\n\treturn p.originalIP, p.originalPort\n}\n\n// GetPlatformData gets the platform-specific socket data (needed for Windows)\nfunc (p *ProxiedConnection) GetPlatformData() *PlatformData {\n\treturn p.platformData\n}\n\n// GetTCPConnection returns the TCP connection object.\nfunc (p *ProxiedConnection) GetTCPConnection() *net.TCPConn {\n\treturn p.originalTCPConnection\n}\n\n// LocalAddr implements the corresponding method of net.Conn, but returns the original\n// address.\nfunc (p *ProxiedConnection) LocalAddr() net.Addr {\n\n\treturn &net.TCPAddr{\n\t\tIP:   p.originalIP,\n\t\tPort: p.originalPort,\n\t}\n}\n\n// RemoteAddr returns the remote address\nfunc (p *ProxiedConnection) RemoteAddr() net.Addr {\n\treturn p.originalTCPConnection.RemoteAddr()\n}\n\n// Read reads data from the connection.\nfunc (p *ProxiedConnection) Read(b []byte) (n int, err error) {\n\treturn p.originalTCPConnection.Read(b)\n}\n\n// Write writes data to the connection.\nfunc (p *ProxiedConnection) Write(b []byte) (n int, err error) {\n\treturn p.originalTCPConnection.Write(b)\n}\n\n// Close closes the connection.\nfunc (p *ProxiedConnection) Close() error {\n\treturn p.originalTCPConnection.Close()\n}\n\n// SetDeadline passes the read deadline to the original TCP connection.\nfunc (p *ProxiedConnection) SetDeadline(t time.Time) error {\n\treturn p.originalTCPConnection.SetDeadline(t)\n}\n\n// SetReadDeadline implements the call by passing it to the original connection.\nfunc (p *ProxiedConnection) SetReadDeadline(t time.Time) error {\n\treturn 
p.originalTCPConnection.SetReadDeadline(t)\n}\n\n// SetWriteDeadline implements the call by passing it to the original connection.\nfunc (p *ProxiedConnection) SetWriteDeadline(t time.Time) error {\n\treturn p.originalTCPConnection.SetWriteDeadline(t)\n}\n\n// ProxiedListener is a proxied listener that uses proxied connections.\ntype ProxiedListener struct {\n\tnetListener      net.Listener\n\tmark             int\n\tplatformDataCtrl *PlatformDataControl\n}\n\ntype passFD interface {\n\tControl(func(uintptr)) error\n}\n\nfunc getOriginalDestination(conn *net.TCPConn) (net.IP, int, *PlatformData, error) { // nolint interfacer\n\n\trawconn, err := conn.SyscallConn()\n\tif err != nil {\n\t\treturn nil, 0, nil, err\n\t}\n\n\tlocalIPString, _, err := net.SplitHostPort(conn.LocalAddr().String())\n\tif err != nil {\n\t\treturn nil, 0, nil, err\n\t}\n\n\tlocalIP := net.ParseIP(localIPString)\n\n\treturn getOriginalDestPlatform(rawconn, localIP.To4() != nil)\n}\n\n// Accept implements the accept method of the interface.\nfunc (l ProxiedListener) Accept() (c net.Conn, err error) {\n\tnc, err := l.netListener.Accept()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\ttcpConn, ok := nc.(*net.TCPConn)\n\tif !ok {\n\t\tzap.L().Error(\"Received a non-TCP connection - this should never happen\", zap.Error(err))\n\t\treturn nil, fmt.Errorf(\"Not a tcp connection - ignoring\")\n\t}\n\n\tip, port, platformData, err := getOriginalDestination(tcpConn)\n\tif err != nil {\n\t\tzap.L().Error(\"Failed to discover original destination - aborting\", zap.Error(err))\n\t\treturn nil, err\n\t}\n\tl.platformDataCtrl.StorePlatformData(ip, port, platformData)\n\n\treturn &ProxiedConnection{\n\t\toriginalIP:            ip,\n\t\toriginalPort:          port,\n\t\toriginalTCPConnection: tcpConn,\n\t\tplatformData:          platformData,\n\t}, nil\n}\n\n// Addr implements the Addr method of net.Listener.\nfunc (l ProxiedListener) Addr() net.Addr {\n\treturn l.netListener.Addr()\n}\n\n// Close 
implements the Close method of the net.Listener.\nfunc (l ProxiedListener) Close() error {\n\treturn l.netListener.Close()\n}\n\n// GetInterfaces retrieves all the local interfaces.\nfunc GetInterfaces() map[string]struct{} {\n\tipmap := map[string]struct{}{}\n\n\tifaces, err := netinterfaces.GetInterfacesInfo()\n\tif err != nil {\n\t\tzap.L().Error(\"Unable to get interfaces info\", zap.Error(err))\n\t}\n\n\tfor _, iface := range ifaces {\n\t\tfor _, ip := range iface.IPs {\n\t\t\tipmap[ip.String()] = struct{}{}\n\t\t}\n\t}\n\n\treturn ipmap\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/markedconn/markedconn_darwin.go",
    "content": "// +build darwin\n\npackage markedconn\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"syscall\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/netinterfaces\"\n\t\"go.uber.org/zap\"\n)\n\n// Control represents the dial control used to manipulate the raw connection.\ntype Control func(network, address string, c syscall.RawConn) error\n\n// DialMarkedWithContext dials a TCP connection and associates a mark. Propagates the context.\nfunc DialMarkedWithContext(ctx context.Context, network string, addr string, platformData *PlatformData, mark int) (net.Conn, error) {\n\td := net.Dialer{}\n\tconn, err := d.DialContext(ctx, network, addr)\n\tif err != nil {\n\t\tzap.L().Error(\"Failed to dial to downstream node\",\n\t\t\tzap.Error(err),\n\t\t\tzap.String(\"Address\", addr),\n\t\t\tzap.String(\"Network type\", network),\n\t\t)\n\t}\n\treturn conn, err\n\n}\n\n// ControlFunc used in the dialer.\nfunc ControlFunc(mark int, block bool, platformData *PlatformData) Control {\n\treturn nil\n}\n\n// NewSocketListener creates a socket listener with marked connections.\nfunc NewSocketListener(ctx context.Context, port string, mark int) (net.Listener, error) {\n\tlistenerCfg := net.ListenConfig{}\n\tlistener, err := listenerCfg.Listen(ctx, \"tcp\", port)\n\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"Failed to create listener: %s\", err)\n\t}\n\n\treturn ProxiedListener{netListener: listener, mark: mark}, nil\n}\n\n// ProxiedConnection is a proxied connection where we can recover the\n// original destination.\ntype ProxiedConnection struct {\n\tnet.Conn\n\toriginalIP            net.IP\n\toriginalPort          int\n\toriginalTCPConnection *net.TCPConn\n}\n\n// PlatformData is proxy/socket data (platform-specific)\ntype PlatformData struct {\n\thandle          uintptr          // nolint: structcheck\n\tpostConnectFunc func(fd uintptr) // nolint: structcheck\n}\n\n// GetTCPConnection returns the TCP connection object.\nfunc (p *ProxiedConnection) 
GetTCPConnection() *net.TCPConn {\n\treturn p.originalTCPConnection\n}\n\n// GetOriginalDestination returns the original destination of the connection.\nfunc (p *ProxiedConnection) GetOriginalDestination() (net.IP, int) {\n\treturn p.originalIP, p.originalPort\n}\n\n// GetPlatformData gets the socket data (needed for Windows)\nfunc (p *ProxiedConnection) GetPlatformData() *PlatformData {\n\treturn nil\n}\n\n// ProxiedListener is a proxied listener that uses proxied connections.\ntype ProxiedListener struct {\n\tnetListener net.Listener\n\tmark        int\n}\n\n// Accept implements the accept method of the interface.\nfunc (l ProxiedListener) Accept() (c net.Conn, err error) {\n\tnc, err := l.netListener.Accept()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &ProxiedConnection{nc, net.IP{}, 0, nil}, nil\n}\n\n// Addr implements the Addr method of net.Listener.\nfunc (l ProxiedListener) Addr() net.Addr {\n\treturn l.netListener.Addr()\n}\n\n// Close implements the Close method of the net.Listener.\nfunc (l ProxiedListener) Close() error {\n\treturn l.netListener.Close()\n}\n\n// GetInterfaces retrieves all the local interfaces.\nfunc GetInterfaces() map[string]struct{} {\n\tipmap := map[string]struct{}{}\n\n\tifaces, err := netinterfaces.GetInterfacesInfo()\n\tif err != nil {\n\t\tzap.L().Debug(\"Unable to get interfaces info\", zap.Error(err))\n\t}\n\n\tfor _, iface := range ifaces {\n\t\tfor _, ip := range iface.IPs {\n\t\t\tipmap[ip.String()] = struct{}{}\n\t\t}\n\t}\n\n\treturn ipmap\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/markedconn/markedconn_test.go",
    "content": "// +build linux\n\npackage markedconn\n\nimport (\n\t\"context\"\n\t\"encoding/binary\"\n\t\"net\"\n\t\"syscall\"\n\t\"testing\"\n\t\"unsafe\"\n\n\t\"github.com/magiconair/properties/assert\"\n)\n\ntype testPassFD struct {\n}\n\nfunc (*testPassFD) Control(f func(uintptr)) error {\n\tf(0)\n\treturn nil\n}\n\nfunc TestGetOrigDestV4(t *testing.T) {\n\n\ttestGetOrigV4 := func(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) {\n\n\t\tassert.Equal(t, trap, uintptr(syscall.SYS_GETSOCKOPT), \"expected syscall trap to be SYS.GETSOCKOPT\")\n\t\tassert.Equal(t, unsafe.Alignof(a4), unsafe.Alignof(&sockaddr4{}), \"uintptr and sockaddr4 alignment must be the same\")\n\t\tsa := (*(*sockaddr4)(unsafe.Pointer(a4))) // nolint\n\t\tsa.family = syscall.AF_INET\n\n\t\tcopy(sa.data[2:6], []byte{127, 0, 0, 1})\n\t\tbinary.BigEndian.PutUint16(sa.data[:2], 3000)\n\n\t\t*(*sockaddr4)(unsafe.Pointer(a4)) = sa // nolint\n\t\treturn 0, 0, 0\n\t}\n\n\ttest := &testPassFD{}\n\tip, port, _, _ := getOriginalDestInternal(test, true, testGetOrigV4)\n\tassert.Equal(t, ip.String(), \"127.0.0.1\", \"ip should be 127.0.0.1\")\n\tassert.Equal(t, port, 3000, \"port should be 3000\")\n}\n\nfunc TestGetOrigDestV6(t *testing.T) {\n\ttestGetOrigV6 := func(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) {\n\t\tassert.Equal(t, trap, uintptr(syscall.SYS_GETSOCKOPT), \"expected syscall trap to be SYS.GETSOCKOPT\")\n\t\tassert.Equal(t, unsafe.Alignof(a4), unsafe.Alignof(&sockaddr6{}), \"uintptr and sockaddr6 alignment must be the same\")\n\t\tsa := (*(*sockaddr6)(unsafe.Pointer(a4))) // nolint\n\t\tsa.family = syscall.AF_INET6\n\n\t\tcopy(sa.ip[:], net.ParseIP(\"::1\"))\n\t\tbinary.BigEndian.PutUint16(sa.port[:], 3000)\n\n\t\t*(*sockaddr6)(unsafe.Pointer(a4)) = sa // nolint\n\t\treturn 0, 0, 0\n\t}\n\n\ttest := &testPassFD{}\n\tip, port, _, _ := getOriginalDestInternal(test, false, testGetOrigV6) // nolint\n\tassert.Equal(t, ip.String(), \"::1\", 
\"ip should be ::1\")\n\tassert.Equal(t, port, 3000, \"port should be 3000\")\n}\n\nfunc TestGetOrigDestV4Err1(t *testing.T) {\n\n\ttestGetOrigV4 := func(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) {\n\t\treturn 0, 0, 0\n\t}\n\n\ttest := &testPassFD{}\n\t_, _, _, err := getOriginalDestInternal(test, true, testGetOrigV4) // nolint\n\n\tassert.Equal(t, err != nil, true, \"error should not be nil\")\n}\n\nfunc TestGetOrigDestV6Err1(t *testing.T) {\n\ttestGetOrigV6 := func(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) {\n\t\treturn 0, 0, 0\n\t}\n\n\ttest := &testPassFD{}\n\t_, _, _, err := getOriginalDestInternal(test, false, testGetOrigV6) // nolint\n\tassert.Equal(t, err != nil, true, \"error should not be nil\")\n}\n\nfunc TestGetOrigDestV4Err2(t *testing.T) {\n\n\ttestGetOrigV4 := func(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) {\n\t\treturn 0, 0, 1\n\t}\n\n\ttest := &testPassFD{}\n\t_, _, _, err := getOriginalDestInternal(test, true, testGetOrigV4) // nolint\n\tassert.Equal(t, err != nil, true, \"error should not be nil\")\n}\n\nfunc TestGetOrigDestV6Err2(t *testing.T) {\n\ttestGetOrigV6 := func(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) {\n\t\treturn 0, 0, 1\n\t}\n\n\ttest := &testPassFD{}\n\n\t_, _, _, err := getOriginalDestInternal(test, false, testGetOrigV6) // nolint\n\tassert.Equal(t, err != nil, true, \"error should not be nil\")\n}\n\nfunc TestLocalAddr(t *testing.T) {\n\tip, _, _ := net.ParseCIDR(\"172.17.0.2/24\")\n\tproxyConn := ProxiedConnection{originalIP: ip, originalPort: 80}\n\n\tnaddr := proxyConn.LocalAddr()\n\tnetAddr := naddr.(*net.TCPAddr)\n\n\tassert.Equal(t, ip, netAddr.IP, \"ip should be equal to 172.17.0.2\")\n\tassert.Equal(t, 80, netAddr.Port, \"ip should be equal to 172.17.0.2\")\n}\n\nfunc TestSocketListener(t *testing.T) {\n\tctx, cancel := context.WithCancel(context.Background()) // nolint\n\t_, err := 
NewSocketListener(ctx, \":1111\", 100)\n\n\tassert.Equal(t, err, nil, \"error should be nil\")\n\tcancel() // nolint\n}\n\nfunc TestGetInterfaces(t *testing.T) {\n\n\tipmap := GetInterfaces()\n\n\tb := len(ipmap) != 0\n\tassert.Equal(t, b, true, \"ipmap should not be empty\")\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/markedconn/markedconn_windows_test.go",
    "content": "// +build windows\n\npackage markedconn\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"syscall\"\n\t\"testing\"\n\t\"unsafe\"\n\n\t\"github.com/magiconair/properties/assert\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n)\n\ntype abi struct {\n\tdestHandle        uintptr\n\tdestHandleApplied uintptr\n\tdestHandleFreed   uintptr\n\tdestInfoOverride  *frontman.DestInfo\n}\n\nvar (\n\tgoodIP   = syscall.StringToUTF16(\"192.168.100.101\") // nolint\n\tbadIP    = syscall.StringToUTF16(\"192.xxxxxxx\")     // nolint\n\tgoodPort = uint16(8080)\n)\n\nfunc (a *abi) FrontmanOpenShared() (uintptr, error) {\n\treturn 1234, nil\n}\n\nfunc (a *abi) GetDestInfo(driverHandle, socket, destInfo uintptr) (uintptr, error) {\n\tdestInfoPtr := (*frontman.DestInfo)(unsafe.Pointer(destInfo)) // nolint:govet\n\tif a.destInfoOverride != nil {\n\t\tif a.destInfoOverride.DestHandle == uintptr(syscall.InvalidHandle) {\n\t\t\treturn 0, errors.New(\"INVALID_HANDLE_VALUE\")\n\t\t}\n\t\t*destInfoPtr = *a.destInfoOverride\n\t\treturn 1, nil\n\t}\n\tdestInfoPtr.IPAddr = &goodIP[0]\n\tdestInfoPtr.Port = goodPort\n\ta.destHandle++\n\tdestInfoPtr.DestHandle = a.destHandle\n\treturn 1, nil\n}\n\nfunc (a *abi) ApplyDestHandle(socket, destHandle uintptr) (uintptr, error) {\n\tif a.destHandleApplied == destHandle {\n\t\treturn 0, errors.New(\"ApplyDestHandle called more than once\")\n\t}\n\ta.destHandleApplied = destHandle\n\treturn 1, nil\n}\n\nfunc (a *abi) FreeDestHandle(destHandle uintptr) (uintptr, error) {\n\ta.destHandleFreed = destHandle\n\treturn 1, nil\n}\n\nfunc (a *abi) NewIpset(driverHandle, name, ipsetType, ipset uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) GetIpset(driverHandle, name, ipset uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) DestroyAllIpsets(driverHandle, prefix uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) ListIpsets(driverHandle, ipsetNames, ipsetNamesSize, bytesReturned uintptr) (uintptr, 
error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) IpsetAdd(driverHandle, ipset, entry, timeout uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) IpsetAddOption(driverHandle, ipset, entry, option, timeout uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) IpsetDelete(driverHandle, ipset, entry uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) IpsetDestroy(driverHandle, ipset uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) IpsetFlush(driverHandle, ipset uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) IpsetTest(driverHandle, ipset, entry uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) PacketFilterStart(frontman, firewallName, receiveCallback, loggingCallback uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) PacketFilterClose() (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) PacketFilterForward(info, packet uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) AppendFilter(driverHandle, outbound, filterName, isGotoFilter uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) InsertFilter(driverHandle, outbound, priority, filterName, isGotoFilter uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) DestroyFilter(driverHandle, filterName uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) EmptyFilter(driverHandle, filterName uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) GetFilterList(driverHandle, outbound, buffer, bufferSize, bytesReturned uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) AppendFilterCriteria(driverHandle, filterName, criteriaName, ruleSpec, ipsetRuleSpecs, ipsetRuleSpecCount uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) DeleteFilterCriteria(driverHandle, filterName, criteriaName uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) ListIpsetsDetail(driverHandle, format, ipsetNames, ipsetNamesSize, bytesReturned uintptr) (uintptr, 
error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) GetCriteriaList(driverHandle, format, criteriaList, criteriaListSize, bytesReturned uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\ntype testPassFD struct {\n}\n\nfunc (*testPassFD) Control(f func(uintptr)) error {\n\tf(0)\n\treturn nil\n}\n\nfunc TestWindowsGetOrigDest(t *testing.T) {\n\n\ta := &abi{}\n\tfrontman.Driver = a\n\ttest := &testPassFD{}\n\n\tip, port, pd, _ := getOriginalDestPlatform(test, true)\n\tassert.Equal(t, ip.String(), syscall.UTF16ToString(goodIP), \"ip is wrong\")\n\tassert.Equal(t, port, int(goodPort), \"port is wrong\")\n\tpd.postConnectFunc(a.destHandle)\n\tassert.Equal(t, a.destHandleFreed != 0, true, \"destHandle not freed\")\n}\n\nfunc TestWindowsGetOrigDestBadDestInfo(t *testing.T) {\n\n\ta := &abi{}\n\tfrontman.Driver = a\n\ttest := &testPassFD{}\n\ta.destInfoOverride = &frontman.DestInfo{DestHandle: uintptr(syscall.InvalidHandle)}\n\n\t_, _, _, err := getOriginalDestPlatform(test, true)\n\tassert.Equal(t, err != nil, true, \"GetDestInfo should fail\")\n\tassert.Equal(t, a.destHandleApplied == 0, true, \"ApplyDestHandle should not be called\")\n\tassert.Equal(t, a.destHandleFreed == 0, true, \"FreeDestHandle should not be called\")\n}\n\nfunc TestWindowsGetOrigDestBadIP(t *testing.T) {\n\n\ta := &abi{}\n\tfrontman.Driver = a\n\ttest := &testPassFD{}\n\ta.destHandle = 1000\n\ta.destInfoOverride = &frontman.DestInfo{IPAddr: &badIP[0], Port: goodPort, DestHandle: a.destHandle}\n\n\t_, _, _, err := getOriginalDestPlatform(test, true)\n\tassert.Equal(t, err != nil, true, \"GetDestInfo should fail\")\n\tassert.Equal(t, a.destHandleApplied == 0, true, \"ApplyDestHandle should not be called\")\n\tassert.Equal(t, a.destHandleFreed == a.destHandle, true, \"FreeDestHandle should be called\")\n}\n\nfunc TestSocketListenerWindows(t *testing.T) {\n\tctx, cancel := context.WithCancel(context.Background()) // nolint\n\t_, err := NewSocketListener(ctx, \":1111\", 100)\n\n\tassert.Equal(t, err, nil, 
\"error  should be nil\")\n\tcancel() // nolint\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/markedconn/origdest_linux.go",
    "content": "// +build linux\n\npackage markedconn\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"syscall\"\n\t\"unsafe\"\n)\n\nconst (\n\tsockOptOriginalDst = 80\n)\n\ntype sockaddr4 struct {\n\tfamily uint16\n\tdata   [14]byte\n}\n\ntype sockaddr6 struct {\n\tfamily   uint16\n\tport     [2]byte\n\tflowInfo [4]byte // nolint\n\tip       [16]byte\n\tscopeID  [4]byte // nolint\n}\n\ntype origDest func(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno)\n\nfunc getOriginalDestPlatform(rawConn passFD, v4Proto bool) (net.IP, int, *PlatformData, error) {\n\treturn getOriginalDestInternal(rawConn, v4Proto, syscall.Syscall6)\n}\n\nfunc getOriginalDestInternal(rawConn passFD, v4Proto bool, getOrigDest origDest) (net.IP, int, *PlatformData, error) { // nolint interfacer{\n\tvar getsockopt func(fd uintptr)\n\tvar netIP net.IP\n\tvar port int\n\tvar err error\n\n\tgetsockopt4 := func(fd uintptr) {\n\t\tvar addr sockaddr4\n\t\tsize := uint32(unsafe.Sizeof(addr))\n\t\t_, _, e1 := getOrigDest(syscall.SYS_GETSOCKOPT, uintptr(fd), uintptr(syscall.SOL_IP), uintptr(sockOptOriginalDst), uintptr(unsafe.Pointer(&addr)), uintptr(unsafe.Pointer(&size)), 0) // nolint\n\n\t\tif e1 != 0 {\n\t\t\terr = fmt.Errorf(\"Failed to get original destination: %s\", e1)\n\t\t\treturn\n\t\t}\n\n\t\tif addr.family != syscall.AF_INET {\n\t\t\terr = fmt.Errorf(\"invalid address family. 
Expected AF_INET\")\n\t\t\treturn\n\t\t}\n\n\t\tnetIP = addr.data[2:6]\n\t\tport = int(addr.data[0])<<8 + int(addr.data[1])\n\t}\n\n\tgetsockopt6 := func(fd uintptr) {\n\t\tvar addr sockaddr6\n\t\tsize := uint32(unsafe.Sizeof(addr))\n\n\t\t_, _, e1 := getOrigDest(syscall.SYS_GETSOCKOPT, uintptr(fd), uintptr(syscall.SOL_IPV6), uintptr(sockOptOriginalDst), uintptr(unsafe.Pointer(&addr)), uintptr(unsafe.Pointer(&size)), 0) // nolint\n\t\tif e1 != 0 {\n\t\t\terr = fmt.Errorf(\"Failed to get original destination: %s\", e1)\n\t\t\treturn\n\t\t}\n\n\t\tif addr.family != syscall.AF_INET6 {\n\t\t\terr = fmt.Errorf(\"invalid address family. Expected AF_INET6\")\n\t\t\treturn\n\t\t}\n\n\t\tnetIP = addr.ip[:]\n\t\tport = int(addr.port[0])<<8 + int(addr.port[1])\n\t}\n\n\tif v4Proto {\n\t\tgetsockopt = getsockopt4\n\t} else {\n\t\tgetsockopt = getsockopt6\n\t}\n\n\tif err1 := rawConn.Control(getsockopt); err1 != nil {\n\t\treturn nil, 0, nil, fmt.Errorf(\"Failed to get original destination: %s\", err1)\n\t}\n\n\tif err != nil {\n\t\treturn nil, 0, nil, err\n\t}\n\n\treturn netIP, port, nil, nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/markedconn/origdest_windows.go",
"content": "// +build windows\n\npackage markedconn\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n\t\"go.uber.org/zap\"\n)\n\nfunc getOriginalDestPlatform(rawConn passFD, v4Proto bool) (net.IP, int, *PlatformData, error) {\n\tvar netIP net.IP\n\tvar port int\n\tvar destHandle uintptr\n\tvar err error\n\n\tfreeFunc := func(fd uintptr) {\n\t\tif err1 := frontman.Wrapper.FreeDestHandle(fd); err1 != nil {\n\t\t\tzap.L().Error(\"failed to free dest handle\", zap.Error(err1))\n\t\t}\n\t}\n\n\tctrlFunc := func(fd uintptr) {\n\t\tvar destInfo frontman.DestInfo\n\t\tif err1 := frontman.Wrapper.GetDestInfo(fd, &destInfo); err1 != nil {\n\t\t\terr = err1\n\t\t\treturn\n\t\t}\n\t\tdestHandle = destInfo.DestHandle\n\t\tport = int(destInfo.Port)\n\t\t// convert allocated wchar_t* to golang string\n\t\tipAddrStr := frontman.WideCharPointerToString(destInfo.IPAddr)\n\t\tnetIP = net.ParseIP(ipAddrStr)\n\t\tif netIP == nil {\n\t\t\terr = fmt.Errorf(\"GetDestInfo failed to get valid IP (%s)\", ipAddrStr)\n\t\t\t// FrontmanGetDestInfo returned success, so clean up acquired resources\n\t\t\tfreeFunc(destHandle)\n\t\t}\n\t}\n\n\tif err1 := rawConn.Control(ctrlFunc); err1 != nil {\n\t\treturn nil, 0, nil, fmt.Errorf(\"Failed to get original destination: %s\", err1)\n\t}\n\n\tif err != nil {\n\t\treturn nil, 0, nil, err\n\t}\n\n\treturn netIP, port, &PlatformData{destHandle, freeFunc}, nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/markedconn/platformdata.go",
    "content": "// +build !windows\n\npackage markedconn\n\nimport (\n\t\"net\"\n)\n\n// PlatformDataControl dummy impl\n// PlatformDataControl is only needed for Windows now, and allows retrieval of kernel socket data.\ntype PlatformDataControl struct {\n}\n\n// NewPlatformDataControl returns initialized PlatformDataControl\nfunc NewPlatformDataControl() *PlatformDataControl {\n\treturn &PlatformDataControl{}\n}\n\n// StorePlatformData saves the data after GetDestInfo is called.\nfunc (n *PlatformDataControl) StorePlatformData(ip net.IP, port int, platformData *PlatformData) {\n}\n\n// RemovePlatformData removes the data from storage and returns it\nfunc RemovePlatformData(l net.Listener, conn net.Conn) *PlatformData {\n\treturn nil\n}\n\n// TakePlatformData removes the data from storage and returns it\nfunc TakePlatformData(l net.Listener, ip net.IP, port int) *PlatformData {\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/markedconn/platformdata_test.go",
    "content": "// +build windows\n\npackage markedconn\n\nimport (\n\t\"net\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestStoreTake(t *testing.T) {\n\n\tvar nd *PlatformData\n\n\tConvey(\"Given a ProxiedListener with a PlatformDataControl\", t, func() {\n\n\t\tproxiedListener := ProxiedListener{\n\t\t\tnetListener:      nil,\n\t\t\tmark:             100,\n\t\t\tplatformDataCtrl: NewPlatformDataControl(),\n\t\t}\n\n\t\tplatformData := &PlatformData{\n\t\t\t1, func(fd uintptr) {},\n\t\t}\n\n\t\tip := net.ParseIP(\"192.168.100.100\")\n\t\tport := 20992\n\n\t\tConvey(\"When I store PlatformData for an ip/port, it should be retained until removed\", func() {\n\n\t\t\tnd = TakePlatformData(proxiedListener, ip, port)\n\t\t\tSo(nd, ShouldBeNil)\n\n\t\t\tproxiedListener.platformDataCtrl.StorePlatformData(ip, port, platformData)\n\t\t\tnd = TakePlatformData(proxiedListener, ip, port)\n\t\t\tSo(nd, ShouldEqual, platformData)\n\t\t\tSo(nd.handle, ShouldEqual, 1)\n\n\t\t\tnd = TakePlatformData(proxiedListener, ip, port)\n\t\t\tSo(nd, ShouldBeNil)\n\t\t})\n\t})\n}\n\nfunc TestStoreRemove(t *testing.T) {\n\n\tvar nd *PlatformData\n\n\tConvey(\"Given a ProxiedListener with a PlatformDataControl\", t, func() {\n\n\t\tproxiedListener := ProxiedListener{\n\t\t\tnetListener:      nil,\n\t\t\tmark:             101,\n\t\t\tplatformDataCtrl: NewPlatformDataControl(),\n\t\t}\n\n\t\tplatformData := &PlatformData{\n\t\t\t2, func(fd uintptr) {},\n\t\t}\n\n\t\tip := net.ParseIP(\"192.168.100.101\")\n\t\tport := 20993\n\n\t\tproxiedConn := &ProxiedConnection{\n\t\t\toriginalIP:            ip,\n\t\t\toriginalPort:          port,\n\t\t\toriginalTCPConnection: nil,\n\t\t\tplatformData:          platformData,\n\t\t}\n\n\t\tConvey(\"When I store PlatformData for an ip/port, it should be retained until removed\", func() {\n\n\t\t\tnd = TakePlatformData(proxiedListener, ip, port)\n\t\t\tSo(nd, 
ShouldBeNil)\n\n\t\t\tproxiedListener.platformDataCtrl.StorePlatformData(ip, port, platformData)\n\t\t\tnd = RemovePlatformData(proxiedListener, proxiedConn)\n\t\t\tSo(nd, ShouldEqual, platformData)\n\t\t\tSo(nd.handle, ShouldEqual, 2)\n\n\t\t\tnd = TakePlatformData(proxiedListener, ip, port)\n\t\t\tSo(nd, ShouldBeNil)\n\t\t})\n\t})\n}\n\nfunc TestStoreMultiple(t *testing.T) {\n\n\tvar nd *PlatformData\n\n\tConvey(\"Given a ProxiedListener with a PlatformDataControl\", t, func() {\n\n\t\tproxiedListener := ProxiedListener{\n\t\t\tnetListener:      nil,\n\t\t\tmark:             100,\n\t\t\tplatformDataCtrl: NewPlatformDataControl(),\n\t\t}\n\n\t\tplatformData1 := &PlatformData{\n\t\t\t1, func(fd uintptr) {},\n\t\t}\n\t\tplatformData2 := &PlatformData{\n\t\t\t2, func(fd uintptr) {},\n\t\t}\n\t\tplatformData3 := &PlatformData{\n\t\t\t3, func(fd uintptr) {},\n\t\t}\n\n\t\tip1 := net.ParseIP(\"192.168.100.100\")\n\t\tport1 := 20992\n\n\t\tip2 := net.ParseIP(\"192.168.100.101\")\n\t\tport2 := 20993\n\n\t\tConvey(\"When I store PlatformData for an ip/port, it should be retained until removed\", func() {\n\n\t\t\tproxiedListener.platformDataCtrl.StorePlatformData(ip1, port1, platformData1)\n\t\t\tproxiedListener.platformDataCtrl.StorePlatformData(ip2, port2, platformData2)\n\t\t\tproxiedListener.platformDataCtrl.StorePlatformData(ip1, port1, platformData3)\n\n\t\t\tnd = TakePlatformData(proxiedListener, ip2, port2)\n\t\t\tSo(nd, ShouldEqual, platformData2)\n\t\t\tSo(nd.handle, ShouldEqual, 2)\n\n\t\t\tnd = TakePlatformData(proxiedListener, ip1, port1)\n\t\t\tSo(nd, ShouldEqual, platformData3)\n\t\t\tSo(nd.handle, ShouldEqual, 3)\n\n\t\t\tnd = TakePlatformData(proxiedListener, ip1, port1)\n\t\t\tSo(nd, ShouldBeNil)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/markedconn/platformdata_windows.go",
    "content": "// +build windows\n\npackage markedconn\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"sync\"\n)\n\n// PlatformDataControl is for Windows.\n// For proxied connections, we map the original ip/port to platform-specific data that needs to be retrieved\n// when we make the real connection.\ntype PlatformDataControl struct {\n\tplatformData map[string]*PlatformData\n\tmu           *sync.Mutex\n}\n\n// NewPlatformDataControl returns initialized PlatformDataControl\nfunc NewPlatformDataControl() *PlatformDataControl {\n\treturn &PlatformDataControl{\n\t\tplatformData: make(map[string]*PlatformData),\n\t\tmu:           &sync.Mutex{},\n\t}\n}\n\n// StorePlatformData saves the data after GetDestInfo is called.\nfunc (n *PlatformDataControl) StorePlatformData(ip net.IP, port int, platformData *PlatformData) {\n\tkey := fmt.Sprintf(\"%s:%d\", ip.String(), port)\n\tn.mu.Lock()\n\tn.platformData[key] = platformData\n\tn.mu.Unlock()\n}\n\n// RemovePlatformData returns the data for the given ip/port, and removes it from the map.\n// The listener should be a ProxiedListener, from which we can get the ctrl.\nfunc RemovePlatformData(l net.Listener, conn net.Conn) *PlatformData {\n\tn := getPlatformDataControlFromListener(l)\n\tif n == nil {\n\t\treturn nil\n\t}\n\tif proxyConn, ok := conn.(*ProxiedConnection); ok {\n\t\tip, port := proxyConn.GetOriginalDestination()\n\t\treturn n.takePlatformData(ip, port)\n\t}\n\treturn nil\n}\n\n// TakePlatformData returns the data for the given ip/port, and removes it from the map.\n// The listener should be a ProxiedListener, from which we can get the ctrl.\nfunc TakePlatformData(l net.Listener, ip net.IP, port int) *PlatformData {\n\tn := getPlatformDataControlFromListener(l)\n\tif n == nil {\n\t\treturn nil\n\t}\n\treturn n.takePlatformData(ip, port)\n}\n\n// takePlatformData returns the data for the given ip/port, and removes it from the map\nfunc (n *PlatformDataControl) takePlatformData(ip net.IP, port int) *PlatformData {\n\tkey := 
fmt.Sprintf(\"%s:%d\", ip.String(), port)\n\tn.mu.Lock()\n\tnd := n.platformData[key]\n\tdelete(n.platformData, key)\n\tn.mu.Unlock()\n\treturn nd\n}\n\n// if listener is a ProxiedListener, we get its platform data ctrl\nfunc getPlatformDataControlFromListener(l net.Listener) *PlatformDataControl {\n\tif proxiedL, ok := l.(ProxiedListener); ok {\n\t\treturn proxiedL.platformDataCtrl\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/pingrequest/pingrequest.go",
    "content": "package pingrequest\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"errors\"\n\t\"net/http\"\n\n\t\"github.com/vmihailenco/msgpack\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// PingHeaderKey holds the value for aporeto ping.\nconst PingHeaderKey = \"X-APORETO-PING\"\n\n// CreateRaw is same as 'Create' but will return raw bytes of\n// the request (wire format) returned by 'Create'.\nfunc CreateRaw(host string, pingPayload *policy.PingPayload) ([]byte, error) {\n\n\treq, err := Create(host, pingPayload)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar buf bytes.Buffer\n\tif err := req.Write(&buf); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn buf.Bytes(), nil\n}\n\n// ExtractRaw is same as 'Extract' but will parse the raw\n// bytes of the request passed and calls 'Validate'.\nfunc ExtractRaw(rawReq []byte) (*policy.PingPayload, error) {\n\n\tbuf := bytes.NewBuffer(rawReq)\n\treq, err := http.ReadRequest(bufio.NewReader(buf))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn Extract(req)\n}\n\n// Create creates a new http request with the given host.\n// It encodes the pingPayload passed with msgpack encoding and\n// adds the data bytes to the header with key 'X-APORETO-PING'.\n// It also returns the request.\nfunc Create(host string, pingPayload *policy.PingPayload) (*http.Request, error) {\n\n\treq, err := http.NewRequest(\"GET\", host, nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tpayload, err := encode(pingPayload)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treq.Header.Add(PingHeaderKey, string(payload))\n\n\treturn req, nil\n}\n\n// Extract verifies If the given request has the header\n// 'X-APORETO-PING'. 
If it doesn't, it returns an error. If it does have\n// the header, it will try to decode the data using msgpack\n// encoding and return the ping payload.\nfunc Extract(req *http.Request) (*policy.PingPayload, error) {\n\n\tpayload := req.Header.Get(PingHeaderKey)\n\tif payload == \"\" {\n\t\treturn nil, errors.New(\"missing ping payload in header\")\n\t}\n\n\tpingPayload, err := decode([]byte(payload))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn pingPayload, nil\n}\n\nfunc encode(pingPayload *policy.PingPayload) ([]byte, error) {\n\treturn msgpack.Marshal(pingPayload)\n}\n\nfunc decode(data []byte) (*policy.PingPayload, error) {\n\n\tpingPayload := &policy.PingPayload{}\n\n\tif err := msgpack.Unmarshal(data, pingPayload); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn pingPayload, nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/protomux/protomux.go",
"content": "package protomux\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/markedconn\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/serviceregistry\"\n\t\"go.uber.org/zap\"\n)\n\n// ProtoListener is a listener for a single protocol; it receives its\n// connections from the multiplexed listener over a channel.\ntype ProtoListener struct {\n\tnet.Listener\n\tconnection chan net.Conn\n\tmark       int\n}\n\n// NewProtoListener creates a listener for a particular protocol.\nfunc NewProtoListener(mark int) *ProtoListener {\n\treturn &ProtoListener{\n\t\tconnection: make(chan net.Conn),\n\t\tmark:       mark,\n\t}\n}\n\n// Accept accepts new connections over the channel.\nfunc (p *ProtoListener) Accept() (net.Conn, error) {\n\tc, ok := <-p.connection\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"mux: listener closed\")\n\t}\n\treturn c, nil\n}\n\n// MultiplexedListener is the root listener that will split\n// connections to different protocols.\ntype MultiplexedListener struct {\n\troot     net.Listener\n\tdone     chan struct{}\n\tshutdown chan struct{}\n\twg       sync.WaitGroup\n\tprotomap map[common.ListenerType]*ProtoListener\n\tpuID     string\n\n\tdefaultListener *ProtoListener\n\tlocalIPs        map[string]struct{}\n\tmark            int\n\tsync.RWMutex\n}\n\n// NewMultiplexedListener returns a new multiplexed listener. 
Caller\n// must register protocols outside of the new object creation.\nfunc NewMultiplexedListener(l net.Listener, mark int, puID string) *MultiplexedListener {\n\n\treturn &MultiplexedListener{\n\t\troot:     l,\n\t\tdone:     make(chan struct{}),\n\t\tshutdown: make(chan struct{}),\n\t\twg:       sync.WaitGroup{},\n\t\tprotomap: map[common.ListenerType]*ProtoListener{},\n\t\tlocalIPs: markedconn.GetInterfaces(),\n\t\tmark:     mark,\n\t\tpuID:     puID,\n\t}\n}\n\n// RegisterListener registers a new listener. It returns the listener that the various\n// protocol servers should use. If defaultListener is set, this will become\n// the default listener if no match is found. Obviously, there cannot be more\n// than one default.\nfunc (m *MultiplexedListener) RegisterListener(ltype common.ListenerType) (*ProtoListener, error) {\n\tm.Lock()\n\tdefer m.Unlock()\n\n\tif _, ok := m.protomap[ltype]; ok {\n\t\treturn nil, fmt.Errorf(\"Cannot register same listener type multiple times\")\n\t}\n\n\tp := &ProtoListener{\n\t\tListener:   m.root,\n\t\tconnection: make(chan net.Conn),\n\t\tmark:       m.mark,\n\t}\n\tm.protomap[ltype] = p\n\n\treturn p, nil\n}\n\n// UnregisterListener unregisters a listener. 
It currently always returns nil, even if\n// no listener of this type is registered.\nfunc (m *MultiplexedListener) UnregisterListener(ltype common.ListenerType) error {\n\tm.Lock()\n\tdefer m.Unlock()\n\n\tdelete(m.protomap, ltype)\n\n\treturn nil\n}\n\n// RegisterDefaultListener registers a default listener.\nfunc (m *MultiplexedListener) RegisterDefaultListener(p *ProtoListener) error {\n\tm.Lock()\n\tdefer m.Unlock()\n\n\tif m.defaultListener != nil {\n\t\treturn fmt.Errorf(\"Default listener already registered\")\n\t}\n\n\tm.defaultListener = p\n\treturn nil\n}\n\n// UnregisterDefaultListener unregisters the default listener.\nfunc (m *MultiplexedListener) UnregisterDefaultListener() error {\n\tm.Lock()\n\tdefer m.Unlock()\n\n\tif m.defaultListener == nil {\n\t\treturn fmt.Errorf(\"No default listener registered\")\n\t}\n\n\tm.defaultListener = nil\n\n\treturn nil\n}\n\n// Close terminates the server without the context.\nfunc (m *MultiplexedListener) Close() {\n\tclose(m.shutdown)\n}\n\n// Serve will demux the connections\nfunc (m *MultiplexedListener) Serve(ctx context.Context) error {\n\n\tdefer func() {\n\t\tclose(m.done)\n\t\tm.wg.Wait()\n\n\t\tm.RLock()\n\t\tdefer m.RUnlock()\n\n\t\tfor _, l := range m.protomap {\n\t\t\tclose(l.connection)\n\t\t\t// Drain the connections enqueued for the listener.\n\t\t\tfor c := range l.connection {\n\t\t\t\tc.Close() // nolint\n\t\t\t}\n\t\t}\n\t}()\n\n\tgo func() {\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-time.After(5 * time.Second):\n\t\t\t\tm.Lock()\n\t\t\t\tm.localIPs = markedconn.GetInterfaces()\n\t\t\t\tm.Unlock()\n\t\t\tcase <-ctx.Done():\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}()\n\n\tfor {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn nil\n\t\tcase <-m.shutdown:\n\t\t\treturn nil\n\t\tdefault:\n\n\t\t\tc, err := m.root.Accept()\n\t\t\tif err != nil {\n\t\t\t\t// check if the error is due to shutdown in progress\n\t\t\t\tselect {\n\t\t\t\tcase <-ctx.Done():\n\t\t\t\t\treturn nil\n\t\t\t\tcase 
<-m.shutdown:\n\t\t\t\t\treturn nil\n\t\t\t\tdefault:\n\t\t\t\t}\n\t\t\t\t// if it is an actual error (which can happen on Windows when we can't get the origin ip/port from our driver),\n\t\t\t\t// then log an error and continue accepting connections.\n\t\t\t\tzap.L().Error(\"error from Accept\", zap.Error(err))\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tm.wg.Add(1)\n\t\t\tgo m.serve(c)\n\t\t}\n\t}\n}\n\nfunc (m *MultiplexedListener) serve(conn net.Conn) {\n\tdefer m.wg.Done()\n\n\tc, ok := conn.(*markedconn.ProxiedConnection)\n\tif !ok {\n\t\tzap.L().Error(\"Wrong connection type\")\n\t\treturn\n\t}\n\n\tip, port := c.GetOriginalDestination()\n\tremoteAddr := c.RemoteAddr()\n\tif remoteAddr == nil {\n\t\tzap.L().Error(\"Connection remote address cannot be found. Abort\")\n\t\treturn\n\t}\n\n\tlocal := false\n\tm.Lock()\n\tlocalIPs := m.localIPs\n\tm.Unlock()\n\tif _, ok = localIPs[networkOfAddress(remoteAddr.String())]; ok {\n\t\tlocal = true\n\t}\n\n\tvar listenerType common.ListenerType\n\tif local {\n\t\t_, serviceData, err := serviceregistry.Instance().RetrieveDependentServiceDataByIDAndNetwork(m.puID, ip, port, \"\")\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"Cannot discover target service\",\n\t\t\t\tzap.String(\"ContextID\", m.puID),\n\t\t\t\tzap.String(\"ip\", ip.String()),\n\t\t\t\tzap.Int(\"port\", port),\n\t\t\t\tzap.String(\"Remote IP\", remoteAddr.String()),\n\t\t\t\tzap.Error(err),\n\t\t\t)\n\t\t\treturn\n\t\t}\n\t\tlistenerType = serviceData.ServiceType\n\t} else {\n\t\tpctx, err := serviceregistry.Instance().RetrieveExposedServiceContext(ip, port, \"\")\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"Cannot discover target service\",\n\t\t\t\tzap.String(\"ip\", ip.String()),\n\t\t\t\tzap.Int(\"port\", port),\n\t\t\t\tzap.String(\"Remote IP\", remoteAddr.String()),\n\t\t\t)\n\t\t\treturn\n\t\t}\n\n\t\tlistenerType = pctx.Type\n\t}\n\n\tm.RLock()\n\ttarget, ok := m.protomap[listenerType]\n\tm.RUnlock()\n\tif !ok {\n\t\tc.Close() // nolint\n\t\treturn\n\t}\n\n\tselect 
{\n\tcase target.connection <- c:\n\tcase <-m.done:\n\t\tc.Close() // nolint\n\t}\n}\n\nfunc networkOfAddress(addr string) string {\n\tip, _, err := net.SplitHostPort(addr)\n\tif err != nil {\n\t\treturn addr\n\t}\n\n\treturn ip\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/protomux/protomux_test.go",
    "content": "// +build !windows\n\npackage protomux\n\nimport (\n\t\"testing\"\n\n\t\"github.com/magiconair/properties/assert\"\n)\n\nfunc TestNetworkAddress(t *testing.T) {\n\tip := networkOfAddress(\"172.17.0.2:80\")\n\tassert.Equal(t, ip, \"172.17.0.2\", \"ip should be 172.17.0.2\")\n\n\tip = networkOfAddress(\"[ff::1]:80\")\n\tassert.Equal(t, ip, \"ff::1\", \"ip should be ff::1\")\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/servicecache/servicecache.go",
"content": "package servicecache\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"sync\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/ipprefix\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\ntype entry struct {\n\tid    string\n\tports *portspec.PortSpec\n\tdata  interface{}\n}\n\ntype entryList []*entry\n\nfunc (e entryList) Delete(i int) entryList {\n\tif i >= len(e) || i < 0 {\n\t\treturn e\n\t}\n\treturn append(e[:i], e[i+1:]...)\n}\n\n// ServiceCache is a cache of services, indexed by IP prefix, host, and port.\ntype ServiceCache struct {\n\t// ipprefixes is map[prefixlength][prefix] -> array of entries indexed by port\n\tlocal  ipprefix.IPcache\n\tremote ipprefix.IPcache\n\t// hostcaches is map[host] -> array of entries indexed by port.\n\tremoteHosts map[string]entryList\n\tlocalHosts  map[string]entryList\n\t// portCaches is list of all ports where we can retrieve a service based on the port.\n\tremotePorts entryList\n\tlocalPorts  entryList\n\tsync.RWMutex\n}\n\n// NewTable creates a new table\nfunc NewTable() *ServiceCache {\n\n\treturn &ServiceCache{\n\t\tlocal:       ipprefix.NewIPCache(),\n\t\tremote:      ipprefix.NewIPCache(),\n\t\tremoteHosts: map[string]entryList{},\n\t\tlocalHosts:  map[string]entryList{},\n\t}\n}\n\n// Add adds a service into the cache. Returns an error if any overlap is detected.\nfunc (s *ServiceCache) Add(e *common.Service, id string, data interface{}, local bool) error {\n\ts.Lock()\n\tdefer s.Unlock()\n\n\trecord := &entry{\n\t\tports: e.Ports,\n\t\tdata:  data,\n\t\tid:    id,\n\t}\n\tif err := s.addPorts(e, record, local); err != nil {\n\t\treturn err\n\t}\n\n\tif err := s.addHostService(e, record, local); err != nil {\n\t\treturn err\n\t}\n\n\treturn s.addIPService(e, record, local)\n}\n\n// Find searches for a matching service, given an IP and port. 
Caller must specify\n// the local or remote context.\nfunc (s *ServiceCache) Find(ip net.IP, port int, host string, local bool) interface{} {\n\ts.RLock()\n\tdefer s.RUnlock()\n\n\tif host != \"\" {\n\t\tif data := s.findHost(host, port, local); data != nil {\n\t\t\treturn data\n\t\t}\n\t}\n\n\treturn s.findIP(ip, port, local)\n}\n\n// FindListeningServicesForPU returns a service that is found and the associated\n// portSpecifications that refer to this service.\nfunc (s *ServiceCache) FindListeningServicesForPU(id string) (interface{}, *portspec.PortSpec) {\n\ts.RLock()\n\tdefer s.RUnlock()\n\n\tfor _, spec := range s.localPorts {\n\t\tif spec.id == id {\n\t\t\treturn spec.data, spec.ports\n\t\t}\n\t}\n\treturn nil, nil\n}\n\n// DeleteByID will delete all entries related to this ID from all references.\nfunc (s *ServiceCache) DeleteByID(id string, local bool) {\n\ts.Lock()\n\tdefer s.Unlock()\n\n\thosts := s.remoteHosts\n\tcache := s.remote\n\tif local {\n\t\thosts = s.localHosts\n\t\tcache = s.local\n\t}\n\n\tif local {\n\t\ts.localPorts = deleteMatchingPorts(s.localPorts, id)\n\t} else {\n\t\ts.remotePorts = deleteMatchingPorts(s.remotePorts, id)\n\t}\n\n\tfor host, ports := range hosts {\n\t\thosts[host] = deleteMatchingPorts(ports, id)\n\t\tif len(hosts[host]) == 0 {\n\t\t\tdelete(hosts, host)\n\t\t}\n\t}\n\n\tdeleteMatching := func(val interface{}) interface{} {\n\t\tif val == nil {\n\t\t\treturn nil\n\t\t}\n\n\t\tentryL := val.(entryList)\n\t\tr := deleteMatchingPorts(entryL, id)\n\t\tif len(r) == 0 {\n\t\t\treturn nil\n\t\t}\n\n\t\treturn r\n\t}\n\n\tcache.RunFuncOnVals(deleteMatching)\n}\n\nfunc deleteMatchingPorts(list entryList, id string) entryList {\n\tremainingPorts := entryList{}\n\tfor _, spec := range list {\n\t\tif spec.id != id {\n\t\t\tremainingPorts = append(remainingPorts, spec)\n\t\t}\n\t}\n\treturn remainingPorts\n}\n\nfunc (s *ServiceCache) addIPService(e *common.Service, record *entry, local bool) error {\n\n\tcache := s.remote\n\tif local 
{\n\t\tcache = s.local\n\t}\n\n\taddresses := e.Addresses\n\n\t// If no addresses and no FQDNs are given, match any address: only the ports matter.\n\tif len(e.Addresses) == 0 && len(e.FQDNs) == 0 {\n\t\taddresses = map[string]struct{}{}\n\t\taddresses[\"0.0.0.0/0\"] = struct{}{}\n\t\taddresses[\"::/0\"] = struct{}{}\n\t}\n\n\tfor addrS := range addresses {\n\t\tvar records entryList\n\t\tvar addr *net.IPNet\n\t\tvar err error\n\n\t\tif _, addr, err = net.ParseCIDR(addrS); err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tmask, _ := addr.Mask.Size()\n\t\tv, exists := cache.Get(addr.IP, mask)\n\n\t\tif !exists {\n\t\t\trecords = entryList{}\n\t\t} else {\n\t\t\trecords = v.(entryList)\n\t\t\tfor _, spec := range records {\n\t\t\t\tif spec.ports.Overlaps(e.Ports) {\n\t\t\t\t\treturn fmt.Errorf(\"service port overlap for a given IP not allowed: ip %s, port %s\", addr.String(), e.Ports.String())\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\trecords = append(records, record)\n\t\tcache.Put(addr.IP, mask, records)\n\t}\n\n\treturn nil\n}\n\nfunc (s *ServiceCache) addHostService(e *common.Service, record *entry, local bool) error {\n\thostCache := s.remoteHosts\n\tif local {\n\t\thostCache = s.localHosts\n\t}\n\n\t// If there are no FQDNs, there is nothing to add here.\n\tif len(e.FQDNs) == 0 {\n\t\treturn nil\n\t}\n\n\tfor _, host := range e.FQDNs {\n\t\tif _, ok := hostCache[host]; !ok {\n\t\t\thostCache[host] = entryList{}\n\t\t}\n\t\tfor _, spec := range hostCache[host] {\n\t\t\tif spec.ports.Overlaps(e.Ports) {\n\t\t\t\treturn fmt.Errorf(\"service port overlap for a given host not allowed: host %s, port %s\", host, e.Ports.String())\n\t\t\t}\n\t\t}\n\t\thostCache[host] = append(hostCache[host], record)\n\t}\n\treturn nil\n}\n\n// findIP searches for a matching service, given an IP and port\nfunc (s *ServiceCache) findIP(ip net.IP, port int, local bool) interface{} {\n\n\tcache := s.remote\n\tif local {\n\t\tcache = s.local\n\t}\n\n\tif ip == nil {\n\t\treturn nil\n\t}\n\n\tvar data interface{}\n\n\tfindMatch := func(val 
interface{}) bool {\n\t\tif val != nil {\n\t\t\trecords := val.(entryList)\n\t\t\tfor _, e := range records {\n\t\t\t\tif e.ports.IsIncluded(port) {\n\t\t\t\t\tdata = e.data\n\t\t\t\t\treturn true\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn false\n\t}\n\n\tcache.RunFuncOnLpmIP(ip, findMatch)\n\treturn data\n}\n\n// findHost searches for a matching service, given a host and port\nfunc (s *ServiceCache) findHost(host string, port int, local bool) interface{} {\n\thostCache := s.remoteHosts\n\tif local {\n\t\thostCache = s.localHosts\n\t}\n\n\tentries, ok := hostCache[host]\n\tif !ok {\n\t\treturn nil\n\t}\n\tfor _, e := range entries {\n\t\tif e.ports.IsIncluded(port) {\n\t\t\treturn e.data\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// addPorts will only work for local ports.\nfunc (s *ServiceCache) addPorts(e *common.Service, record *entry, local bool) error {\n\tif !local {\n\t\treturn nil\n\t}\n\n\tfor _, spec := range s.localPorts {\n\t\tif spec.ports.Overlaps(e.Ports) {\n\t\t\treturn fmt.Errorf(\"service port overlap in the global port list: %+v %s\", e.Addresses, e.Ports.String())\n\t\t}\n\t}\n\n\ts.localPorts = append(s.localPorts, record)\n\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/servicecache/servicecache_test.go",
"content": "// +build !windows\n\npackage servicecache\n\nimport (\n\t\"net\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\nfunc TestEntries(t *testing.T) {\n\tConvey(\"Given an entry list\", t, func() {\n\n\t\tConvey(\"If I delete the last element, I should get the right data\", func() {\n\t\t\te := entryList{\n\t\t\t\t{\n\t\t\t\t\tid: \"1\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tid: \"2\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tid: \"3\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tid: \"4\",\n\t\t\t\t},\n\t\t\t}\n\t\t\tnew := e.Delete(3)\n\t\t\tSo(len(new), ShouldEqual, 3)\n\t\t\tSo(new[0], ShouldResemble, &entry{id: \"1\"})\n\t\t\tSo(new[1], ShouldResemble, &entry{id: \"2\"})\n\t\t\tSo(new[2], ShouldResemble, &entry{id: \"3\"})\n\t\t})\n\n\t\tConvey(\"If I delete the first element in the list, I should get the right data\", func() {\n\t\t\te := entryList{\n\t\t\t\t{\n\t\t\t\t\tid: \"1\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tid: \"2\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tid: \"3\",\n\t\t\t\t},\n\t\t\t}\n\t\t\tnew := e.Delete(0)\n\t\t\tSo(len(new), ShouldEqual, 2)\n\t\t\tSo(new[0], ShouldResemble, &entry{id: \"2\"})\n\t\t\tSo(new[1], ShouldResemble, &entry{id: \"3\"})\n\t\t})\n\n\t\tConvey(\"If I try to delete out of bounds, the list should not be modified\", func() {\n\t\t\te := entryList{\n\t\t\t\t{\n\t\t\t\t\tid: \"1\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tid: \"3\",\n\t\t\t\t},\n\t\t\t}\n\t\t\tnew := e.Delete(4)\n\t\t\tSo(len(new), ShouldEqual, 2)\n\t\t\tSo(new[0], ShouldResemble, &entry{id: \"1\"})\n\t\t\tSo(new[1], ShouldResemble, &entry{id: \"3\"})\n\t\t})\n\t\tConvey(\"If I delete the only element in the list, I should get an empty list\", func() {\n\t\t\te := entryList{\n\t\t\t\t{\n\t\t\t\t\tid: \"1\",\n\t\t\t\t},\n\t\t\t}\n\t\t\tnew := e.Delete(0)\n\t\t\tSo(len(new), ShouldEqual, 0)\n\t\t})\n\t})\n}\n\nfunc createServices() (*common.Service, 
*common.Service, *common.Service) {\n\tn1 := \"172.17.1.0/24\"\n\tn2 := \"192.168.0.0/16\"\n\tn3 := \"20.1.1.1/32\"\n\n\ts1 := &common.Service{\n\t\tPorts: &portspec.PortSpec{\n\t\t\tMin: uint16(0),\n\t\t\tMax: uint16(100),\n\t\t},\n\t\tProtocol:  6,\n\t\tAddresses: map[string]struct{}{n1: struct{}{}, n2: struct{}{}, n3: struct{}{}},\n\t\tFQDNs:     []string{\"host1\", \"host2\", \"host3\"},\n\t}\n\n\tn4 := \"10.1.1.0/28\"\n\n\ts2 := &common.Service{\n\t\tPorts: &portspec.PortSpec{\n\t\t\tMin: uint16(150),\n\t\t\tMax: uint16(200),\n\t\t},\n\t\tProtocol:  6,\n\t\tAddresses: map[string]struct{}{n4: struct{}{}},\n\t\tFQDNs:     []string{\"host4\"},\n\t}\n\n\ts3 := &common.Service{\n\t\tPorts: &portspec.PortSpec{\n\t\t\tMin: uint16(1000),\n\t\t\tMax: uint16(2000),\n\t\t},\n\t\tProtocol:  6,\n\t\tAddresses: map[string]struct{}{},\n\t}\n\n\treturn s1, s2, s3\n}\nfunc TestServiceCache(t *testing.T) {\n\tConvey(\"Given a new cache\", t, func() {\n\t\tc := NewTable()\n\t\tConvey(\"When I add a set of entries, I should succeed\", func() {\n\n\t\t\ts1, s2, s3 := createServices()\n\n\t\t\tcerr := c.Add(s1, \"1\", \"first data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tSo(c.local, ShouldNotBeNil)\n\t\t\tSo(c.localHosts, ShouldNotBeNil)\n\t\t\tSo(len(c.localHosts), ShouldEqual, 3)\n\n\t\t\tcerr = c.Add(s2, \"2\", \"second data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tSo(c.local, ShouldNotBeNil)\n\t\t\tSo(c.localHosts, ShouldNotBeNil)\n\t\t\tSo(len(c.localHosts), ShouldEqual, 4)\n\n\t\t\tcerr = c.Add(s3, \"3\", \"third data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tSo(len(c.localHosts), ShouldEqual, 4)\n\n\t\t})\n\n\t\tConvey(\"If I try to add overlapping ports for a given prefix, I should get error\", func() {\n\t\t\ts1, s2, s3 := createServices()\n\t\t\tcerr := c.Add(s1, \"1\", \"first data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s2, \"2\", \"second data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s3, \"3\", \"third data\", 
true)\n\t\t\tSo(cerr, ShouldBeNil)\n\n\t\t\tn5 := \"10.1.1.0/28\"\n\t\t\ts4 := &common.Service{\n\t\t\t\tPorts: &portspec.PortSpec{\n\t\t\t\t\tMin: uint16(100),\n\t\t\t\t\tMax: uint16(300),\n\t\t\t\t},\n\t\t\t\tProtocol:  6,\n\t\t\t\tAddresses: map[string]struct{}{n5: struct{}{}},\n\t\t\t}\n\t\t\tcerr = c.Add(s4, \"4\", \"failed data\", true)\n\t\t\tSo(cerr, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"If I try to add overlapping ports for a given host, I should get error\", func() {\n\t\t\ts1, s2, s3 := createServices()\n\t\t\tcerr := c.Add(s1, \"1\", \"first data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s2, \"2\", \"second data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s3, \"3\", \"third data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\n\t\t\ts4 := &common.Service{\n\t\t\t\tPorts: &portspec.PortSpec{\n\t\t\t\t\tMin: uint16(100),\n\t\t\t\t\tMax: uint16(300),\n\t\t\t\t},\n\t\t\t\tProtocol:  6,\n\t\t\t\tAddresses: nil,\n\t\t\t\tFQDNs:     []string{\"host4\"},\n\t\t\t}\n\t\t\tcerr = c.Add(s4, \"4\", \"failed data\", true)\n\t\t\tSo(cerr, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I search for valid entries, I should get the right responses\", func() {\n\t\t\ts1, s2, s3 := createServices()\n\t\t\tcerr := c.Add(s1, \"1\", \"first data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s2, \"2\", \"second data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s3, \"3\", \"third data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\n\t\t\tdata := c.Find(net.ParseIP(\"10.1.1.1\").To4(), 175, \"\", true)\n\t\t\tSo(data, ShouldNotBeNil)\n\t\t\tSo(data.(string), ShouldResemble, \"second data\")\n\n\t\t\tdata = c.Find(net.ParseIP(\"192.168.1.1\").To4(), 50, \"\", true)\n\t\t\tSo(data, ShouldNotBeNil)\n\t\t\tSo(data.(string), ShouldResemble, \"first data\")\n\n\t\t\tdata = c.Find(net.ParseIP(\"50.50.50.50\").To4(), 1001, \"\", true)\n\t\t\tSo(data, ShouldNotBeNil)\n\t\t\tSo(data.(string), ShouldResemble, \"third data\")\n\n\t\t\tdata = 
c.Find(nil, 50, \"host2\", true)\n\t\t\tSo(data, ShouldNotBeNil)\n\t\t\tSo(data.(string), ShouldResemble, \"first data\")\n\n\t\t\tdata = c.Find(nil, 150, \"host4\", true)\n\t\t\tSo(data, ShouldNotBeNil)\n\t\t\tSo(data.(string), ShouldResemble, \"second data\")\n\t\t})\n\n\t\tConvey(\"When I search for a good IP, but invalid port, I should get nil \", func() {\n\t\t\ts1, s2, s3 := createServices()\n\t\t\tcerr := c.Add(s1, \"1\", \"first data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s2, \"2\", \"second data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s3, \"3\", \"third data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\n\t\t\tdata := c.Find(net.ParseIP(\"10.1.1.1\").To4(), 50, \"\", true)\n\t\t\tSo(data, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I search for a good IP, but unknown host, I should get nil \", func() {\n\t\t\ts1, s2, s3 := createServices()\n\t\t\tcerr := c.Add(s1, \"1\", \"first data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s2, \"2\", \"second data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s3, \"3\", \"third data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\n\t\t\tdata := c.Find(nil, 50, \"unknown\", true)\n\t\t\tSo(data, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I search for a good IP, but invalid host, I should get nil \", func() {\n\t\t\ts1, s2, s3 := createServices()\n\t\t\tcerr := c.Add(s1, \"1\", \"first data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s2, \"2\", \"second data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s3, \"3\", \"third data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\n\t\t\tdata := c.Find(nil, 50, \"host4\", true)\n\t\t\tSo(data, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I search for a good exact IP, and valid port, I should get the data \", func() {\n\t\t\ts1, s2, s3 := createServices()\n\t\t\tcerr := c.Add(s1, \"1\", \"first data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\t\t\tcerr = c.Add(s2, \"2\", \"second data\", true)\n\t\t\tSo(cerr, 
ShouldBeNil)\n\t\t\tcerr = c.Add(s3, \"3\", \"third data\", true)\n\t\t\tSo(cerr, ShouldBeNil)\n\n\t\t\tdata := c.Find(net.ParseIP(\"20.1.1.1\").To4(), 50, \"\", true)\n\t\t\tSo(data, ShouldNotBeNil)\n\t\t\tSo(data.(string), ShouldResemble, \"first data\")\n\t\t})\n\t})\n}\n\nfunc TestDelete(t *testing.T) {\n\tConvey(\"When I delete the first of the entries, I should not be able to find them any more\", t, func() {\n\t\tc := NewTable()\n\t\ts1, s2, s3 := createServices()\n\t\tcerr := c.Add(s1, \"1\", \"first data\", true)\n\t\tSo(cerr, ShouldBeNil)\n\t\tcerr = c.Add(s2, \"2\", \"second data\", true)\n\t\tSo(cerr, ShouldBeNil)\n\t\tcerr = c.Add(s3, \"3\", \"third data\", false)\n\t\tSo(cerr, ShouldBeNil)\n\n\t\tc.DeleteByID(\"1\", true)\n\t\tdata := c.Find(net.ParseIP(\"192.168.1.1\").To4(), 50, \"\", true)\n\t\tSo(data, ShouldBeNil)\n\t})\n}\n\nfunc TestFindExistingServices(t *testing.T) {\n\tConvey(\"Given a table with entries\", t, func() {\n\t\tc := NewTable()\n\t\ts1, s2, s3 := createServices()\n\t\tcerr := c.Add(s1, \"1\", \"first data\", true)\n\t\tSo(cerr, ShouldBeNil)\n\t\tcerr = c.Add(s2, \"2\", \"second data\", true)\n\t\tSo(cerr, ShouldBeNil)\n\t\tcerr = c.Add(s3, \"3\", \"third data\", true)\n\t\tSo(cerr, ShouldBeNil)\n\n\t\tConvey(\"When I retrieve the service list from the local cache, it should be correct\", func() {\n\t\t\tdata, spec := c.FindListeningServicesForPU(\"3\")\n\t\t\tSo(data, ShouldNotBeNil)\n\t\t\tSo(data, ShouldResemble, \"third data\")\n\t\t\tSo(spec, ShouldNotBeNil)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/serviceregistry/serviceregistry.go",
    "content": "package serviceregistry\n\nimport (\n\t\"crypto/x509\"\n\t\"fmt\"\n\t\"net\"\n\t\"sync\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/servicecache\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/auth\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/urisearch\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/tg/tglib\"\n)\n\n// ServiceContext includes all the all the service related information\n// for dependent services. It is indexed by the PU ID and a PU can\n// easily retrieve all the state with a simple lookup. Note, that\n// there is one ServiceContext for every PU.\ntype ServiceContext struct {\n\tPU        *policy.PUInfo\n\tPUContext *pucontext.PUContext\n\tRootCA    [][]byte\n\n\t// Dependent Services are services that are consumed by the PU.\n\t// The dependent service cache is only accessible internally,\n\t// so that all types are properly converted.\n\tdependentServiceCache *servicecache.ServiceCache\n}\n\n// DependentServiceData are the data that are held for each service\n// in the dependentServiceCache.\ntype DependentServiceData struct {\n\t// Used for authorization\n\tAPICache *urisearch.APICache\n\t// Used by the protomux to find the right service type.\n\tServiceType common.ListenerType\n\t// ServiceObject is the original service object.\n\tServiceObject *policy.ApplicationService\n}\n\n// PortContext includes all the needed associations to refer to a service by port.\n// For incoming connections the only available information is the IP/port\n// pair of the original request and we use this to map the connection and\n// request to a port. For network services we have additional state data\n// such as the authorizers. 
Note that there is one PortContext for every\n// service of every PU.\ntype PortContext struct {\n\tID                 string\n\tType               common.ListenerType\n\tService            *policy.ApplicationService\n\tAuthorizer         *auth.Processor\n\tPUContext          *pucontext.PUContext\n\tTargetPort         int\n\tClientTrustedRoots *x509.CertPool\n}\n\n// Registry is a service registry. It maintains all the state information\n// and provides a simple API to retrieve the data. The registry always\n// locks and allows multi-threading.\ntype Registry struct {\n\tindexByName map[string]*ServiceContext\n\tindexByPort *servicecache.ServiceCache\n\tsync.Mutex\n}\n\nvar instance = Registry{\n\tindexByName: map[string]*ServiceContext{},\n\tindexByPort: servicecache.NewTable(),\n}\n\n// Instance returns the service registry instance.\nfunc Instance() *Registry {\n\treturn &instance\n}\n\n// Register registers a new service with the registry. If the service\n// already exists it updates the service with the new information, otherwise\n// it creates a new service.\nfunc (r *Registry) Register(\n\tpuID string,\n\tpu *policy.PUInfo,\n\tpuContext *pucontext.PUContext,\n\tsecrets secrets.Secrets,\n) (*ServiceContext, error) {\n\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tsctx := &ServiceContext{\n\t\tPU:                    pu,\n\t\tPUContext:             puContext,\n\t\tdependentServiceCache: servicecache.NewTable(),\n\t\tRootCA:                [][]byte{},\n\t}\n\n\t// Delete all old references first. Since the registry is locked\n\t// nobody will be affected.\n\tr.indexByPort.DeleteByID(puID, true)\n\tr.indexByPort.DeleteByID(puID, false)\n\n\tif err := r.updateDependentServices(sctx); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif err := r.updateExposedServices(sctx, secrets); err != nil {\n\t\treturn nil, err\n\t}\n\n\tr.indexByName[puID] = sctx\n\n\treturn sctx, nil\n}\n\n// updateExposedServices builds the caches for the exposed services. 
It creates or updates the authorization processor for each of them.\nfunc (r *Registry) updateExposedServices(sctx *ServiceContext, secrets secrets.Secrets) error {\n\n\tfor _, service := range sctx.PU.Policy.ExposedServices() {\n\t\tif service.Type != policy.ServiceHTTP && service.Type != policy.ServiceTCP {\n\t\t\tcontinue\n\t\t}\n\t\tif err := r.updateExposedPortAssociations(sctx, service, secrets); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Unregister unregisters a PU from the registry.\nfunc (r *Registry) Unregister(puID string) error {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tdelete(r.indexByName, puID)\n\tr.indexByPort.DeleteByID(puID, true)\n\tr.indexByPort.DeleteByID(puID, false)\n\treturn nil\n}\n\n// RetrieveServiceByID retrieves a service by the PU ID. Returns an error if not found.\nfunc (r *Registry) RetrieveServiceByID(id string) (*ServiceContext, error) {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tsvc, ok := r.indexByName[id]\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"Service not found: %s\", id)\n\t}\n\n\treturn svc, nil\n}\n\n// RetrieveExposedServiceContext retrieves a service by the provided IP and/or port. 
This\n// is called by the network side of processing to find the context.\nfunc (r *Registry) RetrieveExposedServiceContext(ip net.IP, port int, host string) (*PortContext, error) {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tdata := r.indexByPort.Find(ip, port, host, true)\n\tif data == nil {\n\t\treturn nil, fmt.Errorf(\"Service information not found: %s %d %s\", ip.String(), port, host)\n\t}\n\n\tportContext, ok := data.(*PortContext)\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"Internal server error\")\n\t}\n\n\treturn portContext, nil\n}\n\n// RetrieveDependentServiceDataByIDAndNetwork will return the service data that matches the given\n// PU and the given IP/port information.\nfunc (r *Registry) RetrieveDependentServiceDataByIDAndNetwork(id string, ip net.IP, port int, host string) (*ServiceContext, *DependentServiceData, error) {\n\tsctx, err := r.RetrieveServiceByID(id)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"Services for PU %s not found: %s\", id, err)\n\t}\n\tdata := sctx.dependentServiceCache.Find(ip, port, \"\", false)\n\tif data == nil {\n\t\treturn nil, nil, fmt.Errorf(\"Service not found for this PU: %s\", id)\n\t}\n\tserviceData, ok := data.(*DependentServiceData)\n\tif !ok {\n\t\treturn nil, nil, fmt.Errorf(\"Internal server error - bad data types\")\n\t}\n\treturn sctx, serviceData, nil\n}\n\n// updateExposedPortAssociations will insert the association between a port\n// and a service in the global exposed service cache. This is needed\n// for all incoming connections, so that we can determine both the type\n// of proxy as well as the correct policy for this connection. 
This\n// association cannot have overlaps.\nfunc (r *Registry) updateExposedPortAssociations(sctx *ServiceContext, service *policy.ApplicationService, secrets secrets.Secrets) error {\n\n\t// Do All the basic validations first.\n\tif service.PrivateNetworkInfo == nil {\n\t\treturn fmt.Errorf(\"Private network is required for exposed services\")\n\t}\n\tport, err := service.PrivateNetworkInfo.Ports.SinglePort()\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Multi-port is not supported for exposed services: %s\", err)\n\t}\n\tif service.PublicNetworkInfo != nil {\n\t\tif _, err := service.PublicNetworkInfo.Ports.SinglePort(); err != nil {\n\t\t\treturn fmt.Errorf(\"Multi-port is not supported for public network services: %s\", err)\n\t\t}\n\t}\n\n\t// Find any existing state and get the authorizer. We do not want\n\t// to re-initialize the authorizer for every policy update.\n\tauthProcessor, err := r.createOrUpdateAuthProcessor(sctx, service, secrets)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tclientCAs := x509.NewCertPool()\n\tif (service.UserAuthorizationType == policy.UserAuthorizationMutualTLS || service.UserAuthorizationType == policy.UserAuthorizationJWT) &&\n\t\tlen(service.MutualTLSTrustedRoots) > 0 {\n\t\tif !clientCAs.AppendCertsFromPEM(service.MutualTLSTrustedRoots) {\n\t\t\treturn fmt.Errorf(\"Unable to process client CAs\")\n\t\t}\n\t}\n\n\t// Add the new references.\n\tif err := r.indexByPort.Add(\n\t\tservice.PrivateNetworkInfo,\n\t\tsctx.PU.ContextID,\n\t\t&PortContext{\n\t\t\tID:                 sctx.PU.ContextID,\n\t\t\tService:            service,\n\t\t\tTargetPort:         int(port),\n\t\t\tType:               serviceTypeToNetworkListenerType(service.Type, false),\n\t\t\tAuthorizer:         authProcessor,\n\t\t\tClientTrustedRoots: clientCAs,\n\t\t\tPUContext:          sctx.PUContext,\n\t\t},\n\t\ttrue,\n\t); err != nil {\n\t\treturn fmt.Errorf(\"Possible port overlap: %s\", err)\n\t}\n\n\tif service.PublicNetworkInfo != nil {\n\t\tif err := 
r.indexByPort.Add(\n\t\t\tservice.PublicNetworkInfo,\n\t\t\tsctx.PU.ContextID,\n\t\t\t&PortContext{\n\t\t\t\tID:                 sctx.PU.ContextID,\n\t\t\t\tService:            service,\n\t\t\t\tTargetPort:         int(port),\n\t\t\t\tType:               serviceTypeToNetworkListenerType(service.Type, service.PublicServiceTLSType == policy.ServiceTLSTypeNone),\n\t\t\t\tAuthorizer:         authProcessor,\n\t\t\t\tClientTrustedRoots: clientCAs,\n\t\t\t\tPUContext:          sctx.PUContext,\n\t\t\t},\n\t\t\ttrue,\n\t\t); err != nil {\n\t\t\treturn fmt.Errorf(\"Possible port overlap with public services: %s\", err)\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc updateDependentService(service *policy.ApplicationService, sctx *ServiceContext) error {\n\n\tif len(service.CACert) != 0 {\n\t\tsctx.RootCA = append(sctx.RootCA, service.CACert)\n\t}\n\n\tserviceData := &DependentServiceData{\n\t\tServiceType:   serviceTypeToApplicationListenerType(service.Type),\n\t\tServiceObject: service,\n\t}\n\tif service.Type == policy.ServiceHTTP {\n\t\tserviceData.APICache = urisearch.NewAPICache(service.HTTPRules, service.ID, service.External)\n\t}\n\n\tif err := sctx.dependentServiceCache.Add(\n\t\tservice.NetworkInfo,\n\t\tsctx.PU.ContextID,\n\t\tserviceData,\n\t\tfalse,\n\t); err != nil {\n\t\treturn fmt.Errorf(\"Possible overlap in dependent services: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// UpdateDependentServicesByID will update the dependent services for the given PU ID.\nfunc (r *Registry) UpdateDependentServicesByID(id string) error {\n\n\tsctx, err := r.RetrieveServiceByID(id)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Services for PU %s not found: %s\", id, err)\n\t}\n\n\tr.Lock()\n\tsctx.dependentServiceCache = servicecache.NewTable()\n\terr = r.updateDependentServices(sctx)\n\tr.Unlock()\n\n\treturn err\n}\n\n// updateDependentServices will update all the information in the\n// ServiceContext for the dependent services.\nfunc (r *Registry) updateDependentServices(sctx *ServiceContext) error 
{\n\n\tfor _, service := range sctx.PU.Policy.DependentServices() {\n\t\tif err := updateDependentService(service, sctx); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (r *Registry) createOrUpdateAuthProcessor(sctx *ServiceContext, service *policy.ApplicationService, secrets secrets.Secrets) (*auth.Processor, error) {\n\n\tvar cert *x509.Certificate\n\tif len(service.FallbackJWTAuthorizationCert) > 0 {\n\t\tvar err error\n\t\tcert, err = tglib.ParseCertificate([]byte(service.FallbackJWTAuthorizationCert))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tportContext, _ := r.indexByPort.FindListeningServicesForPU(sctx.PU.ContextID)\n\tvar authProcessor *auth.Processor\n\tif portContext != nil {\n\t\texistingPortCtx, ok := portContext.(*PortContext)\n\t\tif !ok {\n\t\t\treturn nil, fmt.Errorf(\"Internal error - unusable data structure\")\n\t\t}\n\t\tauthProcessor = existingPortCtx.Authorizer\n\t\tauthProcessor.UpdateSecrets(secrets, cert)\n\t} else {\n\t\tauthProcessor = auth.NewProcessor(secrets, cert)\n\t}\n\n\tauthProcessor.AddOrUpdateService(\n\t\turisearch.NewAPICache(service.HTTPRules, service.ID, false),\n\t\tservice.UserAuthorizationType,\n\t\tservice.UserAuthorizationHandler,\n\t\tservice.UserTokenToHTTPMappings,\n\t)\n\n\treturn authProcessor, nil\n}\n\nfunc serviceTypeToNetworkListenerType(serviceType policy.ServiceType, noTLS bool) common.ListenerType {\n\tswitch serviceType {\n\tcase policy.ServiceHTTP:\n\t\tif noTLS {\n\t\t\treturn common.HTTPNetwork\n\t\t}\n\t\treturn common.HTTPSNetwork\n\tdefault:\n\t\treturn common.TCPNetwork\n\t}\n}\n\nfunc serviceTypeToApplicationListenerType(serviceType policy.ServiceType) common.ListenerType {\n\tswitch serviceType {\n\tcase policy.ServiceHTTP:\n\t\treturn common.HTTPApplication\n\tdefault:\n\t\treturn common.TCPApplication\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/serviceregistry/serviceregistry_test.go",
    "content": "// +build !windows\n\npackage serviceregistry\n\nimport (\n\t\"net\"\n\t\"testing\"\n\t\"time\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\ttriremecommon \"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/testhelper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\nfunc newBaseApplicationServices(exposedPortValue, publicPortValue, privatePortValue, dependentPortValue uint16) (*policy.ApplicationService, *policy.ApplicationService) {\n\n\texposed1 := \"10.1.1.0/24\"\n\texposed2 := \"20.1.1.0/24\"\n\tpublic1 := \"30.1.1.0/24\"\n\tpublic2 := \"40.1.1.0/24\"\n\tdependent1 := \"50.1.1.0/24\"\n\tdependent2 := \"60.1.1.0/24\"\n\n\texposedPort, err := portspec.NewPortSpec(exposedPortValue, exposedPortValue, nil)\n\tSo(err, ShouldBeNil)\n\tpublicPort, err := portspec.NewPortSpec(publicPortValue, publicPortValue, nil)\n\tSo(err, ShouldBeNil)\n\tprivatePort, err := portspec.NewPortSpec(privatePortValue, privatePortValue, nil)\n\tSo(err, ShouldBeNil)\n\tdependentPort, err := portspec.NewPortSpec(dependentPortValue, dependentPortValue, nil)\n\tSo(err, ShouldBeNil)\n\n\treturn &policy.ApplicationService{\n\t\t\tID: \"policyExposed\",\n\t\t\tNetworkInfo: &triremecommon.Service{\n\t\t\t\tPorts:     exposedPort,\n\t\t\t\tProtocol:  6,\n\t\t\t\tAddresses: map[string]struct{}{exposed1: struct{}{}, exposed2: struct{}{}},\n\t\t\t},\n\t\t\tPublicNetworkInfo: &triremecommon.Service{\n\t\t\t\tPorts:     publicPort,\n\t\t\t\tProtocol:  6,\n\t\t\t\tAddresses: map[string]struct{}{public1: struct{}{}},\n\t\t\t},\n\t\t\tPrivateNetworkInfo: &triremecommon.Service{\n\t\t\t\tPorts:     privatePort,\n\t\t\t\tProtocol:  
6,\n\t\t\t\tAddresses: map[string]struct{}{},\n\t\t\t},\n\t\t\tType:                 policy.ServiceHTTP,\n\t\t\tPublicServiceTLSType: policy.ServiceTLSTypeAporeto,\n\t\t},\n\t\t&policy.ApplicationService{\n\t\t\tID: \"policyDepend\",\n\t\t\tNetworkInfo: &triremecommon.Service{\n\t\t\t\tPorts:    dependentPort,\n\t\t\t\tProtocol: 6,\n\t\t\t\tFQDNs:    []string{\"www.google.com\"},\n\t\t\t\tAddresses: map[string]struct{}{\n\t\t\t\t\tdependent1: struct{}{},\n\t\t\t\t\tdependent2: struct{}{},\n\t\t\t\t},\n\t\t\t},\n\t\t\tPublicNetworkInfo: &triremecommon.Service{\n\t\t\t\tPorts:     publicPort,\n\t\t\t\tProtocol:  6,\n\t\t\t\tAddresses: map[string]struct{}{public2: struct{}{}},\n\t\t\t},\n\t\t\tPrivateNetworkInfo: &triremecommon.Service{\n\t\t\t\tPorts:     privatePort,\n\t\t\t\tProtocol:  6,\n\t\t\t\tAddresses: map[string]struct{}{},\n\t\t\t},\n\t\t\tType:                 policy.ServiceHTTP,\n\t\t\tPublicServiceTLSType: policy.ServiceTLSTypeAporeto,\n\t\t}\n}\n\nfunc newPU(name string, exposedPort, publicPort, privatePort, dependentPort uint16, doubleExposed, doubleDependent bool) (*policy.PUInfo, *pucontext.PUContext, secrets.Secrets) {\n\texposed, dependent := newBaseApplicationServices(exposedPort, publicPort, privatePort, dependentPort)\n\n\texposedServices := policy.ApplicationServicesList{exposed}\n\tif doubleExposed {\n\t\texposedServices = append(exposedServices, exposed)\n\t}\n\n\tdependentServices := policy.ApplicationServicesList{dependent}\n\tif doubleDependent {\n\t\tdependentServices = append(dependentServices, dependent)\n\t}\n\tplc := policy.NewPUPolicy(\n\t\tname+\"-policyid1\",\n\t\t\"/ns1\",\n\t\tpolicy.Police,\n\t\tpolicy.IPRuleList{},\n\t\tpolicy.IPRuleList{},\n\t\tpolicy.DNSRuleList{},\n\t\tpolicy.TagSelectorList{},\n\t\tpolicy.TagSelectorList{},\n\t\tpolicy.NewTagStore(),\n\t\tpolicy.NewTagStoreFromSlice([]string{\"app=web\", 
\"type=aporeto\"}),\n\t\tnil,\n\t\tnil,\n\t\t0,\n\t\t0,\n\t\texposedServices,\n\t\tdependentServices,\n\t\t[]string{},\n\t\tpolicy.EnforcerMapping,\n\t\tpolicy.Reject|policy.Log,\n\t\tpolicy.Reject|policy.Log,\n\t)\n\n\tpuInfo := policy.NewPUInfo(name, \"/ns1\", triremecommon.ContainerPU)\n\tpuInfo.Policy = plc\n\tpctx, err := pucontext.NewPU(name, puInfo, nil, time.Second*1000)\n\tSo(err, ShouldBeNil)\n\t_, s, _ := testhelper.NewTestCompactPKISecrets()\n\treturn puInfo, pctx, s\n}\n\nfunc TestRegister(t *testing.T) {\n\tConvey(\"Given a new registry\", t, func() {\n\t\tr := Instance()\n\t\tConvey(\"When I register a new PU with no services\", func() {\n\t\t\tpuInfo, pctx, s := newPU(\"pu1\", 8080, 443, 80, 8080, false, false)\n\t\t\tsctx, err := r.Register(\"pu1\", puInfo, pctx, s)\n\t\t\tConvey(\"The data structures should be correct and I should be able to retrieve the service\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(sctx, ShouldNotBeNil)\n\t\t\t\tSo(sctx.PU, ShouldResemble, puInfo)\n\t\t\t\tSo(sctx.PUContext, ShouldResemble, pctx)\n\t\t\t\tSo(sctx.dependentServiceCache, ShouldNotBeNil)\n\t\t\t})\n\t\t\tConvey(\"And I should be able to retrieve the services using the three provided methods\", func() {\n\t\t\t\tserviceContext, rerr := r.RetrieveServiceByID(\"pu1\")\n\t\t\t\tSo(rerr, ShouldBeNil)\n\t\t\t\tSo(serviceContext, ShouldNotBeNil)\n\t\t\t\tSo(serviceContext, ShouldResemble, sctx)\n\n\t\t\t\tportContext, perr := r.RetrieveExposedServiceContext(net.ParseIP(\"10.1.1.1\").To4(), 80, \"\")\n\t\t\t\tSo(perr, ShouldBeNil)\n\t\t\t\tSo(portContext, ShouldNotBeNil)\n\t\t\t\tSo(portContext.ID, ShouldResemble, \"pu1\")\n\t\t\t\tSo(portContext.TargetPort, ShouldEqual, 80)\n\t\t\t\tSo(portContext.Service, ShouldResemble, puInfo.Policy.ExposedServices()[0])\n\t\t\t\tSo(portContext.Type, ShouldEqual, common.HTTPSNetwork)\n\n\t\t\t})\n\t\t\tConvey(\"Update the dependent services by fqdn, and it should be able to find\", func() {\n\t\t\t\tfor _, 
dependentService := range pctx.DependentServices(\"www.google.com\") {\n\t\t\t\t\tdependentService.NetworkInfo.Addresses[\"4.4.4.4/32\"] = struct{}{}\n\t\t\t\t}\n\n\t\t\t\terr := r.UpdateDependentServicesByID(\"pu1\")\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\t_, _, err = r.RetrieveDependentServiceDataByIDAndNetwork(\"pu1\", net.ParseIP(\"4.4.4.4\").To4(), 8080, \"\")\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t\tConvey(\"But I should get errors for non existing ports or services\", func() {\n\t\t\t\tserviceContext, rerr := r.RetrieveServiceByID(\"badpu\")\n\t\t\t\tSo(rerr, ShouldNotBeNil)\n\t\t\t\tSo(serviceContext, ShouldBeNil)\n\n\t\t\t\tportContext, perr := r.RetrieveExposedServiceContext(net.ParseIP(\"100.1.1.1\").To4(), 100, \"\")\n\t\t\t\tSo(perr, ShouldNotBeNil)\n\t\t\t\tSo(portContext, ShouldBeNil)\n\t\t\t})\n\n\t\t\tConvey(\"When I register a second service with no overlaps\", func() {\n\t\t\t\tpuInfo, pctx, s := newPU(\"pu2\", 8000, 4443, 8080, 10000, false, false)\n\t\t\t\tsctx, err := r.Register(\"pu2\", puInfo, pctx, s)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(sctx, ShouldNotBeNil)\n\n\t\t\t\tConvey(\"And I should be able to retrieve the updated services using the three provided methods\", func() {\n\t\t\t\t\tserviceContext, rerr := r.RetrieveServiceByID(\"pu2\")\n\t\t\t\t\tSo(rerr, ShouldBeNil)\n\t\t\t\t\tSo(serviceContext, ShouldNotBeNil)\n\t\t\t\t\tSo(serviceContext, ShouldResemble, sctx)\n\n\t\t\t\t\tportContext, perr := r.RetrieveExposedServiceContext(net.ParseIP(\"10.1.1.1\").To4(), 8080, \"\")\n\t\t\t\t\tSo(perr, ShouldBeNil)\n\t\t\t\t\tSo(portContext, ShouldNotBeNil)\n\t\t\t\t\tSo(portContext.ID, ShouldResemble, \"pu2\")\n\t\t\t\t\tSo(portContext.TargetPort, ShouldEqual, 8080)\n\t\t\t\t\tSo(portContext.Service, ShouldResemble, puInfo.Policy.ExposedServices()[0])\n\t\t\t\t\tSo(portContext.Type, ShouldEqual, common.HTTPSNetwork)\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I register a second service with port overlaps, I should get errors\", 
func() {\n\t\t\t\t// exposedService overlap\n\t\t\t\tpuInfo, pctx, s := newPU(\"pu2\", 8080, 4443, 8080, 10000, true, false)\n\t\t\t\tsctx, err := r.Register(\"pu2\", puInfo, pctx, s)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\tSo(sctx, ShouldBeNil)\n\n\t\t\t\t// dependentService overlap\n\t\t\t\tpuInfo, pctx, s = newPU(\"pu2\", 8080, 4443, 8080, 10000, false, true)\n\t\t\t\tsctx, err = r.Register(\"pu2\", puInfo, pctx, s)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\tSo(sctx, ShouldBeNil)\n\n\t\t\t\t// both overlaps\n\t\t\t\tpuInfo, pctx, s = newPU(\"pu2\", 8080, 4443, 8080, 10000, true, true)\n\t\t\t\tsctx, err = r.Register(\"pu2\", puInfo, pctx, s)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\tSo(sctx, ShouldBeNil)\n\n\t\t\t})\n\n\t\t\tConvey(\"When I re-register the service with updates on the ports\", func() {\n\t\t\t\tpuInfo, pctx, s := newPU(\"pu1\", 8000, 4443, 8080, 10000, false, false)\n\t\t\t\tsctx, err := r.Register(\"pu1\", puInfo, pctx, s)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(sctx, ShouldNotBeNil)\n\n\t\t\t\tConvey(\"And I should be able to retrieve the updated services using the three provided methods\", func() {\n\t\t\t\t\tserviceContext, rerr := r.RetrieveServiceByID(\"pu1\")\n\t\t\t\t\tSo(rerr, ShouldBeNil)\n\t\t\t\t\tSo(serviceContext, ShouldNotBeNil)\n\t\t\t\t\tSo(serviceContext, ShouldResemble, sctx)\n\n\t\t\t\t\tportContext, perr := r.RetrieveExposedServiceContext(net.ParseIP(\"10.1.1.1\").To4(), 8080, \"\")\n\t\t\t\t\tSo(perr, ShouldBeNil)\n\t\t\t\t\tSo(portContext, ShouldNotBeNil)\n\t\t\t\t\tSo(portContext.ID, ShouldResemble, \"pu1\")\n\t\t\t\t\tSo(portContext.TargetPort, ShouldEqual, 8080)\n\t\t\t\t\tSo(portContext.Service, ShouldResemble, puInfo.Policy.ExposedServices()[0])\n\t\t\t\t\tSo(portContext.Type, ShouldEqual, common.HTTPSNetwork)\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I unregister the service, it should be deleted\", func() {\n\t\t\t\tuerr := r.Unregister(\"pu1\")\n\t\t\t\tSo(uerr, ShouldBeNil)\n\t\t\t\tretrievedContext, rerr := 
r.RetrieveServiceByID(\"pu1\")\n\t\t\t\tSo(rerr, ShouldNotBeNil)\n\t\t\t\tSo(retrievedContext, ShouldBeNil)\n\t\t\t\tportContext, perr := r.RetrieveExposedServiceContext(net.ParseIP(\"10.1.1.1\").To4(), 80, \"\")\n\t\t\t\tSo(perr, ShouldNotBeNil)\n\t\t\t\tSo(portContext, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestServiceTypeToNetworkListenerType(t *testing.T) {\n\tConvey(\"When I convert a network HTTP service it should be HTTPNetwork\", t, func() {\n\t\tt := serviceTypeToNetworkListenerType(policy.ServiceHTTP, true)\n\t\tSo(t, ShouldEqual, common.HTTPNetwork)\n\t})\n\tConvey(\"When I convert a network HTTPS service it should be HTTPSNetwork\", t, func() {\n\t\tt := serviceTypeToNetworkListenerType(policy.ServiceHTTP, false)\n\t\tSo(t, ShouldEqual, common.HTTPSNetwork)\n\t})\n\tConvey(\"When I convert a TCP service it should be TCPNetwork\", t, func() {\n\t\tt := serviceTypeToNetworkListenerType(policy.ServiceTCP, false)\n\t\tSo(t, ShouldEqual, common.TCPNetwork)\n\t})\n}\n\nfunc TestServiceTypeToApplicationListenerType(t *testing.T) {\n\tConvey(\"When I convert an application HTTP service it should be HTTPApplication\", t, func() {\n\t\tt := serviceTypeToApplicationListenerType(policy.ServiceHTTP)\n\t\tSo(t, ShouldEqual, common.HTTPApplication)\n\t})\n\tConvey(\"When I convert an application TCP service it should be TCPApplication\", t, func() {\n\t\tt := serviceTypeToApplicationListenerType(policy.ServiceTCP)\n\t\tSo(t, ShouldEqual, common.TCPApplication)\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tcp/lookup.go",
    "content": "package tcp\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"strconv\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\nconst indeterminateRemoteController = \"\"\n\n// proxyFlowProperties is a struct used to pass flow information up\ntype proxyFlowProperties struct {\n\tmyControllerID string\n\tSourceIP       string\n\tDestIP         string\n\tPolicyID       string\n\tServiceID      string\n\tDestType       collector.EndPointType\n\tSourceType     collector.EndPointType\n\tSourcePort     uint16\n\tDestPort       uint16\n}\n\ntype lookup struct {\n\tSourceIP   net.IP\n\tDestIP     net.IP\n\tSourcePort uint16\n\tDestPort   uint16\n\tcollector  collector.EventCollector\n\tpuContext  *pucontext.PUContext\n\tpfp        *proxyFlowProperties\n\tclient     bool\n}\n\n// IDLookup performs policy lookup based on incoming tags from remote PU and matching against our policy DB.\n// Stats reporting is done for:\n//   - all rejects\n//   - accepts on server\nfunc (l *lookup) IDLookup(remoteController, remotePUID string, tags *policy.TagStore) bool { // nolint: staticcheck\n\n\tremoteController = \"\"\n\t// TODO: Enable this\n\t// if remoteController == l.pfp.myControllerID {\n\t// \tremoteController = \"\"\n\t// }\n\n\treport, packet := l.Policy(tags)\n\n\tif packet.Action.Rejected() {\n\t\tl.ReportStats(\n\t\t\tcollector.EndPointTypePU,\n\t\t\tremoteController,\n\t\t\tremotePUID,\n\t\t\tcollector.PolicyDrop,\n\t\t\treport,\n\t\t\tpacket,\n\t\t\tfalse,\n\t\t)\n\t\tzap.L().Debug(\"lookup reject\", zap.Bool(\"client\", l.client), zap.Strings(\"tags\", tags.GetSlice()))\n\t\treturn false\n\t}\n\n\tif !l.client && packet.Action.Accepted() 
{\n\t\tl.ReportStats(\n\t\t\tcollector.EndPointTypePU,\n\t\t\tremoteController,\n\t\t\tremotePUID,\n\t\t\t\"N/A\",\n\t\t\treport,\n\t\t\tpacket,\n\t\t\tfalse,\n\t\t)\n\t\tzap.L().Debug(\"lookup accept\", zap.Bool(\"client\", l.client), zap.Strings(\"tags\", tags.GetSlice()))\n\t}\n\treturn true\n}\n\n// Policy performs policy lookup based on incoming tags from remote PU and matching against our policy DB.\n// It also returns the report and packet policy.\nfunc (l *lookup) Policy(tags *policy.TagStore) (*policy.FlowPolicy, *policy.FlowPolicy) {\n\n\tvar report *policy.FlowPolicy\n\tvar packet *policy.FlowPolicy\n\n\tif l.client {\n\t\ttags.AppendKeyValue(constants.PortNumberLabelString, fmt.Sprintf(\"%s/%s\", constants.TCPProtoString, strconv.Itoa(int(l.DestPort))))\n\t\treport, packet = l.puContext.SearchTxtRules(tags, false)\n\t} else {\n\t\ttags.AppendKeyValue(constants.PortNumberLabelString, fmt.Sprintf(\"%s/%s\", constants.TCPProtoString, strconv.Itoa(int(l.DestPort))))\n\t\treport, packet = l.puContext.SearchRcvRules(tags)\n\t}\n\n\treturn report, packet\n}\n\nfunc (l *lookup) IPLookup() bool {\n\n\tvar report *policy.FlowPolicy\n\tvar packetPolicy *policy.FlowPolicy\n\tvar noPolicy error\n\n\tif l.client {\n\t\treport, packetPolicy, noPolicy = l.puContext.ApplicationACLPolicyFromAddr(l.DestIP, l.DestPort, packet.IPProtocolTCP)\n\t} else {\n\t\treport, packetPolicy, noPolicy = l.puContext.NetworkACLPolicyFromAddr(l.SourceIP, l.DestPort, packet.IPProtocolTCP)\n\t}\n\n\tmatchString := \"none\"\n\tif noPolicy != nil {\n\t\tmatchString = noPolicy.Error()\n\t}\n\n\t// Clients and Servers should reject and report if a reject action is found.\n\tif packetPolicy.Action.Rejected() {\n\n\t\tl.ReportStats(\n\t\t\tcollector.EndPointTypeExternalIP,\n\t\t\tindeterminateRemoteController,\n\t\t\tpacketPolicy.ServiceID,\n\t\t\tcollector.PolicyDrop,\n\t\t\treport,\n\t\t\tpacketPolicy,\n\t\t\tfalse,\n\t\t)\n\t\tzap.L().Debug(\n\t\t\t\"IP ACL Lookup 
Reject\",\n\t\t\tzap.Bool(\"client\", l.client),\n\t\t\tzap.String(\"match\", matchString),\n\t\t\tzap.String(\"src-ip\", l.pfp.SourceIP),\n\t\t\tzap.Uint16(\"src-port\", l.SourcePort),\n\t\t\tzap.String(\"dst-ip\", l.pfp.DestIP),\n\t\t\tzap.Uint16(\"dst-port\", l.DestPort),\n\t\t\tzap.String(\"report\", report.PolicyID),\n\t\t\tzap.String(\"policy\", packetPolicy.PolicyID))\n\t\treturn false\n\t}\n\n\tif !l.client && packetPolicy.Action.Accepted() {\n\t\tl.ReportStats(\n\t\t\tcollector.EndPointTypeExternalIP,\n\t\t\tindeterminateRemoteController,\n\t\t\tpacketPolicy.ServiceID,\n\t\t\t\"N/A\",\n\t\t\treport,\n\t\t\tpacketPolicy,\n\t\t\tfalse,\n\t\t)\n\t}\n\tzap.L().Debug(\n\t\t\"IP ACL Lookup Accept\",\n\t\tzap.Bool(\"client\", l.client),\n\t\tzap.String(\"match\", matchString),\n\t\tzap.String(\"src-ip\", l.pfp.SourceIP),\n\t\tzap.Uint16(\"src-port\", l.SourcePort),\n\t\tzap.String(\"dst-ip\", l.pfp.DestIP),\n\t\tzap.Uint16(\"dst-port\", l.DestPort),\n\t\tzap.String(\"report\", report.PolicyID),\n\t\tzap.String(\"policy\", packetPolicy.PolicyID))\n\treturn true\n}\n\nfunc (l *lookup) ReportStats(remoteType collector.EndPointType, remoteController string, remotePUID string, mode string, report *policy.FlowPolicy, packet *policy.FlowPolicy, accept bool) {\n\n\tdstController, dstID, srcController, srcID := \"\", \"\", \"\", \"\"\n\n\tif l.client {\n\t\tl.pfp.DestType = remoteType\n\t\tif l.pfp.myControllerID != remoteController {\n\t\t\tdstController = remoteController\n\t\t}\n\t\tdstID = remotePUID\n\t\tsrcID = l.puContext.ManagementID()\n\t} else {\n\t\tl.pfp.SourceType = remoteType\n\t\tdstID = l.puContext.ManagementID()\n\t\tif l.pfp.myControllerID != remoteController {\n\t\t\tsrcController = remoteController\n\t\t}\n\t\tsrcID = remotePUID\n\t}\n\n\tif accept 
{\n\t\tl.reportAcceptedFlow(\n\t\t\tl.pfp,\n\t\t\tsrcID,\n\t\t\tdstID,\n\t\t\tl.puContext,\n\t\t\treport,\n\t\t\tpacket,\n\t\t\tsrcController,\n\t\t\tdstController,\n\t\t)\n\t\treturn\n\t}\n\n\tl.reportRejectedFlow(\n\t\tl.pfp,\n\t\tsrcID,\n\t\tdstID,\n\t\tl.puContext,\n\t\tmode,\n\t\treport,\n\t\tpacket,\n\t\tsrcController,\n\t\tdstController,\n\t)\n}\n\nfunc (l *lookup) reportFlow(flowproperties *proxyFlowProperties, sourceID string, destID string, context *pucontext.PUContext, mode string, report *policy.FlowPolicy, actual *policy.FlowPolicy, sourceController string, destController string) {\n\n\tc := &collector.FlowRecord{\n\t\tContextID: context.ID(),\n\t\tSource: collector.EndPoint{\n\t\t\tID:   sourceID,\n\t\t\tIP:   flowproperties.SourceIP,\n\t\t\tPort: flowproperties.SourcePort,\n\t\t\tType: flowproperties.SourceType,\n\t\t},\n\t\tDestination: collector.EndPoint{\n\t\t\tID:   destID,\n\t\t\tIP:   flowproperties.DestIP,\n\t\t\tPort: flowproperties.DestPort,\n\t\t\tType: flowproperties.DestType,\n\t\t},\n\n\t\tAction:                actual.Action,\n\t\tDropReason:            mode,\n\t\tPolicyID:              actual.PolicyID,\n\t\tL4Protocol:            packet.IPProtocolTCP,\n\t\tServiceType:           policy.ServiceTCP,\n\t\tServiceID:             flowproperties.ServiceID,\n\t\tNamespace:             context.ManagementNamespace(),\n\t\tSourceController:      sourceController,\n\t\tDestinationController: destController,\n\t}\n\n\tif context.Annotations() != nil {\n\t\tc.Tags = context.Annotations().GetSlice()\n\t}\n\n\tif report.ObserveAction.Observed() {\n\t\tc.ObservedAction = report.Action\n\t\tc.ObservedPolicyID = report.PolicyID\n\t\tc.ObservedActionType = report.ObserveAction\n\t}\n\n\tl.collector.CollectFlowEvent(c)\n}\n\nfunc (l *lookup) reportAcceptedFlow(flowproperties *proxyFlowProperties, sourceID string, destID string, context *pucontext.PUContext, report *policy.FlowPolicy, packet *policy.FlowPolicy, sourceController string, destController 
string) {\n\n\tl.reportFlow(flowproperties, sourceID, destID, context, \"N/A\", report, packet, sourceController, destController)\n}\n\nfunc (l *lookup) reportRejectedFlow(flowproperties *proxyFlowProperties, sourceID string, destID string, context *pucontext.PUContext, mode string, report *policy.FlowPolicy, packet *policy.FlowPolicy, sourceController string, destController string) {\n\n\tif report == nil {\n\t\treport = &policy.FlowPolicy{\n\t\t\tAction:   policy.Reject | policy.Log,\n\t\t\tPolicyID: \"default\",\n\t\t}\n\t}\n\tif packet == nil {\n\t\tpacket = report\n\t}\n\tl.reportFlow(flowproperties, sourceID, destID, context, mode, report, packet, sourceController, destController)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tcp/ping_tcp.go",
    "content": "package tcp\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/markedconn\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/pingrequest\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/serviceregistry\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/gaia\"\n\t\"go.aporeto.io/gaia/x509extensions\"\n\t\"go.uber.org/zap\"\n)\n\n// InitiatePing initiates the ping request\nfunc (p *Proxy) InitiatePing(ctx context.Context, sctx *serviceregistry.ServiceContext, sdata *serviceregistry.DependentServiceData, pingConfig *policy.PingConfig) error {\n\n\tzap.L().Debug(\"Initiating L4 ping\")\n\n\tfor i := 0; i < pingConfig.Iterations; i++ {\n\t\tif err := p.sendPingRequest(ctx, pingConfig, sctx, sdata, i); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (p *Proxy) sendPingRequest(\n\tctx context.Context,\n\tpingConfig *policy.PingConfig,\n\tsctx *serviceregistry.ServiceContext,\n\tsdata *serviceregistry.DependentServiceData,\n\titerationID int) error {\n\n\tpingID := pingConfig.ID\n\tdestIP := pingConfig.IP\n\tdestPort := pingConfig.Port\n\n\t_, netaction, _ := sctx.PUContext.ApplicationACLPolicyFromAddr(destIP, destPort, packet.IPProtocolTCP)\n\n\tpingErr := \"dial\"\n\tif e := pingConfig.Error(); e != \"\" {\n\t\tpingErr = e\n\t}\n\n\tpr := &collector.PingReport{\n\t\tPingID:               pingID,\n\t\tIterationID:          iterationID,\n\t\tPUID:                 sctx.PUContext.ManagementID(),\n\t\tNamespace:            
sctx.PUContext.ManagementNamespace(),\n\t\tProtocol:             6,\n\t\tServiceType:          \"L4\",\n\t\tAgentVersion:         p.agentVersion.String(),\n\t\tApplicationListening: false,\n\t\tACLPolicyID:          netaction.PolicyID,\n\t\tACLPolicyAction:      netaction.Action,\n\t\tError:                pingErr,\n\t\tTargetTCPNetworks:    pingConfig.TargetTCPNetworks,\n\t\tExcludedNetworks:     pingConfig.ExcludedNetworks,\n\t\tType:                 gaia.PingProbeTypeRequest,\n\t\tRemoteEndpointType:   collector.EndPointTypeExternalIP,\n\t\tClaimsType:           gaia.PingProbeClaimsTypeReceived,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypePlain,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeTransmitted,\n\t}\n\n\tdefer p.collector.CollectPingEvent(pr)\n\n\tconn, err := dial(ctx, destIP, destPort, p.mark)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer conn.Close() // nolint: errcheck\n\n\tsrc := conn.RemoteAddr().(*net.TCPAddr)\n\tpl := p.getPolicyReporter(sctx.PUContext, src.IP, src.Port, destIP, int(destPort), sdata.ServiceObject)\n\tpl.client = true\n\n\t// ServerName: Use first configured FQDN or the destination IP\n\tserverName, err := common.GetTLSServerName(conn.RemoteAddr().String(), sdata.ServiceObject)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to get the server name: %s\", err)\n\t}\n\n\t// Encrypt Down Connection\n\tp.RLock()\n\tca := p.caPool\n\tp.RUnlock()\n\n\ttlsCert, err := tls.X509KeyPair([]byte(pingConfig.ServiceCertificate), []byte(pingConfig.ServiceKey))\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to parse X509 certificate: %w\", err)\n\t}\n\n\tcerts := []tls.Certificate{\n\t\ttlsCert,\n\t}\n\n\tt, err := getClientTLSConfig(ca, certs, serverName, false)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to generate tls configuration: %s\", err)\n\t}\n\n\t// Do TLS\n\ttlsConn := tls.Client(conn, t)\n\tdefer tlsConn.Close() // nolint errcheck\n\n\tpayload := &policy.PingPayload{\n\t\tPingID:      
pingID,\n\t\tIterationID: iterationID,\n\t\tServiceType: policy.ServiceTCP,\n\t}\n\n\thost := fmt.Sprintf(\"https://%s:%d\", destIP, destPort)\n\tdata, err := pingrequest.CreateRaw(host, payload)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tladdr := tlsConn.LocalAddr().(*net.TCPAddr)\n\traddr := tlsConn.RemoteAddr().(*net.TCPAddr)\n\n\tstartTime := time.Now()\n\tif err := write(tlsConn, data); err != nil {\n\t\tpr.Error = err.Error()\n\t\tpr.FourTuple = fmt.Sprintf(\n\t\t\t\"%s:%s:%d:%d\",\n\t\t\tladdr.IP.String(),\n\t\t\traddr.IP.String(),\n\t\t\tladdr.Port,\n\t\t\traddr.Port,\n\t\t)\n\t\treturn err\n\t}\n\n\tpr.Error = \"\"\n\tpr.RTT = time.Since(startTime).String()\n\tpr.PayloadSize = len(data)\n\tpr.ApplicationListening = true\n\tpr.Type = gaia.PingProbeTypeResponse\n\tpr.FourTuple = fmt.Sprintf(\n\t\t\"%s:%s:%d:%d\",\n\t\traddr.IP.String(),\n\t\tladdr.IP.String(),\n\t\traddr.Port,\n\t\tladdr.Port,\n\t)\n\n\tif len(tlsConn.ConnectionState().PeerCertificates) > 0 {\n\t\treturn extract(pr, tlsConn.ConnectionState().PeerCertificates[0], pl)\n\t}\n\n\treturn nil\n}\n\nfunc (p *Proxy) processPingRequest(conn *tls.Conn, pl *lookup) error {\n\n\tzap.L().Debug(\"Processing ping request\")\n\n\tif err := conn.SetReadDeadline(time.Now().Add(3 * time.Second)); err != nil {\n\t\treturn err\n\t}\n\n\tvar dst bytes.Buffer\n\tif _, err := io.Copy(&dst, conn); err != nil {\n\t\treturn err\n\t}\n\n\tpp, err := pingrequest.ExtractRaw(dst.Bytes())\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tpr := &collector.PingReport{\n\t\tPingID:          pp.PingID,\n\t\tIterationID:     pp.IterationID,\n\t\tType:            gaia.PingProbeTypeRequest,\n\t\tPUID:            pl.puContext.ManagementID(),\n\t\tNamespace:       pl.puContext.ManagementNamespace(),\n\t\tPayloadSize:     len(dst.Bytes()),\n\t\tPayloadSizeType: gaia.PingProbePayloadSizeTypeReceived,\n\t\tProtocol:        6,\n\t\tServiceType:     \"L4\",\n\t\tFourTuple: 
fmt.Sprintf(\"%s:%s:%d:%d\",\n\t\t\tpl.SourceIP.String(),\n\t\t\tpl.DestIP.String(),\n\t\t\tpl.SourcePort,\n\t\t\tpl.DestPort),\n\t\tAgentVersion:        p.agentVersion.String(),\n\t\tRemoteEndpointType:  collector.EndPointTypePU,\n\t\tIsServer:            true,\n\t\tClaimsType:          gaia.PingProbeClaimsTypeReceived,\n\t\tRemoteNamespaceType: gaia.PingProbeRemoteNamespaceTypePlain,\n\t\tTargetTCPNetworks:   true,\n\t\tExcludedNetworks:    false,\n\t}\n\n\tif pp.ServiceType != policy.ServiceTCP {\n\t\tpr.Error = fmt.Sprintf(\"service type mismatch, expected: %d, actual: %d\", policy.ServiceTCP, pp.ServiceType)\n\t}\n\n\tif len(conn.ConnectionState().PeerCertificates) > 0 {\n\t\tif err := extract(pr, conn.ConnectionState().PeerCertificates[0], pl); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tp.collector.CollectPingEvent(pr)\n\n\treturn nil\n}\n\nfunc extract(pr *collector.PingReport, cert *x509.Certificate, pl *lookup) error {\n\n\tpr.RemotePUID = cert.Subject.CommonName\n\tpr.RemoteEndpointType = collector.EndPointTypePU\n\tif len(cert.Subject.Organization) > 0 {\n\t\tpr.RemoteNamespace = cert.Subject.Organization[0]\n\t}\n\tpr.PeerCertIssuer = cert.Issuer.String()\n\tpr.PeerCertSubject = cert.Subject.String()\n\tpr.PeerCertExpiry = cert.NotAfter\n\n\tif found, controller := common.ExtractExtension(x509extensions.Controller(), cert.Extensions); found {\n\t\tpr.RemoteController = string(controller)\n\t}\n\n\tif found, value := common.ExtractExtension(x509extensions.IdentityTags(), cert.Extensions); found {\n\n\t\tclaims := []string{}\n\t\tif err := json.Unmarshal(value, &claims); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to unmarshal tags: %w\", err)\n\t\t}\n\n\t\tpr.Claims = claims\n\n\t\ttags := policy.NewTagStoreFromSlice(claims)\n\t\t_, pkt := pl.Policy(tags)\n\n\t\tpr.PolicyID = pkt.PolicyID\n\t\tpr.PolicyAction = pkt.Action\n\t\tif pkt.Action.Rejected() {\n\t\t\tpr.Error = collector.PolicyDrop\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc pingEnabled(conn 
*tls.Conn) bool {\n\n\tpeerCerts := conn.ConnectionState().PeerCertificates\n\tif len(peerCerts) <= 0 {\n\t\treturn false\n\t}\n\n\tfound, _ := common.ExtractExtension(x509extensions.Ping(), peerCerts[0].Extensions)\n\treturn found\n}\n\nfunc dial(ctx context.Context, ip net.IP, port uint16, mark int) (net.Conn, error) {\n\n\traddr := &net.TCPAddr{\n\t\tIP:   ip,\n\t\tPort: int(port),\n\t}\n\n\td := net.Dialer{\n\t\tTimeout: 5 * time.Second,\n\t\tControl: markedconn.ControlFunc(mark, false, nil),\n\t}\n\treturn d.DialContext(ctx, \"tcp\", raddr.String())\n}\n\nfunc write(conn net.Conn, data []byte) error {\n\n\tif err := conn.SetWriteDeadline(time.Now().Add(5 * time.Second)); err != nil {\n\t\treturn err\n\t}\n\n\tn, err := conn.Write(data)\n\tif err != nil && err != io.EOF {\n\t\treturn err\n\t}\n\n\tif n != len(data) {\n\t\treturn fmt.Errorf(\"failed to write data, expected: %v, written: %v\", len(data), n)\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tcp/tcp.go",
    "content": "package tcp\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"sync\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"github.com/blang/semver\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/markedconn\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/protomux\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/serviceregistry\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/tcp/verifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/tlshelper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\n// Proxy maintains state for proxies connections from listen to backend.\ntype Proxy struct {\n\tcollector      collector.EventCollector\n\tmyControllerID string\n\tpuID           string\n\tmark           int\n\n\t// TLS cert for the service\n\tcertificate *tls.Certificate\n\t// caPool contains the system roots and in addition the services external CAs\n\tcaPool *x509.CertPool\n\n\t// Verfier implements ID and IP ACL rules using the Peer Certificate Validation Handler\n\tverifier verifier.Verifier\n\n\t// List of local IP's\n\tlocalIPs map[string]struct{}\n\n\tagentVersion semver.Version\n\n\tsync.RWMutex\n}\n\n// NewTCPProxy creates a new instance of proxy reate a new instance of Proxy\nfunc NewTCPProxy(\n\tc collector.EventCollector,\n\tpuID string,\n\tcertificate *tls.Certificate,\n\tcaPool *x509.CertPool,\n\tagentVersion semver.Version,\n\tmark int,\n) *Proxy {\n\n\tlocalIPs := markedconn.GetInterfaces()\n\n\treturn 
&Proxy{\n\t\tcollector:    c,\n\t\tpuID:         puID,\n\t\tverifier:     verifier.New(caPool),\n\t\tlocalIPs:     localIPs,\n\t\tcertificate:  certificate,\n\t\tcaPool:       caPool,\n\t\tagentVersion: agentVersion,\n\t\tmark:         mark,\n\t}\n}\n\n// RunNetworkServer implements enforcer.Enforcer interface\nfunc (p *Proxy) RunNetworkServer(\n\tctx context.Context,\n\tlistener net.Listener,\n) error {\n\n\tgo func() {\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-time.After(5 * time.Second):\n\t\t\t\tp.Lock()\n\t\t\t\tp.localIPs = markedconn.GetInterfaces()\n\t\t\t\tp.Unlock()\n\t\t\tcase <-ctx.Done():\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}()\n\n\t// Encryption is done transparently for TCP.\n\tgo p.serve(ctx, listener)\n\n\treturn nil\n}\n\n// UpdateSecrets updates the secrets of the connections.\nfunc (p *Proxy) UpdateSecrets(\n\tcert *tls.Certificate,\n\tcaPool *x509.CertPool,\n\ts secrets.Secrets,\n\tcertPEM string,\n\tkeyPEM string,\n) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.certificate = cert\n\tp.caPool = caPool\n\n\tp.verifier.TrustCAs(caPool)\n}\n\nfunc (p *Proxy) serve(\n\tctx context.Context,\n\tlistener net.Listener,\n) {\n\tfor {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn\n\t\tdefault:\n\t\t\tconn, err := listener.Accept()\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif protoListener, ok := listener.(*protomux.ProtoListener); ok {\n\t\t\t\t// Windows: we don't really need the platform-specific data map for plain tcp (we can get it from the conn).\n\t\t\t\t// So just remove from the map here.\n\t\t\t\tmarkedconn.RemovePlatformData(protoListener.Listener, conn)\n\t\t\t}\n\t\t\tgo p.handle(ctx, conn)\n\t\t}\n\t}\n}\n\n// ShutDown shuts down the server.\nfunc (p *Proxy) ShutDown() error {\n\treturn nil\n}\n\nfunc (p *Proxy) getService(\n\tip net.IP,\n\tport int,\n\tlocal bool,\n) (*policy.ApplicationService, error) {\n\n\t// If the destination is a local IP, it means that we are processing a client connection.\n\tif local {\n\t\t_, 
serviceData, err := serviceregistry.Instance().RetrieveDependentServiceDataByIDAndNetwork(p.puID, ip, port, \"\")\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unknown dependent service pu:%s %s/%d: %s\", p.puID, ip.String(), port, err)\n\t\t}\n\t\treturn serviceData.ServiceObject, nil\n\t}\n\n\tportContext, err := serviceregistry.Instance().RetrieveExposedServiceContext(ip, port, \"\")\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unknown exposed service %s/%d: %s\", ip.String(), port, err)\n\t}\n\treturn portContext.Service, nil\n}\n\n// handle handles a connection. The upstream connection is the one from the\n// client who initiated this connection, while the downstream connection is\n// the one from the proxy to the next hop.\n// Client PU:\n//   - upstream connection is from client to proxy.\n//   - downstream connection is from proxy to the nexthop (service, LB, PU)\n// Server PU:\n//   - upstream connection is from client or another enforcer\n//   - downstream connection is from proxy to the server nexthop\nfunc (p *Proxy) handle(ctx context.Context, upConn net.Conn) {\n\n\tdefer upConn.Close() // nolint\n\n\t// TODO: handle proxy protocol\n\n\tproxiedUpConn := upConn.(*markedconn.ProxiedConnection)\n\tip, port := proxiedUpConn.GetOriginalDestination()\n\tplatformData := proxiedUpConn.GetPlatformData()\n\n\tservice, err := p.getService(ip, port, p.isLocal(upConn))\n\tif err != nil {\n\t\tzap.L().Error(\"no service found\", zap.Error(err))\n\t\treturn\n\t}\n\n\tpuContext, err := p.puContextFromContextID(p.puID)\n\tif err != nil {\n\t\tzap.L().Error(\"no pu found\", zap.String(\"puid\", p.puID), zap.Error(err))\n\t\treturn\n\t}\n\n\tp.handleWithPUAndService(ctx, upConn, ip, port, platformData, puContext, service)\n}\n\nfunc (p *Proxy) getPolicyReporter(\n\tpuContext *pucontext.PUContext,\n\tsip net.IP,\n\tsport int,\n\tdip net.IP,\n\tdport int,\n\tservice *policy.ApplicationService,\n) *lookup {\n\n\tpfp := &proxyFlowProperties{\n\t\tmyControllerID: p.myControllerID,\n\t\tDestIP: 
        dip.String(),\n\t\tDestPort:       uint16(dport),\n\t\tSourceIP:       sip.String(),\n\t\tSourcePort:     0, // TODO: Investigate if this should be set\n\t\tServiceID:      service.ID,\n\t\tDestType:       collector.EndPointTypePU,\n\t\tSourceType:     collector.EndPointTypePU,\n\t}\n\n\treturn &lookup{\n\t\tSourceIP:   sip,\n\t\tDestIP:     dip,\n\t\tSourcePort: uint16(sport),\n\t\tDestPort:   uint16(dport),\n\t\tcollector:  p.collector,\n\t\tpuContext:  puContext,\n\t\tpfp:        pfp,\n\t}\n}\n\nfunc (p *Proxy) handleWithPUAndService(\n\tctx context.Context,\n\tupConn net.Conn,\n\torigDestIP net.IP,\n\torigDestPort int,\n\tplatformData *markedconn.PlatformData,\n\tpuContext *pucontext.PUContext,\n\tservice *policy.ApplicationService,\n) {\n\t// If the connection was received on the public port, the downstream connection has to be\n\t// changed to the private service listening port.\n\tdownPort := origDestPort\n\tif downPort == service.PublicPort() {\n\t\tdownPort = service.PrivatePort()\n\t}\n\n\t// Initialize a policy and reporting object\n\tsrc := upConn.RemoteAddr().(*net.TCPAddr)\n\tpr := p.getPolicyReporter(puContext, src.IP, src.Port, origDestIP, origDestPort, service)\n\n\tdownConn, err := p.initiateDownstreamTCPConnection(ctx, origDestIP, downPort, platformData)\n\tif err != nil {\n\t\t// Report rejection\n\t\tpr.ReportStats(collector.EndPointTypeExternalIP, \"\", \"default\", collector.UnableToDial, nil, nil, false)\n\t\treturn\n\t}\n\tdefer downConn.Close() // nolint\n\n\tif err := p.proxyData(ctx, upConn, downConn, service, pr); err != nil {\n\t\tzap.L().Debug(\"Error with proxying data\", zap.Error(err))\n\t}\n}\n\nfunc (p *Proxy) startEncryptedClientDataPath(\n\tctx context.Context,\n\tdownConn net.Conn,\n\tupConn net.Conn,\n\tservice *policy.ApplicationService,\n\tpr *lookup,\n) error {\n\n\t// Set a flag so the policy engine knows whether it is on the server or the client side\n\tpr.client = true\n\n\t// ServerName: Use first configured FQDN or the destination IP\n\tserverName, 
err := common.GetTLSServerName(downConn.RemoteAddr().String(), service)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to get the server name: %s\", err)\n\t}\n\n\t// Encrypt Down Connection\n\tp.RLock()\n\tca := p.caPool\n\tcerts := []tls.Certificate{}\n\tif p.certificate != nil {\n\t\tcerts = append(certs, *p.certificate)\n\t}\n\tp.RUnlock()\n\n\tt, err := getClientTLSConfig(ca, certs, serverName, service.External)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to generate tls configuration: %s\", err)\n\t}\n\n\tt.VerifyPeerCertificate = func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {\n\t\treturn p.verifier.VerifyPeerCertificate(rawCerts, verifiedChains, pr, false)\n\t}\n\n\t// Do TLS\n\ttlsConn := tls.Client(downConn, t)\n\tdefer tlsConn.Close() // nolint errcheck\n\tdownConn = tlsConn\n\n\tzap.L().Debug(\n\t\t\"Handle client connection\",\n\t\tzap.String(\"src\", upConn.RemoteAddr().String()),\n\t\tzap.String(\"dst\", downConn.RemoteAddr().String()),\n\t\tzap.String(\"tls.server\", t.ServerName),\n\t\tzap.Bool(\"tls.rootCAs\", t.RootCAs != nil),\n\t\tzap.Int(\"tls.certs\", len(t.Certificates)),\n\t)\n\n\t// TLS will automatically start negotiation on write. 
Nothing to do for us.\n\tp.copyData(ctx, upConn, downConn)\n\treturn nil\n}\n\nfunc (p *Proxy) startEncryptedServerDataPath(\n\tctx context.Context,\n\tdownConn net.Conn,\n\tupConn net.Conn,\n\tservice *policy.ApplicationService,\n\tpr *lookup,\n) error {\n\n\tzap.L().Debug(\n\t\t\"Handle server connection\",\n\t\tzap.String(\"src\", upConn.RemoteAddr().String()),\n\t\tzap.String(\"dst\", downConn.RemoteAddr().String()),\n\t\tzap.String(\"orig-dst\", pr.DestIP.String()),\n\t\tzap.Uint16(\"orig-dstport\", pr.DestPort),\n\t)\n\n\tif service.PrivateTLSListener {\n\t\tzap.L().Debug(\"convert connection to server as TLS\")\n\t\tdownConn = tls.Client(downConn, &tls.Config{\n\t\t\tInsecureSkipVerify: true,\n\t\t})\n\t}\n\n\tproxiedUpConn := upConn.(*markedconn.ProxiedConnection)\n\t_, originalPort := proxiedUpConn.GetOriginalDestination()\n\n\t// Use Aporeto certs\n\tp.RLock()\n\tcaPool := p.caPool\n\tclientCerts := []tls.Certificate{}\n\tif p.certificate != nil {\n\t\tclientCerts = []tls.Certificate{*p.certificate}\n\t}\n\tp.RUnlock()\n\n\ttlsConfig, err := getServerTLSConfig(\n\t\tcaPool,\n\t\tclientCerts,\n\t\toriginalPort,\n\t\tservice,\n\t)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"invalid tls server configuration: %s\", err)\n\t}\n\n\tif tlsConfig != nil {\n\t\t// Register Peer Certificate Verification so we can apply policies.\n\t\ttlsConfig.VerifyPeerCertificate = func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {\n\t\t\treturn p.verifier.VerifyPeerCertificate(rawCerts, verifiedChains, pr, tlsConfig.ClientAuth == tls.RequireAndVerifyClientCert)\n\t\t}\n\n\t\ttlsConn := tls.Server(upConn.(*markedconn.ProxiedConnection).GetTCPConnection(), tlsConfig)\n\t\tdefer tlsConn.Close() // nolint errcheck\n\n\t\t// Manually initiating the TLS handshake to get the connection state.\n\t\t// The call to write will skip TLS handshake.\n\t\tif err := tlsConn.Handshake(); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tif pingEnabled(tlsConn) {\n\t\t\treturn 
p.processPingRequest(tlsConn, pr)\n\t\t}\n\n\t\tupConn = tlsConn\n\t} else {\n\t\t// In case of no TLS, apply IP policies right here.\n\t\taction := pr.IPLookup()\n\t\tzap.L().Debug(\"ip acl lookup\", zap.Bool(\"action\", action))\n\t\tif !action {\n\t\t\treturn fmt.Errorf(\"ip acl drop\")\n\t\t}\n\t}\n\n\t// TLS will automatically start negotiation on write. Nothing to do for us.\n\tp.copyData(ctx, upConn, downConn)\n\treturn nil\n}\n\nfunc (p *Proxy) copyData(\n\tctx context.Context,\n\tsource, dest net.Conn,\n) {\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\tgo func() {\n\t\tdataprocessor(ctx, source, dest)\n\t\twg.Done()\n\t}()\n\tgo func() {\n\t\tdataprocessor(ctx, dest, source)\n\t\twg.Done()\n\t}()\n\twg.Wait()\n}\n\ntype readwithContext func(p []byte) (n int, err error)\n\nfunc (r readwithContext) Read(p []byte) (int, error) { return r(p) }\n\nfunc dataprocessor(\n\tctx context.Context,\n\tsource net.Conn,\n\tdest net.Conn,\n) { // nolint\n\tdefer func() {\n\t\tswitch connType := dest.(type) {\n\t\tcase *tls.Conn:\n\t\t\tconnType.CloseWrite() // nolint errcheck\n\t\tcase *net.TCPConn:\n\t\t\tconnType.CloseWrite() // nolint errcheck\n\t\tcase *markedconn.ProxiedConnection:\n\t\t\tconnType.GetTCPConnection().CloseWrite() // nolint errcheck\n\t\t}\n\t}()\n\n\tif _, err := io.Copy(dest, readwithContext(\n\t\tfunc(p []byte) (int, error) {\n\t\t\tselect {\n\t\t\tcase <-ctx.Done():\n\t\t\t\treturn 0, ctx.Err()\n\t\t\tdefault:\n\t\t\t\treturn source.Read(p)\n\t\t\t}\n\t\t},\n\t),\n\t); err != nil { // nolint\n\t\tlogErr(err)\n\t}\n}\n\nfunc (p *Proxy) proxyData(\n\tctx context.Context,\n\tupConn net.Conn,\n\tdownConn net.Conn,\n\tservice *policy.ApplicationService,\n\tpr *lookup,\n) error {\n\n\t// If the connection comes from a local IP, it means that we are processing a client connection.\n\tif p.isLocal(upConn) {\n\t\treturn p.startEncryptedClientDataPath(ctx, downConn, upConn, service, pr)\n\t}\n\n\treturn p.startEncryptedServerDataPath(ctx, downConn, upConn, service, 
pr)\n}\n\nfunc (p *Proxy) puContextFromContextID(\n\tpuID string,\n) (*pucontext.PUContext, error) {\n\n\tsctx, err := serviceregistry.Instance().RetrieveServiceByID(puID)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"Context not found %s\", puID)\n\t}\n\n\treturn sctx.PUContext, nil\n}\n\n// initiateDownstreamTCPConnection initiates a downstream TCP connection\nfunc (p *Proxy) initiateDownstreamTCPConnection(\n\tctx context.Context,\n\tip net.IP,\n\tport int,\n\tplatformData *markedconn.PlatformData,\n) (net.Conn, error) {\n\n\traddr := &net.TCPAddr{\n\t\tIP:   ip,\n\t\tPort: port,\n\t}\n\treturn markedconn.DialMarkedWithContext(ctx, \"tcp\", raddr.String(), platformData, p.mark)\n}\n\nfunc (p *Proxy) isLocal(conn net.Conn) bool {\n\n\thost, _, err := net.SplitHostPort(conn.RemoteAddr().String())\n\tif err != nil {\n\t\treturn false\n\t}\n\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\tif _, ok := p.localIPs[host]; ok {\n\t\treturn true\n\t}\n\treturn false\n}\n\nfunc logErr(err error) bool {\n\tswitch err.(type) {\n\tcase syscall.Errno:\n\t\tzap.L().Error(\"Connection error to destination\", zap.Error(err))\n\tdefault:\n\t\tzap.L().Error(\"Connection terminated\", zap.Error(err))\n\t}\n\treturn false\n}\n\n// getPublicServerTLSConfig provides the TLS configuration for the public port.\n// There is a valid case where we don't provide a TLS configuration (nil) but the\n// error is also nil, to support the case of a publicly exposed port.\nfunc getPublicServerTLSConfig(\n\tcaPool *x509.CertPool,\n\tclientCerts []tls.Certificate,\n\tservice *policy.ApplicationService,\n) (t *tls.Config, err error) {\n\n\t// Apply Public configuration\n\tif (service.PublicServiceTLSType != policy.ServiceTLSTypeCustom) && (service.PublicServiceTLSType != policy.ServiceTLSTypeAporeto) {\n\t\treturn nil, nil\n\t}\n\n\tt = tlshelper.NewBaseTLSServerConfig()\n\n\t// Server Cert and Key.\n\tif service.PublicServiceTLSType == policy.ServiceTLSTypeCustom {\n\t\t// Use custom certs\n\t\tif 
len(service.PublicServiceCertificate) > 0 && len(service.PublicServiceCertificateKey) > 0 {\n\n\t\t\tcert, err := tls.X509KeyPair(service.PublicServiceCertificate, service.PublicServiceCertificateKey)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"invalid public cert pair\")\n\t\t\t}\n\t\t\tt.Certificates = []tls.Certificate{cert}\n\t\t}\n\t} else if service.PublicServiceTLSType == policy.ServiceTLSTypeAporeto {\n\t\t// Use Aporeto certs\n\t\tt.Certificates = clientCerts\n\t}\n\n\t// mTLS with client\n\tif service.UserAuthorizationType == policy.UserAuthorizationMutualTLS {\n\t\tt.ClientAuth = tls.RequireAndVerifyClientCert\n\t\tt.ClientCAs = caPool\n\t\tif len(service.MutualTLSTrustedRoots) > 0 {\n\t\t\tif !t.ClientCAs.AppendCertsFromPEM(service.MutualTLSTrustedRoots) {\n\t\t\t\treturn nil, fmt.Errorf(\"Unable to process client CAs\")\n\t\t\t}\n\t\t}\n\t}\n\n\treturn t, nil\n}\n\n// getExposedServerMTLSConfig provides the mTLS configuration for the server.\nfunc getExposedServerMTLSConfig(\n\tcaPool *x509.CertPool,\n\tcerts []tls.Certificate,\n) (t *tls.Config, err error) {\n\n\tif len(certs) == 0 {\n\t\treturn nil, fmt.Errorf(\"Failed to start encryption\")\n\t}\n\n\tt = tlshelper.NewBaseTLSServerConfig()\n\tt.Certificates = certs\n\tt.ClientCAs = caPool\n\tt.ClientAuth = tls.RequireAndVerifyClientCert\n\treturn t, nil\n}\n\n// getServerTLSConfig provides the server TLS configuration. 
It handles the\n// server on public and exposed ports.\n// returns:\n//    - error\n//    - tls.Config which can be nil even when error is nil to indicate no TLS\nfunc getServerTLSConfig(\n\tcaPool *x509.CertPool,\n\tcerts []tls.Certificate,\n\toriginalPort int,\n\tservice *policy.ApplicationService,\n) (t *tls.Config, err error) {\n\n\tif originalPort != service.PublicPort() {\n\t\t// mTLS for Up Connection for exposed ports protected by Aporeto\n\t\treturn getExposedServerMTLSConfig(caPool, certs)\n\t}\n\t// TLS configuration supported on public ports\n\treturn getPublicServerTLSConfig(caPool, certs, service)\n}\n\n// getClientTLSConfig generates a tls.Config for a given client based on the service it may be accessing.\n// - Services protected by Aporeto should do mTLS.\n// - External (Third Party) Services do TLS only.\nfunc getClientTLSConfig(\n\tcaPool *x509.CertPool,\n\tclientCerts []tls.Certificate,\n\tserverName string,\n\texternal bool,\n) (t *tls.Config, err error) {\n\n\tt = tlshelper.NewBaseTLSClientConfig()\n\tt.RootCAs = caPool\n\tt.ServerName = serverName\n\n\tif !external {\n\t\tif len(clientCerts) == 0 {\n\t\t\treturn nil, fmt.Errorf(\"no client certs provided for mTLS\")\n\t\t}\n\t\t// Do mTLS for enforcer-protected services; TLS only for external services.\n\t\tt.Certificates = clientCerts\n\t}\n\treturn t, nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tcp/tcp_test.go",
    "content": "package tcp\n\nimport (\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"log\"\n\t\"reflect\"\n\t\"testing\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\tacommon \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nfunc testTLSCertificate() tls.Certificate {\n\tcertPem := []byte(`-----BEGIN CERTIFICATE-----\nMIIBhTCCASugAwIBAgIQIRi6zePL6mKjOipn+dNuaTAKBggqhkjOPQQDAjASMRAw\nDgYDVQQKEwdBY21lIENvMB4XDTE3MTAyMDE5NDMwNloXDTE4MTAyMDE5NDMwNlow\nEjEQMA4GA1UEChMHQWNtZSBDbzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABD0d\n7VNhbWvZLWPuj/RtHFjvtJBEwOkhbN/BnnE8rnZR8+sbwnc/KhCk3FhnpHZnQz7B\n5aETbbIgmuvewdjvSBSjYzBhMA4GA1UdDwEB/wQEAwICpDATBgNVHSUEDDAKBggr\nBgEFBQcDATAPBgNVHRMBAf8EBTADAQH/MCkGA1UdEQQiMCCCDmxvY2FsaG9zdDo1\nNDUzgg4xMjcuMC4wLjE6NTQ1MzAKBggqhkjOPQQDAgNIADBFAiEA2zpJEPQyz6/l\nWf86aX6PepsntZv2GYlA5UpabfT2EZICICpJ5h/iI+i341gBmLiAFQOyTDT+/wQc\n6MF9+Yw1Yy0t\n-----END CERTIFICATE-----`)\n\tkeyPem := []byte(`-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIIrYSSNQFaA2Hwf1duRSxKtLYX5CB04fSeQ6tF1aY/PuoAoGCCqGSM49\nAwEHoUQDQgAEPR3tU2Fta9ktY+6P9G0cWO+0kETA6SFs38GecTyudlHz6xvCdz8q\nEKTcWGekdmdDPsHloRNtsiCa697B2O9IFA==\n-----END EC PRIVATE KEY-----`)\n\tcert, err := tls.X509KeyPair(certPem, keyPem)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\treturn cert\n}\n\nfunc Test_getClientTLSConfig(t *testing.T) {\n\ttype args struct {\n\t\tcaPool      *x509.CertPool\n\t\tclientCerts []tls.Certificate\n\t\tserverName  string\n\t\texternal    bool\n\t}\n\tbasicCaPool, _ := x509.SystemCertPool()\n\tbasicTLSCert := testTLSCertificate()\n\tbasicTLSCertList := []tls.Certificate{basicTLSCert}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twantT   *tls.Config\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"basic external service\",\n\t\t\targs: args{\n\t\t\t\texternal:    true,\n\t\t\t\tcaPool:      nil,                 // no caPool => we dont need additional CAs to validate server 
certs. they might be using digicert/letsencrypt/any std cert.\n\t\t\t\tclientCerts: []tls.Certificate{}, // no certs. for external service don't use client certs\n\t\t\t\tserverName:  \"www.google.com\",\n\t\t\t},\n\t\t\twantT: &tls.Config{\n\t\t\t\tPreferServerCipherSuites: true,\n\t\t\t\tSessionTicketsDisabled:   true,\n\t\t\t\tMaxVersion:               tls.VersionTLS12,\n\t\t\t\tServerName:               \"www.google.com\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"basic external service ignored client certs\",\n\t\t\targs: args{\n\t\t\t\texternal:    true,\n\t\t\t\tcaPool:      nil,              // no caPool => we don't need additional CAs to validate server certs. they might be using digicert/letsencrypt/any std cert.\n\t\t\t\tclientCerts: basicTLSCertList, // clientCerts should be ignored for external service.\n\t\t\t\tserverName:  \"www.google.com\",\n\t\t\t},\n\t\t\twantT: &tls.Config{\n\t\t\t\tPreferServerCipherSuites: true,\n\t\t\t\tSessionTicketsDisabled:   true,\n\t\t\t\tMaxVersion:               tls.VersionTLS12,\n\t\t\t\tServerName:               \"www.google.com\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"basic external service with trusted ca pool and ignored cert list\",\n\t\t\targs: args{\n\t\t\t\tcaPool:      basicCaPool,      // caPool should be used to validate server certs\n\t\t\t\tclientCerts: basicTLSCertList, // should be ignored as we don't provide client certs for external service\n\t\t\t\tserverName:  \"www.google.com\",\n\t\t\t\texternal:    true,\n\t\t\t},\n\t\t\twantT: &tls.Config{\n\t\t\t\tPreferServerCipherSuites: true,\n\t\t\t\tSessionTicketsDisabled:   true,\n\t\t\t\tMaxVersion:               tls.VersionTLS12,\n\t\t\t\tRootCAs:                  basicCaPool,\n\t\t\t\tServerName:               \"www.google.com\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgotT, err := getClientTLSConfig(tt.args.caPool, 
tt.args.clientCerts, tt.args.serverName, tt.args.external)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"getClientTLSConfig() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(gotT, tt.wantT) {\n\t\t\t\tt.Errorf(\"getClientTLSConfig() = %+v, want %+v\", gotT, tt.wantT)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_getTLSServerName(t *testing.T) {\n\ttype args struct {\n\t\taddrAndPort string\n\t\tservice     *policy.ApplicationService\n\t}\n\ttests := []struct {\n\t\tname     string\n\t\targs     args\n\t\twantName string\n\t\twantErr  bool\n\t}{\n\t\t{\n\t\t\tname:     \"nil service and bad addr (error)\",\n\t\t\targs:     args{},\n\t\t\twantName: \"\",\n\t\t\twantErr:  true,\n\t\t},\n\t\t{\n\t\t\tname: \"service with nil network info and bad addr (error)\",\n\t\t\targs: args{\n\t\t\t\tservice: &policy.ApplicationService{},\n\t\t\t},\n\t\t\twantName: \"\",\n\t\t\twantErr:  true,\n\t\t},\n\t\t{\n\t\t\tname: \"no fqdn and bad addr (error)\",\n\t\t\targs: args{\n\t\t\t\tservice: &policy.ApplicationService{\n\t\t\t\t\tNetworkInfo: &common.Service{\n\t\t\t\t\t\tFQDNs: []string{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantName: \"\",\n\t\t\twantErr:  true,\n\t\t},\n\t\t{\n\t\t\tname: \"no fqdn and valid addr (success)\",\n\t\t\targs: args{\n\t\t\t\taddrAndPort: \"dns:80\",\n\t\t\t\tservice: &policy.ApplicationService{\n\t\t\t\t\tNetworkInfo: &common.Service{\n\t\t\t\t\t\tFQDNs: []string{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantName: \"dns\",\n\t\t\twantErr:  false,\n\t\t},\n\t\t{\n\t\t\tname: \"fqdn and valid addr use fqdn[0]\",\n\t\t\targs: args{\n\t\t\t\taddrAndPort: \"dns:80\",\n\t\t\t\tservice: &policy.ApplicationService{\n\t\t\t\t\tNetworkInfo: &common.Service{\n\t\t\t\t\t\tFQDNs: []string{\"www.google.com\", \"alt.google.com\"},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantName: \"www.google.com\",\n\t\t\twantErr:  false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t 
*testing.T) {\n\t\t\tgotName, err := acommon.GetTLSServerName(tt.args.addrAndPort, tt.args.service)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"getTLSServerName() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif gotName != tt.wantName {\n\t\t\t\tt.Errorf(\"getTLSServerName() = %v, want %v\", gotName, tt.wantName)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tcp/verifier/testdata/generate-certs.sh",
    "content": "#!/bin/bash\n\n# This files generates new certs if the generated certs expire. We cant use openssl as this has aporeto extensions\n\necho \"Generate CA\"\ntg cert --name myca --org acme --common-name root --is-ca --pass secret --force\n\necho \"Generate Client-IP Cert With Aporeto extensions but missing key tags\"\ntg cert --name myclient-bad --org acme --common-name client-bad \\\n        --auth-client --signing-cert myca-cert.pem \\\n        --signing-cert-key myca-key.pem \\\n        --signing-cert-key-pass secret \\\n        --tags \"\\$controller=10.10.10.10\" \\\n        --ip 10.10.10.10 --force\n\necho \"Generate Client-IP Cert\"\ntg cert --name myclient-ip --org acme --common-name client-ip \\\n        --auth-client --signing-cert myca-cert.pem \\\n        --signing-cert-key myca-key.pem \\\n        --signing-cert-key-pass secret \\\n        --tags \"\\$identity=processingunit\" --tags \"\\$id=some\" --tags \"\\$controller=10.10.10.10\" \\\n        --ip 10.10.10.10 --force\n\necho \"Generate Client-DNS Cert\"\ntg cert --name myclient-dns --org acme --common-name client-dns \\\n        --auth-client --signing-cert myca-cert.pem \\\n        --signing-cert-key myca-key.pem \\\n        --signing-cert-key-pass secret \\\n        --tags \"\\$identity=processingunit\" --tags \"\\$id=some\" --tags \"\\$controller=www.client.com\" \\\n        --dns www.client.com --force\n\necho \"Generate Server Cert\"\ntg cert --name myserver --org acme --common-name server \\\n        --auth-server --signing-cert myca-cert.pem \\\n        --signing-cert-key myca-key.pem \\\n        --signing-cert-key-pass secret \\\n        --tags \"\\$identity=processingunit\" --tags \"\\$id=some\" --tags \"\\$controller=www.server.com\" \\\n        --dns www.server.com --force\n\nrm -f *-key.pem\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tcp/verifier/testdata/myca-cert.pem",
    "content": "-----BEGIN CERTIFICATE-----\nMIIBcDCCARWgAwIBAgIQWzOkU1NT1yBh8UFbjCAo5zAKBggqhkjOPQQDAjAeMQ0w\nCwYDVQQKEwRhY21lMQ0wCwYDVQQDEwRyb290MB4XDTIwMDUzMTE2MjYxNFoXDTMw\nMDQwOTE2MjYxNFowHjENMAsGA1UEChMEYWNtZTENMAsGA1UEAxMEcm9vdDBZMBMG\nByqGSM49AgEGCCqGSM49AwEHA0IABGnz6woeFke5RO87STAXabmBbI1AaVAU41vk\nzVts0dsr5rPi+ao61GLTU9Hb10d1z9g+YQFKRi2P+hV5gyArl1KjNTAzMA4GA1Ud\nDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MBAGCisGAQQBg4xuAQEEAltdMAoG\nCCqGSM49BAMCA0kAMEYCIQCPgFyYMc1uv0lsimdTkM+WEc4ffansxrAcGKjlouJ+\n0AIhAMyJE1UFshHOsEgowZblHHrNgk+RzBGgwogC4Z7NqK5u\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tcp/verifier/testdata/myclient-bad-cert.pem",
    "content": "-----BEGIN CERTIFICATE-----\nMIIBsjCCAVigAwIBAgIRAKXo6pjvtuJSeCQrXdRtLYUwCgYIKoZIzj0EAwIwHjEN\nMAsGA1UEChMEYWNtZTENMAsGA1UEAxMEcm9vdDAeFw0yMDA1MzExNjI2MTRaFw0z\nMDA0MDkxNjI2MTRaMCQxDTALBgNVBAoTBGFjbWUxEzARBgNVBAMTCmNsaWVudC1i\nYWQwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAREEB67QHsxYDrR56gf75ziphGw\n2Food8R6K9PFbE7myRZUE4iDZOiMTjMZ5NnclRdnwG1S38XdwQPSh2RCvGnOo3Ew\nbzAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwIwDAYDVR0TAQH/\nBAIwADAPBgNVHREECDAGhwQKCgoKMCkGCisGAQQBg4xuAQEEG1siJGNvbnRyb2xs\nZXI9MTAuMTAuMTAuMTAiXTAKBggqhkjOPQQDAgNIADBFAiEA1BRp64Y06JfrzaWn\ngqo+GlOhdRL9hQbLF3lrUrRlEeICIEarze+yQBrEkoON6e5Iw+W1XDuCCMr//D1M\nsL7/6mXB\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tcp/verifier/testdata/myclient-dns-cert.pem",
    "content": "-----BEGIN CERTIFICATE-----\nMIIB5zCCAY2gAwIBAgIRANny7E6cG8+6zqSZfqwnz/QwCgYIKoZIzj0EAwIwHjEN\nMAsGA1UEChMEYWNtZTENMAsGA1UEAxMEcm9vdDAeFw0yMDA1MzExNjI2MTVaFw0z\nMDA0MDkxNjI2MTVaMCQxDTALBgNVBAoTBGFjbWUxEzARBgNVBAMTCmNsaWVudC1k\nbnMwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQHxnm6sG42k2I+KObch3FUMkJW\nXrH+3t+4xo0x25y/Yv+KFB2PSujhEi9rOPTvZ8qr6HafeWP8S4sZX0imhDn6o4Gl\nMIGiMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDAjAMBgNVHRMB\nAf8EAjAAMBkGA1UdEQQSMBCCDnd3dy5jbGllbnQuY29tMFIGCisGAQQBg4xuAQEE\nRFsiJGlkZW50aXR5PXByb2Nlc3Npbmd1bml0IiwiJGlkPXNvbWUiLCIkY29udHJv\nbGxlcj13d3cuY2xpZW50LmNvbSJdMAoGCCqGSM49BAMCA0gAMEUCIDxMgcWlLYop\n7r3CM3xztxtp3ztNLu9P0A2K7MQVIudrAiEA9tx5Qy1UysHqtnryRyYmH8aYZdg2\nvg5/nZLC+fVwVII=\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tcp/verifier/testdata/myclient-ip-cert.pem",
    "content": "-----BEGIN CERTIFICATE-----\nMIIB2TCCAX+gAwIBAgIRANWzODJFxt9yZ5VPKV6ZDu8wCgYIKoZIzj0EAwIwHjEN\nMAsGA1UEChMEYWNtZTENMAsGA1UEAxMEcm9vdDAeFw0yMDA1MzExNjI2MTVaFw0z\nMDA0MDkxNjI2MTVaMCMxDTALBgNVBAoTBGFjbWUxEjAQBgNVBAMTCWNsaWVudC1p\ncDBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABOD/wAKaXveeeLAcL9pyUVsKWVdZ\ntUheLlTrK7trS1+b6aj+JRd4+jH0pFW81GsVBP4+BmZpbz58Mcdvcl0lkaKjgZgw\ngZUwDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQMMAoGCCsGAQUFBwMCMAwGA1UdEwEB\n/wQCMAAwDwYDVR0RBAgwBocECgoKCjBPBgorBgEEAYOMbgEBBEFbIiRpZGVudGl0\neT1wcm9jZXNzaW5ndW5pdCIsIiRpZD1zb21lIiwiJGNvbnRyb2xsZXI9MTAuMTAu\nMTAuMTAiXTAKBggqhkjOPQQDAgNIADBFAiEArSuUzPLzYsQlEyJEsz1Ezr5APKLy\nFHqunYKPZ2OrXcgCIAJx2tsToRHRd0CX8z9Rg2ccfFDrvRiyTI9e7Ex2Vi3R\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tcp/verifier/testdata/myserver-cert.pem",
    "content": "-----BEGIN CERTIFICATE-----\nMIIB4zCCAYmgAwIBAgIRAL/SlY1qRcZo+xiwdErtkvAwCgYIKoZIzj0EAwIwHjEN\nMAsGA1UEChMEYWNtZTENMAsGA1UEAxMEcm9vdDAeFw0yMDA1MzExNjI2MTVaFw0z\nMDA0MDkxNjI2MTVaMCAxDTALBgNVBAoTBGFjbWUxDzANBgNVBAMTBnNlcnZlcjBZ\nMBMGByqGSM49AgEGCCqGSM49AwEHA0IABNwHz7QdTpN0UF8aNEGloW34h2lBD0Vt\n4bF9zubzFU8iTTOrZSuSsdk/85sJ4QWMU9VKiUhZLuHN+tRXQCurB+ujgaUwgaIw\nDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQC\nMAAwGQYDVR0RBBIwEIIOd3d3LnNlcnZlci5jb20wUgYKKwYBBAGDjG4BAQREWyIk\naWRlbnRpdHk9cHJvY2Vzc2luZ3VuaXQiLCIkaWQ9c29tZSIsIiRjb250cm9sbGVy\nPXd3dy5zZXJ2ZXIuY29tIl0wCgYIKoZIzj0EAwIDSAAwRQIgW5ukpr1XFZyXcMWx\nsNnvKiYE8Wgbm89PJf4Ra26104oCIQCSfI1nu7d7Y95CBmq8Gq8Tz27ysGEitRZY\n68ANaB+R2w==\n-----END CERTIFICATE-----\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tcp/verifier/verifier.go",
    "content": "package verifier\n\nimport (\n\t\"crypto/x509\"\n\t\"encoding/asn1\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"sync\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// aporetoASNTagsExtension holds the value of the Aporeto Tags Extension\nvar aporetoASNTagsExtension asn1.ObjectIdentifier\n\n// aporetoPingExtension holds the value of the Aporeto Ping Extension\nvar aporetoPingExtension asn1.ObjectIdentifier\n\n// PolicyReporter is the interface to allow looking up policies and report stats\ntype PolicyReporter interface {\n\tIDLookup(remoteContoller, remotePUID string, tags *policy.TagStore) bool\n\tIPLookup() bool\n\tPolicy(tags *policy.TagStore) (*policy.FlowPolicy, *policy.FlowPolicy)\n\tReportStats(remoteType collector.EndPointType, remoteController string, remotePUID string, mode string, report *policy.FlowPolicy, packet *policy.FlowPolicy, accept bool)\n}\n\n// Verifier interface defines the methods a verifier must implement\ntype Verifier interface {\n\n\t// TrustCA replaces the trusted CA list.\n\tTrustCAs(caPool *x509.CertPool)\n\n\t// VerifyPeerCertificate verifies if this TLS connection should be admitted.\n\tVerifyPeerCertificate(rawCerts [][]byte, verifiedChains [][]*x509.Certificate, policy PolicyReporter, mustHaveClientIDCert bool) error\n}\n\n// verifier implements the Verifier interface\ntype verifier struct {\n\tsync.RWMutex\n\t// trustedCAs stores the list of certs to be trusted\n\ttrustedCAPool *x509.CertPool\n}\n\nfunc init() {\n\taporetoASNTagsExtension = asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 50798, 1, 1}\n\taporetoPingExtension = asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 50798, 1, 4}\n}\n\n// New returns a new instance of Verifier\nfunc New(caPool *x509.CertPool) Verifier {\n\treturn &verifier{\n\t\ttrustedCAPool: caPool,\n\t}\n}\n\n// certHasDNSOrIPSAN checks if a given name exists in a SAN for the certificate.\nfunc certHasDNSOrIPSAN(san string, cert 
*x509.Certificate) bool {\n\n\t// check if san matches one of the DNS SANs in the cert\n\tfor _, name := range cert.DNSNames {\n\t\tif san == name {\n\t\t\treturn true\n\t\t}\n\t}\n\n\tfor _, ip := range cert.IPAddresses {\n\t\tif san == ip.String() {\n\t\t\treturn true\n\t\t}\n\t}\n\n\treturn false\n}\n\n// TrustCAs replaces the trusted CA list.\nfunc (v *verifier) TrustCAs(caPool *x509.CertPool) {\n\n\t// Update verifier\n\tv.Lock()\n\tv.trustedCAPool = caPool\n\tv.Unlock()\n}\n\n// VerifyPeerCertificate validates that policies allow mTLS between two enforcers based on\n// aporeto-tags. If no aporeto tags are found, it applies IP based ACLs.\nfunc (v *verifier) VerifyPeerCertificate(rawCerts [][]byte, verifiedChains [][]*x509.Certificate, pr PolicyReporter, mustHaveClientIDCert bool) error {\n\n\tv.RLock()\n\topts := x509.VerifyOptions{\n\t\tRoots: v.trustedCAPool,\n\t\tKeyUsages: []x509.ExtKeyUsage{\n\t\t\tx509.ExtKeyUsageServerAuth,\n\t\t\tx509.ExtKeyUsageClientAuth,\n\t\t},\n\t}\n\tv.RUnlock()\n\n\t// Is this an Aporeto cert we are trusting?\n\tif opts.Roots != nil {\n\t\tfor _, certChain := range verifiedChains {\n\t\t\ttags := []string{}\n\t\t\tping := false\n\t\t\tfor _, cert := range certChain {\n\t\t\t\tfor _, e := range cert.Extensions {\n\t\t\t\t\tif e.Id.Equal(aporetoPingExtension) {\n\t\t\t\t\t\tping = true\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\n\t\t\t\t\t// If there is an Aporeto extension, get the value\n\t\t\t\t\tif e.Id.Equal(aporetoASNTagsExtension) {\n\t\t\t\t\t\tif err := json.Unmarshal(e.Value, &tags); err == nil {\n\t\t\t\t\t\t\tif ping {\n\t\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t// No Aporeto tags\n\t\t\t\tif len(tags) == 0 {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\n\t\t\t\trtags := policy.NewTagStoreFromSlice(tags)\n\n\t\t\t\t// check if we have remote controller\n\t\t\t\trcontroller, ok := rtags.Get(policy.TagKeyController)\n\t\t\t\tif !ok {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\n\t\t\t\t// check if $identity == 
processingunit; skip the chain when the tag is missing or has any other value\n\t\t\t\tif pu, ok := rtags.Get(policy.TagKeyIdentity); !ok || pu != policy.TagValueProcessingUnit {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\n\t\t\t\t// check if we have remote puid\n\t\t\t\trpuid, ok := rtags.Get(policy.TagKeyID)\n\t\t\t\tif !ok {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\n\t\t\t\tif _, err := cert.Verify(opts); err != nil {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\n\t\t\t\t// TODO: Check controller against verified CA\n\t\t\t\t// fmt.Println(strings.Join(tags, \" \"))\n\t\t\t\t// if !certHasDNSOrIPSAN(controller, cert) {\n\t\t\t\t// \tfmt.Println(\"No IP or DNS SAN\", strings.Join(cert.DNSNames, \" \"))\n\t\t\t\t// \tcontinue\n\t\t\t\t// }\n\n\t\t\t\t// If ping is enabled in the certificate, we defer the policy lookup and the server\n\t\t\t\t// application will never receive any packets related to ping irrespective of policy.\n\t\t\t\tif ping {\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\n\t\t\t\tif !pr.IDLookup(rcontroller, rpuid, rtags) {\n\t\t\t\t\treturn fmt.Errorf(\"ID policy lookup rejection\")\n\t\t\t\t}\n\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\t}\n\n\tif mustHaveClientIDCert {\n\t\treturn fmt.Errorf(\"ID lookup not performed\")\n\t}\n\n\tif !pr.IPLookup() {\n\t\treturn fmt.Errorf(\"IP policy lookup rejection\")\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tcp/verifier/verifier_test.go",
    "content": "package verifier\n\nimport (\n\t\"crypto/x509\"\n\t\"encoding/pem\"\n\t\"io/ioutil\"\n\t\"testing\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nfunc Test_verifier_TrustCAs(t *testing.T) {\n\ttype fields struct {\n\t\ttrustedCAPool *x509.CertPool\n\t}\n\ttype args struct {\n\t\tcaPool *x509.CertPool\n\t}\n\tsysPool, err := x509.SystemCertPool()\n\tif err != nil {\n\t\tt.Errorf(\"unable to get system certs\")\n\t}\n\ttests := []struct {\n\t\tname   string\n\t\tfields fields\n\t\targs   args\n\t}{\n\t\t{\n\t\t\t\"basic nil\",\n\t\t\tfields{\n\t\t\t\ttrustedCAPool: nil,\n\t\t\t},\n\t\t\targs{\n\t\t\t\tcaPool: nil,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\t\"basic valid\",\n\t\t\tfields{\n\t\t\t\ttrustedCAPool: sysPool,\n\t\t\t},\n\t\t\targs{\n\t\t\t\tcaPool: sysPool,\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tv := &verifier{\n\t\t\t\ttrustedCAPool: tt.fields.trustedCAPool,\n\t\t\t}\n\t\t\tv.TrustCAs(tt.args.caPool)\n\t\t\tif v.trustedCAPool != tt.args.caPool {\n\t\t\t\tt.Errorf(\"ca pool not appropriately setup\")\n\t\t\t}\n\t\t})\n\t}\n}\n\n// Dummy PolicyLookup implementer to test\ntype policyReporter struct {\n\tip    int\n\tipret bool\n\tid    int\n\tidret bool\n}\n\nfunc (p *policyReporter) IDLookup(remoteController, remotePUID string, tags *policy.TagStore) bool {\n\tp.id++\n\treturn p.idret\n}\nfunc (p *policyReporter) IPLookup() bool {\n\tp.ip++\n\treturn p.ipret\n}\n\nfunc (p *policyReporter) ReportStats(remoteType collector.EndPointType, remoteController string, remotePUID string, mode string, report *policy.FlowPolicy, packet *policy.FlowPolicy, accept bool) {\n\n}\n\nfunc (p *policyReporter) Policy(tags *policy.TagStore) (*policy.FlowPolicy, *policy.FlowPolicy) {\n\treturn nil, nil\n}\n\nfunc Test_verifier_VerifyPeerCertificate(t *testing.T) {\n\ttype fields struct {\n\t\ttrustedCAs *x509.CertPool\n\t}\n\ttype args struct 
{\n\t\trawCerts             [][]byte\n\t\tverifiedChains       [][]*x509.Certificate\n\t\tpolicyReporter       *policyReporter\n\t\tmustHaveClientIDCert bool\n\t}\n\ttype want struct {\n\t\terr     bool\n\t\tipCount int\n\t\tidCount int\n\t}\n\n\t// Aporeto CA Root\n\tcerts, err := ioutil.ReadFile(\"./testdata/myca-cert.pem\")\n\tif err != nil {\n\t\tpanic(\"unable to load CA\")\n\t}\n\tblock, _ := pem.Decode(certs)\n\tif block == nil {\n\t\tpanic(\"failed to parse certificate PEM\")\n\t}\n\taporetoCertRoot, err := x509.ParseCertificate(block.Bytes)\n\tif err != nil {\n\t\tpanic(\"failed to parse certificate: \" + err.Error())\n\t}\n\taporetoCAPool := x509.NewCertPool()\n\tif ok := aporetoCAPool.AppendCertsFromPEM(certs); !ok {\n\t\tpanic(\"unable to append CA to root\")\n\t}\n\n\t// Aporeto Client Bad Cert Leaf with Tags\n\tcerts, err = ioutil.ReadFile(\"./testdata/myclient-bad-cert.pem\")\n\tif err != nil {\n\t\tpanic(\"unable to load client bad cert\")\n\t}\n\tblock, _ = pem.Decode(certs)\n\tif block == nil {\n\t\tpanic(\"failed to parse client bad certificate PEM\")\n\t}\n\taporetoClientBadCertWithExtension, err := x509.ParseCertificate(block.Bytes)\n\tif err != nil {\n\t\tpanic(\"failed to parse client bad certificate: \" + err.Error())\n\t}\n\n\t// Aporeto Client IP Cert Leaf with Tags\n\tcerts, err = ioutil.ReadFile(\"./testdata/myclient-ip-cert.pem\")\n\tif err != nil {\n\t\tpanic(\"unable to load client ip cert\")\n\t}\n\tblock, _ = pem.Decode(certs)\n\tif block == nil {\n\t\tpanic(\"failed to parse client ip certificate PEM\")\n\t}\n\taporetoClientIPCertWithExtension, err := x509.ParseCertificate(block.Bytes)\n\tif err != nil {\n\t\tpanic(\"failed to parse client ip certificate: \" + err.Error())\n\t}\n\n\t// Aporeto Client DNS Cert Leaf with Tags\n\tcerts, err = ioutil.ReadFile(\"./testdata/myclient-dns-cert.pem\")\n\tif err != nil {\n\t\tpanic(\"unable to load client dns cert\")\n\t}\n\tblock, _ = pem.Decode(certs)\n\tif block == nil 
{\n\t\tpanic(\"failed to parse client dns certificate PEM\")\n\t}\n\taporetoClientDNSCertWithExtension, err := x509.ParseCertificate(block.Bytes)\n\tif err != nil {\n\t\tpanic(\"failed to parse client dns certificate: \" + err.Error())\n\t}\n\n\t// Aporeto Server Cert Leaf with Tags\n\tcerts, err = ioutil.ReadFile(\"./testdata/myserver-cert.pem\")\n\tif err != nil {\n\t\tpanic(\"unable to load server cert\")\n\t}\n\tblock, _ = pem.Decode(certs)\n\tif block == nil {\n\t\tpanic(\"failed to parse server certificate PEM\")\n\t}\n\taporetoServerCertWithExtension, err := x509.ParseCertificate(block.Bytes)\n\tif err != nil {\n\t\tpanic(\"failed to parse server certificate: \" + err.Error())\n\t}\n\ttests := []struct {\n\t\tname   string\n\t\tfields fields\n\t\targs   args\n\t\twant   want\n\t}{\n\t\t{\n\t\t\tname:   \"ip lookup - success (no aporeto tags)\",\n\t\t\tfields: fields{},\n\t\t\targs: args{\n\t\t\t\tpolicyReporter: &policyReporter{\n\t\t\t\t\tipret: true,\n\t\t\t\t\tidret: true,\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\terr:     false,\n\t\t\t\tipCount: 1,\n\t\t\t\tidCount: 0,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:   \"ip lookup - failure (no aporeto tags)\",\n\t\t\tfields: fields{},\n\t\t\targs: args{\n\t\t\t\tpolicyReporter: &policyReporter{\n\t\t\t\t\tipret: false,\n\t\t\t\t\tidret: false,\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\terr:     true,\n\t\t\t\tipCount: 1,\n\t\t\t\tidCount: 0,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"ip lookup - success (no trusted CAs)\",\n\t\t\tfields: fields{\n\t\t\t\ttrustedCAs: x509.NewCertPool(),\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tverifiedChains: [][]*x509.Certificate{\n\t\t\t\t\t{\n\t\t\t\t\t\taporetoClientIPCertWithExtension,\n\t\t\t\t\t\taporetoCertRoot,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpolicyReporter: &policyReporter{\n\t\t\t\t\tipret: true,\n\t\t\t\t\tidret: true,\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\terr:     false,\n\t\t\t\tipCount: 1,\n\t\t\t\tidCount: 
0,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:   \"ip lookup - success (missing chain in trusted CAs)\",\n\t\t\tfields: fields{},\n\t\t\targs: args{\n\t\t\t\tverifiedChains: [][]*x509.Certificate{\n\t\t\t\t\t{\n\t\t\t\t\t\taporetoClientIPCertWithExtension,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpolicyReporter: &policyReporter{\n\t\t\t\t\tipret: true,\n\t\t\t\t\tidret: true,\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\terr:     false,\n\t\t\t\tipCount: 1,\n\t\t\t\tidCount: 0,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"ip lookup - success (missing aporeto tags in verified chain)\",\n\t\t\tfields: fields{\n\t\t\t\ttrustedCAs: aporetoCAPool,\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tverifiedChains: [][]*x509.Certificate{\n\t\t\t\t\t{\n\t\t\t\t\t\taporetoCertRoot,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpolicyReporter: &policyReporter{\n\t\t\t\t\tipret: true,\n\t\t\t\t\tidret: true,\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\terr:     false,\n\t\t\t\tipCount: 1,\n\t\t\t\tidCount: 0,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"id lookup (client Bad) - failure\",\n\t\t\tfields: fields{\n\t\t\t\ttrustedCAs: aporetoCAPool,\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tverifiedChains: [][]*x509.Certificate{\n\t\t\t\t\t{\n\t\t\t\t\t\taporetoClientBadCertWithExtension,\n\t\t\t\t\t\taporetoCertRoot,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpolicyReporter: &policyReporter{\n\t\t\t\t\tipret: true,\n\t\t\t\t\tidret: true,\n\t\t\t\t},\n\t\t\t\tmustHaveClientIDCert: true,\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\terr:     true,\n\t\t\t\tipCount: 0,\n\t\t\t\tidCount: 0,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"id lookup (client IP) - success\",\n\t\t\tfields: fields{\n\t\t\t\ttrustedCAs: aporetoCAPool,\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tverifiedChains: [][]*x509.Certificate{\n\t\t\t\t\t{\n\t\t\t\t\t\taporetoClientIPCertWithExtension,\n\t\t\t\t\t\taporetoCertRoot,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpolicyReporter: &policyReporter{\n\t\t\t\t\tipret: true,\n\t\t\t\t\tidret: true,\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: 
want{\n\t\t\t\terr:     false,\n\t\t\t\tipCount: 0,\n\t\t\t\tidCount: 1,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"id lookup (client DNS) - success\",\n\t\t\tfields: fields{\n\t\t\t\ttrustedCAs: aporetoCAPool,\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tverifiedChains: [][]*x509.Certificate{\n\t\t\t\t\t{\n\t\t\t\t\t\taporetoClientDNSCertWithExtension,\n\t\t\t\t\t\taporetoCertRoot,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpolicyReporter: &policyReporter{\n\t\t\t\t\tipret: true,\n\t\t\t\t\tidret: true,\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\terr:     false,\n\t\t\t\tipCount: 0,\n\t\t\t\tidCount: 1,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"id lookup (server) - success\",\n\t\t\tfields: fields{\n\t\t\t\ttrustedCAs: aporetoCAPool,\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tverifiedChains: [][]*x509.Certificate{\n\t\t\t\t\t{\n\t\t\t\t\t\taporetoServerCertWithExtension,\n\t\t\t\t\t\taporetoCertRoot,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpolicyReporter: &policyReporter{\n\t\t\t\t\tipret: true,\n\t\t\t\t\tidret: true,\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\terr:     false,\n\t\t\t\tipCount: 0,\n\t\t\t\tidCount: 1,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"id lookup (server) - must have client id - failure\",\n\t\t\tfields: fields{\n\t\t\t\ttrustedCAs: aporetoCAPool,\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tpolicyReporter: &policyReporter{\n\t\t\t\t\tipret: true,\n\t\t\t\t\tidret: true,\n\t\t\t\t},\n\t\t\t\tmustHaveClientIDCert: true,\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\terr:     true,\n\t\t\t\tipCount: 0,\n\t\t\t\tidCount: 0,\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tv := New(tt.fields.trustedCAs)\n\t\t\terr := v.VerifyPeerCertificate(tt.args.rawCerts, tt.args.verifiedChains, tt.args.policyReporter, tt.args.mustHaveClientIDCert)\n\t\t\tif (err != nil) != tt.want.err {\n\t\t\t\tt.Errorf(\"verifier.VerifyPeerCertificate() error = %v, want.err %v\", err, tt.want.err)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif 
tt.args.policyReporter.id != tt.want.idCount {\n\t\t\t\tt.Errorf(\"verifier.VerifyPeerCertificate() id have = %v, want %v\", tt.args.policyReporter.id, tt.want.idCount)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif tt.args.policyReporter.ip != tt.want.ipCount {\n\t\t\t\tt.Errorf(\"verifier.VerifyPeerCertificate() ip have = %v, want %v\", tt.args.policyReporter.ip, tt.want.ipCount)\n\t\t\t\treturn\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/applicationproxy/tlshelper/tlshelper.go",
    "content": "package tlshelper\n\nimport \"crypto/tls\"\n\n// The intent of this file is to provide secure base TLS configurations across all our proxies.\n\n// TODO: This configuration can become limiting but thats what we support.\n//   Feature: Users might want to add additional configs or alternatively\n//            if the service is exposed, auto-discover them from the certificate\n//            provided.\n\n// NewBaseTLSClientConfig provides the generic base config to be used on a client.\nfunc NewBaseTLSClientConfig() *tls.Config {\n\n\treturn &tls.Config{\n\t\tPreferServerCipherSuites: true,\n\t\tSessionTicketsDisabled:   true,\n\t\t// for now lets make it TLS1.2 as supported max Version.\n\t\t// TODO: Need to test before enabling TLS 1.3, currently TLS 1.3 doesn't work with envoy.\n\t\tMaxVersion: tls.VersionTLS12,\n\t}\n}\n\n// NewBaseTLSServerConfig provides the generic base config to be used on a server.\nfunc NewBaseTLSServerConfig() *tls.Config {\n\treturn &tls.Config{\n\t\tPreferServerCipherSuites: true,\n\t\tSessionTicketsDisabled:   true,\n\t\tClientAuth:               tls.VerifyClientCertIfGiven,\n\t\tCipherSuites: []uint16{\n\t\t\ttls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,\n\t\t\ttls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,\n\t\t\ttls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,\n\t\t\ttls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,\n\t\t\ttls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,\n\t\t},\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/constants/constants.go",
    "content": "package enforcerconstants\n\nconst (\n\t// TCPAuthenticationOptionBaseLen specifies the length of base TCP Authentication Option packet\n\tTCPAuthenticationOptionBaseLen = 4\n\t// TCPAuthenticationOptionAckLen specifies the length of TCP Authentication Option in the ack packet\n\tTCPAuthenticationOptionAckLen = 20\n\t// TransmitterLabel is the name of the label used to identify the Transmitter Context\n\tTransmitterLabel = \"AporetoContextID\"\n\t// DefaultNetwork to be used\n\tDefaultNetwork = \"0.0.0.0/0\"\n\t// DefaultExternalIPTimeout is the default used for the cache for External IPTimeout.\n\tDefaultExternalIPTimeout = \"500ms\"\n)\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/common.go",
    "content": "package dnsproxy\n\nimport (\n\t\"net\"\n\t\"strconv\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/serviceregistry\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.uber.org/zap\"\n)\n\nfunc configureDependentServices(puCtx *pucontext.PUContext, fqdn string, ips []string) {\n\n\tdependentServicesModified := false\n\n\tfor _, dependentService := range puCtx.DependentServices(fqdn) {\n\t\tmin, max := dependentService.NetworkInfo.Ports.Range()\n\n\t\tfor _, ipString := range ips {\n\t\t\tif ip := net.ParseIP(ipString); ip.To4() != nil {\n\t\t\t\tif _, exists := dependentService.NetworkInfo.Addresses[ipString+\"/32\"]; exists {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\t_, ipNet, _ := net.ParseCIDR(ipString + \"/32\")\n\t\t\t\tfor i := int(min); i <= int(max); i++ {\n\t\t\t\t\tif err := ipsetmanager.V4().AddIPPortToDependentService(puCtx.ID(), ipNet, strconv.Itoa(i)); err != nil {\n\t\t\t\t\t\tzap.L().Debug(\"dnsproxy: error adding dependent service ip port to ipset\", zap.Error(err))\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tdependentServicesModified = true\n\t\t\t\tdependentService.NetworkInfo.Addresses[ipNet.String()] = struct{}{}\n\t\t\t} else {\n\t\t\t\tif _, exists := dependentService.NetworkInfo.Addresses[ipString+\"/128\"]; exists {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\t_, ipNet, _ := net.ParseCIDR(ipString + \"/128\")\n\t\t\t\tfor i := int(min); i <= int(max); i++ {\n\t\t\t\t\tif err := ipsetmanager.V6().AddIPPortToDependentService(puCtx.ID(), ipNet, strconv.Itoa(i)); err != nil {\n\t\t\t\t\t\tzap.L().Debug(\"dnsproxy: error adding dependent service ip port to ipset\", zap.Error(err))\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tdependentServicesModified = true\n\t\t\t\tdependentService.NetworkInfo.Addresses[ipNet.String()] = struct{}{}\n\t\t\t}\n\t\t}\n\t}\n\n\tif dependentServicesModified {\n\t\tif err := 
serviceregistry.Instance().UpdateDependentServicesByID(puCtx.ID()); err != nil {\n\t\t\tzap.L().Error(\"dnsproxy: error updating dependent services\", zap.Error(err))\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/dns.go",
    "content": "// +build linux\n\npackage dnsproxy\n\nimport (\n\t\"context\"\n\t\"net\"\n\t\"strconv\"\n\t\"sync\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"github.com/miekg/dns\"\n\t\"go.aporeto.io/trireme-lib/collector\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/flowtracking\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.aporeto.io/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n)\n\n// Proxy struct represents the object for dns proxy\ntype Proxy struct {\n\tpuFromID          cache.DataStore\n\tconntrack         flowtracking.FlowClient\n\tcollector         collector.EventCollector\n\tcontextIDToServer map[string]*dns.Server\n\tchreports         chan dnsReport\n\tupdateIPsets      ipsetmanager.ACLManager\n\tsync.RWMutex\n}\n\ntype serveDNS struct {\n\tcontextID string\n\t*Proxy\n}\n\nconst (\n\tdnsRequestTimeout = 2 * time.Second\n\tproxyMarkInt      = 0x40 //Duplicated from supervisor/iptablesctrl refer to it\n)\n\nfunc socketOptions(_, _ string, c syscall.RawConn) error {\n\tvar opErr error\n\terr := c.Control(func(fd uintptr) {\n\t\tif err := syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_MARK, proxyMarkInt); err != nil {\n\t\t\tzap.L().Error(\"Failed to mark connection\", zap.Error(err))\n\t\t}\n\t})\n\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn opErr\n}\n\nfunc listenUDP(network, addr string) (net.PacketConn, error) {\n\tvar lc net.ListenConfig\n\n\tlc.Control = socketOptions\n\n\treturn lc.ListenPacket(context.Background(), network, addr)\n}\n\nfunc forwardDNSReq(r *dns.Msg, ip net.IP, port uint16) (*dns.Msg, []string, error) {\n\tvar ips []string\n\tc := new(dns.Client)\n\tc.Dialer = &net.Dialer{\n\t\tControl: func(_, _ string, c syscall.RawConn) error {\n\t\t\treturn c.Control(func(fd uintptr) {\n\t\t\t\tif err := syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_MARK, proxyMarkInt); err != nil 
{\n\t\t\t\t\tzap.L().Error(\"Failed to assign mark to socket\", zap.Error(err))\n\t\t\t\t}\n\t\t\t})\n\t\t},\n\t\tTimeout: dnsRequestTimeout,\n\t}\n\n\tin, _, err := c.Exchange(r, net.JoinHostPort(ip.String(), strconv.Itoa(int(port))))\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tfor _, ans := range in.Answer {\n\t\tif ans.Header().Rrtype == dns.TypeA {\n\t\t\tt, _ := ans.(*dns.A)\n\t\t\tips = append(ips, t.A.String())\n\t\t}\n\n\t\tif ans.Header().Rrtype == dns.TypeAAAA {\n\t\t\tt, _ := ans.(*dns.AAAA)\n\t\t\tips = append(ips, t.AAAA.String())\n\t\t}\n\t}\n\n\treturn in, ips, nil\n}\n\nfunc (s *serveDNS) ServeDNS(w dns.ResponseWriter, r *dns.Msg) {\n\tvar err error\n\tlAddr := w.LocalAddr().(*net.UDPAddr)\n\trAddr := w.RemoteAddr().(*net.UDPAddr)\n\tvar puCtx *pucontext.PUContext\n\n\tdefer func() {\n\t\tif puCtx != nil {\n\t\t\ts.reportDNSLookup(r.Question[0].Name, puCtx, rAddr.IP, \"\")\n\t\t}\n\t}()\n\n\torigIP, origPort, _, err := s.conntrack.GetOriginalDest(net.ParseIP(\"127.0.0.1\"), rAddr.IP, uint16(lAddr.Port), uint16(rAddr.Port), 17)\n\tif err != nil {\n\t\tzap.L().Error(\"Failed to find flow for the redirected DNS traffic\", zap.Error(err))\n\t\treturn\n\t}\n\n\tdata, err := s.puFromID.Get(s.contextID)\n\tif err != nil {\n\t\tzap.L().Error(\"context not found for the PU with ID\", zap.String(\"contextID\", s.contextID))\n\t\treturn\n\t}\n\n\tdnsReply, ips, err := forwardDNSReq(r, origIP, origPort)\n\tif err != nil {\n\t\tzap.L().Debug(\"Forwarded DNS request returned error\", zap.Error(err))\n\t\treturn\n\t}\n\n\tpuCtx = data.(*pucontext.PUContext)\n\tps, err1 := puCtx.GetPolicyFromFQDN(r.Question[0].Name)\n\tif err1 == nil {\n\t\tfor _, p := range ps {\n\t\t\ts.updateIPsets.UpdateIPsets(ips, p.Policy.ServiceID)\n\t\t\tif err1 := puCtx.UpdateApplicationACLs(policy.IPRuleList{{Addresses: ips,\n\t\t\t\tPorts:     p.Ports,\n\t\t\t\tProtocols: p.Protocols,\n\t\t\t\tPolicy:    p.Policy,\n\t\t\t}}); err1 != nil {\n\t\t\t\tzap.L().Error(\"Adding IP rule returned error\", zap.Error(err1))\n\t\t\t}\n\t\t}\n\t}\n\n\tif err = w.WriteMsg(dnsReply); err != nil {\n\t\tzap.L().Error(\"Writing DNS response back to the client returned error\", zap.Error(err))\n\t}\n}\n\n// StartDNSServer starts the dns server on the port provided for contextID\nfunc (p *Proxy) StartDNSServer(contextID, port string) error {\n\tnetPacketConn, err := listenUDP(\"udp\", \"127.0.0.1:\"+port)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar server *dns.Server\n\n\tstoreInMap := func() {\n\t\tp.Lock()\n\t\tdefer p.Unlock()\n\n\t\tp.contextIDToServer[contextID] = server\n\t}\n\n\tserver = &dns.Server{NotifyStartedFunc: storeInMap, PacketConn: netPacketConn, Handler: &serveDNS{contextID, p}}\n\n\tgo func() {\n\t\tif err := server.ActivateAndServe(); err != nil {\n\t\t\tzap.L().Error(\"Could not start DNS proxy server\", zap.Error(err))\n\t\t}\n\t}()\n\n\treturn nil\n}\n\n// ShutdownDNS shuts down the dns server for contextID\nfunc (p *Proxy) ShutdownDNS(contextID string) {\n\tp.Lock()\n\tdefer p.Unlock()\n\tif s, ok := p.contextIDToServer[contextID]; ok {\n\t\tif err := s.Shutdown(); err != nil {\n\t\t\tzap.L().Error(\"shutdown of dns server returned error\", zap.String(\"contextID\", contextID), zap.Error(err))\n\t\t}\n\t\tdelete(p.contextIDToServer, contextID)\n\t}\n}\n\n// New creates an instance of the dns proxy\nfunc New(puFromID cache.DataStore, conntrack flowtracking.FlowClient, c collector.EventCollector, aclmanager ipsetmanager.ACLManager) *Proxy {\n\tch := make(chan dnsReport)\n\tp := &Proxy{chreports: ch, puFromID: puFromID, collector: c, conntrack: conntrack, contextIDToServer: map[string]*dns.Server{}, updateIPsets: aclmanager}\n\tgo p.reportDNSRequests(ch)\n\treturn p\n}\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/dns_darwin.go",
    "content": "package dnsproxy\n\nimport (\n\t\"go.aporeto.io/trireme-lib/collector\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/flowtracking\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/trireme-lib/utils/cache\"\n)\n\n// Proxy struct represents the object for dns proxy\ntype Proxy struct {\n}\n\n// New creates an instance of the dns proxy\nfunc New(puFromID cache.DataStore, conntrack flowtracking.FlowClient, c collector.EventCollector, aclmanager ipsetmanager.ACLManager) *Proxy {\n\treturn &Proxy{}\n}\n\n// ShutdownDNS shuts down the dns server for contextID\nfunc (p *Proxy) ShutdownDNS(contextID string) {\n\n}\n\n// StartDNSServer starts the dns server on the port provided for contextID\nfunc (p *Proxy) StartDNSServer(contextID, port string) error {\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/dns_linux.go",
    "content": "// +build linux\n\npackage dnsproxy\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"strconv\"\n\t\"sync\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"github.com/miekg/dns\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/flowtracking\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n)\n\n// removeExpiredEntryFunc is the type of the function that gets called when an IP entry expires (when its TTL hits 0)\ntype removeExpiredEntryFunc func(string)\n\n// Proxy struct represents the object for dns proxy\ntype Proxy struct {\n\tpuFromID                 cache.DataStore\n\tconntrack                flowtracking.FlowClient\n\tcollector                collector.EventCollector\n\tcontextIDToServer        map[string]*dns.Server\n\tchreports                chan dnsReport\n\tcontextIDToDNSNames      *cache.Cache\n\tcontextIDToDNSNamesLocks *mutexMap\n\tIPToTTL                  *cache.Cache\n\tIPToTTLLocks             *mutexMap\n\tremoveExpiredEntry       removeExpiredEntryFunc\n\tsync.Mutex\n}\ntype dnsNamesToIP struct {\n\tnameToIP     map[string][]string\n\tdnsNamesLock sync.Mutex\n}\ntype dnsttlinfo struct {\n\tipaddress string\n\tttl       uint32\n}\n\ntype iptottlinfo struct {\n\tipaddress  string\n\texpiryTime time.Time\n\ttimer      *time.Timer\n\tcontextIDs map[string]struct{}\n\tfqdns      map[string]struct{}\n}\ntype serveDNS struct {\n\tcontextID string\n\t*Proxy\n}\n\nconst (\n\tdnsRequestTimeout = 2 * time.Second\n)\n\nfunc socketOptions(_, _ string, c syscall.RawConn) error {\n\tvar opErr error\n\terr := c.Control(func(fd uintptr) {\n\t\tif err := 
syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_MARK, constants.ProxyMarkInt); err != nil {\n\t\t\tzap.L().Error(\"dnsproxy: failed to mark connection\", zap.Error(err))\n\t\t}\n\t})\n\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn opErr\n}\n\nfunc listenUDP(ctx context.Context, network, addr string) (net.PacketConn, error) {\n\tvar lc net.ListenConfig\n\n\tlc.Control = socketOptions\n\n\treturn lc.ListenPacket(ctx, network, addr)\n}\n\nfunc forwardDNSReq(r *dns.Msg, ip net.IP, port uint16) ([]byte, []string, []*dnsttlinfo, error) {\n\tvar ips []string\n\tvar resp []byte\n\tvar msg *dns.Msg\n\tvar conn *dns.Conn\n\tvar err error\n\n\tc := new(dns.Client)\n\n\tdial := func(address string) (*dns.Conn, error) {\n\t\tc.Dialer = &net.Dialer{\n\t\t\tControl: func(_, _ string, c syscall.RawConn) error {\n\t\t\t\treturn c.Control(func(fd uintptr) {\n\t\t\t\t\tif err := syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_MARK, constants.ProxyMarkInt); err != nil {\n\t\t\t\t\t\tzap.L().Error(\"dnsproxy: failed to assign mark to socket\", zap.Error(err))\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t},\n\t\t\tTimeout: dnsRequestTimeout,\n\t\t}\n\n\t\tconn, err := c.Dial(address)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\treturn conn, nil\n\t}\n\n\tsendRequest := func(r *dns.Msg, conn *dns.Conn) error {\n\t\topt := r.IsEdns0()\n\t\t// If EDNS0 is used use that for size.\n\t\tif opt != nil && opt.UDPSize() >= dns.MinMsgSize {\n\t\t\tconn.UDPSize = opt.UDPSize()\n\t\t}\n\t\t// Otherwise use the client's configured UDP size.\n\t\tif opt == nil && c.UDPSize >= dns.MinMsgSize {\n\t\t\tconn.UDPSize = c.UDPSize\n\t\t}\n\n\t\tt := time.Now()\n\t\t// write with the appropriate write timeout\n\t\tif err = conn.SetWriteDeadline(t.Add(c.Dialer.Timeout)); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tif err = conn.WriteMsg(r); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn nil\n\t}\n\n\treadResponse := func(conn *dns.Conn) ([]byte, *dns.Msg, error) 
{\n\t\tif err := conn.SetReadDeadline(time.Now().Add(c.Dialer.Timeout)); err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\n\t\tp, err := conn.ReadMsgHeader(nil)\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\n\t\tm := new(dns.Msg)\n\t\tif err := m.Unpack(p); err != nil {\n\t\t\t// If an error was returned, we still want to allow the user to use\n\t\t\t// the message, but naively they can just check err if they don't want\n\t\t\t// to use an erroneous message\n\t\t\treturn nil, nil, err\n\t\t}\n\n\t\treturn p, m, nil\n\t}\n\n\tif conn, err = dial(net.JoinHostPort(ip.String(), strconv.Itoa(int(port)))); err != nil {\n\t\treturn nil, nil, nil, err\n\t}\n\n\tdefer conn.Close() // nolint: errcheck\n\n\tif err := sendRequest(r, conn); err != nil {\n\t\treturn nil, nil, nil, err\n\t}\n\n\tif resp, msg, err = readResponse(conn); err != nil {\n\t\treturn nil, nil, nil, err\n\t}\n\tdnsttlinfolist := []*dnsttlinfo{}\n\n\tfor _, ans := range msg.Answer {\n\t\tif ans.Header().Rrtype == dns.TypeA {\n\t\t\tt, _ := ans.(*dns.A)\n\n\t\t\tips = append(ips, t.A.String())\n\t\t\tdnsttlinfolist = append(dnsttlinfolist, &dnsttlinfo{\n\t\t\t\tipaddress: t.A.String(),\n\t\t\t\tttl:       ans.Header().Ttl,\n\t\t\t})\n\t\t}\n\n\t\tif ans.Header().Rrtype == dns.TypeAAAA {\n\t\t\tt, _ := ans.(*dns.AAAA)\n\t\t\tips = append(ips, t.AAAA.String())\n\n\t\t\tdnsttlinfolist = append(dnsttlinfolist, &dnsttlinfo{\n\t\t\t\tipaddress: t.AAAA.String(),\n\t\t\t\tttl:       ans.Header().Ttl,\n\t\t\t})\n\t\t}\n\t}\n\treturn resp, ips, dnsttlinfolist, nil\n}\n\nconst (\n\tstrInvalidDNSRequest = \"invalid DNS request\"\n)\n\nfunc (s *serveDNS) ServeDNS(w dns.ResponseWriter, r *dns.Msg) {\n\tvar err error\n\tlAddr := w.LocalAddr().(*net.UDPAddr)\n\trAddr := w.RemoteAddr().(*net.UDPAddr)\n\tvar pctx *pucontext.PUContext\n\tvar ipsRaw []string\n\tvar origIP net.IP\n\tvar origPort uint16\n\tvar reportError string\n\n\tdefer func() {\n\t\tif pctx != nil {\n\t\t\t// if there is no question section, 
this was an invalid request\n\t\t\tname := \"invalid\"\n\t\t\tif len(r.Question) > 0 {\n\t\t\t\tname = r.Question[0].Name\n\t\t\t}\n\t\t\ts.reportDNSLookup(name, pctx, rAddr.IP, uint16(rAddr.Port), origIP, origPort, ipsRaw, reportError)\n\t\t}\n\t}()\n\n\tpctxRaw, err := s.puFromID.Get(s.contextID)\n\tif err != nil {\n\t\tzap.L().Error(\"dnsproxy: context not found for the PU with ID\", zap.String(\"contextID\", s.contextID), zap.Error(err))\n\t\treportError = fmt.Sprintf(\"PU context: %s\", err)\n\t\treturn\n\t}\n\tpctx = pctxRaw.(*pucontext.PUContext)\n\n\t// check if the DNS request is actually valid\n\t// we have seen with the AWS resolver in the past that it *does* respond with an empty Question section\n\tif len(r.Question) <= 0 {\n\t\tpctx.Counters().IncrementCounter(counters.ErrDNSInvalidRequest)\n\t\tzap.L().Debug(\"dnsproxy: invalid DNS request received (missing question section)\", zap.String(\"contextID\", s.contextID))\n\t\treportError = strInvalidDNSRequest\n\t\treturn\n\t}\n\n\t// TODO: shouldn't we let the lookup go regardless of our problems?\n\torigIP, origPort, _, err = s.conntrack.GetOriginalDest(net.ParseIP(\"127.0.0.1\"), rAddr.IP, uint16(lAddr.Port), uint16(rAddr.Port), 17)\n\tif err != nil {\n\t\tzap.L().Error(\"dnsproxy: failed to find flow for the redirected DNS traffic\", zap.String(\"contextID\", s.contextID), zap.Error(err))\n\t\treportError = fmt.Sprintf(\"conntrack: DNS request flow: %s\", err)\n\t\treturn\n\t}\n\n\t// perform the upstream DNS lookup\n\tdnsReply, ipsRaw, dnsttlinfolistRaw, err := forwardDNSReq(r, origIP, origPort)\n\tif err != nil {\n\t\tpctx.Counters().IncrementCounter(counters.ErrDNSForwardFailed)\n\t\tzap.L().Debug(\"dnsproxy: forwarded DNS request returned error\", zap.String(\"contextID\", s.contextID), zap.Error(err))\n\t\treportError = fmt.Sprintf(\"DNS request failed: %s\", err)\n\t\treturn\n\t}\n\n\t// get all policies associated with the FQDN\n\tpolicies, policyName, err1 := 
pctx.GetPolicyFromFQDN(r.Question[0].Name)\n\n\t// if they exist, then err1 is nil, and we need to update\n\t// - the ipsets\n\t// - the applicationacls inside of the enforcer\n\t// - the internal cache\n\tif err1 == nil {\n\t\ttype ipDetail struct {\n\t\t\tttl        uint32\n\t\t\tupdateOnly bool\n\t\t}\n\t\tipsToProcess := make(map[string]ipDetail, len(ipsRaw))\n\t\tfor _, pol := range policies {\n\t\t\tfor _, i := range dnsttlinfolistRaw {\n\t\t\t\t// TODO: this does not work yet - will come in a separate PR\n\t\t\t\t//if checkIfACLExists(pctx, pol, i.ipaddress) {\n\t\t\t\t//\t// no need to program ACLs and ipsets\n\t\t\t\t//\t// however, there are two cases here:\n\t\t\t\t//\t// 1. this was a static entry in the external network, and we truly want to skip it\n\t\t\t\t//\t// 2. this comes from us programming it down below\n\t\t\t\t//\t// In case (2) we need to actually call handleTTLInfoList but with updateOnly to extend the TTL\n\t\t\t\t//\t// This way no new expiry entries will be made when not necessary as for static entries,\n\t\t\t\t//\t// but they will be extended when necessary as well.\n\t\t\t\t//\tif _, ok := ipsToProcess[i.ipaddress]; !ok {\n\t\t\t\t//\t\tipsToProcess[i.ipaddress] = ipDetail{ttl: i.ttl, updateOnly: true}\n\t\t\t\t//\t}\n\t\t\t\t//\tcontinue\n\t\t\t\t//}\n\n\t\t\t\tif _, ok := ipsToProcess[i.ipaddress]; !ok {\n\t\t\t\t\tipsToProcess[i.ipaddress] = ipDetail{ttl: i.ttl, updateOnly: false}\n\t\t\t\t}\n\t\t\t\tips := []string{i.ipaddress}\n\n\t\t\t\t// makes sure to update any ipsets related to the serviceID of the policy\n\t\t\t\t// this matches the case when the destinations are *not* overlapping with the target networks and the decision is not done\n\t\t\t\t// in the enforcer\n\t\t\t\tzap.L().Debug(\"dnsproxy: ipset: adding IP addresses\", zap.String(\"contextID\", s.contextID), zap.String(\"serviceID\", pol.Policy.ServiceID), zap.Strings(\"ipaddresses\", ips))\n\t\t\t\tipsetmanager.V4().UpdateACLIPsets(ips, 
pol.Policy.ServiceID)\n\t\t\t\tipsetmanager.V6().UpdateACLIPsets(ips, pol.Policy.ServiceID)\n\n\t\t\t\t// makes sure to update the ApplicationACLs inside of the enforcer\n\t\t\t\t// this matches the case when the destination is overlapping with the target networks, and the decision is made inside\n\t\t\t\t// of the enforcer, and not with ipsets\n\t\t\t\tzap.L().Debug(\"dnsproxy: adding IP addresses to enforcer ApplicationACLs\", zap.String(\"contextID\", s.contextID), zap.String(\"serviceID\", pol.Policy.ServiceID), zap.Strings(\"ipaddresses\", ips))\n\t\t\t\tif err1 := pctx.UpdateApplicationACLs(policy.IPRuleList{{\n\t\t\t\t\tAddresses: ips,\n\t\t\t\t\tPorts:     pol.Ports,\n\t\t\t\t\tProtocols: pol.Protocols,\n\t\t\t\t\tPolicy:    pol.Policy,\n\t\t\t\t}}); err1 != nil {\n\t\t\t\t\tzap.L().Error(\"dnsproxy: adding IP rule returned error\", zap.String(\"contextID\", s.contextID), zap.Error(err1))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// for processing in our caches, we only care about the IPs that we needed to program ACLs/ipsets\n\t\tips := make([]string, 0, len(ipsToProcess))\n\t\tvar dnsttlinfolist, dnsttlinfolistUpdateOnly []*dnsttlinfo\n\t\tfor ip, d := range ipsToProcess {\n\t\t\tif d.updateOnly {\n\t\t\t\tdnsttlinfolistUpdateOnly = append(dnsttlinfolistUpdateOnly, &dnsttlinfo{ipaddress: ip, ttl: d.ttl})\n\t\t\t} else {\n\t\t\t\tips = append(ips, ip)\n\t\t\t\tdnsttlinfolist = append(dnsttlinfolist, &dnsttlinfo{ipaddress: ip, ttl: d.ttl})\n\t\t\t}\n\t\t}\n\n\t\t// update the cache/map for this FQDN with the new IPs\n\t\ts.updateFQDNWithIPs(s.contextID, policyName, ips)\n\n\t\t// add the expiry entries, or update them only if required\n\t\ts.handleTTLInfoList(s.contextID, policyName, dnsttlinfolist, false)\n\t\ts.handleTTLInfoList(s.contextID, policyName, dnsttlinfolistUpdateOnly, true)\n\t}\n\n\tconfigureDependentServices(pctx, r.Question[0].Name, ipsRaw)\n\n\t// write the DNS reply back to the client\n\tif _, err = w.Write(dnsReply); err != nil 
{\n\t\tpctx.Counters().IncrementCounter(counters.ErrDNSResponseFailed)\n\t\tzap.L().Error(\"dnsproxy: writing DNS response back to the client returned error\", zap.String(\"contextID\", s.contextID), zap.Error(err))\n\t}\n}\n\n// TODO: this does not work yet - will come in a separate PR\n// func checkIfACLExists(pctx *pucontext.PUContext, pol policy.PortProtocolPolicy, ipStr string) bool {\n// \tip := net.ParseIP(ipStr)\n// \tif ip == nil {\n// \t\treturn false\n// \t}\n// \tfor _, protoStr := range pol.Protocols {\n// \t\tproto, err := strconv.Atoi(protoStr)\n// \t\tif err != nil {\n// \t\t\tcontinue\n// \t\t}\n// \t\tfor _, portStr := range pol.Ports {\n// \t\t\t// it could be a range definition\n// \t\t\tvar ports []uint16\n// \t\t\tif strings.Contains(portStr, \":\") {\n// \t\t\t\ttmp := strings.SplitN(portStr, \":\", 2)\n// \t\t\t\tif len(tmp) != 2 {\n// \t\t\t\t\tcontinue\n// \t\t\t\t}\n// \t\t\t\tstartPort, err := strconv.Atoi(tmp[0])\n// \t\t\t\tif err != nil {\n// \t\t\t\t\tcontinue\n// \t\t\t\t}\n// \t\t\t\tendPort, err := strconv.Atoi(tmp[1])\n// \t\t\t\tif err != nil {\n// \t\t\t\t\tcontinue\n// \t\t\t\t}\n// \t\t\t\tports = make([]uint16, 0, endPort-startPort+1)\n// \t\t\t\tfor i := startPort; i <= endPort; i++ {\n// \t\t\t\t\tports = append(ports, uint16(i))\n// \t\t\t\t}\n// \t\t\t} else {\n// \t\t\t\tport, err := strconv.Atoi(portStr)\n// \t\t\t\tif err != nil {\n// \t\t\t\t\tcontinue\n// \t\t\t\t}\n// \t\t\t\tports = []uint16{uint16(port)}\n// \t\t\t}\n\n// \t\t\tfor _, port := range ports {\n// \t\t\t\treportPol, actionPol, err := pctx.ApplicationACLPolicyFromAddr(ip, port, uint8(proto))\n// \t\t\t\tif err == nil && (reportPol != nil || actionPol != nil) {\n// \t\t\t\t\tzap.L().Debug(\"dnsproxy: ACL already found for IP\",\n// \t\t\t\t\t\tzap.String(\"contextID\", pctx.ManagementID()),\n// \t\t\t\t\t\tzap.String(\"ipaddress\", ipStr),\n// \t\t\t\t\t\tzap.Any(\"reportPol\", reportPol),\n// \t\t\t\t\t\tzap.Any(\"actionPol\", actionPol),\n// 
\t\t\t\t\t)\n// \t\t\t\t\treturn true\n// \t\t\t\t}\n// \t\t\t}\n// \t\t}\n// \t}\n// \treturn false\n// }\n\nfunc (p *Proxy) handleTTLInfoList(contextID, fqdn string, dnsttlinfolist []*dnsttlinfo, updateOnly bool) {\n\tfor _, dnsinfo := range dnsttlinfolist {\n\t\tzap.L().Debug(\"handleTTLInfoList\", zap.String(\"fqdn\", fqdn), zap.String(\"ipaddress\", dnsinfo.ipaddress), zap.Bool(\"updateOnly\", updateOnly))\n\t\tp.handleTTLInfo(contextID, fqdn, dnsinfo, updateOnly)\n\t}\n}\n\nfunc (p *Proxy) handleTTLInfo(contextID, fqdn string, dnsinfo *dnsttlinfo, updateOnly bool) {\n\tnewEntryExpiryTime := time.Now().Add(time.Duration(dnsinfo.ttl) * time.Second)\n\tul := p.IPToTTLLocks.Lock(dnsinfo.ipaddress)\n\tdefer ul.Unlock()\n\tttlInfoRaw, err := p.IPToTTL.Get(dnsinfo.ipaddress)\n\tif err != nil {\n\t\t// if we are supposed to be updating only\n\t\t// then skip this entry\n\t\tif updateOnly {\n\t\t\treturn\n\t\t}\n\n\t\t// otherwise add a new entry\n\t\tnewEntry := iptottlinfo{\n\t\t\tipaddress:  dnsinfo.ipaddress,\n\t\t\texpiryTime: newEntryExpiryTime,\n\t\t\tcontextIDs: map[string]struct{}{contextID: {}},\n\t\t\tfqdns:      map[string]struct{}{fqdn: {}},\n\t\t}\n\t\t// NOTE: the dnsinfo.ipaddress is in a for loop\n\t\t// so we need to make sure the IP address is on the stack when the callback is called\n\t\t// hence the anonymous function wrapping of the timer\n\t\tfunc(ipaddress string) {\n\t\t\tnewEntry.timer = time.AfterFunc(time.Duration(dnsinfo.ttl)*time.Second, func() {\n\t\t\t\tif p.removeExpiredEntry != nil {\n\t\t\t\t\tp.removeExpiredEntry(ipaddress)\n\t\t\t\t}\n\t\t\t})\n\t\t}(dnsinfo.ipaddress)\n\t\tif err := p.IPToTTL.Add(dnsinfo.ipaddress, newEntry); err != nil {\n\t\t\tzap.L().Debug(\"dnsproxy: failed to add entry to IPToTTL cache\", zap.String(\"contextID\", contextID), zap.Any(\"iptottlinfo\", newEntry), zap.Error(err))\n\t\t\treturn\n\t\t}\n\t} else {\n\t\t// update TTL info and reset timer if necessary\n\t\tttlInfo := ttlInfoRaw.(iptottlinfo)\n\t\tif 
newEntryExpiryTime.After(ttlInfo.expiryTime) {\n\t\t\tttlInfo.timer.Reset(time.Duration(dnsinfo.ttl) * time.Second)\n\t\t}\n\t\tttlInfo.expiryTime = newEntryExpiryTime\n\t\tttlInfo.contextIDs[contextID] = struct{}{}\n\t\tttlInfo.fqdns[fqdn] = struct{}{}\n\t\tp.IPToTTL.AddOrUpdate(dnsinfo.ipaddress, ttlInfo)\n\t}\n}\n\nfunc (p *Proxy) defaultRemoveExpiredEntry(ipaddress string) {\n\t// retrieve the IPtoTTLInfo\n\tul := p.IPToTTLLocks.Lock(ipaddress)\n\tdefer ul.Unlock()\n\tttlInfoRaw, err := p.IPToTTL.Get(ipaddress)\n\tif err != nil {\n\t\tzap.L().Debug(\"dnsproxy: entry already gone from IPToTTL cache\", zap.String(\"ipaddress\", ipaddress))\n\t\treturn\n\t}\n\tttlInfo := ttlInfoRaw.(iptottlinfo)\n\n\tfor contextID := range ttlInfo.contextIDs {\n\t\tpctxRaw, err := p.puFromID.Get(contextID)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"dnsproxy: context not found for the PU with ID\", zap.String(\"contextID\", contextID))\n\t\t\tcontinue\n\t\t}\n\t\tpctx := pctxRaw.(*pucontext.PUContext)\n\n\t\tfor fqdn := range ttlInfo.fqdns {\n\t\t\tpolicies, policyName, err := pctx.GetPolicyFromFQDN(fqdn)\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// remove IP address from ipsets\n\t\t\tfor _, pol := range policies {\n\t\t\t\tzap.L().Debug(\"dnsproxy: ipset: removing IP address\", zap.String(\"contextID\", contextID), zap.String(\"serviceID\", pol.Policy.ServiceID), zap.String(\"ipaddress\", ipaddress))\n\t\t\t\tipsetmanager.V4().DeleteEntryFromIPset([]string{ipaddress}, pol.Policy.ServiceID)\n\t\t\t\tipsetmanager.V6().DeleteEntryFromIPset([]string{ipaddress}, pol.Policy.ServiceID)\n\n\t\t\t\t// remove IP address from enforcer ApplicationACLs\n\t\t\t\tzap.L().Debug(\"dnsproxy: removing IP address from enforcer ApplicationACL\", zap.String(\"contextID\", contextID), zap.String(\"ipaddress\", ipaddress))\n\t\t\t\tif err := pctx.RemoveApplicationACL(ipaddress, pol.Protocols, pol.Ports, pol.Policy); err != nil {\n\t\t\t\t\tzap.L().Debug(\"dnsproxy: RemoveApplicationACL 
failed\", zap.String(\"contextID\", contextID), zap.String(\"serviceID\", pol.Policy.ServiceID), zap.String(\"ipaddress\", ipaddress), zap.Error(err))\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tp.removeIPfromFQDN(contextID, policyName, ipaddress)\n\t\t}\n\t}\n\n\t// clean up after ourselves and remove ourselves from the cache\n\tif err := p.IPToTTL.Remove(ipaddress); err != nil {\n\t\tzap.L().Debug(\"dnsproxy: failed to remove entry from IPToTTL cache\", zap.String(\"ipaddress\", ipaddress), zap.Error(err))\n\t}\n\tp.IPToTTLLocks.Remove(ipaddress)\n\n}\n\n// StartDNSServer starts the dns server on the port provided for contextID\nfunc (p *Proxy) StartDNSServer(ctx context.Context, contextID, port string) error {\n\tnetPacketConn, err := listenUDP(ctx, \"udp\", \"127.0.0.1:\"+port)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar server *dns.Server\n\n\tstoreInMap := func() {\n\t\tp.Lock()\n\t\tdefer p.Unlock()\n\n\t\tp.contextIDToServer[contextID] = server\n\t}\n\n\tserver = &dns.Server{NotifyStartedFunc: storeInMap, PacketConn: netPacketConn, Handler: &serveDNS{contextID, p}}\n\n\tgo func() {\n\t\tif err := server.ActivateAndServe(); err != nil {\n\t\t\tzap.L().Error(\"dnsproxy: could not start DNS proxy server\", zap.String(\"contextID\", contextID), zap.Error(err))\n\t\t}\n\t}()\n\n\treturn nil\n}\n\n// shutdownDNS shuts down the dns server for contextID\nfunc (p *Proxy) shutdownDNS(contextID string) {\n\n\tif s, ok := p.contextIDToServer[contextID]; ok {\n\t\tif err := s.Shutdown(); err != nil {\n\t\t\tzap.L().Error(\"dnsproxy: shutdown of DNS server returned error\", zap.String(\"contextID\", contextID), zap.Error(err))\n\t\t}\n\t\tdelete(p.contextIDToServer, contextID)\n\t}\n}\n\n// New creates an instance of the dns proxy\nfunc New(ctx context.Context, puFromID cache.DataStore, conntrack flowtracking.FlowClient, c collector.EventCollector) *Proxy {\n\tch := make(chan dnsReport)\n\tp := &Proxy{\n\t\tchreports:                ch,\n\t\tpuFromID:                 
puFromID,\n\t\tcollector:                c,\n\t\tconntrack:                conntrack,\n\t\tcontextIDToServer:        map[string]*dns.Server{},\n\t\tcontextIDToDNSNames:      cache.NewCache(\"contextIDtoDNSNames\"),\n\t\tcontextIDToDNSNamesLocks: newMutexMap(),\n\t\tIPToTTL:                  cache.NewCache(\"IPToTTL\"),\n\t\tIPToTTLLocks:             newMutexMap(),\n\t}\n\tp.removeExpiredEntry = p.defaultRemoveExpiredEntry\n\tgo p.reportDNSRequests(ctx, ch)\n\treturn p\n}\n\n// SyncWithPlatformCache is only needed in Windows currently\nfunc (p *Proxy) SyncWithPlatformCache(ctx context.Context, pctx *pucontext.PUContext) error {\n\treturn nil\n}\n\n// HandleDNSResponsePacket is only needed in Windows currently\nfunc (p *Proxy) HandleDNSResponsePacket(dnsPacketData []byte, sourceIP net.IP, sourcePort uint16, destIP net.IP, destPort uint16, puFromContextID func(string) (*pucontext.PUContext, error)) error {\n\treturn nil\n}\n\n// Enforce starts enforcing policies for the given policy.PUInfo.\nfunc (p *Proxy) Enforce(ctx context.Context, contextID string, puInfo *policy.PUInfo) error {\n\t// during the first Enforce call, we still need to initialize map\n\t// we do that and return\n\tul := p.contextIDToDNSNamesLocks.Lock(contextID)\n\tdefer ul.Unlock()\n\ttmp, err := p.contextIDToDNSNames.Get(contextID)\n\tif err != nil {\n\t\t// this means that the map is not initialized yet, do so now\n\t\treturn p.doHandleCreate(ctx, contextID, puInfo)\n\t}\n\n\t// during a policy refresh, we will enter this part here:\n\t// - iterate over all DNSACLs for this PU\n\t// - for all already learned IPs for all DNS names: program ipsets and enforcer ApplicationACLs\n\tdnsNames := tmp.(*dnsNamesToIP).Copy()\n\tfor fqdn, policies := range puInfo.Policy.DNSACLs {\n\t\tips, ok := dnsNames.nameToIP[fqdn]\n\t\tif ok {\n\t\t\t// we have already learned those DNS names\n\t\t\t// make sure to reprogram ipsets and ApplicationACLs in the enforcer\n\t\t\t// on a policy refresh\n\t\t\tfor _, pol := 
range policies {\n\t\t\t\tzap.L().Debug(\"dnsproxy: ipset: adding IP addresses after policy refresh\", zap.String(\"contextID\", contextID), zap.String(\"fqdn\", fqdn), zap.String(\"serviceID\", pol.Policy.ServiceID), zap.Strings(\"ipaddresses\", ips))\n\t\t\t\tipsetmanager.V4().UpdateACLIPsets(ips, pol.Policy.ServiceID)\n\t\t\t\tipsetmanager.V6().UpdateACLIPsets(ips, pol.Policy.ServiceID)\n\n\t\t\t\t// makes sure to update the ApplicationACLs inside of the enforcer\n\t\t\t\t// this matches the case when the destination is overlapping with the target networks, and the decision is made inside\n\t\t\t\t// of the enforcer, and not with ipsets\n\t\t\t\tdata, err := p.puFromID.Get(contextID)\n\t\t\t\tif err != nil {\n\t\t\t\t\tzap.L().Error(\"dnsproxy: context not found for the PU with ID\", zap.String(\"contextID\", contextID))\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tpctx := data.(*pucontext.PUContext)\n\t\t\t\tif err1 := pctx.UpdateApplicationACLs(policy.IPRuleList{{\n\t\t\t\t\tAddresses: ips,\n\t\t\t\t\tPorts:     pol.Ports,\n\t\t\t\t\tProtocols: pol.Protocols,\n\t\t\t\t\tPolicy:    pol.Policy,\n\t\t\t\t}}); err1 != nil {\n\t\t\t\t\tzap.L().Error(\"dnsproxy: adding IP rule returned error after policy refresh\", zap.String(\"contextID\", contextID), zap.Error(err1))\n\t\t\t\t}\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\t// This is a new fqdn. 
DNS proxy will fix these IPs as it learns them\n\t\tdnsNames.nameToIP[fqdn] = []string{}\n\t}\n\t// this is only necessary to add new FQDNs to the map - which is essential for the DNS proxy to know about\n\tp.contextIDToDNSNames.AddOrUpdate(contextID, dnsNames)\n\treturn nil\n}\n\nfunc (p *Proxy) doHandleCreate(_ context.Context, contextID string, puInfo *policy.PUInfo) error {\n\tnameToIP := &dnsNamesToIP{\n\t\tnameToIP: map[string][]string{},\n\t}\n\tfor name := range puInfo.Policy.DNSACLs {\n\t\tnameToIP.nameToIP[name] = []string{}\n\t}\n\tif err := p.contextIDToDNSNames.Add(contextID, nameToIP); err != nil {\n\t\tzap.L().Error(\"dnsproxy: contextID already enforced\", zap.String(\"contextID\", contextID))\n\t}\n\n\treturn nil\n}\n\n// Unenforce stops enforcing policy for the given IP.\nfunc (p *Proxy) Unenforce(_ context.Context, contextID string) error {\n\tp.Lock()\n\tdefer p.Unlock()\n\tul := p.contextIDToDNSNamesLocks.Lock(contextID)\n\tif err := p.contextIDToDNSNames.Remove(contextID); err != nil {\n\t\tzap.L().Error(\"dnsproxy: contextID already removed/unenforced\", zap.String(\"contextID\", contextID))\n\t}\n\tp.contextIDToDNSNamesLocks.Remove(contextID)\n\tul.Unlock()\n\tp.shutdownDNS(contextID)\n\treturn nil\n}\n\nfunc (d *dnsNamesToIP) Copy() *dnsNamesToIP {\n\td.dnsNamesLock.Lock()\n\tdefer d.dnsNamesLock.Unlock()\n\tnewdns := &dnsNamesToIP{\n\t\tnameToIP: make(map[string][]string, len(d.nameToIP)),\n\t}\n\tfor key, value := range d.nameToIP {\n\t\tnewvalue := make([]string, len(value))\n\t\tcopy(newvalue, value)\n\t\tnewdns.nameToIP[key] = newvalue\n\t}\n\n\treturn newdns\n}\n\n// updateFQDNWithIPs will add any new IPs in `ips` and add it to the internal map of `contextIDToDNSNames` for our s.contextID\nfunc (p *Proxy) updateFQDNWithIPs(contextID, fqdn string, ips []string) {\n\tul := p.contextIDToDNSNamesLocks.Lock(contextID)\n\tdefer ul.Unlock()\n\ttmp, err := p.contextIDToDNSNames.Get(contextID)\n\tif err != nil {\n\t\tzap.L().Debug(\"dnsproxy: 
failed to get fqdn map for contextID in updateFQDNWithIPs\", zap.String(\"contextID\", contextID))\n\t\treturn\n\t}\n\tfqdntoIPs := tmp.(*dnsNamesToIP).Copy()\n\texistingIPsMap := make(map[string]struct{}, len(fqdntoIPs.nameToIP[fqdn]))\n\tfor _, e := range fqdntoIPs.nameToIP[fqdn] {\n\t\texistingIPsMap[e] = struct{}{}\n\t}\n\ttoAdd := make([]string, 0, len(ips))\n\tfor _, newIP := range ips {\n\t\tif _, ok := existingIPsMap[newIP]; ok {\n\t\t\tcontinue\n\t\t}\n\t\ttoAdd = append(toAdd, newIP)\n\t}\n\tfqdntoIPs.nameToIP[fqdn] = append(fqdntoIPs.nameToIP[fqdn], toAdd...)\n\t_ = p.contextIDToDNSNames.AddOrUpdate(contextID, fqdntoIPs)\n\tzap.L().Debug(\"dnsproxy: updating FQDN map after IP addresses were added\", zap.String(\"contextID\", contextID), zap.Any(\"fqdntoIPs\", fqdntoIPs.nameToIP))\n}\n\nfunc (p *Proxy) removeIPfromFQDN(contextID, fqdn string, ipAddress string) {\n\tul := p.contextIDToDNSNamesLocks.Lock(contextID)\n\tdefer ul.Unlock()\n\ttmp, err := p.contextIDToDNSNames.Get(contextID)\n\tif err != nil {\n\t\tzap.L().Debug(\"dnsproxy: failed to get fqdn map for contextID in removeIPfromFQDN\", zap.String(\"contextID\", contextID))\n\t\treturn\n\t}\n\tfqdntoIPs := tmp.(*dnsNamesToIP).Copy()\n\texistingIPsMap := make(map[string]struct{}, len(fqdntoIPs.nameToIP[fqdn]))\n\tfor _, e := range fqdntoIPs.nameToIP[fqdn] {\n\t\texistingIPsMap[e] = struct{}{}\n\t}\n\n\t// remove IP from map/cache\n\tif len(fqdntoIPs.nameToIP[fqdn]) > 0 {\n\t\tips := make([]string, 0, len(fqdntoIPs.nameToIP[fqdn])-1)\n\t\tvar found bool\n\t\tfor _, ip := range fqdntoIPs.nameToIP[fqdn] {\n\t\t\tif ip == ipAddress {\n\t\t\t\tfound = true\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tips = append(ips, ip)\n\t\t}\n\t\tif !found {\n\t\t\tzap.L().Debug(\"dnsproxy: ipaddress was already removed from list\", zap.String(\"contextID\", contextID), zap.String(\"fqdn\", fqdn), zap.String(\"ipaddress\", ipAddress))\n\t\t}\n\t\tfqdntoIPs.nameToIP[fqdn] = ips\n\t}\n\n\tzap.L().Debug(\"dnsproxy: updating FQDN 
map after IP address was deleted\", zap.String(\"contextID\", contextID), zap.Any(\"iplist\", fqdntoIPs.nameToIP))\n\t_ = p.contextIDToDNSNames.AddOrUpdate(contextID, fqdntoIPs) // nolint\n}\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/dns_linux_test.go",
    "content": "// +build linux\n\npackage dnsproxy\n\nimport (\n\t\"bufio\"\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"os\"\n\t\"reflect\"\n\t\"regexp\"\n\t\"strings\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/magiconair/properties/assert\"\n\t\"github.com/miekg/dns\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/acls\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n)\n\ntype flowClientDummy struct {\n}\n\nfunc (c *flowClientDummy) Close() error {\n\treturn nil\n}\n\nfunc (c *flowClientDummy) UpdateMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32, network bool) error {\n\treturn nil\n}\n\nfunc (c *flowClientDummy) UpdateNetworkFlowMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32) error {\n\treturn nil\n}\n\nfunc (c *flowClientDummy) UpdateApplicationFlowMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32) error {\n\treturn nil\n}\n\nfunc findDNSServerIP() net.IP {\n\n\tfile, err := os.Open(\"/etc/resolv.conf\")\n\n\tif err != nil {\n\t\treturn net.ParseIP(\"8.8.8.8\")\n\t}\n\n\tscanner := bufio.NewScanner(file)\n\n\t// this regex is doing a whole word search\n\ts := \"\\\\b\" + \"nameserver\" + \"\\\\b\"\n\tmatch := regexp.MustCompile(s)\n\n\tfor scanner.Scan() {\n\t\tline := scanner.Text()\n\t\tif match.MatchString(line) {\n\t\t\treturn net.ParseIP(strings.Fields(line)[1])\n\t\t}\n\t}\n\n\treturn net.ParseIP(\"8.8.8.8\")\n}\n\nfunc (c *flowClientDummy) GetOriginalDest(ipSrc, ipDst net.IP, srcport, dstport uint16, protonum uint8) (net.IP, uint16, uint32, error) {\n\n\tdnsServerIP := findDNSServerIP()\n\tfmt.Println(\"using DNS Server IP\", dnsServerIP)\n\treturn dnsServerIP, 53, 100, 
nil\n}\n\nfunc addDNSNamePolicy(context *pucontext.PUContext) {\n\tcontext.DNSACLs = policy.DNSRuleList{\n\t\t\"www.google.com.\": []policy.PortProtocolPolicy{\n\t\t\t{Ports: []string{\"80\"},\n\t\t\t\tProtocols: []string{\"tcp\"},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"2\",\n\t\t\t\t}},\n\t\t},\n\t}\n}\n\nfunc CustomDialer(ctx context.Context, network, address string) (net.Conn, error) {\n\td := net.Dialer{}\n\treturn d.DialContext(ctx, \"udp\", \"127.0.0.1:53001\")\n}\n\nfunc createCustomResolver() *net.Resolver {\n\tr := &net.Resolver{\n\t\tPreferGo: true,\n\t\tDial:     CustomDialer,\n\t}\n\n\treturn r\n}\n\n// DNSCollector implements a default collector infrastructure to syslog\ntype DNSCollector struct{}\n\n// CollectFlowEvent is part of the EventCollector interface.\nfunc (d *DNSCollector) CollectFlowEvent(record *collector.FlowRecord) {}\n\n// CollectContainerEvent is part of the EventCollector interface.\nfunc (d *DNSCollector) CollectContainerEvent(record *collector.ContainerRecord) {}\n\n// CollectUserEvent is part of the EventCollector interface.\nfunc (d *DNSCollector) CollectUserEvent(record *collector.UserRecord) {}\n\n// CollectTraceEvent collects iptables trace events\nfunc (d *DNSCollector) CollectTraceEvent(records []string) {}\n\n// CollectPingEvent collects ping events\nfunc (d *DNSCollector) CollectPingEvent(report *collector.PingReport) {}\n\n// CollectPacketEvent collects packet events from the datapath\nfunc (d *DNSCollector) CollectPacketEvent(report *collector.PacketReport) {}\n\n// CollectCounterEvent collect counters from the datapath\nfunc (d *DNSCollector) CollectCounterEvent(report *collector.CounterReport) {}\n\n// CollectConnectionExceptionReport collects the connection exception report\nfunc (d *DNSCollector) CollectConnectionExceptionReport(_ *collector.ConnectionExceptionReport) {\n}\n\nvar r collector.DNSRequestReport\nvar l sync.Mutex\n\n// CollectDNSRequests collect 
DNS request reports from the datapath\nfunc (d *DNSCollector) CollectDNSRequests(report *collector.DNSRequestReport) {\n\tl.Lock()\n\tr = *report\n\tl.Unlock()\n}\n\nfunc TestDNS(t *testing.T) {\n\tctx, cancel := context.WithCancel(context.Background())\n\tdefer cancel()\n\tpuIDcache := cache.NewCache(\"puFromContextID\")\n\n\tfp := &policy.PUInfo{\n\t\tRuntime: policy.NewPURuntimeWithDefaults(),\n\t\tPolicy:  policy.NewPUPolicyWithDefaults(),\n\t}\n\tpu, _ := pucontext.NewPU(\"pu1\", fp, nil, 24*time.Hour) // nolint\n\n\taddDNSNamePolicy(pu)\n\n\tpuIDcache.AddOrUpdate(\"pu1\", pu)\n\tconntrack := &flowClientDummy{}\n\tcollector := &DNSCollector{}\n\n\tproxy := New(ctx, puIDcache, conntrack, collector)\n\n\terr := proxy.StartDNSServer(ctx, \"pu1\", \"53001\")\n\tassert.Equal(t, err == nil, true, \"start dns server\")\n\n\tresolver := createCustomResolver()\n\twaitTimeBeforeReport = 3 * time.Second\n\tresolver.LookupIPAddr(ctx, \"www.google.com\") // nolint\n\tresolver.LookupIPAddr(ctx, \"www.google.com\") // nolint\n\n\tassert.Equal(t, err == nil, true, \"err should be nil\")\n\n\ttime.Sleep(5 * time.Second)\n\tl.Lock()\n\tassert.Equal(t, r.NameLookup == \"www.google.com.\", true, \"lookup should be www.google.com.\")\n\tassert.Equal(t, r.Count >= 2 && r.Count <= 10, true, fmt.Sprintf(\"count should be between 2 and 10, got %d\", r.Count))\n\tl.Unlock()\n\tproxy.Unenforce(ctx, \"pu1\") // nolint\n}\n\nconst (\n\tcontextID   = \"host\"\n\tserviceID   = \"serviceID\"\n\tport80      = \"80\"\n\tfqdn        = \"www.example.com.\"\n\tfqdnTwo     = \"two.example.com.\"\n\tfqdnKeep    = \"keep.example.com.\"\n\tip192_0_2_1 = \"192.0.2.1\"\n\tip192_0_2_2 = \"192.0.2.2\"\n\tip192_0_2_3 = \"192.0.2.3\"\n)\n\nfunc TestProxy_removeIPfromFQDN(t *testing.T) {\n\ttype args struct {\n\t\tcontextID string\n\t\tfqdn      string\n\t\tipAddress string\n\t}\n\ttests := []struct {\n\t\tname             string\n\t\targs             args\n\t\texisting         map[string]*dnsNamesToIP\n\t\twantContextEntry 
bool\n\t\twant             map[string][]string\n\t}{\n\t\t{\n\t\t\tname: \"context not in cache\",\n\t\t\targs: args{\n\t\t\t\tcontextID: \"does not exist\",\n\t\t\t\tfqdn:      fqdn,\n\t\t\t\tipAddress: \"\",\n\t\t\t},\n\t\t\twantContextEntry: false,\n\t\t},\n\t\t{\n\t\t\tname: \"nothing to remove from empty list\",\n\t\t\targs: args{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tfqdn:      fqdn,\n\t\t\t\tipAddress: ip192_0_2_1,\n\t\t\t},\n\t\t\twantContextEntry: true,\n\t\t\texisting: map[string]*dnsNamesToIP{\n\t\t\t\tcontextID: {\n\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\t\t\tfqdn:     {},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: map[string][]string{\n\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\tfqdn:     {},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"IP does not match from existing list\",\n\t\t\targs: args{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tfqdn:      fqdn,\n\t\t\t\tipAddress: ip192_0_2_1,\n\t\t\t},\n\t\t\twantContextEntry: true,\n\t\t\texisting: map[string]*dnsNamesToIP{\n\t\t\t\tcontextID: {\n\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\t\t\tfqdn:     {ip192_0_2_2},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: map[string][]string{\n\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\tfqdn:     {ip192_0_2_2},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"IP successfully being removed from list\",\n\t\t\targs: args{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tfqdn:      fqdn,\n\t\t\t\tipAddress: ip192_0_2_1,\n\t\t\t},\n\t\t\twantContextEntry: true,\n\t\t\texisting: map[string]*dnsNamesToIP{\n\t\t\t\tcontextID: {\n\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\t\t\tfqdn:     {ip192_0_2_1},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: map[string][]string{\n\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\tfqdn:     {},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"IP successfully being removed from list with other entries\",\n\t\t\targs: 
args{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tfqdn:      fqdn,\n\t\t\t\tipAddress: ip192_0_2_2,\n\t\t\t},\n\t\t\twantContextEntry: true,\n\t\t\texisting: map[string]*dnsNamesToIP{\n\t\t\t\tcontextID: {\n\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\t\t\tfqdn:     {ip192_0_2_1, ip192_0_2_2, ip192_0_2_3},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: map[string][]string{\n\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\tfqdn:     {ip192_0_2_1, ip192_0_2_3},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tp := &Proxy{\n\t\t\t\tcontextIDToDNSNames:      cache.NewCache(\"contextIDtoDNSNames\"),\n\t\t\t\tcontextIDToDNSNamesLocks: newMutexMap(),\n\t\t\t}\n\t\t\tfor k, v := range tt.existing {\n\t\t\t\tp.contextIDToDNSNames.AddOrUpdate(k, v)\n\t\t\t}\n\t\t\tp.removeIPfromFQDN(tt.args.contextID, tt.args.fqdn, tt.args.ipAddress)\n\n\t\t\tval, err := p.contextIDToDNSNames.Get(tt.args.contextID)\n\t\t\tif (err == nil) != tt.wantContextEntry {\n\t\t\t\tt.Errorf(\"entry for context %q does not exist\", tt.args.contextID)\n\t\t\t}\n\t\t\tif err == nil {\n\t\t\t\tm := val.(*dnsNamesToIP)\n\t\t\t\tif !reflect.DeepEqual(m.nameToIP, tt.want) {\n\t\t\t\t\tt.Errorf(\"want %#v, have %#v\", tt.want, m.nameToIP)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestProxy_updateFQDNWithIPs(t *testing.T) {\n\ttype args struct {\n\t\tcontextID   string\n\t\tfqdn        string\n\t\tipAddresses []string\n\t}\n\ttests := []struct {\n\t\tname             string\n\t\targs             args\n\t\texisting         map[string]*dnsNamesToIP\n\t\twantContextEntry bool\n\t\twant             map[string][]string\n\t}{\n\t\t{\n\t\t\tname: \"context not in cache\",\n\t\t\targs: args{\n\t\t\t\tcontextID:   \"does not exist\",\n\t\t\t\tfqdn:        fqdn,\n\t\t\t\tipAddresses: nil,\n\t\t\t},\n\t\t\twantContextEntry: false,\n\t\t},\n\t\t{\n\t\t\tname: \"adding an empty list\",\n\t\t\targs: args{\n\t\t\t\tcontextID:   
contextID,\n\t\t\t\tfqdn:        fqdn,\n\t\t\t\tipAddresses: []string{},\n\t\t\t},\n\t\t\twantContextEntry: true,\n\t\t\texisting: map[string]*dnsNamesToIP{\n\t\t\t\tcontextID: {\n\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\t\t\tfqdn:     {},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: map[string][]string{\n\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\tfqdn:     {},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"adding IPs to an empty list\",\n\t\t\targs: args{\n\t\t\t\tcontextID:   contextID,\n\t\t\t\tfqdn:        fqdn,\n\t\t\t\tipAddresses: []string{ip192_0_2_2, ip192_0_2_3},\n\t\t\t},\n\t\t\twantContextEntry: true,\n\t\t\texisting: map[string]*dnsNamesToIP{\n\t\t\t\tcontextID: {\n\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\t\t\tfqdn:     {},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: map[string][]string{\n\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\tfqdn:     {ip192_0_2_2, ip192_0_2_3},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"adding an existing IP to the list\",\n\t\t\targs: args{\n\t\t\t\tcontextID:   contextID,\n\t\t\t\tfqdn:        fqdn,\n\t\t\t\tipAddresses: []string{ip192_0_2_2},\n\t\t\t},\n\t\t\twantContextEntry: true,\n\t\t\texisting: map[string]*dnsNamesToIP{\n\t\t\t\tcontextID: {\n\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\t\t\tfqdn:     {ip192_0_2_1, ip192_0_2_2},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: map[string][]string{\n\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\tfqdn:     {ip192_0_2_1, ip192_0_2_2},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"adding an existing IP and a new IP to an existing list\",\n\t\t\targs: args{\n\t\t\t\tcontextID:   contextID,\n\t\t\t\tfqdn:        fqdn,\n\t\t\t\tipAddresses: []string{ip192_0_2_2, ip192_0_2_3},\n\t\t\t},\n\t\t\twantContextEntry: true,\n\t\t\texisting: map[string]*dnsNamesToIP{\n\t\t\t\tcontextID: {\n\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\tfqdnKeep: 
{ip192_0_2_1},\n\t\t\t\t\t\tfqdn:     {ip192_0_2_1, ip192_0_2_2},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: map[string][]string{\n\t\t\t\tfqdnKeep: {ip192_0_2_1},\n\t\t\t\tfqdn:     {ip192_0_2_1, ip192_0_2_2, ip192_0_2_3},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tp := &Proxy{\n\t\t\t\tcontextIDToDNSNames:      cache.NewCache(\"contextIDtoDNSNames\"),\n\t\t\t\tcontextIDToDNSNamesLocks: newMutexMap(),\n\t\t\t}\n\t\t\tfor k, v := range tt.existing {\n\t\t\t\tp.contextIDToDNSNames.AddOrUpdate(k, v)\n\t\t\t}\n\t\t\tp.updateFQDNWithIPs(tt.args.contextID, tt.args.fqdn, tt.args.ipAddresses)\n\n\t\t\tval, err := p.contextIDToDNSNames.Get(tt.args.contextID)\n\t\t\tif (err == nil) != tt.wantContextEntry {\n\t\t\t\tt.Errorf(\"entry for context %q does not exist\", tt.args.contextID)\n\t\t\t}\n\t\t\tif err == nil {\n\t\t\t\tm := val.(*dnsNamesToIP)\n\t\t\t\tif !reflect.DeepEqual(m.nameToIP, tt.want) {\n\t\t\t\t\tt.Errorf(\"want %#v, have %#v\", tt.want, m.nameToIP)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestProxy_defaultRemoveExpiredEntry(t *testing.T) {\n\ttype args struct {\n\t\tipaddress string\n\t}\n\ttype existing struct {\n\t\tpus   map[string]*pucontext.PUContext\n\t\tfqdns map[string]*dnsNamesToIP\n\t\tips   map[string]iptottlinfo\n\t}\n\ttype want struct {\n\t\tremovedFromIPToTTL bool\n\t\tfqdns              map[string]*dnsNamesToIP\n\t}\n\ttests := []struct {\n\t\tname     string\n\t\targs     args\n\t\texisting existing\n\t\twant     want\n\t}{\n\t\t{\n\t\t\tname: \"TTL info for IP does not exist in cache\",\n\t\t\targs: args{ipaddress: ip192_0_2_1},\n\t\t\texisting: existing{\n\t\t\t\tips: map[string]iptottlinfo{\n\t\t\t\t\tip192_0_2_2: {\n\t\t\t\t\t\tcontextIDs: map[string]struct{}{\n\t\t\t\t\t\t\tcontextID: {},\n\t\t\t\t\t\t},\n\t\t\t\t\t\tfqdns: map[string]struct{}{\n\t\t\t\t\t\t\tfqdn:    {},\n\t\t\t\t\t\t\tfqdnTwo: {},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tfqdns: 
map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: map[string][]string{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\tremovedFromIPToTTL: true,\n\t\t\t\tfqdns: map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: map[string][]string{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"PUContext in TTL info for IP does not exist in cache\",\n\t\t\targs: args{ipaddress: ip192_0_2_1},\n\t\t\texisting: existing{\n\t\t\t\tips: map[string]iptottlinfo{\n\t\t\t\t\tip192_0_2_1: {\n\t\t\t\t\t\tcontextIDs: map[string]struct{}{\n\t\t\t\t\t\t\tcontextID: {},\n\t\t\t\t\t\t},\n\t\t\t\t\t\tfqdns: map[string]struct{}{\n\t\t\t\t\t\t\tfqdn:    {},\n\t\t\t\t\t\t\tfqdnTwo: {},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tfqdns: map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: map[string][]string{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpus: map[string]*pucontext.PUContext{},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\tremovedFromIPToTTL: true,\n\t\t\t\tfqdns: map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: map[string][]string{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"successfully remove 192_0_2_1\",\n\t\t\targs: args{ipaddress: ip192_0_2_1},\n\t\t\texisting: existing{\n\t\t\t\tpus: map[string]*pucontext.PUContext{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tRWMutex:         sync.RWMutex{},\n\t\t\t\t\t\tApplicationACLs: acls.NewACLCache(),\n\t\t\t\t\t\tDNSACLs: map[string][]policy.PortProtocolPolicy{\n\t\t\t\t\t\t\tfqdn: {\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tPorts:     []string{port80},\n\t\t\t\t\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\t\t\tServiceID: serviceID,\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tfqdns: map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: 
map[string][]string{\n\t\t\t\t\t\t\tfqdn: {ip192_0_2_1, ip192_0_2_2},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tips: map[string]iptottlinfo{\n\t\t\t\t\tip192_0_2_1: {\n\t\t\t\t\t\tcontextIDs: map[string]struct{}{\n\t\t\t\t\t\t\tcontextID: {},\n\t\t\t\t\t\t},\n\t\t\t\t\t\tfqdns: map[string]struct{}{\n\t\t\t\t\t\t\tfqdn:    {},\n\t\t\t\t\t\t\tfqdnTwo: {},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\tremovedFromIPToTTL: true,\n\t\t\t\tfqdns: map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\t\tfqdn: {ip192_0_2_2},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tp := &Proxy{\n\t\t\t\tpuFromID:                 cache.NewCache(\"puFromContextID\"),\n\t\t\t\tcontextIDToServer:        map[string]*dns.Server{},\n\t\t\t\tcontextIDToDNSNames:      cache.NewCache(\"contextIDtoDNSNames\"),\n\t\t\t\tcontextIDToDNSNamesLocks: newMutexMap(),\n\t\t\t\tIPToTTL:                  cache.NewCache(\"IPToTTL\"),\n\t\t\t\tIPToTTLLocks:             newMutexMap(),\n\t\t\t}\n\t\t\tfor k, v := range tt.existing.pus {\n\t\t\t\tp.puFromID.AddOrUpdate(k, v)\n\t\t\t}\n\t\t\tfor k, v := range tt.existing.fqdns {\n\t\t\t\tp.contextIDToDNSNames.AddOrUpdate(k, v)\n\t\t\t}\n\t\t\tfor k, v := range tt.existing.ips {\n\t\t\t\tp.IPToTTL.AddOrUpdate(k, v)\n\t\t\t}\n\t\t\tp.defaultRemoveExpiredEntry(tt.args.ipaddress)\n\n\t\t\t// we check to see if the entries got removed correctly\n\t\t\tif _, err := p.IPToTTL.Get(tt.args.ipaddress); (err != nil) != tt.want.removedFromIPToTTL {\n\t\t\t\tt.Errorf(\"entry exists in IPToTTL cache: %v, want.removedFromIPToTTL %v\", (err != nil), tt.want.removedFromIPToTTL)\n\t\t\t}\n\t\t\tfor contextID, wantFqdns := range tt.want.fqdns {\n\t\t\t\texistingFqdnsRaw, err := p.contextIDToDNSNames.Get(contextID)\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Errorf(\"no map for context %q in 
contextIDToDNSNames any longer\", contextID)\n\t\t\t\t}\n\t\t\t\texistingFqdns := existingFqdnsRaw.(*dnsNamesToIP)\n\t\t\t\tif !reflect.DeepEqual(wantFqdns.nameToIP, existingFqdns.nameToIP) {\n\t\t\t\t\tt.Errorf(\"context %q: have fqdns %#v - want fqdns %#v\", contextID, existingFqdns.nameToIP, wantFqdns.nameToIP)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestProxy_handleTTLInfoList(t *testing.T) {\n\ttype args struct {\n\t\tcontextID      string\n\t\tfqdn           string\n\t\tdnsttlinfolist []*dnsttlinfo\n\t\tupdateOnly     bool\n\t}\n\ttype existing struct {\n\t\tips map[string]iptottlinfo\n\t}\n\ttype want struct {\n\t\tmustFireExpiry               bool\n\t\tnewEntry                     bool\n\t\texistingEntryIncreasedExpiry bool\n\t}\n\ttests := []struct {\n\t\tname     string\n\t\targs     args\n\t\texisting existing\n\t\twant     want\n\t}{\n\t\t{\n\t\t\tname: \"creating new TTL info in the cache for new IP when updateOnly is false\",\n\t\t\targs: args{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tfqdn:      fqdn,\n\t\t\t\tdnsttlinfolist: []*dnsttlinfo{\n\t\t\t\t\t{\n\t\t\t\t\t\tipaddress: ip192_0_2_1,\n\t\t\t\t\t\tttl:       1,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tupdateOnly: false,\n\t\t\t},\n\t\t\texisting: existing{\n\t\t\t\tips: map[string]iptottlinfo{},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\tmustFireExpiry: true,\n\t\t\t\tnewEntry:       true,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"not creating new TTL info in the cache for new IP when updateOnly is true\",\n\t\t\targs: args{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tfqdn:      fqdn,\n\t\t\t\tdnsttlinfolist: []*dnsttlinfo{\n\t\t\t\t\t{\n\t\t\t\t\t\tipaddress: ip192_0_2_1,\n\t\t\t\t\t\tttl:       1,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tupdateOnly: true,\n\t\t\t},\n\t\t\texisting: existing{\n\t\t\t\tips: map[string]iptottlinfo{},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\tmustFireExpiry: false,\n\t\t\t\tnewEntry:       false,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"updating existing TTL info in the 
cache\",\n\t\t\targs: args{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tfqdn:      fqdn,\n\t\t\t\tdnsttlinfolist: []*dnsttlinfo{\n\t\t\t\t\t{\n\t\t\t\t\t\tipaddress: ip192_0_2_1,\n\t\t\t\t\t\tttl:       1,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tupdateOnly: true,\n\t\t\t},\n\t\t\texisting: existing{\n\t\t\t\tips: map[string]iptottlinfo{\n\t\t\t\t\tip192_0_2_1: {\n\t\t\t\t\t\tipaddress:  ip192_0_2_1,\n\t\t\t\t\t\texpiryTime: time.Now(),\n\t\t\t\t\t\tcontextIDs: map[string]struct{}{},\n\t\t\t\t\t\tfqdns:      map[string]struct{}{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\tmustFireExpiry:               true,\n\t\t\t\texistingEntryIncreasedExpiry: true,\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tp := &Proxy{\n\t\t\t\tpuFromID:                 cache.NewCache(\"puFromContextID\"),\n\t\t\t\tcontextIDToServer:        map[string]*dns.Server{},\n\t\t\t\tcontextIDToDNSNames:      cache.NewCache(\"contextIDtoDNSNames\"),\n\t\t\t\tcontextIDToDNSNamesLocks: newMutexMap(),\n\t\t\t\tIPToTTL:                  cache.NewCache(\"IPToTTL\"),\n\t\t\t\tIPToTTLLocks:             newMutexMap(),\n\t\t\t}\n\n\t\t\t// exercises the expiry trigger\n\t\t\tvar wg sync.WaitGroup\n\t\t\tif tt.want.mustFireExpiry {\n\t\t\t\twg.Add(1)\n\t\t\t}\n\t\t\tp.removeExpiredEntry = func(ipaddress string) {\n\t\t\t\t// this is just to exercise the expiry trigger\n\t\t\t\tt.Logf(\"removeExpiredEntry %q\", ipaddress)\n\t\t\t\twg.Done()\n\t\t\t}\n\n\t\t\tfor k, v := range tt.existing.ips {\n\t\t\t\t// TODO: this is awful, not sure how to better mock this at this point\n\t\t\t\tv.timer = time.AfterFunc(time.Second, func() {\n\t\t\t\t\tif p.removeExpiredEntry != nil {\n\t\t\t\t\t\tp.removeExpiredEntry(k)\n\t\t\t\t\t}\n\t\t\t\t})\n\t\t\t\tp.IPToTTL.AddOrUpdate(k, v)\n\t\t\t}\n\t\t\tp.handleTTLInfoList(tt.args.contextID, tt.args.fqdn, tt.args.dnsttlinfolist, tt.args.updateOnly)\n\t\t\twg.Wait()\n\n\t\t\tif tt.want.newEntry {\n\t\t\t\tfor _, i := 
range tt.args.dnsttlinfolist {\n\t\t\t\t\tif _, err := p.IPToTTL.Get(i.ipaddress); err != nil {\n\t\t\t\t\t\tt.Errorf(\"no new entry for IP %q\", i.ipaddress)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif tt.want.existingEntryIncreasedExpiry {\n\t\t\t\tfor _, i := range tt.args.dnsttlinfolist {\n\t\t\t\t\tiptottlExisting, ok := tt.existing.ips[i.ipaddress]\n\t\t\t\t\tif !ok {\n\t\t\t\t\t\tt.Errorf(\"not in the existing map %q\", i.ipaddress)\n\t\t\t\t\t}\n\t\t\t\t\tiptottlRaw, err := p.IPToTTL.Get(i.ipaddress)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tt.Errorf(\"no entry for IP %q\", i.ipaddress)\n\t\t\t\t\t}\n\t\t\t\t\tiptottlUpdated := iptottlRaw.(iptottlinfo)\n\n\t\t\t\t\tif !iptottlUpdated.expiryTime.After(iptottlExisting.expiryTime) {\n\t\t\t\t\t\tt.Errorf(\"expiry time not updated\")\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestProxy_Enforce(t *testing.T) {\n\ttype args struct {\n\t\tcontextID string\n\t\tpuInfo    *policy.PUInfo\n\t}\n\ttype existing struct {\n\t\tfqdns map[string]*dnsNamesToIP\n\t\tpus   map[string]*pucontext.PUContext\n\t}\n\ttype want struct {\n\t\tfqdns map[string]*dnsNamesToIP\n\t}\n\ttests := []struct {\n\t\tname     string\n\t\targs     args\n\t\texisting existing\n\t\twant     want\n\t\twantErr  bool\n\t}{\n\t\t{\n\t\t\tname: \"if this is the first enforce call on a PU, simply initialize the data structures, with FQDNs from policy\",\n\t\t\targs: args{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tpuInfo: &policy.PUInfo{\n\t\t\t\t\tRuntime:   nil,\n\t\t\t\t\tContextID: contextID,\n\t\t\t\t\tPolicy: &policy.PUPolicy{\n\t\t\t\t\t\tDNSACLs: map[string][]policy.PortProtocolPolicy{\n\t\t\t\t\t\t\tfqdn: {\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tPorts:     []string{port80},\n\t\t\t\t\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\t\t\tServiceID: serviceID,\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\tfqdnTwo: 
{\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tPorts:     []string{port80},\n\t\t\t\t\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\t\t\tServiceID: serviceID,\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texisting: existing{\n\t\t\t\tfqdns: map[string]*dnsNamesToIP{},\n\t\t\t\tpus:   map[string]*pucontext.PUContext{},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\tfqdns: map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\t\tfqdn:    {},\n\t\t\t\t\t\t\tfqdnTwo: {},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"new FQDNs from a policy simply register with the DNS proxy\",\n\t\t\targs: args{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tpuInfo: &policy.PUInfo{\n\t\t\t\t\tRuntime:   nil,\n\t\t\t\t\tContextID: contextID,\n\t\t\t\t\tPolicy: &policy.PUPolicy{\n\t\t\t\t\t\tDNSACLs: map[string][]policy.PortProtocolPolicy{\n\t\t\t\t\t\t\tfqdn: {\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tPorts:     []string{port80},\n\t\t\t\t\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\t\t\tServiceID: serviceID,\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\tfqdnTwo: {\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tPorts:     []string{port80},\n\t\t\t\t\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\t\t\tServiceID: serviceID,\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texisting: existing{\n\t\t\t\tfqdns: map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: map[string][]string{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpus: map[string]*pucontext.PUContext{},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\tfqdns: 
map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\t\tfqdn:    {},\n\t\t\t\t\t\t\tfqdnTwo: {},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"existing FQDNs update ipsets and ApplicationACLs\",\n\t\t\targs: args{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tpuInfo: &policy.PUInfo{\n\t\t\t\t\tRuntime:   nil,\n\t\t\t\t\tContextID: contextID,\n\t\t\t\t\tPolicy: &policy.PUPolicy{\n\t\t\t\t\t\tDNSACLs: map[string][]policy.PortProtocolPolicy{\n\t\t\t\t\t\t\tfqdn: {\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tPorts:     []string{port80},\n\t\t\t\t\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\t\t\tServiceID: serviceID,\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\tfqdnTwo: {\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tPorts:     []string{port80},\n\t\t\t\t\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\t\t\tServiceID: serviceID,\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texisting: existing{\n\t\t\t\tfqdns: map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\t\tfqdn:    {ip192_0_2_1},\n\t\t\t\t\t\t\tfqdnTwo: {ip192_0_2_2},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpus: map[string]*pucontext.PUContext{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tRWMutex:         sync.RWMutex{},\n\t\t\t\t\t\tApplicationACLs: acls.NewACLCache(),\n\t\t\t\t\t\tDNSACLs: map[string][]policy.PortProtocolPolicy{\n\t\t\t\t\t\t\tfqdn: {\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tPorts:     []string{port80},\n\t\t\t\t\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\t\t\tServiceID: 
serviceID,\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\tfqdns: map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\t\tfqdn:    {ip192_0_2_1},\n\t\t\t\t\t\t\tfqdnTwo: {ip192_0_2_2},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"existing FQDNs do not update ipsets and ApplicationACLs if PU context does not exist\",\n\t\t\targs: args{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tpuInfo: &policy.PUInfo{\n\t\t\t\t\tRuntime:   nil,\n\t\t\t\t\tContextID: contextID,\n\t\t\t\t\tPolicy: &policy.PUPolicy{\n\t\t\t\t\t\tDNSACLs: map[string][]policy.PortProtocolPolicy{\n\t\t\t\t\t\t\tfqdn: {\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tPorts:     []string{port80},\n\t\t\t\t\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\t\t\tServiceID: serviceID,\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\tfqdnTwo: {\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tPorts:     []string{port80},\n\t\t\t\t\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\t\t\tServiceID: serviceID,\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\texisting: existing{\n\t\t\t\tfqdns: map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\t\tfqdn:    {ip192_0_2_1},\n\t\t\t\t\t\t\tfqdnTwo: {ip192_0_2_2},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpus: map[string]*pucontext.PUContext{},\n\t\t\t},\n\t\t\twant: want{\n\t\t\t\tfqdns: map[string]*dnsNamesToIP{\n\t\t\t\t\tcontextID: {\n\t\t\t\t\t\tnameToIP: map[string][]string{\n\t\t\t\t\t\t\tfqdn:    {ip192_0_2_1},\n\t\t\t\t\t\t\tfqdnTwo: {ip192_0_2_2},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\tfor 
_, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\tp := &Proxy{\n\t\t\t\tpuFromID:                 cache.NewCache(\"puFromContextID\"),\n\t\t\t\tcontextIDToServer:        map[string]*dns.Server{},\n\t\t\t\tcontextIDToDNSNames:      cache.NewCache(\"contextIDtoDNSNames\"),\n\t\t\t\tcontextIDToDNSNamesLocks: newMutexMap(),\n\t\t\t}\n\t\t\tfor k, v := range tt.existing.fqdns {\n\t\t\t\tp.contextIDToDNSNames.AddOrUpdate(k, v)\n\t\t\t}\n\t\t\tfor k, v := range tt.existing.pus {\n\t\t\t\tp.puFromID.AddOrUpdate(k, v)\n\t\t\t}\n\t\t\tif err := p.Enforce(ctx, tt.args.contextID, tt.args.puInfo); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"Proxy.Enforce() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tfor contextID, wantFqdns := range tt.want.fqdns {\n\t\t\t\texistingFqdnsRaw, err := p.contextIDToDNSNames.Get(contextID)\n\t\t\t\tif err != nil {\n\t\t\t\t\tt.Errorf(\"no map for context %q in contextIDToDNSNames any longer\", contextID)\n\t\t\t\t}\n\t\t\t\texistingFqdns := existingFqdnsRaw.(*dnsNamesToIP)\n\t\t\t\tif !reflect.DeepEqual(wantFqdns.nameToIP, existingFqdns.nameToIP) {\n\t\t\t\t\tt.Errorf(\"context %q: have fqdns %#v - want fqdns %#v\", contextID, existingFqdns.nameToIP, wantFqdns.nameToIP)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/dns_report.go",
    "content": "// +build linux windows\n\npackage dnsproxy\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n)\n\nvar (\n\twaitTimeBeforeReport = 5 * time.Minute\n)\n\ntype dnsReport struct {\n\tkey        string\n\tcontextID  string\n\tnameLookup string\n\terror      string\n\tsource     collector.EndPoint\n\tdest       collector.EndPoint\n\tnamespace  string\n\tips        []string\n}\n\nfunc (p *Proxy) sendToCollector(report dnsReport, count int) {\n\tr := &collector.DNSRequestReport{\n\t\tContextID:   report.contextID,\n\t\tNameLookup:  report.nameLookup,\n\t\tSource:      &report.source,\n\t\tDestination: &report.dest,\n\t\tNamespace:   report.namespace,\n\t\tError:       report.error,\n\t\tCount:       count,\n\t\tTs:          time.Now(),\n\t\tIPs:         report.ips,\n\t}\n\tp.collector.CollectDNSRequests(r)\n}\n\nfunc (p *Proxy) reportDNSRequests(ctx context.Context, chreport chan dnsReport) {\n\tdnsReports := map[string]int{}\n\tsendReport := make(chan dnsReport)\n\tdeleteReport := make(chan dnsReport)\n\n\tfor {\n\t\tselect {\n\t\tcase r := <-chreport:\n\t\t\tdnsReports[r.key]++\n\t\t\tswitch dnsReports[r.key] {\n\t\t\tcase 1:\n\t\t\t\t// dispatch immediately\n\t\t\t\tp.sendToCollector(r, 1)\n\t\t\t\tgo func(r dnsReport) {\n\t\t\t\t\t<-time.After(waitTimeBeforeReport)\n\t\t\t\t\tdeleteReport <- r\n\t\t\t\t}(r)\n\t\t\tcase 2:\n\t\t\t\tgo func(r dnsReport) {\n\t\t\t\t\t<-time.After(waitTimeBeforeReport)\n\t\t\t\t\tsendReport <- r\n\t\t\t\t}(r)\n\t\t\t}\n\t\tcase r := <-sendReport:\n\t\t\tp.sendToCollector(r, dnsReports[r.key]-1)\n\t\t\tdelete(dnsReports, r.key)\n\t\tcase r := <-deleteReport:\n\t\t\tif dnsReports[r.key] == 1 {\n\t\t\t\tdelete(dnsReports, r.key)\n\t\t\t}\n\t\tcase <-ctx.Done():\n\t\t\treturn\n\t\t}\n\t}\n}\n\nfunc (p *Proxy) reportDNSLookup(name string, pucontext *pucontext.PUContext, srcIP net.IP, srcPort 
uint16, dnsIP net.IP, dnsPort uint16, ips []string, err string) {\n\tp.chreports <- dnsReport{\n\t\tcontextID:  pucontext.ID(),\n\t\tnameLookup: name,\n\t\terror:      err,\n\t\tnamespace:  pucontext.ManagementNamespace(),\n\t\tsource: collector.EndPoint{\n\t\t\tIP:   srcIP.String(),\n\t\t\tPort: srcPort,\n\t\t\tID:   pucontext.ManagementID(),\n\t\t\tType: collector.EndPointTypePU,\n\t\t},\n\t\tdest: collector.EndPoint{\n\t\t\tIP:   dnsIP.String(),\n\t\t\tPort: dnsPort,\n\t\t\tID:   pucontext.ManagementID(),\n\t\t\tType: collector.EndPointTypePU,\n\t\t},\n\t\tips: ips,\n\t\tkey: fmt.Sprintf(\"%s:%s:%s:%s:%s:%s\", pucontext.ID(), name, err, pucontext.ManagementNamespace(), srcIP.String(), pucontext.ManagementID()),\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/dns_test.go",
    "content": "// +build linux\n\npackage dnsproxy\n\nimport (\n\t\"context\"\n\t\"net\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/magiconair/properties/assert\"\n\t\"go.aporeto.io/trireme-lib/collector\"\n\tprovider \"go.aporeto.io/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.aporeto.io/trireme-lib/utils/cache\"\n)\n\ntype flowClientDummy struct {\n}\n\nfunc (c *flowClientDummy) Close() error {\n\treturn nil\n}\n\nfunc (c *flowClientDummy) UpdateMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32, network bool) error {\n\treturn nil\n}\n\nfunc (c *flowClientDummy) UpdateNetworkFlowMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32) error {\n\treturn nil\n}\n\nfunc (c *flowClientDummy) UpdateApplicationFlowMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32) error {\n\treturn nil\n}\n\nfunc (c *flowClientDummy) GetOriginalDest(ipSrc, ipDst net.IP, srcport, dstport uint16, protonum uint8) (net.IP, uint16, uint32, error) {\n\treturn net.ParseIP(\"8.8.8.8\"), 53, 100, nil\n}\n\nfunc addDNSNamePolicy(context *pucontext.PUContext) {\n\tcontext.DNSACLs = policy.DNSRuleList{\n\t\t\"www.google.com\": []policy.PortProtocolPolicy{\n\t\t\t{Ports: []string{\"80\"},\n\t\t\t\tProtocols: []string{\"tcp\"},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"2\",\n\t\t\t\t}},\n\t\t},\n\t}\n}\n\nfunc CustomDialer(ctx context.Context, network, address string) (net.Conn, error) {\n\td := net.Dialer{}\n\treturn d.DialContext(ctx, \"udp\", \"127.0.0.1:53001\")\n}\n\nfunc createCustomResolver() *net.Resolver {\n\tr := &net.Resolver{\n\t\tPreferGo: true,\n\t\tDial:     CustomDialer,\n\t}\n\n\treturn r\n}\n\n// DNSCollector implements a default collector infrastructure to 
syslog\ntype DNSCollector struct{}\n\n// CollectFlowEvent is part of the EventCollector interface.\nfunc (d *DNSCollector) CollectFlowEvent(record *collector.FlowRecord) {}\n\n// CollectContainerEvent is part of the EventCollector interface.\nfunc (d *DNSCollector) CollectContainerEvent(record *collector.ContainerRecord) {}\n\n// CollectUserEvent is part of the EventCollector interface.\nfunc (d *DNSCollector) CollectUserEvent(record *collector.UserRecord) {}\n\n// CollectTraceEvent collects iptables trace events\nfunc (d *DNSCollector) CollectTraceEvent(records []string) {}\n\n// CollectPingEvent collects ping events\nfunc (d *DNSCollector) CollectPingEvent(report *collector.PingReport) {}\n\n// CollectPacketEvent collects packet events from the datapath\nfunc (d *DNSCollector) CollectPacketEvent(report *collector.PacketReport) {}\n\n// CollectCounterEvent collects counters from the datapath\nfunc (d *DNSCollector) CollectCounterEvent(report *collector.CounterReport) {}\n\nvar r collector.DNSRequestReport\nvar l sync.Mutex\n\n// CollectDNSRequests collects DNS request reports from the datapath\nfunc (d *DNSCollector) CollectDNSRequests(report *collector.DNSRequestReport) {\n\tl.Lock()\n\tr = *report\n\tl.Unlock()\n}\n\nfunc TestDNS(t *testing.T) {\n\tpuIDcache := cache.NewCache(\"puFromContextID\")\n\n\tfp := &policy.PUInfo{\n\t\tRuntime: policy.NewPURuntimeWithDefaults(),\n\t\tPolicy:  policy.NewPUPolicyWithDefaults(),\n\t}\n\tpu, _ := pucontext.NewPU(\"pu1\", fp, 24*time.Hour) // nolint\n\n\taddDNSNamePolicy(pu)\n\n\tpuIDcache.AddOrUpdate(\"pu1\", pu)\n\tconntrack := &flowClientDummy{}\n\tcollector := &DNSCollector{}\n\n\tips := provider.NewTestIpsetProvider()\n\tproxy := New(puIDcache, conntrack, collector, ipsetmanager.CreateIPsetManager(ips, ips))\n\n\terr := proxy.StartDNSServer(\"pu1\", \"53001\")\n\tassert.Equal(t, err == nil, true, \"start dns server\")\n\n\tresolver := createCustomResolver()\n\tctx := context.Background()\n\twaitTimeBeforeReport = 3 * 
time.Second\n\tresolver.LookupIPAddr(ctx, \"www.google.com\") //nolint\n\tresolver.LookupIPAddr(ctx, \"www.google.com\") //nolint\n\n\tassert.Equal(t, err == nil, true, \"err should be nil\")\n\n\ttime.Sleep(5 * time.Second)\n\tl.Lock()\n\tassert.Equal(t, r.NameLookup == \"www.google.com.\", true, \"lookup should be www.google.com\")\n\tassert.Equal(t, r.Count >= 2 && r.Count <= 10, true, \"count should be between 2 and 10\")\n\tl.Unlock()\n\tproxy.ShutdownDNS(\"pu1\")\n}\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/dns_unsupported.go",
    "content": "// +build !linux,!windows\n\npackage dnsproxy\n\nimport (\n\t\"context\"\n\t\"net\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/flowtracking\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n)\n\n// Proxy struct represents the object for dns proxy\ntype Proxy struct {\n}\n\n// New creates an instance of the dns proxy\nfunc New(ctx context.Context, puFromID cache.DataStore, conntrack flowtracking.FlowClient, c collector.EventCollector) *Proxy {\n\treturn &Proxy{}\n}\n\n// ShutdownDNS shuts down the dns server for contextID\nfunc (p *Proxy) ShutdownDNS(contextID string) {\n\n}\n\n// StartDNSServer starts the dns server on the port provided for contextID\nfunc (p *Proxy) StartDNSServer(ctx context.Context, contextID, port string) error {\n\treturn nil\n}\n\n// SyncWithPlatformCache is only needed in Windows currently\nfunc (p *Proxy) SyncWithPlatformCache(ctx context.Context, pctx *pucontext.PUContext) error {\n\treturn nil\n}\n\n// HandleDNSResponsePacket is only needed in Windows currently\nfunc (p *Proxy) HandleDNSResponsePacket(dnsPacketData []byte, sourceIP net.IP, sourcePort uint16, destIP net.IP, destPort uint16, puFromContextID func(string) (*pucontext.PUContext, error)) error {\n\treturn nil\n}\n\n// Enforce starts enforcing policies for the given policy.PUInfo.\nfunc (p *Proxy) Enforce(ctx context.Context, contextID string, puInfo *policy.PUInfo) error {\n\treturn nil\n}\n\n// Unenforce stops enforcing policy for the given IP.\nfunc (p *Proxy) Unenforce(_ context.Context, contextID string) error {\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/dns_windows.go",
    "content": "// +build windows\n\npackage dnsproxy\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"net\"\n\t\"sync\"\n\t\"syscall\"\n\n\t\"github.com/miekg/dns\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/flowtracking\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n)\n\nvar clearWindowsDNSCacheFunc = clearWindowsDNSCache\n\n// Proxy struct represents the object for dns proxy\ntype Proxy struct {\n\tpuFromID   cache.DataStore\n\tcollector  collector.EventCollector\n\tcontextIDs map[string]struct{}\n\tchreports  chan dnsReport\n\tsync.RWMutex\n}\n\n// New creates an instance of the dns proxy\nfunc New(ctx context.Context, puFromID cache.DataStore, conntrack flowtracking.FlowClient, c collector.EventCollector) *Proxy {\n\tch := make(chan dnsReport)\n\tp := &Proxy{chreports: ch, puFromID: puFromID, collector: c, contextIDs: make(map[string]struct{})}\n\tgo p.reportDNSRequests(ctx, ch)\n\treturn p\n}\n\n// StartDNSServer starts the dns server on the port provided for contextID\nfunc (p *Proxy) StartDNSServer(ctx context.Context, contextID, port string) error {\n\tp.Lock()\n\tdefer p.Unlock()\n\tp.contextIDs[contextID] = struct{}{}\n\treturn nil\n}\n\n// ShutdownDNS shuts down the dns server for contextID\nfunc (p *Proxy) ShutdownDNS(contextID string) {\n\tp.Lock()\n\tdefer p.Unlock()\n\tdelete(p.contextIDs, contextID)\n}\n\n// SyncWithPlatformCache is called on policy change.\n// Clear the Windows DNS cache in order to guarantee proxying.\nfunc (p *Proxy) SyncWithPlatformCache(ctx context.Context, pctx *pucontext.PUContext) error {\n\n\tif pctx.UsesFQDN() {\n\t\treturn clearWindowsDNSCacheFunc()\n\t}\n\treturn nil\n}\n\n// HandleDNSResponsePacket parses the DNS response and forwards the 
information to each PU based on policy\nfunc (p *Proxy) HandleDNSResponsePacket(dnsPacketData []byte, sourceIP net.IP, sourcePort uint16, destIP net.IP, destPort uint16, puFromContextID func(string) (*pucontext.PUContext, error)) error {\n\n\t// parse dns\n\tmsg := &dns.Msg{}\n\terr := msg.Unpack(dnsPacketData)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Make sure we have a question\n\tif len(msg.Question) <= 0 {\n\t\treturn nil\n\t}\n\n\tvar ips []string\n\tfor _, ans := range msg.Answer {\n\t\tif ans.Header().Rrtype == dns.TypeA {\n\t\t\tt, _ := ans.(*dns.A)\n\t\t\tips = append(ips, t.A.String())\n\t\t}\n\n\t\tif ans.Header().Rrtype == dns.TypeAAAA {\n\t\t\tt, _ := ans.(*dns.AAAA)\n\t\t\tips = append(ips, t.AAAA.String())\n\t\t}\n\t}\n\n\t// let each pu handle it\n\tpus := make([]*pucontext.PUContext, 0, len(p.contextIDs))\n\tp.Lock()\n\tfor id := range p.contextIDs {\n\t\tpuCtx, err := puFromContextID(id)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"dnsproxy: DNS Proxy failed to get PUContext\", zap.Error(err))\n\t\t\tcontinue\n\t\t}\n\t\tpus = append(pus, puCtx)\n\t}\n\tp.Unlock()\n\n\tfor _, puCtx := range pus {\n\t\tppps, _, err := puCtx.GetPolicyFromFQDN(msg.Question[0].Name)\n\t\tif err == nil {\n\t\t\tfor _, ppp := range ppps {\n\t\t\t\tipsetmanager.V4().UpdateACLIPsets(ips, ppp.Policy.ServiceID)\n\t\t\t\tipsetmanager.V6().UpdateACLIPsets(ips, ppp.Policy.ServiceID)\n\t\t\t\tif err = puCtx.UpdateApplicationACLs(policy.IPRuleList{{Addresses: ips,\n\t\t\t\t\tPorts:     ppp.Ports,\n\t\t\t\t\tProtocols: ppp.Protocols,\n\t\t\t\t\tPolicy:    ppp.Policy,\n\t\t\t\t}}); err != nil {\n\t\t\t\t\tzap.L().Error(\"dnsproxy: adding IP rule returned error\", zap.Error(err))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// source and destination is swapped because we are looking at response packet\n\t\tp.reportDNSLookup(msg.Question[0].Name, puCtx, destIP, destPort, sourceIP, sourcePort, ips, \"\")\n\n\t\tconfigureDependentServices(puCtx, msg.Question[0].Name, ips)\n\t}\n\n\treturn 
nil\n}\n\nfunc clearWindowsDNSCache() error {\n\tdnsAPIDll := syscall.NewLazyDLL(\"dnsapi.dll\")\n\tflushDNSCacheProc := dnsAPIDll.NewProc(\"DnsFlushResolverCache\")\n\tret, _, err := flushDNSCacheProc.Call()\n\tif err != syscall.Errno(0) {\n\t\treturn err\n\t}\n\tif ret == 0 {\n\t\treturn errors.New(\"DnsFlushResolverCache failed\")\n\t}\n\treturn nil\n}\n\n// Enforce starts enforcing policies for the given policy.PUInfo.\nfunc (p *Proxy) Enforce(ctx context.Context, contextID string, puInfo *policy.PUInfo) error {\n\treturn nil\n}\n\n// Unenforce stops enforcing policy for the given IP.\nfunc (p *Proxy) Unenforce(_ context.Context, contextID string) error {\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/dns_windows_test.go",
    "content": "// +build windows\n\npackage dnsproxy\n\nimport (\n\t\"context\"\n\t\"encoding/hex\"\n\t\"errors\"\n\t\"net\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/magiconair/properties/assert\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n)\n\nfunc addDNSNamePolicy(context *pucontext.PUContext) {\n\tcontext.DNSACLs = policy.DNSRuleList{\n\t\t\"google.com.\": []policy.PortProtocolPolicy{\n\t\t\t{Ports: []string{\"80\"},\n\t\t\t\tProtocols: []string{\"6\"},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"2\",\n\t\t\t\t}},\n\t\t},\n\t}\n}\n\n// DNSCollector implements a default collector infrastructure to syslog\ntype DNSCollector struct{}\n\n// CollectFlowEvent is part of the EventCollector interface.\nfunc (d *DNSCollector) CollectFlowEvent(record *collector.FlowRecord) {}\n\n// CollectContainerEvent is part of the EventCollector interface.\nfunc (d *DNSCollector) CollectContainerEvent(record *collector.ContainerRecord) {}\n\n// CollectUserEvent is part of the EventCollector interface.\nfunc (d *DNSCollector) CollectUserEvent(record *collector.UserRecord) {}\n\n// CollectTraceEvent collects iptables trace events\nfunc (d *DNSCollector) CollectTraceEvent(records []string) {}\n\n// CollectPacketEvent collects packet events from the datapath\nfunc (d *DNSCollector) CollectPacketEvent(report *collector.PacketReport) {}\n\n// CollectCounterEvent collects counters from the datapath\nfunc (d *DNSCollector) CollectCounterEvent(report *collector.CounterReport) {}\n\n// CollectPingEvent collects ping events from the datapath\nfunc (d *DNSCollector) CollectPingEvent(report *collector.PingReport) 
{}\n\n// CollectConnectionExceptionReport collects the connection exception report\nfunc (d *DNSCollector) CollectConnectionExceptionReport(_ *collector.ConnectionExceptionReport) {}\n\nvar r collector.DNSRequestReport\nvar l sync.Mutex\n\n// CollectDNSRequests collects DNS request reports from the datapath\nfunc (d *DNSCollector) CollectDNSRequests(report *collector.DNSRequestReport) {\n\tl.Lock()\n\tr = *report\n\tl.Unlock()\n}\n\nconst (\n\tdnsResponseHex1 = \"45200048d22f00006a11ad8f08080808c0a8000e0035e7560034385a00088180000100010000000006676f6f676c6503636f6d0000010001c00c000100010000009d0004acd90d0e\"\n\tdnsResponseHex2 = \"45200054eb6700006a11944b08080808c0a8000e0035e7570040863400098180000100010000000006676f6f676c6503636f6d00001c0001c00c001c00010000012b00102607f8b040020c030000000000000066\"\n)\n\nfunc TestDNS(t *testing.T) {\n\tpuIDcache := cache.NewCache(\"puFromContextID\")\n\n\tdnsResponsePacket1, _ := hex.DecodeString(dnsResponseHex1)\n\tdnsResponsePacket2, _ := hex.DecodeString(dnsResponseHex2)\n\n\tparsedPacket1, _ := packet.New(uint64(packet.PacketTypeNetwork), dnsResponsePacket1, \"83\", true)\n\tparsedPacket2, _ := packet.New(uint64(packet.PacketTypeNetwork), dnsResponsePacket2, \"83\", true)\n\n\tfp := &policy.PUInfo{\n\t\tRuntime: policy.NewPURuntimeWithDefaults(),\n\t\tPolicy:  policy.NewPUPolicyWithDefaults(),\n\t}\n\tpu, _ := pucontext.NewPU(\"pu1\", fp, nil, 24*time.Hour) // nolint\n\n\tfindPU := func(id string) (*pucontext.PUContext, error) {\n\t\tif id == \"pu1\" {\n\t\t\treturn pu, nil\n\t\t}\n\t\treturn nil, errors.New(\"unknown PU\")\n\t}\n\n\taddDNSNamePolicy(pu)\n\n\tpuIDcache.AddOrUpdate(\"pu1\", pu)\n\tcollector := &DNSCollector{}\n\n\tips := ipsetmanager.NewTestIpsetProvider()\n\tipsetmanager.SetIpsetTestInstance(ips)\n\tproxy := New(context.Background(), puIDcache, nil, collector)\n\n\terr := proxy.StartDNSServer(context.Background(), \"pu1\", \"53001\")\n\tassert.Equal(t, err == nil, true, \"start dns server\")\n\n\terr = 
proxy.HandleDNSResponsePacket(parsedPacket1.GetUDPData(), parsedPacket1.SourceAddress(), parsedPacket1.SourcePort(), parsedPacket1.DestinationAddress(), parsedPacket1.DestPort(), findPU)\n\tassert.Equal(t, err == nil, true, \"dns packet 1 failed\")\n\n\terr = proxy.HandleDNSResponsePacket(parsedPacket2.GetUDPData(), parsedPacket2.SourceAddress(), parsedPacket2.SourcePort(), parsedPacket2.DestinationAddress(), parsedPacket2.DestPort(), findPU)\n\tassert.Equal(t, err == nil, true, \"dns packet 2 failed\")\n\n\t// wait a sec for report delivered via channel, and then expect one report since the next will be time-delayed\n\ttime.Sleep(1 * time.Second)\n\tl.Lock()\n\tassert.Equal(t, r.NameLookup == \"google.com.\", true, \"lookup should be google.com\")\n\tassert.Equal(t, r.Count == 1, true, \"count should be 1\")\n\tl.Unlock()\n\n\tdefaultFlowPolicy := &policy.FlowPolicy{Action: policy.Reject | policy.Log, PolicyID: \"default\", ServiceID: \"default\"}\n\n\t// test acls updated\n\trpt, pkt, err := pu.ApplicationACLs.GetMatchingAction(net.ParseIP(\"172.217.13.14\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\tassert.Equal(t, err == nil, true, \"GetMatchingAction failed\")\n\tassert.Equal(t, rpt.Action.Accepted(), true, \"should be accepted (report)\")\n\tassert.Equal(t, pkt.Action.Accepted(), true, \"should be accepted (packet)\")\n\trpt, pkt, err = pu.ApplicationACLs.GetMatchingAction(net.ParseIP(\"2607:f8b0:4002:c03::66\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\tassert.Equal(t, err == nil, true, \"GetMatchingAction failed\")\n\tassert.Equal(t, rpt.Action.Accepted(), true, \"should be accepted (report)\")\n\tassert.Equal(t, pkt.Action.Accepted(), true, \"should be accepted (packet)\")\n\n\t// test SyncWithPlatformCache\n\tclearWindowsDNSCacheFunc = func() error {\n\t\treturn errors.New(\"error from unit test\")\n\t}\n\tdefer func() {\n\t\tclearWindowsDNSCacheFunc = clearWindowsDNSCache\n\t}()\n\terr = proxy.SyncWithPlatformCache(context.Background(), 
pu)\n\tassert.Equal(t, err != nil, true, \"clearWindowsDNSCache not called with DNSACLs present\")\n\tassert.Matches(t, err.Error(), \"error from unit test\")\n\tpu.DNSACLs = policy.DNSRuleList{}\n\terr = proxy.SyncWithPlatformCache(context.Background(), pu)\n\tassert.Equal(t, err == nil, true, \"clearWindowsDNSCache called without DNSACLs present\")\n\n\tproxy.ShutdownDNS(\"pu1\")\n}\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/dnsproxy.go",
    "content": "package dnsproxy\n\nimport (\n\t\"context\"\n\t\"net\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// DNSProxy defines an interface that trireme uses for the DNS proxy\ntype DNSProxy interface {\n\n\t// StartDNSServer starts the dns server on the port provided for contextID\n\tStartDNSServer(ctx context.Context, contextID, port string) error\n\n\t// Enforce starts enforcing policies for the given policy.PUInfo.\n\tEnforce(ctx context.Context, contextID string, puInfo *policy.PUInfo) error\n\n\t// Unenforce stops enforcing policy for the given IP.\n\tUnenforce(ctx context.Context, contextID string) error\n\n\t// SyncWithPlatformCache is only needed in Windows\n\tSyncWithPlatformCache(ctx context.Context, pctx *pucontext.PUContext) error\n\n\t// HandleDNSResponsePacket is only needed in Windows\n\tHandleDNSResponsePacket(dnsPacketData []byte, sourceIP net.IP, sourcePort uint16, destIP net.IP, destPort uint16, puFromContextID func(string) (*pucontext.PUContext, error)) error\n}\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/mockdnsproxy/mockdnsproxy.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/internal/enforcer/dnsproxy/dnsproxy.go\n\n// Package mockdnsproxy is a generated GoMock package.\npackage mockdnsproxy\n\nimport (\n\tcontext \"context\"\n\tnet \"net\"\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tpucontext \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\tpolicy \"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// MockDNSProxy is a mock of DNSProxy interface\n// nolint\ntype MockDNSProxy struct {\n\tctrl     *gomock.Controller\n\trecorder *MockDNSProxyMockRecorder\n}\n\n// MockDNSProxyMockRecorder is the mock recorder for MockDNSProxy\n// nolint\ntype MockDNSProxyMockRecorder struct {\n\tmock *MockDNSProxy\n}\n\n// NewMockDNSProxy creates a new mock instance\n// nolint\nfunc NewMockDNSProxy(ctrl *gomock.Controller) *MockDNSProxy {\n\tmock := &MockDNSProxy{ctrl: ctrl}\n\tmock.recorder = &MockDNSProxyMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockDNSProxy) EXPECT() *MockDNSProxyMockRecorder {\n\treturn m.recorder\n}\n\n// StartDNSServer mocks base method\n// nolint\nfunc (m *MockDNSProxy) StartDNSServer(ctx context.Context, contextID, port string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"StartDNSServer\", ctx, contextID, port)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// StartDNSServer indicates an expected call of StartDNSServer\n// nolint\nfunc (mr *MockDNSProxyMockRecorder) StartDNSServer(ctx, contextID, port interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"StartDNSServer\", reflect.TypeOf((*MockDNSProxy)(nil).StartDNSServer), ctx, contextID, port)\n}\n\n// Enforce mocks base method\n// nolint\nfunc (m *MockDNSProxy) Enforce(ctx context.Context, contextID string, puInfo *policy.PUInfo) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, 
\"Enforce\", ctx, contextID, puInfo)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Enforce indicates an expected call of Enforce\n// nolint\nfunc (mr *MockDNSProxyMockRecorder) Enforce(ctx, contextID, puInfo interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Enforce\", reflect.TypeOf((*MockDNSProxy)(nil).Enforce), ctx, contextID, puInfo)\n}\n\n// Unenforce mocks base method\n// nolint\nfunc (m *MockDNSProxy) Unenforce(ctx context.Context, contextID string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Unenforce\", ctx, contextID)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Unenforce indicates an expected call of Unenforce\n// nolint\nfunc (mr *MockDNSProxyMockRecorder) Unenforce(ctx, contextID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Unenforce\", reflect.TypeOf((*MockDNSProxy)(nil).Unenforce), ctx, contextID)\n}\n\n// SyncWithPlatformCache mocks base method\n// nolint\nfunc (m *MockDNSProxy) SyncWithPlatformCache(ctx context.Context, pctx *pucontext.PUContext) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SyncWithPlatformCache\", ctx, pctx)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SyncWithPlatformCache indicates an expected call of SyncWithPlatformCache\n// nolint\nfunc (mr *MockDNSProxyMockRecorder) SyncWithPlatformCache(ctx, pctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SyncWithPlatformCache\", reflect.TypeOf((*MockDNSProxy)(nil).SyncWithPlatformCache), ctx, pctx)\n}\n\n// HandleDNSResponsePacket mocks base method\n// nolint\nfunc (m *MockDNSProxy) HandleDNSResponsePacket(dnsPacketData []byte, sourceIP net.IP, sourcePort uint16, destIP net.IP, destPort uint16, puFromContextID func(string) (*pucontext.PUContext, error)) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"HandleDNSResponsePacket\", 
dnsPacketData, sourceIP, sourcePort, destIP, destPort, puFromContextID)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// HandleDNSResponsePacket indicates an expected call of HandleDNSResponsePacket\n// nolint\nfunc (mr *MockDNSProxyMockRecorder) HandleDNSResponsePacket(dnsPacketData, sourceIP, sourcePort, destIP, destPort, puFromContextID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"HandleDNSResponsePacket\", reflect.TypeOf((*MockDNSProxy)(nil).HandleDNSResponsePacket), dnsPacketData, sourceIP, sourcePort, destIP, destPort, puFromContextID)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/dnsproxy/mutex_map.go",
    "content": "package dnsproxy\n\nimport (\n\t\"sync\"\n)\n\n// mutexMap provides a lazily-created mutex per string key\ntype mutexMap struct {\n\t// as the mutex map has a map of its own\n\t// we also need to lock access to this map\n\tl sync.Mutex\n\tm map[string]*sync.Mutex\n}\n\n// unlocker defines the Unlock mechanism which is returned by the Lock method\ntype unlocker interface {\n\t// Unlock releases the lock\n\tUnlock()\n}\n\n// newMutexMap initializes a new map of strings which provide a mutex\nfunc newMutexMap() *mutexMap {\n\treturn &mutexMap{m: map[string]*sync.Mutex{}}\n}\n\n// Remove removes an entry from the mutex map\nfunc (m *mutexMap) Remove(entry string) {\n\tm.l.Lock()\n\tdefer m.l.Unlock()\n\tdelete(m.m, entry)\n}\n\n// Lock will gain a lock on `entry`. The caller must call `Unlock` on the returned unlocker when done.\nfunc (m *mutexMap) Lock(entry string) unlocker {\n\tm.l.Lock()\n\te, ok := m.m[entry]\n\tif !ok {\n\t\tm.m[entry] = &sync.Mutex{}\n\t\te = m.m[entry]\n\t}\n\tm.l.Unlock()\n\te.Lock()\n\treturn e\n}\n"
  },
  {
    "path": "controller/internal/enforcer/enforcer.go",
    "content": "package enforcer\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"time\"\n\n\t\"github.com/blang/semver\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/envoyauthorizer\"\n\t\"go.aporeto.io/gaia\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/tokenaccessor\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ebpf\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packettracing\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n)\n\n// An Enforcer is an implementation of the enforcer datapath. The interface\n// can be implemented by one or multiple datapaths.\ntype Enforcer interface {\n\n\t// Enforce starts enforcing policies for the given policy.PUInfo.\n\tEnforce(ctx context.Context, contextID string, puInfo *policy.PUInfo) error\n\n\t// Unenforce stops enforcing policy for the given IP.\n\tUnenforce(ctx context.Context, contextID string) error\n\n\t// GetFilterQueue returns the current FilterQueueConfig.\n\tGetFilterQueue() fqconfig.FilterQueue\n\n\t// GetBPFObject returns the bpf object\n\tGetBPFObject() ebpf.BPFModule\n\n\t// Run starts the PolicyEnforcer.\n\tRun(ctx context.Context) error\n\n\t// UpdateSecrets -- updates the secrets of running enforcers managed by trireme. 
Remote enforcers will get the secret updates with the next policy push\n\tUpdateSecrets(secrets secrets.Secrets) error\n\n\t// SetTargetNetworks sets the target network configuration of the controllers.\n\tSetTargetNetworks(cfg *runtime.Configuration) error\n\n\t// SetLogLevel sets log level.\n\tSetLogLevel(level constants.LogLevel) error\n\n\t// CleanUp requests a cleanup of the controllers.\n\tCleanUp() error\n\n\t// GetServiceMeshType returns the serviceMeshType\n\tGetServiceMeshType() policy.ServiceMesh\n\n\tDebugInfo\n}\n\n// DebugInfo is an interface to implement methods to configure datapath packet tracing in the nfqdatapath\ntype DebugInfo interface {\n\t// EnableDatapathPacketTracing will enable tracing of packets received by the datapath for a particular PU. Setting Disabled as tracing direction will stop tracing for the contextID\n\tEnableDatapathPacketTracing(ctx context.Context, contextID string, direction packettracing.TracingDirection, interval time.Duration) error\n\n\t// EnableIPTablesPacketTracing enables iptables -j trace for the particular pu, providing a much wider packet stream.\n\tEnableIPTablesPacketTracing(ctx context.Context, contextID string, interval time.Duration) error\n\n\t// Ping runs ping based on the given config.\n\tPing(ctx context.Context, contextID string, pingConfig *policy.PingConfig) error\n\n\t// DebugCollect collects debug information, such as packet capture\n\tDebugCollect(ctx context.Context, contextID string, debugConfig *policy.DebugConfig) error\n}\n\n// enforcer holds all the active implementations of the enforcer\ntype enforcer struct {\n\tproxy     *applicationproxy.AppProxy\n\ttransport *nfqdatapath.Datapath\n}\n\n// Run implements the run interfaces and runs the individual data paths\nfunc (e *enforcer) Run(ctx context.Context) error {\n\n\tif e.proxy != nil {\n\t\tif err := e.proxy.Run(ctx); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif e.transport != nil {\n\t\tif err := e.transport.Run(ctx); err != nil {\n\t\t\treturn 
err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Enforce implements the enforce interface by sending the event to all the enforcers.\nfunc (e *enforcer) Enforce(ctx context.Context, contextID string, puInfo *policy.PUInfo) error {\n\tif e.transport != nil {\n\t\tif err := e.transport.Enforce(ctx, contextID, puInfo); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to enforce in nfq: %s\", err)\n\t\t}\n\t}\n\n\tif e.proxy != nil {\n\t\t// NOTE: Passing ctx here breaks proxy. Root cause unknown for now\n\t\tif err := e.proxy.Enforce(context.Background(), contextID, puInfo); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to enforce in proxy: %s\", err)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Unenforce implements the Unenforce interface by sending the event to all the enforcers.\nfunc (e *enforcer) Unenforce(ctx context.Context, contextID string) error {\n\n\tvar perr, nerr, serr error\n\n\tif e.proxy != nil {\n\t\tif perr = e.proxy.Unenforce(ctx, contextID); perr != nil {\n\t\t\tzap.L().Error(\"Failed to unenforce contextID in proxy\",\n\t\t\t\tzap.String(\"ContextID\", contextID),\n\t\t\t\tzap.Error(perr),\n\t\t\t)\n\t\t}\n\t}\n\n\tif e.transport != nil {\n\t\tif nerr = e.transport.Unenforce(ctx, contextID); nerr != nil {\n\t\t\tzap.L().Error(\"Failed to unenforce contextID in transport\",\n\t\t\t\tzap.String(\"ContextID\", contextID),\n\t\t\t\tzap.Error(nerr),\n\t\t\t)\n\t\t}\n\t}\n\n\tif perr != nil || nerr != nil || serr != nil {\n\t\treturn fmt.Errorf(\"Failed to unenforce. 
proxy: %s transport: %s secret: %s\", perr, nerr, serr)\n\t}\n\n\treturn nil\n}\n\nfunc (e *enforcer) SetTargetNetworks(cfg *runtime.Configuration) error {\n\treturn e.transport.SetTargetNetworks(cfg)\n}\n\n// UpdateSecrets updates the secrets of the enforcers\nfunc (e *enforcer) UpdateSecrets(secrets secrets.Secrets) error {\n\tif e.proxy != nil {\n\t\tif err := e.proxy.UpdateSecrets(secrets); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif e.transport != nil {\n\t\tif err := e.transport.UpdateSecrets(secrets); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// SetLogLevel sets log level.\nfunc (e *enforcer) SetLogLevel(level constants.LogLevel) error {\n\n\tif e.transport != nil {\n\t\tif err := e.transport.SetLogLevel(level); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// CleanUp implements the cleanup interface.\nfunc (e *enforcer) CleanUp() error {\n\tif e.transport != nil {\n\t\treturn e.transport.CleanUp()\n\t}\n\treturn nil\n}\n\n// GetBPFObject returns the bpf object\nfunc (e *enforcer) GetBPFObject() ebpf.BPFModule {\n\treturn e.transport.GetBPFObject()\n}\n\n// GetServiceMeshType returns the serviceMesh type\nfunc (e *enforcer) GetServiceMeshType() policy.ServiceMesh {\n\treturn e.transport.GetServiceMeshType()\n}\n\n// GetFilterQueue returns the current FilterQueueConfig of the transport path.\nfunc (e *enforcer) GetFilterQueue() fqconfig.FilterQueue {\n\treturn e.transport.GetFilterQueue()\n}\n\n// EnableDatapathPacketTracing implements the datapath packet tracing\nfunc (e *enforcer) EnableDatapathPacketTracing(ctx context.Context, contextID string, direction packettracing.TracingDirection, interval time.Duration) error {\n\treturn e.transport.EnableDatapathPacketTracing(ctx, contextID, direction, interval)\n\n}\n\n// EnableIPTablesPacketTracing enables iptables -j trace for the particular pu, providing a much wider packet stream.\nfunc (e *enforcer) EnableIPTablesPacketTracing(ctx context.Context, contextID string, 
interval time.Duration) error {\n\treturn nil\n}\n\n// Ping runs ping to the given config.\nfunc (e *enforcer) Ping(ctx context.Context, contextID string, pingConfig *policy.PingConfig) error {\n\n\tsrv := policy.ServiceL3\n\tsctx, sdata, err := e.proxy.ServiceData(\n\t\tcontextID,\n\t\tpingConfig.IP,\n\t\tint(pingConfig.Port),\n\t\tpingConfig.ServiceAddresses)\n\tif err == nil {\n\t\tsrv = sdata.ServiceObject.Type\n\t}\n\n\tswitch pingConfig.Mode {\n\tcase gaia.ProcessingUnitRefreshPingModeAuto:\n\t\tswitch srv {\n\t\tcase policy.ServiceHTTP, policy.ServiceTCP:\n\t\t\treturn e.proxy.Ping(ctx, contextID, sctx, sdata, pingConfig)\n\t\tdefault:\n\t\t\treturn e.transport.Ping(ctx, contextID, pingConfig)\n\t\t}\n\n\tcase gaia.ProcessingUnitRefreshPingModeL3:\n\t\treturn e.transport.Ping(ctx, contextID, pingConfig)\n\n\tcase gaia.ProcessingUnitRefreshPingModeL4, gaia.ProcessingUnitRefreshPingModeL7:\n\t\treturn e.proxy.Ping(ctx, contextID, sctx, sdata, pingConfig)\n\n\tdefault:\n\t\treturn e.transport.Ping(ctx, contextID, pingConfig)\n\t}\n}\n\nfunc (e *enforcer) DebugCollect(ctx context.Context, contextID string, debugConfig *policy.DebugConfig) error {\n\t// this is handled in remoteenforcer\n\treturn nil\n}\n\n// New returns a new policy enforcer that implements both the data paths.\nfunc New(\n\tmutualAuthorization bool,\n\tfqConfig fqconfig.FilterQueue,\n\tcollector collector.EventCollector,\n\tsecrets secrets.Secrets,\n\tserverID string,\n\tvalidity time.Duration,\n\tmode constants.ModeType,\n\tprocMountPoint string,\n\texternalIPCacheTimeout time.Duration,\n\tpacketLogs bool,\n\tcfg *runtime.Configuration,\n\ttokenIssuer common.ServiceTokenIssuer,\n\tisBPFEnabled bool,\n\tagentVersion semver.Version,\n\tserviceMeshType policy.ServiceMesh,\n) (Enforcer, error) {\n\tif mode == constants.RemoteContainerEnvoyAuthorizer || mode == constants.LocalEnvoyAuthorizer {\n\t\treturn envoyauthorizer.NewEnvoyAuthorizerEnforcer(mode, collector, externalIPCacheTimeout, secrets, 
tokenIssuer)\n\t}\n\n\ttokenAccessor, err := tokenaccessor.New(serverID, validity, secrets)\n\n\tif err != nil {\n\t\tzap.L().Fatal(\"Cannot create a token engine\")\n\t}\n\n\tdatapathKeypair, err := ephemeralkeys.NewWithRenewal()\n\tif err != nil {\n\t\tzap.L().Fatal(\"Cannot create a ephemeral key pair\", zap.Error(err))\n\t}\n\n\tpuFromContextID := cache.NewCache(\"puFromContextID\")\n\n\ttransport := nfqdatapath.New(\n\t\tmutualAuthorization,\n\t\tfqConfig,\n\t\tcollector,\n\t\tserverID,\n\t\tvalidity,\n\t\tsecrets,\n\t\tmode,\n\t\tprocMountPoint,\n\t\texternalIPCacheTimeout,\n\t\tpacketLogs,\n\t\ttokenAccessor,\n\t\tpuFromContextID,\n\t\tcfg,\n\t\tisBPFEnabled,\n\t\tagentVersion,\n\t\tserviceMeshType,\n\t)\n\tvar appProxy *applicationproxy.AppProxy\n\n\tif serviceMeshType == policy.None {\n\t\tappProxy, err = applicationproxy.NewAppProxy(tokenAccessor, collector, puFromContextID, secrets, tokenIssuer, datapathKeypair, agentVersion)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"App proxy %s\", err)\n\t\t}\n\t}\n\n\treturn &enforcer{\n\t\tproxy:     appProxy,\n\t\ttransport: transport,\n\t}, nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/envoyauthorizer/envoyauthorizerenforcer.go",
    "content": "package envoyauthorizer\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"fmt\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/serviceregistry\"\n\tenforcerconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/envoyauthorizer/envoyproxy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/metadata\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ebpf\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packettracing\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n)\n\n// Enforcer implements the Enforcer interface as an envoy authorizer\n// and starts envoy external authz filter gRPC servers for enforcement.\ntype Enforcer struct {\n\tmode                   constants.ModeType\n\tcollector              collector.EventCollector\n\texternalIPCacheTimeout time.Duration\n\tsecrets                secrets.Secrets\n\ttokenIssuer            common.ServiceTokenIssuer\n\n\tpuContexts   cache.DataStore\n\tclients      cache.DataStore\n\tsystemCAPool *x509.CertPool\n\n\tmetadata *metadata.Client\n\tsync.RWMutex\n}\n\n// envoyAuthzServers, envoy servers used my enforcer\ntype envoyServers struct {\n\tingress *envoyproxy.AuthServer\n\tegress  *envoyproxy.AuthServer\n\tsds     *envoyproxy.SdsServer\n}\n\n// NewEnvoyAuthorizerEnforcer creates a 
new envoy authorizer\nfunc NewEnvoyAuthorizerEnforcer(mode constants.ModeType, eventCollector collector.EventCollector, externalIPCacheTimeout time.Duration, secrets secrets.Secrets, tokenIssuer common.ServiceTokenIssuer) (*Enforcer, error) {\n\t// abort if this is not the right mode\n\tif mode != constants.RemoteContainerEnvoyAuthorizer && mode != constants.LocalEnvoyAuthorizer {\n\t\treturn nil, fmt.Errorf(\"enforcer mode type must be either RemoteContainerEnvoyAuthorizer or LocalEnvoyAuthorizer, got: %d\", mode)\n\t}\n\tzap.L().Info(\"Creating Envoy Authorizer Enforcer\")\n\t// same logic as in the nfqdatapath\n\tif externalIPCacheTimeout <= 0 {\n\t\tvar err error\n\t\texternalIPCacheTimeout, err = time.ParseDuration(enforcerconstants.DefaultExternalIPTimeout)\n\t\tif err != nil {\n\t\t\texternalIPCacheTimeout = time.Second\n\t\t}\n\t}\n\n\t// same logic as in app proxy\n\tsystemPool, err := x509.SystemCertPool()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif ok := systemPool.AppendCertsFromPEM(secrets.CertAuthority()); !ok {\n\t\treturn nil, fmt.Errorf(\"error while adding provided CA\")\n\t}\n\t// TODO: systemPool needs the same treatment as the AppProxy and a `processCertificateUpdates` and `expandCAPool` implementation as well\n\treturn &Enforcer{\n\t\tmode:                   mode,\n\t\tcollector:              eventCollector,\n\t\texternalIPCacheTimeout: externalIPCacheTimeout,\n\t\tsecrets:                secrets,\n\t\ttokenIssuer:            tokenIssuer,\n\t\tpuContexts:             cache.NewCache(\"puContexts\"),\n\t\tclients:                cache.NewCache(\"clients\"),\n\t\t// auth:                   apiauth.New(puContexts, registry, secrets),\n\t\t// metadata:               metadata.NewClient(puContext, registry, tokenIssuer),\n\t}, nil\n}\n\n// Secrets implements the LockedSecrets\nfunc (e *Enforcer) Secrets() (secrets.Secrets, func()) {\n\te.RLock()\n\treturn e.secrets, e.RUnlock\n}\n\n// Enforce starts enforcing policies for the given 
policy.PUInfo.\n// Here we do the following:\n// 1. always create a new PU and instantiate a new apiAuth, as we want to be as stateless as possible.\n// 2. create a PUContext, as it will be used by the auth code.\n// 3. if the envoy servers are not present, create all 3 envoy servers.\n// 4. if the servers are already present during a policy update, update the service certs.\nfunc (e *Enforcer) Enforce(ctx context.Context, contextID string, puInfo *policy.PUInfo) error {\n\te.Lock()\n\tdefer e.Unlock()\n\n\tzap.L().Debug(\"Enforce for the envoy for pu\", zap.String(\"puID\", contextID))\n\t// Here we first need to create a PUContext, as the PU context will derive the\n\t// serviceCtxt which will be used by the authorizer to determine the policyInfo.\n\n\tpu, err := pucontext.NewPU(contextID, puInfo, nil, e.externalIPCacheTimeout)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"error creating new pu: %s\", err)\n\t}\n\t// Add the puContext to the cache, as we need it later while serving requests.\n\te.puContexts.AddOrUpdate(contextID, pu)\n\n\tsctx, err := serviceregistry.Instance().Register(contextID, puInfo, pu, e.secrets)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"policy conflicts detected: %s\", err)\n\t}\n\n\tcaPool := e.expandCAPool(sctx.RootCA)\n\n\t// now instantiate the apiAuth and metadata\n\t// create a new server if it doesn't exist yet\n\tif _, err := e.clients.Get(contextID); err != nil {\n\t\tzap.L().Debug(\"creating new auth and sds servers\", zap.String(\"puID\", contextID))\n\t\tingressServer, err := envoyproxy.NewExtAuthzServer(contextID, e.puContexts, e.collector, envoyproxy.IngressDirection, e.secrets, e.tokenIssuer)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"Cannot create and run IngressServer\", zap.Error(err))\n\t\t\treturn err\n\t\t}\n\n\t\tegressServer, err := envoyproxy.NewExtAuthzServer(contextID, e.puContexts, e.collector, envoyproxy.EgressDirection, e.secrets, e.tokenIssuer)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"Cannot create and run 
EgressServer\", zap.Error(err))\n\t\t\tingressServer.Stop()\n\t\t\treturn err\n\t\t}\n\t\tsdsServer, err := envoyproxy.NewSdsServer(contextID, puInfo, caPool, e.secrets)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"Cannot create and run SdsServer\", zap.Error(err))\n\t\t\treturn err\n\t\t}\n\t\t// Add the EnvoyServers to our cache\n\t\tif err := e.clients.Add(contextID, &envoyServers{ingress: ingressServer, egress: egressServer, sds: sdsServer}); err != nil {\n\t\t\tingressServer.Stop()\n\t\t\tegressServer.Stop()\n\t\t\tsdsServer.Stop()\n\t\t\treturn err\n\t\t}\n\n\t} else {\n\t\t// we have this client already, this is only a policy update\n\t\tzap.L().Debug(\"handling policy update for envoy servers\", zap.String(\"puID\", contextID))\n\t\t// For updates we need to update the certificates if we have new ones. Otherwise\n\t\t// we return. There is nothing else to do in case of policy update.\n\t\t// this required for the Envoy servers.\n\t\tif c, cerr := e.clients.Get(contextID); cerr == nil {\n\t\t\t_, perr := e.processCertificateUpdates(puInfo, c.(*envoyServers), caPool)\n\t\t\tif perr != nil {\n\t\t\t\tzap.L().Error(\"unable to update certificates for services\", zap.Error(perr))\n\t\t\t\treturn perr\n\t\t\t}\n\t\t\treturn nil\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// processCertificateUpdates processes the certificate information and updates\n// the servers.\nfunc (e *Enforcer) processCertificateUpdates(puInfo *policy.PUInfo, server *envoyServers, caPool *x509.CertPool) (bool, error) {\n\n\t// If there are certificates provided, we will need to update them for the\n\t// services. 
If the certificates are nil, we ignore them.\n\tcertPEM, keyPEM, caPEM := puInfo.Policy.ServiceCertificates()\n\tif certPEM == \"\" || keyPEM == \"\" {\n\t\treturn false, nil\n\t}\n\n\t// Process any updates on the cert pool\n\tif caPEM != \"\" {\n\t\tif !caPool.AppendCertsFromPEM([]byte(caPEM)) {\n\t\t\tzap.L().Warn(\"Failed to add Services CA\")\n\t\t}\n\t}\n\n\t// Create the TLS certificate\n\ttlsCert, err := tls.X509KeyPair([]byte(certPEM), []byte(keyPEM))\n\tif err != nil {\n\t\treturn false, fmt.Errorf(\"invalid certificates: %s\", err)\n\t}\n\t// Update the enforcer secrets here because we are using the LockedSecrets.\n\t// Also, send an update event to the SDS server so it can send a new cert to the envoy sidecar.\n\t// Update all the server certs; the write lock has already been acquired by the Enforce function, so there is no need to lock again.\n\tserver.ingress.UpdateSecrets(&tlsCert, caPool, e.secrets, certPEM, keyPEM)\n\tserver.egress.UpdateSecrets(&tlsCert, caPool, e.secrets, certPEM, keyPEM)\n\tserver.sds.UpdateSecrets(&tlsCert, caPool, e.secrets, certPEM, keyPEM)\n\n\tif e.metadata != nil {\n\t\te.metadata.UpdateSecrets([]byte(certPEM), []byte(keyPEM))\n\t}\n\treturn true, nil\n}\n\nfunc (e *Enforcer) expandCAPool(externalCAs [][]byte) *x509.CertPool {\n\tsystemPool, err := x509.SystemCertPool()\n\tif err != nil {\n\t\tzap.L().Error(\"cannot process system pool\", zap.Error(err))\n\t\treturn e.systemCAPool\n\t}\n\tif ok := systemPool.AppendCertsFromPEM(e.secrets.CertAuthority()); !ok {\n\t\tzap.L().Error(\"cannot append system CA\")\n\t\treturn e.systemCAPool\n\t}\n\tfor _, ca := range externalCAs {\n\t\tif ok := systemPool.AppendCertsFromPEM(ca); !ok {\n\t\t\tzap.L().Error(\"cannot append external service ca\", zap.String(\"CA\", string(ca)))\n\t\t}\n\t}\n\treturn systemPool\n}\n\n// Unenforce stops enforcing policy for the given IP.\nfunc (e *Enforcer) Unenforce(ctx context.Context, contextID string) error {\n\te.Lock()\n\tdefer 
e.Unlock()\n\n\t// stop the authz servers\n\trawAuthzServers, err := e.clients.Get(contextID)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tserver := rawAuthzServers.(*envoyServers)\n\tshutdownCtx, shutdownCtxCancel := context.WithTimeout(ctx, time.Second*10)\n\tdefer shutdownCtxCancel()\n\n\tvar wg sync.WaitGroup\n\tshutdownCh := make(chan struct{})\n\twg.Add(3)\n\tgo func() {\n\t\tserver.ingress.GracefulStop()\n\t\twg.Done()\n\t}()\n\tgo func() {\n\t\tserver.egress.GracefulStop()\n\t\twg.Done()\n\t}()\n\tgo func() {\n\t\tserver.sds.GracefulStop()\n\t\twg.Done()\n\t}()\n\tgo func() {\n\t\twg.Wait()\n\t\tshutdownCh <- struct{}{}\n\t}()\n\n\tselect {\n\tcase <-shutdownCtx.Done():\n\t\tzap.L().Warn(\"Graceful shutdown of envoy server did not finish in time. Shutting down hard now...\", zap.String(\"puID\", contextID), zap.Error(shutdownCtx.Err()))\n\t\tvar wg sync.WaitGroup\n\t\twg.Add(3)\n\t\tgo func() {\n\t\t\tserver.ingress.Stop()\n\t\t\twg.Done()\n\t\t}()\n\t\tgo func() {\n\t\t\tserver.egress.Stop()\n\t\t\twg.Done()\n\t\t}()\n\t\tgo func() {\n\t\t\tserver.sds.Stop()\n\t\t\twg.Done()\n\t\t}()\n\t\twg.Wait()\n\tcase <-shutdownCh:\n\t}\n\n\tif err := e.puContexts.RemoveWithDelay(contextID, 10*time.Second); err != nil {\n\t\tzap.L().Debug(\"Unable to remove PU context from cache\", zap.String(\"puID\", contextID), zap.Error(err))\n\t}\n\n\treturn nil\n}\n\n// UpdateSecrets -- updates the secrets of running enforcers managed by trireme. 
Remote enforcers will get the secret updates with the next policy push\nfunc (e *Enforcer) UpdateSecrets(secrets secrets.Secrets) error {\n\te.Lock()\n\tdefer e.Unlock()\n\te.secrets = secrets\n\treturn nil\n}\n\n// SetTargetNetworks is unimplemented in the envoy authorizer\nfunc (e *Enforcer) SetTargetNetworks(cfg *runtime.Configuration) error {\n\treturn nil\n}\n\n// SetLogLevel is unimplemented in the envoy authorizer\nfunc (e *Enforcer) SetLogLevel(level constants.LogLevel) error {\n\treturn nil\n}\n\n// CleanUp is unimplemented in the envoy authorizer\nfunc (e *Enforcer) CleanUp() error {\n\treturn nil\n}\n\n// Run is unimplemented in the envoy authorizer\nfunc (e *Enforcer) Run(ctx context.Context) error {\n\treturn nil\n}\n\n// GetBPFObject is unimplemented in the envoy authorizer\nfunc (e *Enforcer) GetBPFObject() ebpf.BPFModule {\n\treturn nil\n}\n\n// GetServiceMeshType is unimplemented in the envoy authorizer\nfunc (e *Enforcer) GetServiceMeshType() policy.ServiceMesh {\n\treturn policy.None\n}\n\n// GetFilterQueue is unimplemented in the envoy authorizer\nfunc (e *Enforcer) GetFilterQueue() fqconfig.FilterQueue {\n\treturn nil\n}\n\n// EnableDatapathPacketTracing is unimplemented in the envoy authorizer\nfunc (e *Enforcer) EnableDatapathPacketTracing(ctx context.Context, contextID string, direction packettracing.TracingDirection, interval time.Duration) error {\n\treturn nil\n}\n\n// EnableIPTablesPacketTracing is unimplemented in the envoy authorizer\nfunc (e *Enforcer) EnableIPTablesPacketTracing(ctx context.Context, contextID string, interval time.Duration) error {\n\treturn nil\n}\n\n// Ping is unimplemented in the envoy authorizer\nfunc (e *Enforcer) Ping(ctx context.Context, contextID string, pingConfig *policy.PingConfig) error {\n\treturn nil\n}\n\n// DebugCollect is unimplemented in the envoy authorizer\nfunc (e *Enforcer) DebugCollect(ctx context.Context, contextID string, debugConfig *policy.DebugConfig) error {\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/envoyauthorizer/envoyproxy/auth_server.go",
    "content": "package envoyproxy\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"sync\"\n\n\tenvoy_core \"github.com/envoyproxy/go-control-plane/envoy/api/v2/core\"\n\text_auth \"github.com/envoyproxy/go-control-plane/envoy/service/auth/v2\"\n\tenvoy_type \"github.com/envoyproxy/go-control-plane/envoy/type\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/apiauth\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/flowstats\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/metadata\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n\t\"google.golang.org/genproto/googleapis/rpc/code\"\n\t\"google.golang.org/grpc\"\n\n\t//rpc \"istio.io/gogo-genproto/googleapis/google/rpc\"\n\tstatus \"google.golang.org/genproto/googleapis/rpc/status\"\n)\n\nconst (\n\t// IngressSocketPath is the unix socket path where the authz server will be listening on for the ingress authz server\n\t//IngressSocketPath = \"@aporeto_envoy_authz_ingress\"\n\tIngressSocketPath = \"127.0.0.1:1999\"\n\n\t// EgressSocketPath is the unix socket path where the authz server will be listening on for the egress authz server\n\tEgressSocketPath = \"127.0.0.1:1998\"\n\t//EgressSocketPath = \"@aporeto_envoy_authz_egress\"\n\n\t// aporetoKeyHeader is the HTTP header name for the key header\n\taporetoKeyHeader = \"x-aporeto-key\"\n\n\t// aporetoAuthHeader is the HTTP header name for the auth header\n\taporetoAuthHeader = \"x-aporeto-auth\"\n)\n\n// Direction is used to indicate if the authorization server is ingress or egress.\n// NOTE: the type is currently set to uint8 and not bool because in Istio there are 3 types:\n// - SIDECAR_INBOUND\n// - SIDECAR_OUTBOUND\n// - 
GATEWAY\n// And we are not sure yet if we need an extra authz server for GATEWAY.\ntype Direction uint8\n\nconst (\n\t// UnknownDirection is only used to denote uninitialized variables\n\tUnknownDirection Direction = 0\n\n\t// IngressDirection refers to inbound / ingress traffic.\n\t// NOTE: for Istio use this in conjunction with SIDECAR_INBOUND\n\tIngressDirection Direction = 1\n\n\t// EgressDirection refers to outbound / egress traffic.\n\t// NOTE: for Istio use this in conjunction with SIDECAR_OUTBOUND\n\tEgressDirection Direction = 2\n)\n\n// String implements the fmt.Stringer interface\nfunc (d Direction) String() string {\n\tswitch d {\n\tcase UnknownDirection:\n\t\treturn \"UnknownDirection\"\n\tcase IngressDirection:\n\t\treturn \"IngressDirection\"\n\tcase EgressDirection:\n\t\treturn \"EgressDirection\"\n\tdefault:\n\t\treturn fmt.Sprintf(\"Unimplemented(%d)\", d)\n\t}\n}\n\n// AuthServer is the gRPC server that implements the envoy external authorization API.\ntype AuthServer struct {\n\tpuID       string\n\tpuContexts cache.DataStore\n\tsecrets    secrets.Secrets\n\tsocketPath string\n\tserver     *grpc.Server\n\tdirection  Direction\n\tcollector  collector.EventCollector\n\tauth       *apiauth.Processor\n\tmetadata   *metadata.Client\n\tsync.RWMutex\n}\n\n// Secrets implements locked secrets\n// func (s *AuthServer) Secrets() secrets.Secrets {\n// \ts.RLock()\n// \tdefer s.RUnlock()\n// \treturn s.secrets\n// }\n\n// NewExtAuthzServer creates a new envoy ext_authz server\nfunc NewExtAuthzServer(puID string, puContexts cache.DataStore, collector collector.EventCollector, direction Direction, secrets secrets.Secrets, tokenIssuer common.ServiceTokenIssuer) (*AuthServer, error) {\n\tvar socketPath string\n\tswitch direction {\n\tcase UnknownDirection:\n\t\treturn nil, fmt.Errorf(\"direction must be set to ingress or egress\")\n\tcase IngressDirection:\n\t\tsocketPath = IngressSocketPath\n\tcase EgressDirection:\n\t\tsocketPath = EgressSocketPath\n\tdefault:\n\t\treturn nil, 
fmt.Errorf(\"direction must be set to ingress or egress\")\n\t}\n\tif direction == UnknownDirection || direction > EgressDirection {\n\t\treturn nil, fmt.Errorf(\"direction must be set to ingress or egress\")\n\t}\n\n\ts := &AuthServer{\n\t\tpuID:       puID,\n\t\tpuContexts: puContexts,\n\t\tsecrets:    secrets,\n\t\tsocketPath: socketPath,\n\t\tserver:     grpc.NewServer(),\n\t\tdirection:  direction,\n\t\tauth:       apiauth.New(puID, secrets),\n\t\tmetadata:   metadata.NewClient(puID, tokenIssuer),\n\t\tcollector:  collector,\n\t}\n\n\t// register with gRPC\n\text_auth.RegisterAuthorizationServer(s.server, s)\n\n\taddr, err := net.ResolveTCPAddr(\"tcp\", s.socketPath)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tnl, err := net.ListenTCP(\"tcp\", addr)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\t// start and listen to the server\n\tzap.L().Debug(\"ext_authz_server: Auth Server started the server on\", zap.Any(\"addr\", nl.Addr()), zap.String(\"puID\", puID))\n\tgo s.run(nl)\n\n\treturn s, nil\n}\n\n// UpdateSecrets updates the secrets\n// Whenever the Envoy makes a request for certificate, the certs and keys are fetched from\n// the Proxy.\nfunc (s *AuthServer) UpdateSecrets(cert *tls.Certificate, caPool *x509.CertPool, secrets secrets.Secrets, certPEM, keyPEM string) {\n\ts.Lock()\n\tdefer s.Unlock()\n\ts.secrets = secrets\n\t// we need update the apiAuth secrets.\n\ts.auth.UpdateSecrets(secrets)\n}\n\nfunc (s *AuthServer) run(lis net.Listener) {\n\tzap.L().Debug(\"Starting to serve gRPC for ext_authz server\", zap.String(\"puID\", s.puID), zap.String(\"direction\", s.direction.String()))\n\tif err := s.server.Serve(lis); err != nil {\n\t\tzap.L().Error(\"gRPC server for ext_authz failed\", zap.String(\"puID\", s.puID), zap.Error(err), zap.String(\"direction\", s.direction.String()))\n\t}\n\tzap.L().Debug(\"stopped serving gRPC for ext_authz server\", zap.String(\"puID\", s.puID), zap.String(\"direction\", s.direction.String()))\n}\n\n// Stop calls the 
function with the same name on the backing gRPC server\nfunc (s *AuthServer) Stop() {\n\ts.server.Stop()\n}\n\n// GracefulStop calls the function with the same name on the backing gRPC server\nfunc (s *AuthServer) GracefulStop() {\n\ts.server.GracefulStop()\n}\n\n// Check implements the AuthorizationServer interface\nfunc (s *AuthServer) Check(ctx context.Context, checkRequest *ext_auth.CheckRequest) (*ext_auth.CheckResponse, error) {\n\tzap.L().Debug(\"Envoy check, DIR\", zap.Uint8(\"dir\", uint8(s.direction)))\n\tswitch s.direction {\n\tcase IngressDirection:\n\t\treturn s.ingressCheck(ctx, checkRequest)\n\tcase EgressDirection:\n\t\treturn s.egressCheck(ctx, checkRequest)\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"direction: %s\", s.direction)\n\t}\n}\n\n// ingressCheck implements the AuthorizationServer for ingress connections\nfunc (s *AuthServer) ingressCheck(ctx context.Context, checkRequest *ext_auth.CheckRequest) (*ext_auth.CheckResponse, error) {\n\n\t// now extract the attributes and call the API auth to decode and check all the claims in request.\n\tvar sourceIP, destIP, aporetoAuth, aporetoKey string\n\tvar source, dest *ext_auth.AttributeContext_Peer\n\tvar httpReq *ext_auth.AttributeContext_HttpRequest\n\tvar destPort, srcPort int\n\tvar urlStr, method, scheme string\n\tattrs := checkRequest.GetAttributes()\n\tif attrs != nil {\n\t\tsource = attrs.GetSource()\n\t\tdest = attrs.GetDestination()\n\n\t\tif source != nil {\n\t\t\tif addr := source.GetAddress(); addr != nil {\n\t\t\t\tif sockAddr := addr.GetSocketAddress(); sockAddr != nil {\n\t\t\t\t\tsourceIP = sockAddr.GetAddress()\n\t\t\t\t\tsrcPort = int(sockAddr.GetPortValue())\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif dest != nil {\n\t\t\tif destAddr := dest.GetAddress(); destAddr != nil {\n\t\t\t\tif destSockAddr := destAddr.GetSocketAddress(); destSockAddr != nil {\n\t\t\t\t\tdestIP = destSockAddr.GetAddress()\n\t\t\t\t\tdestPort = int(destSockAddr.GetPortValue())\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif 
request := attrs.GetRequest(); request != nil {\n\t\t\thttpReq = request.GetHttp()\n\t\t\tif httpReq != nil {\n\t\t\t\thttpReqHeaders := httpReq.GetHeaders()\n\t\t\t\taporetoAuth, _ = httpReqHeaders[aporetoAuthHeader] // nolint\n\t\t\t\taporetoKey, _ = httpReqHeaders[aporetoKeyHeader]   // nolint\n\t\t\t\tzap.L().Debug(\"ext_authz ingress\", zap.Any(\"httpReqHeaders\", httpReqHeaders), zap.String(\"aporetoKey\", aporetoKey))\n\t\t\t\turlStr = httpReq.GetPath()\n\t\t\t\tmethod = httpReq.GetMethod()\n\t\t\t\tscheme = httpReq.GetScheme()\n\n\t\t\t}\n\t\t}\n\t}\n\tzap.L().Debug(\"ext_authz ingress\", zap.String(\"source addr\", sourceIP), zap.String(\"source, dest\", source.GetAddress().GetSocketAddress().GetAddress()), zap.String(\"dest addr\", dest.GetAddress().GetSocketAddress().GetAddress()))\n\tzap.L().Debug(\"ext_authz ingress\", zap.Any(\"destPort\", destPort), zap.Any(\"srcPort\", srcPort), zap.String(\"scheme\", scheme))\n\n\trequestCookie := &http.Cookie{Name: aporetoAuthHeader, Value: aporetoAuth} // nolint errcheck\n\thdr := make(http.Header)\n\n\thdr.Add(aporetoAuthHeader, aporetoAuth) //string(p.secrets.TransmittedKey()))\n\thdr.Add(aporetoKeyHeader, aporetoKey)   //resp.Token)\n\n\t// Create the new target URL based on the method+path parameter that we had.\n\tURL, err := url.ParseRequestURI(\"http:\" + method + urlStr)\n\tif err != nil {\n\t\tzap.L().Error(\"ext_authz ingress: Cannot parse the URI\", zap.Error(err))\n\t\treturn nil, err\n\t}\n\tzap.L().Debug(\"ext_authz ingress\", zap.String(\"URL\", URL.String()))\n\trequest := &apiauth.Request{\n\t\tOriginalDestination: &net.TCPAddr{IP: net.ParseIP(destIP), Port: destPort},\n\t\tSourceAddress:       &net.TCPAddr{IP: net.ParseIP(sourceIP), Port: srcPort},\n\t\tHeader:              hdr,\n\t\tURL:                 URL,\n\t\tMethod:              method,\n\t\tRequestURI:          \"\",\n\t\tCookie:              requestCookie,\n\t\tTLS:                 nil,\n\t}\n\n\tresponse, err := 
s.auth.NetworkRequest(ctx, request)\n\tvar userID string\n\tif response != nil && len(response.UserAttributes) > 0 {\n\t\tuserData := &collector.UserRecord{\n\t\t\tNamespace: response.Namespace,\n\t\t\tClaims:    response.UserAttributes,\n\t\t}\n\t\ts.collector.CollectUserEvent(userData)\n\t\tuserID = userData.ID\n\t}\n\n\tstate := flowstats.NewNetworkConnectionState(s.puID, userID, request, response)\n\tdefer s.collector.CollectFlowEvent(state.Stats)\n\n\tif err != nil {\n\t\tif response == nil {\n\t\t\tzap.L().Error(\"ext_authz ingress: auth.Networkrequest response is nil\")\n\t\t\treturn createDeniedCheckResponse(code.Code_PERMISSION_DENIED, envoy_type.StatusCode_Forbidden, \"No aporeto service installed\"), nil\n\t\t}\n\t\treturn createDeniedCheckResponse(code.Code_PERMISSION_DENIED, envoy_type.StatusCode_Forbidden, \"Access not authorized by network policy\"), nil\n\t}\n\tif response.Action.Rejected() {\n\t\tzap.L().Error(\"ext_authz ingress: Access *NOT* authorized by network policy\", zap.String(\"puID\", s.puID))\n\t\t//flow.DropReason = \"access not authorized by network policy\"\n\t\treturn createDeniedCheckResponse(code.Code_PERMISSION_DENIED, envoy_type.StatusCode_Forbidden, \"Access not authorized by network policy\"), nil\n\t}\n\tzap.L().Debug(\"ext_authz ingress: Access authorized by network policy\", zap.String(\"puID\", s.puID), zap.String(\"dst: \", destIP), zap.String(\"src: \", sourceIP))\n\treturn &ext_auth.CheckResponse{\n\t\tStatus: &status.Status{\n\t\t\tCode: int32(code.Code_OK),\n\t\t},\n\t\tHttpResponse: &ext_auth.CheckResponse_OkResponse{\n\t\t\tOkResponse: &ext_auth.OkHttpResponse{},\n\t\t},\n\t}, nil\n}\n\n// egressCheck implements the AuthorizationServer for egress connections\nfunc (s *AuthServer) egressCheck(_ context.Context, checkRequest *ext_auth.CheckRequest) (*ext_auth.CheckResponse, error) {\n\tzap.L().Debug(\"ext_authz egress: checkRequest\", zap.String(\"puID\", s.puID), zap.String(\"checkRequest\", 
checkRequest.String()))\n\n\tvar sourceIP, destIP string\n\tvar source, dest *ext_auth.AttributeContext_Peer\n\tvar httpReq *ext_auth.AttributeContext_HttpRequest\n\tvar destPort, srcPort int\n\tvar urlStr, method string\n\tattrs := checkRequest.GetAttributes()\n\tif attrs != nil {\n\t\tsource = attrs.GetSource()\n\t\tdest = attrs.GetDestination()\n\n\t\tif source != nil {\n\t\t\tif addr := source.GetAddress(); addr != nil {\n\t\t\t\tif sockAddr := addr.GetSocketAddress(); sockAddr != nil {\n\t\t\t\t\tsourceIP = sockAddr.GetAddress()\n\t\t\t\t\tsrcPort = int(sockAddr.GetPortValue())\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif dest != nil {\n\t\t\tif destAddr := dest.GetAddress(); destAddr != nil {\n\t\t\t\tif destSockAddr := destAddr.GetSocketAddress(); destSockAddr != nil {\n\t\t\t\t\tdestIP = destSockAddr.GetAddress()\n\t\t\t\t\tdestPort = int(destSockAddr.GetPortValue())\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif request := attrs.GetRequest(); request != nil {\n\t\t\thttpReq = request.GetHttp()\n\t\t\turlStr = httpReq.GetPath()\n\t\t\tmethod = httpReq.GetMethod()\n\t\t}\n\t}\n\t// Create the new target URL based on the path parameter that we have from envoy.\n\tURL, err := url.ParseRequestURI(urlStr)\n\tif err != nil {\n\t\tzap.L().Error(\"ext_authz egress: Cannot parse the URI\", zap.Error(err))\n\t\treturn nil, err\n\t}\n\n\tauthRequest := &apiauth.Request{\n\t\tOriginalDestination: &net.TCPAddr{IP: net.ParseIP(destIP), Port: destPort},\n\t\tSourceAddress:       &net.TCPAddr{IP: net.ParseIP(sourceIP), Port: srcPort},\n\t\tURL:                 URL,\n\t\tMethod:              method,\n\t\tRequestURI:          \"\",\n\t}\n\tr := new(http.Request)\n\tr.RemoteAddr = sourceIP\n\n\tresp, err := s.auth.ApplicationRequest(authRequest)\n\tif err != nil {\n\t\tif resp != nil && resp.PUContext != nil {\n\t\t\tstate := flowstats.NewAppConnectionState(s.puID, r, authRequest, resp)\n\t\t\tstate.Stats.Action = resp.Action\n\t\t\tstate.Stats.PolicyID = 
resp.NetworkPolicyID\n\t\t\ts.collector.CollectFlowEvent(state.Stats)\n\t\t}\n\t\tzap.L().Error(\"ext_authz egress: Access *NOT* authorized by network policy\", zap.String(\"puID\", s.puID), zap.Error(err))\n\t\t//flow.DropReason = \"access not authorized by network policy\"\n\t\treturn createDeniedCheckResponse(code.Code_PERMISSION_DENIED, envoy_type.StatusCode_Forbidden, \"Access not authorized by network policy\"), err\n\t}\n\t// record the flow stats\n\tstate := flowstats.NewAppConnectionState(s.puID, r, authRequest, resp)\n\t// If the flow is external, then collect the stats here as the policy decision has already been made.\n\tif resp.External {\n\t\tdefer s.collector.CollectFlowEvent(state.Stats)\n\t}\n\tif resp.Action.Rejected() {\n\t\tzap.L().Error(\"ext_authz egress: Access action rejected by network policy\", zap.String(\"puID\", s.puID))\n\t\t//flow.DropReason = \"access not authorized by network policy\"\n\t\treturn createDeniedCheckResponse(code.Code_PERMISSION_DENIED, envoy_type.StatusCode_Forbidden, \"Access not authorized by network policy\"), nil\n\t}\n\t// now create the response and inject our identity\n\tzap.L().Debug(\"ext_authz egress: injecting header\", zap.String(\"puID\", s.puID))\n\t// build our identity token\n\tvar transmittedKey []byte\n\tif s.secrets != nil {\n\t\ttransmittedKey = s.secrets.TransmittedKey()\n\t} else {\n\t\tzap.L().Error(\"ext_authz egress: the secrets are nil\")\n\t}\n\tzap.L().Debug(\"ext_authz egress: Request accepted for\", zap.String(\"dst\", destIP))\n\treturn &ext_auth.CheckResponse{\n\t\tStatus: &status.Status{\n\t\t\tCode: int32(code.Code_OK),\n\t\t},\n\t\tHttpResponse: &ext_auth.CheckResponse_OkResponse{\n\t\t\tOkResponse: &ext_auth.OkHttpResponse{\n\t\t\t\tHeaders: []*envoy_core.HeaderValueOption{\n\t\t\t\t\t{\n\t\t\t\t\t\tHeader: &envoy_core.HeaderValue{\n\t\t\t\t\t\t\tKey:   aporetoKeyHeader,\n\t\t\t\t\t\t\tValue: string(transmittedKey),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tHeader: 
&envoy_core.HeaderValue{\n\t\t\t\t\t\t\tKey:   aporetoAuthHeader,\n\t\t\t\t\t\t\tValue: resp.Token,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}, nil\n}\n\nfunc createDeniedCheckResponse(rpcCode code.Code, httpCode envoy_type.StatusCode, body string) *ext_auth.CheckResponse { // nolint\n\treturn &ext_auth.CheckResponse{\n\t\tStatus: &status.Status{\n\t\t\tCode: int32(rpcCode),\n\t\t},\n\t\tHttpResponse: &ext_auth.CheckResponse_DeniedResponse{\n\t\t\tDeniedResponse: &ext_auth.DeniedHttpResponse{\n\t\t\t\tStatus: &envoy_type.HttpStatus{\n\t\t\t\t\tCode: httpCode,\n\t\t\t\t},\n\t\t\t\tBody: body,\n\t\t\t},\n\t\t},\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/envoyauthorizer/envoyproxy/sds_server.go",
    "content": "package envoyproxy\n\nimport (\n\t\"bytes\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"encoding/pem\"\n\t\"fmt\"\n\t\"net\"\n\t\"sync\"\n\t\"time\"\n\n\t\"context\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n\t\"google.golang.org/grpc\"\n\n\tv2 \"github.com/envoyproxy/go-control-plane/envoy/api/v2\"\n\tenvoy_api_v2_auth \"github.com/envoyproxy/go-control-plane/envoy/api/v2/auth\"\n\tenvoy_api_v2_core \"github.com/envoyproxy/go-control-plane/envoy/api/v2/core\"\n\tsds \"github.com/envoyproxy/go-control-plane/envoy/service/discovery/v2\"\n\n\t\"github.com/golang/protobuf/ptypes\"\n\t//\"github.com/gogo/protobuf/types\"\n\t\"google.golang.org/grpc/metadata\"\n)\n\nconst (\n\t// SdsSocketpath is the socket path on which the envoy will talk to the remoteEnforcer.\n\t//SdsSocketpath = \"@aporeto_envoy_sds\"\n\tSdsSocketpath = \"127.0.0.1:2999\"\n\t//SdsSocketpath = \"/var/run/sds/uds_path\"\n\ttypeCertificate = \"CERTIFICATE\"\n)\n\n// Options to create a SDS server to task to envoy\ntype Options struct {\n\tSocketPath string\n}\n\n// sdsCerts is the structure will pass the upstream certs downwards.\ntype sdsCerts struct {\n\tkey    string\n\tcert   string\n\tcaPool *x509.CertPool\n}\n\n// SdsDiscoveryStream is the same as the sds.SecretDiscoveryService_StreamSecretsServer\ntype SdsDiscoveryStream interface {\n\tSend(*v2.DiscoveryResponse) error\n\tRecv() (*v2.DiscoveryRequest, error)\n\tgrpc.ServerStream\n}\n\nvar _ sds.SecretDiscoveryServiceServer = &SdsServer{}\n\n// SdsServer to talk with envoy for sds.\ntype SdsServer struct {\n\tsdsGrpcServer   *grpc.Server\n\tsdsGrpcListener net.Listener\n\n\terrCh   chan error\n\tpuInfo  *policy.PUInfo\n\tcert    *tls.Certificate\n\tca      *x509.CertPool\n\tkeyPEM  string\n\tcertPEM string\n\tsecrets secrets.Secrets\n\tsync.RWMutex\n\t// conncache is a cache of 
the sdsConnection; the key is the connectionID and the value is the secret.\n\tconncache cache.DataStore\n\t// updCertsChannel is used whenever there is a cert update or an Enforce call\n\tupdCertsChannel chan sdsCerts\n\tconnMap         map[string]bool\n}\n\ntype secretItem struct {\n\tCertificateChain []byte\n\tPrivateKey       []byte\n\n\tRootCert []byte\n\n\t// RootCertOwnedByCompoundSecret is true if this SecretItem was created by a\n\t// K8S secret having both server cert/key and client ca and should be deleted\n\t// with the secret.\n\tRootCertOwnedByCompoundSecret bool\n\n\t// ResourceName passed from envoy SDS discovery request.\n\t// \"ROOTCA\" for root cert request, \"default\" for key/cert request.\n\tResourceName string\n\n\t// Credential token passed from envoy, caClient uses this token to send\n\t// CSR to CA to sign certificate.\n\tToken string\n\n\t// Version is used (together with token and ResourceName) to identify a discovery request from\n\t// envoy; it is used only for confirmation purposes.\n\tVersion string\n\n\tCreatedTime time.Time\n\n\tExpireTime time.Time\n}\n\n// clientConn is the ID for the connection between the client and the SDS server.\ntype clientConn struct {\n\tclientID string\n\t// the TLS cert information cached for this particular connection\n\tsecret *secretItem\n\n\t// connectionID is the ID for each new request, make it a combo of nodeID+counter.\n\tconnectionID string\n\tstream       SdsDiscoveryStream\n}\n\n// NewSdsServer creates an instance of the server.\nfunc NewSdsServer(contextID string, puInfo *policy.PUInfo, caPool *x509.CertPool, secrets secrets.Secrets) (*SdsServer, error) {\n\tif puInfo == nil {\n\t\tzap.L().Error(\"SDS Server: puInfo NIL \")\n\t\treturn nil, fmt.Errorf(\"the puinfo cannot be nil\")\n\t}\n\n\tsdsOptions := &Options{SocketPath: SdsSocketpath}\n\tsdsServer := &SdsServer{\n\t\tpuInfo:          puInfo,\n\t\tca:              caPool,\n\t\terrCh:           make(chan error),\n\t\tsecrets:         secrets,\n\t\tconncache:       
cache.NewCache(\"servers\"),\n\t\tupdCertsChannel: make(chan sdsCerts),\n\t\tconnMap:         make(map[string]bool),\n\t}\n\tif err := sdsServer.CreateSdsService(sdsOptions); err != nil {\n\t\tzap.L().Error(\"SDS Server:Error while starting the envoy sds server.\")\n\t\treturn nil, err\n\t}\n\tzap.L().Debug(\"SDS Server: SDS start success\", zap.String(\"pu\", puInfo.ContextID))\n\treturn sdsServer, nil\n}\n\n// CreateSdsService does the following\n// 1. create grpc server.\n// 2. create a listener on the Unix Domain Socket.\n// 3.\nfunc (s *SdsServer) CreateSdsService(options *Options) error { // nolint: unparam\n\ts.sdsGrpcServer = grpc.NewServer()\n\ts.register(s.sdsGrpcServer)\n\n\taddr, err := net.ResolveTCPAddr(\"tcp\", options.SocketPath)\n\tif err != nil {\n\t\treturn err\n\t}\n\tnl, err := net.ListenTCP(\"tcp\", addr)\n\tif err != nil {\n\t\treturn err\n\t}\n\t// if err := os.Remove(options.SocketPath); err != nil && !os.IsNotExist(err) {\n\t// \tzap.L().Error(\"SDS Server: envoy-reireme, failed to remove the udspath\", zap.Error(err))\n\t// \treturn err\n\t// }\n\t// zap.L().Debug(\"SDS Server: Start listening on UDS path\", zap.Any(\"socketPath\", options.SocketPath))\n\t// addr, _ := net.ResolveUnixAddr(\"unix\", options.SocketPath)\n\n\t// sdsGrpcListener, err := net.ListenUnix(\"unix\", addr)\n\t// if err != nil {\n\t// \tzap.L().Error(\"SDS Server:cannot listen on the socketpath\", zap.Error(err))\n\t// \treturn err\n\t// }\n\t// //make sure the socket path can be accessed.\n\t// if _, err := os.Stat(options.SocketPath); err != nil {\n\t// \tzap.L().Error(\"SDS Server: SDS uds file doesn't exist\", zap.String(\"socketPath:\", options.SocketPath))\n\t// \treturn fmt.Errorf(\"sds uds file %q doesn't exist\", options.SocketPath)\n\t// }\n\t// if err := os.Chmod(options.SocketPath, 0666); err != nil {\n\t// \tzap.L().Error(\"SDS Server: Failed to update permission\", zap.String(\"socketPath:\", options.SocketPath))\n\t// \treturn fmt.Errorf(\"failed to 
update %q permission\", options.SocketPath)\n\t// }\n\ts.sdsGrpcListener = nl\n\n\tzap.L().Debug(\"SDS Server: run the grpc server at\", zap.Any(\"addr\", s.sdsGrpcListener.Addr()))\n\ts.Run()\n\treturn nil\n}\n\n// Run starts the sdsGrpcServer to serve\nfunc (s *SdsServer) Run() {\n\tgo func() {\n\t\tif s.sdsGrpcListener != nil {\n\t\t\tif err := s.sdsGrpcServer.Serve(s.sdsGrpcListener); err != nil {\n\t\t\t\tzap.L().Error(\"SDS Server: Error while serve\", zap.Error(err))\n\t\t\t\ts.errCh <- err\n\t\t\t}\n\t\t}\n\t\tzap.L().Error(\"SDS Server: the listener is nil, cannot start the SDS server for\", zap.String(\"puID\", s.puInfo.ContextID))\n\t}()\n}\n\n// Stop stops all the listeners and the grpc servers.\nfunc (s *SdsServer) Stop() {\n\tif s.sdsGrpcListener != nil {\n\t\ts.sdsGrpcListener.Close() // nolint\n\t}\n\tif s.sdsGrpcServer != nil {\n\t\ts.sdsGrpcServer.Stop()\n\t}\n}\n\n// GracefulStop calls the function with the same name on the backing gRPC server\nfunc (s *SdsServer) GracefulStop() {\n\ts.sdsGrpcServer.GracefulStop()\n}\n\n// register adds the SDS handle to the grpc server\nfunc (s *SdsServer) register(sdsGrpcServer *grpc.Server) {\n\tzap.L().Debug(\"SDS Server:  envoy-trireme registering the secret discovery\")\n\tsds.RegisterSecretDiscoveryServiceServer(sdsGrpcServer, s)\n}\n\n// UpdateSecrets updates the secrets\n// Whenever the Envoy makes a request for certificate, the certs and keys are fetched from\n// the Proxy.\nfunc (s *SdsServer) UpdateSecrets(cert *tls.Certificate, caPool *x509.CertPool, secrets secrets.Secrets, certPEM, keyPEM string) {\n\ts.Lock()\n\tdefer s.Unlock()\n\n\ts.cert = cert\n\ts.ca = caPool\n\t//s.secrets = secrets\n\ts.certPEM = certPEM\n\ts.keyPEM = keyPEM\n\ts.updCertsChannel <- sdsCerts{key: keyPEM, cert: certPEM, caPool: caPool}\n}\n\n// now implement the interfaces of the SDS grpc server.\n// type SecretDiscoveryServiceServer interface {\n// \tDeltaSecrets(SecretDiscoveryService_DeltaSecretsServer) error\n// 
\tStreamSecrets(SecretDiscoveryService_StreamSecretsServer) error\n// \tFetchSecrets(context.Context, *v2.DiscoveryRequest) (*v2.DiscoveryResponse, error)\n// }\n\n// DeltaSecrets checks for the delta and sends the changes.\nfunc (s *SdsServer) DeltaSecrets(stream sds.SecretDiscoveryService_DeltaSecretsServer) error {\n\treturn nil\n}\n\nfunc startStreaming(stream SdsDiscoveryStream, discoveryReqCh chan *v2.DiscoveryRequest) {\n\tdefer close(discoveryReqCh)\n\tfor {\n\t\treq, err := stream.Recv()\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"SDS Server: Connection terminated with err\", zap.Error(err))\n\t\t\treturn\n\t\t}\n\t\tdiscoveryReqCh <- req\n\t}\n}\n\n// StreamSecrets is the function invoked by envoy in order to pull the certs; it also sends the response back to envoy.\n// It does the following:\n// 1. create a receiver thread to stream the requests.\n// 2. parse the discovery request.\n// 3. track the request.\n// 4. call the Aporeto API to generate the secret\nfunc (s *SdsServer) StreamSecrets(stream sds.SecretDiscoveryService_StreamSecretsServer) error {\n\tctx := stream.Context()\n\ttoken := \"\"\n\tmetadata, ok := metadata.FromIncomingContext(ctx)\n\tif !ok {\n\t\treturn fmt.Errorf(\"unable to get metadata from incoming context\")\n\t}\n\tif h, ok := metadata[\"authorization\"]; ok {\n\t\tif len(h) != 1 {\n\t\t\treturn fmt.Errorf(\"credential token from %q must have 1 value in gRPC metadata but got %d\", \"authorization\", len(h))\n\t\t}\n\t\ttoken = h[0]\n\t}\n\tzap.L().Debug(\"SDS Server: IN stream secrets, token\", zap.String(\"token\", token))\n\n\t// create new connection\n\tconn := &clientConn{}\n\tconn.stream = stream\n\tdiscoveryReqCh := make(chan *v2.DiscoveryRequest, 1)\n\tgo startStreaming(stream, discoveryReqCh)\n\n\tfor {\n\t\t// wait for the receiver thread to stream the request and send it to us over here.\n\t\tselect {\n\t\tcase req, ok := <-discoveryReqCh:\n\t\t\t// if req == nil {\n\t\t\t// \tzap.L().Warn(\"SDS Server: The 
request is nil\")\n\t\t\t// \tcontinue\n\t\t\t// }\n\t\t\t// Now check the following:\n\t\t\t// 1. Return if stream is closed.\n\t\t\t// 2. Return if its invalid request.\n\t\t\tif !ok {\n\t\t\t\tzap.L().Error(\"SDS Server: Receiver channel closed, which means the Receiver stream is closed\")\n\t\t\t\treturn fmt.Errorf(\"Receiver closed the channel\")\n\t\t\t}\n\t\t\t// then check for the req.Node\n\t\t\tif req.Node == nil {\n\t\t\t\tzap.L().Error(\"Invalid discovery request with no node\")\n\t\t\t\treturn fmt.Errorf(\"invalid discovery request with no node\")\n\t\t\t}\n\t\t\tif req.ErrorDetail != nil {\n\t\t\t\tzap.L().Error(\"SDS Server: ERROR from envoy for processing the resource\", zap.String(\"error\", req.GetErrorDetail().String()))\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// now according to the Istio pilot SDS secret config we have 2 configs, this configs are pushed to envoy through Istio.\n\t\t\t// 1. SDSDefaultResourceName is the default name in sdsconfig, used for fetching normal key/cert.\n\t\t\t// 2. 
SDSRootResourceName is the sdsconfig name for root CA, used for fetching root cert.\n\t\t\t// therefore from the above we receive 2 requests: one for default and one for the ROOTCA.\n\n\t\t\t// now check for the resourcename, it should have at least one, else continue and stream the next request.\n\t\t\t// according to the definition this could be empty.\n\t\t\tif len(req.ResourceNames) == 0 {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif len(req.ResourceNames) > 1 {\n\t\t\t\treturn fmt.Errorf(\"SDS Server: invalid resourceNames, greater than one\")\n\t\t\t}\n\t\t\tresourceName := req.ResourceNames[0]\n\t\t\tconn.clientID = req.Node.GetId()\n\t\t\tconn.connectionID = createConnID(conn.clientID, resourceName)\n\n\t\t\t// if this is not the 1st request and if the secret is already present then don't proceed, as this is an ACK according to the XDS protocol.\n\t\t\tif req.VersionInfo != \"\" && s.checkSecretPresent(conn.connectionID, req, token) {\n\t\t\t\tzap.L().Warn(\"SDS Server: got an ACK from envoy\", zap.String(\"connectionID\", conn.connectionID), zap.String(\"resourceName\", resourceName), zap.String(\"version\", req.VersionInfo))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tsecret := s.generateSecret(req, token)\n\t\t\tif secret == nil {\n\t\t\t\tzap.L().Error(\"SDS Server: the Certs cannot be served so return nil\")\n\t\t\t\treturn fmt.Errorf(\"the aporeto SDS server cannot generate the secret, the certs are nil\")\n\t\t\t}\n\t\t\tconn.secret = secret\n\t\t\ts.conncache.AddOrUpdate(conn.connectionID, conn)\n\n\t\t\tresp := &v2.DiscoveryResponse{\n\t\t\t\tTypeUrl:     \"type.googleapis.com/envoy.api.v2.auth.Secret\",\n\t\t\t\tVersionInfo: secret.Version,\n\t\t\t\tNonce:       secret.Version,\n\t\t\t}\n\t\t\tretSecret := &envoy_api_v2_auth.Secret{\n\t\t\t\tName: secret.ResourceName,\n\t\t\t}\n\t\t\tif secret.RootCert != nil {\n\t\t\t\tretSecret.Type = getRootCert(secret)\n\t\t\t} else {\n\t\t\t\tretSecret.Type = 
getTLScerts(secret)\n\t\t\t}\n\t\t\tendSecret, err := ptypes.MarshalAny(retSecret)\n\t\t\tif err != nil {\n\t\t\t\tzap.L().Error(\"SDS Server: cannot marshal the secret\", zap.Error(err))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tresp.Resources = append(resp.Resources, endSecret)\n\t\t\tif err = stream.Send(resp); err != nil {\n\t\t\t\tzap.L().Error(\"SDS Server: Failed to send the resp cert\", zap.Error(err))\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif secret.RootCert != nil {\n\t\t\t\tzap.L().Debug(\"SDS Server: Successfully sent root cert\", zap.String(\"rootCA\", string(secret.RootCert)))\n\t\t\t} else {\n\t\t\t\tzap.L().Debug(\"SDS Server: Successfully sent default cert\", zap.String(\"default cert\", string(secret.CertificateChain)))\n\t\t\t}\n\t\tcase updateCerts := <-s.updCertsChannel:\n\t\t\t// first check if the connection is present\n\n\t\t\tif _, err := s.conncache.Get(conn.connectionID); err != nil {\n\t\t\t\tzap.L().Warn(\"SDS server: updCertsChannel, no connID found in cache,\", zap.String(\"connID\", conn.connectionID))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tzap.L().Debug(\"SDS Server: connID found, now send the updated certs\", zap.String(\"connID\", conn.connectionID))\n\t\t\tif updateCerts.key != \"\" && updateCerts.cert != \"\" {\n\t\t\t\tif err := s.sendUpdatedCerts(updateCerts, conn); err != nil {\n\t\t\t\t\tzap.L().Error(\"SDS Server: send updated certs failed\", zap.Error(err))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n}\n\nfunc (s *SdsServer) sendUpdatedCerts(apoSecret sdsCerts, conn *clientConn) error {\n\tvar err error\n\tpemCert := []byte{} // nolint\n\tt := time.Now()\n\n\tif apoSecret.key != \"\" && apoSecret.cert != \"\" {\n\t\tcaPEM := s.secrets.CertAuthority()\n\n\t\tpemCert, err = buildCertChain([]byte(apoSecret.cert), caPEM)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"SDS Server: Cannot build the cert chain\")\n\t\t\treturn fmt.Errorf(\"SDS Server: Cannot build the cert chain\")\n\t\t}\n\n\t\tresp := &v2.DiscoveryResponse{\n\t\t\tTypeUrl:     \"type.googleapis.com/envoy.api.v2.auth.Secret\",\n\t\t\tVersionInfo: 
t.String(),\n\t\t\tNonce:       t.String(),\n\t\t}\n\t\tretSecret := &envoy_api_v2_auth.Secret{\n\t\t\tName: \"default\",\n\t\t}\n\n\t\tretSecret.Type = &envoy_api_v2_auth.Secret_TlsCertificate{\n\t\t\tTlsCertificate: &envoy_api_v2_auth.TlsCertificate{\n\t\t\t\tCertificateChain: &envoy_api_v2_core.DataSource{\n\t\t\t\t\tSpecifier: &envoy_api_v2_core.DataSource_InlineBytes{\n\t\t\t\t\t\tInlineBytes: pemCert,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tPrivateKey: &envoy_api_v2_core.DataSource{\n\t\t\t\t\tSpecifier: &envoy_api_v2_core.DataSource_InlineBytes{\n\t\t\t\t\t\tInlineBytes: []byte(apoSecret.key),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\n\t\tendSecret, err := ptypes.MarshalAny(retSecret)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"SDS Server: cannot marshal the secret\")\n\t\t\treturn fmt.Errorf(\"SDS Server: cannot marshal the secret\")\n\t\t}\n\n\t\tresp.Resources = append(resp.Resources, endSecret)\n\t\tif err = conn.stream.Send(resp); err != nil {\n\t\t\tzap.L().Error(\"SDS Server: Failed to send the resp cert\")\n\t\t\treturn err\n\t\t}\n\n\t}\n\treturn nil\n}\n\nfunc (s *SdsServer) checkSecretPresent(connID string, req *v2.DiscoveryRequest, token string) bool {\n\tval, err := s.conncache.Get(connID)\n\tif err != nil {\n\t\treturn false\n\t}\n\te := val.(*clientConn)\n\treturn e.secret.ResourceName == req.ResourceNames[0] && e.secret.Token == token && e.secret.Version == req.VersionInfo\n}\n\nfunc createConnID(clientID, resourceName string) string {\n\ttemp := clientID + resourceName\n\tzap.L().Debug(\"SDS Server: generated a unique ID\", zap.String(\"connID\", temp), zap.String(\"resource\", resourceName))\n\treturn temp\n}\n\n// FetchSecrets gets the discovery request and calls the Aporeto backend to fetch the certs.\n// 1. parse the discovery request.\n// 2. track the request.\n// 3. 
call the Aporeto API to generate the secret\nfunc (s *SdsServer) FetchSecrets(ctx context.Context, req *v2.DiscoveryRequest) (*v2.DiscoveryResponse, error) {\n\ttoken := \"\"\n\tmetadata, ok := metadata.FromIncomingContext(ctx)\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"unable to get metadata from incoming context\")\n\t}\n\tif h, ok := metadata[\"authorization\"]; ok {\n\t\tif len(h) != 1 {\n\t\t\treturn nil, fmt.Errorf(\"credential token from %q must have 1 value in gRPC metadata but got %d\", \"authorization\", len(h))\n\t\t}\n\t\ttoken = h[0]\n\t}\n\tzap.L().Info(\"SDS Server: IN fetch secrets, token\", zap.String(\"token\", token))\n\tsecret := s.generateSecret(req, token)\n\tif secret == nil {\n\t\treturn nil, fmt.Errorf(\"the aporeto SDS server cannot generate the secret, the certs are nil\")\n\t}\n\n\tresp := &v2.DiscoveryResponse{\n\t\tTypeUrl: \"type.googleapis.com/envoy.api.v2.auth.Secret\",\n\t}\n\tretSecret := &envoy_api_v2_auth.Secret{\n\t\tName: secret.ResourceName,\n\t}\n\tif secret.RootCert != nil {\n\t\tretSecret.Type = getRootCert(secret)\n\t} else {\n\t\tretSecret.Type = getTLScerts(secret)\n\t}\n\tendSecret, err := ptypes.MarshalAny(retSecret)\n\tif err != nil {\n\t\tzap.L().Error(\"SDS Server: cannot marshal the secret\")\n\t\treturn nil, err\n\t}\n\tresp.Resources = append(resp.Resources, endSecret)\n\n\tif secret.RootCert != nil {\n\t\tzap.L().Debug(\"SDS Server: Successfully sent root cert\", zap.Any(\"rootCA\", string(secret.RootCert)))\n\t} else {\n\t\tzap.L().Debug(\"SDS Server: Successfully sent default cert\", zap.Any(\"default cert\", string(secret.CertificateChain)))\n\t}\n\treturn resp, nil\n}\n\n// generateSecret is the call which talks to the metadata API to fetch the certs.\nfunc (s *SdsServer) generateSecret(req *v2.DiscoveryRequest, token string) *secretItem {\n\n\tvar err error\n\tvar pemCert []byte\n\tt := time.Now()\n\tvar expTime time.Time\n\n\tif s.puInfo.Policy == nil {\n\t\tzap.L().Error(\"SDS Server: the policy is nil, the policy cannot be nil\")\n\t\treturn nil\n\t}\n\t// now fetch the certificates for the PU/Service.\n\tcertPEM, keyPEM, _ := 
s.puInfo.Policy.ServiceCertificates()\n\tif certPEM == \"\" || keyPEM == \"\" {\n\t\tzap.L().Error(\"SDS Server:  the certs are empty\")\n\t\treturn nil\n\t}\n\n\tcaPEM := s.secrets.CertAuthority()\n\tif req.ResourceNames[0] == \"default\" {\n\n\t\texpTime, err = getExpTimeFromCert([]byte(certPEM))\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"SDS Server: cannot get exp time\", zap.Error(err))\n\t\t\treturn nil\n\t\t}\n\t\tpemCert, err = buildCertChain([]byte(certPEM), caPEM)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"SDS Server: Cannot build the cert chain\")\n\t\t\treturn nil\n\t\t}\n\n\t} else {\n\n\t\texpTime, err = getExpTimeFromCert(caPEM)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"SDS Server: cannot get exp time\", zap.Error(err))\n\t\t\treturn nil\n\t\t}\n\t\tpemCert, err = getTopRootCa(caPEM)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"SDS Server:  Cannot build the Root cert chain\")\n\t\t\treturn nil\n\t\t}\n\t}\n\tif req.ResourceNames[0] == \"default\" {\n\t\treturn &secretItem{\n\t\t\tCertificateChain: pemCert,\n\t\t\tPrivateKey:       []byte(keyPEM),\n\t\t\t//PrivateKey:   []byte(keyPEMdebug),\n\t\t\tResourceName: req.ResourceNames[0],\n\t\t\tToken:        token,\n\t\t\tCreatedTime:  t,\n\t\t\tExpireTime:   expTime,\n\t\t\tVersion:      t.String(),\n\t\t}\n\t}\n\n\treturn &secretItem{\n\t\tRootCert:     pemCert,\n\t\tResourceName: req.ResourceNames[0],\n\t\tToken:        token,\n\t\tCreatedTime:  t,\n\t\tExpireTime:   expTime,\n\t\tVersion:      t.String(),\n\t}\n\n}\n\nfunc buildCertChain(certPEM, caPEM []byte) ([]byte, error) {\n\tzap.L().Debug(\"SDS Server: BEFORE in buildCertChain certPEM\", zap.String(\"certPEM\", string(certPEM)), zap.String(\"caPEM\", string(caPEM)))\n\tcertChain := []*x509.Certificate{}\n\t//certPEMBlock := caPEM\n\tclientPEMBlock := certPEM\n\tderBlock, _ := pem.Decode(clientPEMBlock)\n\tif derBlock != nil {\n\t\tif derBlock.Type == typeCertificate {\n\t\t\tcert, err := x509.ParseCertificate(derBlock.Bytes)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tcertChain = append(certChain, cert)\n\t\t} else {\n\t\t\treturn nil, fmt.Errorf(\"invalid pem block type: 
%s\", derBlock.Type)\n\t\t}\n\t}\n\tvar certDERBlock *pem.Block\n\tfor {\n\t\tcertDERBlock, caPEM = pem.Decode(caPEM)\n\t\tif certDERBlock == nil {\n\t\t\tbreak\n\t\t}\n\t\tif certDERBlock.Type == typeCertificate {\n\t\t\tcert, err := x509.ParseCertificate(certDERBlock.Bytes)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tcertChain = append(certChain, cert)\n\t\t} else {\n\t\t\treturn nil, fmt.Errorf(\"invalid pem block type: %s\", certDERBlock.Type)\n\t\t}\n\t}\n\tby, _ := x509CertChainToPem(certChain)\n\tzap.L().Debug(\"SDS Server: After building the cert chain\", zap.String(\"certChain\", string(by)))\n\treturn x509CertChainToPem(certChain)\n}\n\n// x509CertToPem converts x509 to byte.\nfunc x509CertToPem(cert *x509.Certificate) ([]byte, error) {\n\tvar pemBytes bytes.Buffer\n\tif err := pem.Encode(&pemBytes, &pem.Block{Type: typeCertificate, Bytes: cert.Raw}); err != nil {\n\t\treturn nil, err\n\t}\n\treturn pemBytes.Bytes(), nil\n}\n\n// x509CertChainToPem converts chain of x509 certs to byte.\nfunc x509CertChainToPem(certChain []*x509.Certificate) ([]byte, error) {\n\tvar pemBytes bytes.Buffer\n\tfor _, cert := range certChain {\n\t\tif err := pem.Encode(&pemBytes, &pem.Block{Type: typeCertificate, Bytes: cert.Raw}); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn pemBytes.Bytes(), nil\n}\n\n// getTopRootCa get the top root CA\nfunc getTopRootCa(certPEMBlock []byte) ([]byte, error) {\n\tzap.L().Debug(\"SDS Server: BEFORE root cert\", zap.String(\"root_cert\", string(certPEMBlock)))\n\t//rootCert := []*x509.Certificate{}\n\tvar certChain tls.Certificate\n\t//certPEMBlock := []byte(rootcaBundle)\n\tvar certDERBlock *pem.Block\n\tfor {\n\t\tcertDERBlock, certPEMBlock = pem.Decode(certPEMBlock)\n\t\tif certDERBlock == nil {\n\t\t\tbreak\n\t\t}\n\t\tif certDERBlock.Type == typeCertificate {\n\t\t\tcertChain.Certificate = append(certChain.Certificate, certDERBlock.Bytes)\n\t\t}\n\t}\n\tzap.L().Debug(\"SDS Server: the root ca\", 
zap.String(\"cert\", string(certChain.Certificate[len(certChain.Certificate)-1])))\n\tx509Cert, err := x509.ParseCertificate(certChain.Certificate[len(certChain.Certificate)-1])\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tby, _ := x509CertToPem(x509Cert)\n\tzap.L().Debug(\"SDS Server: After building the cert chain\", zap.String(\"rootCert\", string(by)))\n\treturn x509CertToPem(x509Cert)\n}\n\n// getExpTimeFromCert gets the exp time from the cert, assumning the cert is in pem encoded.\nfunc getExpTimeFromCert(cert []byte) (time.Time, error) {\n\tblock, _ := pem.Decode(cert)\n\tif block == nil {\n\t\tzap.L().Error(\"getExpTimeFromCert: error while pem decode\")\n\t\treturn time.Time{}, fmt.Errorf(\"Cannot decode the pem certs\")\n\t}\n\tx509Cert, err := x509.ParseCertificate(block.Bytes)\n\tif err != nil {\n\t\tzap.L().Error(\"failed to parse the certs\", zap.Error(err))\n\t\treturn time.Time{}, err\n\t}\n\treturn x509Cert.NotAfter, nil\n}\n\nfunc getRootCert(secret *secretItem) *envoy_api_v2_auth.Secret_ValidationContext {\n\treturn &envoy_api_v2_auth.Secret_ValidationContext{\n\t\tValidationContext: &envoy_api_v2_auth.CertificateValidationContext{\n\t\t\tTrustedCa: &envoy_api_v2_core.DataSource{\n\t\t\t\tSpecifier: &envoy_api_v2_core.DataSource_InlineBytes{\n\t\t\t\t\tInlineBytes: secret.RootCert,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n}\n\nfunc getTLScerts(secret *secretItem) *envoy_api_v2_auth.Secret_TlsCertificate {\n\treturn &envoy_api_v2_auth.Secret_TlsCertificate{\n\t\tTlsCertificate: &envoy_api_v2_auth.TlsCertificate{\n\t\t\tCertificateChain: &envoy_api_v2_core.DataSource{\n\t\t\t\tSpecifier: &envoy_api_v2_core.DataSource_InlineBytes{\n\t\t\t\t\tInlineBytes: secret.CertificateChain,\n\t\t\t\t},\n\t\t\t},\n\t\t\tPrivateKey: &envoy_api_v2_core.DataSource{\n\t\t\t\tSpecifier: &envoy_api_v2_core.DataSource_InlineBytes{\n\t\t\t\t\tInlineBytes: secret.PrivateKey,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/flowstats/state.go",
    "content": "package flowstats\n\nimport (\n\t\"net\"\n\t\"net/http\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/apiauth\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// ConnectionState captures the connection state. This state\n// is passed to the RoundTripper for any last minute adjustments.\ntype ConnectionState struct {\n\tStats  *collector.FlowRecord\n\tCookie *http.Cookie\n}\n\n// NewAppConnectionState will create the initial connection state object.\nfunc NewAppConnectionState(nativeID string, r *http.Request, authRequest *apiauth.Request, resp *apiauth.AppAuthResponse) *ConnectionState {\n\n\tsourceIP := \"0.0.0.0/0\"\n\tsourcePort := 0\n\tif sourceAddress, err := net.ResolveTCPAddr(\"tcp\", r.RemoteAddr); err == nil {\n\t\tsourceIP = sourceAddress.IP.String()\n\t\tsourcePort = sourceAddress.Port\n\t}\n\n\tvar tags policy.TagStore\n\tif resp.PUContext.Annotations() != nil {\n\t\ttags = *resp.PUContext.Annotations()\n\t}\n\n\treturn &ConnectionState{\n\t\tStats: &collector.FlowRecord{\n\t\t\tContextID: nativeID,\n\t\t\tDestination: collector.EndPoint{\n\t\t\t\tURI:        r.Method + \" \" + r.RequestURI,\n\t\t\t\tHTTPMethod: r.Method,\n\t\t\t\tType:       collector.EndPointTypeExternalIP,\n\t\t\t\tPort:       uint16(authRequest.OriginalDestination.Port),\n\t\t\t\tIP:         authRequest.OriginalDestination.IP.String(),\n\t\t\t\tID:         resp.NetworkServiceID,\n\t\t\t},\n\t\t\tSource: collector.EndPoint{\n\t\t\t\tType:       collector.EndPointTypePU,\n\t\t\t\tID:         resp.PUContext.ManagementID(),\n\t\t\t\tIP:         sourceIP,\n\t\t\t\tPort:       uint16(sourcePort),\n\t\t\t\tHTTPMethod: r.Method,\n\t\t\t\tURI:        r.Method + \" \" + r.RequestURI,\n\t\t\t},\n\t\t\tAction:      resp.Action,\n\t\t\tL4Protocol:  packet.IPProtocolTCP,\n\t\t\tServiceType: policy.ServiceHTTP,\n\t\t\tServiceID: 
  resp.ServiceID,\n\t\t\tTags:        tags.GetSlice(),\n\t\t\tNamespace:   resp.PUContext.ManagementNamespace(),\n\t\t\tPolicyID:    resp.NetworkPolicyID,\n\t\t\tCount:       1,\n\t\t},\n\t}\n}\n\n// NewNetworkConnectionState will create the initial connection state object.\nfunc NewNetworkConnectionState(nativeID string, userID string, r *apiauth.Request, d *apiauth.NetworkAuthResponse) *ConnectionState {\n\n\tvar mgmtID, namespace, serviceID string\n\tvar tags policy.TagStore\n\n\tif d != nil && d.PUContext != nil {\n\t\tmgmtID = d.PUContext.ManagementID()\n\t\tnamespace = d.PUContext.ManagementNamespace()\n\t\tif d.PUContext.Annotations() != nil {\n\t\t\ttags = *d.PUContext.Annotations()\n\t\t}\n\t\tserviceID = d.ServiceID\n\t} else {\n\t\tmgmtID = collector.DefaultEndPoint\n\t\tnamespace = collector.DefaultEndPoint\n\t\ttags = *policy.NewTagStore()\n\t\tserviceID = collector.DefaultEndPoint\n\t}\n\n\tsourceType := collector.EndPointTypeExternalIP\n\tsourceID := collector.DefaultEndPoint\n\tnetworkPolicyID := collector.DefaultEndPoint\n\taction := policy.Reject | policy.Log\n\n\tif d != nil {\n\t\tsourceType = d.SourceType\n\t\tif sourceType == collector.EndPointTypeClaims {\n\t\t\tsourceType = collector.EndPointTypeExternalIP\n\t\t}\n\n\t\tswitch d.SourceType {\n\t\tcase collector.EndPointTypePU:\n\t\t\tsourceID = d.SourcePUID\n\t\tcase collector.EndPointTypeClaims:\n\t\t\tsourceID = d.NetworkServiceID\n\t\tdefault:\n\t\t\tsourceID = d.NetworkServiceID\n\t\t}\n\n\t\tif d.NetworkPolicyID != \"\" {\n\t\t\tnetworkPolicyID = d.NetworkPolicyID\n\t\t}\n\t\taction = d.Action\n\t}\n\n\tc := &ConnectionState{\n\t\tStats: &collector.FlowRecord{\n\t\t\tContextID: nativeID,\n\t\t\tDestination: collector.EndPoint{\n\t\t\t\tID:         mgmtID,\n\t\t\t\tType:       collector.EndPointTypePU,\n\t\t\t\tIP:         r.OriginalDestination.IP.String(),\n\t\t\t\tPort:       uint16(r.OriginalDestination.Port),\n\t\t\t\tURI:        r.Method + \" \" + r.RequestURI,\n\t\t\t\tHTTPMethod: 
r.Method,\n\t\t\t\tUserID:     userID,\n\t\t\t},\n\t\t\tSource: collector.EndPoint{\n\t\t\t\tID:     sourceID,\n\t\t\t\tType:   sourceType,\n\t\t\t\tIP:     r.SourceAddress.IP.String(),\n\t\t\t\tPort:   uint16(r.SourceAddress.Port),\n\t\t\t\tUserID: userID,\n\t\t\t},\n\t\t\tAction:      action,\n\t\t\tL4Protocol:  packet.IPProtocolTCP,\n\t\t\tServiceType: policy.ServiceHTTP,\n\t\t\tPolicyID:    networkPolicyID,\n\t\t\tServiceID:   serviceID,\n\t\t\tTags:        tags.GetSlice(),\n\t\t\tNamespace:   namespace,\n\t\t\tCount:       1,\n\t\t},\n\t}\n\n\tif d != nil {\n\t\tif d.Action.Rejected() {\n\t\t\tc.Stats.DropReason = d.DropReason\n\t\t}\n\n\t\tif d.ObservedPolicyID != \"\" {\n\t\t\tc.Stats.ObservedPolicyID = d.ObservedPolicyID\n\t\t\tc.Stats.ObservedAction = d.ObservedAction\n\t\t}\n\n\t\tc.Cookie = d.Cookie\n\t}\n\n\treturn c\n}\n"
  },
  {
    "path": "controller/internal/enforcer/lookup/lookup.go",
    "content": "package lookup\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"sort\"\n\t\"strings\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n\t\"go.uber.org/zap\"\n)\n\n// ForwardingPolicy is an instance of the forwarding policy\ntype ForwardingPolicy struct {\n\ttags    []policy.KeyValueOperator\n\tcount   int\n\tindex   int\n\tactions interface{}\n}\n\n// intList is a list of integeres\ntype intList []int\n\n//PolicyDB is the structure of a policy\ntype PolicyDB struct {\n\t// rules    []policy\n\tnumberOfPolicies       int\n\tequalPrefixes          map[string]intList\n\tequalMapTable          map[string]map[string][]*ForwardingPolicy\n\tequalIDMapTable        map[string][]*ForwardingPolicy\n\tnotEqualMapTable       map[string]map[string][]*ForwardingPolicy\n\tnotStarTable           map[string][]*ForwardingPolicy\n\tdefaultNotExistsPolicy *ForwardingPolicy\n}\n\n//NewPolicyDB creates a new PolicyDB for efficient search of policies\nfunc NewPolicyDB() (m *PolicyDB) {\n\n\tm = &PolicyDB{\n\t\tnumberOfPolicies:       0,\n\t\tequalPrefixes:          map[string]intList{},\n\t\tequalMapTable:          map[string]map[string][]*ForwardingPolicy{},\n\t\tequalIDMapTable:        map[string][]*ForwardingPolicy{},\n\t\tnotEqualMapTable:       map[string]map[string][]*ForwardingPolicy{},\n\t\tnotStarTable:           map[string][]*ForwardingPolicy{},\n\t\tdefaultNotExistsPolicy: nil,\n\t}\n\n\treturn m\n}\n\nfunc (array intList) sortedInsert(value int) intList {\n\tl := len(array)\n\tif l == 0 {\n\t\tarray = append(array, value)\n\t\treturn array\n\t}\n\n\ti := sort.Search(l, func(i int) bool {\n\t\treturn array[i] <= value\n\t})\n\n\tif i == 0 { // new value is the largest\n\t\tarray = append([]int{value}, array...)\n\t\treturn array\n\t}\n\n\tif i == l-1 { // new value is the smallest\n\t\tarray = append(array, value)\n\t\treturn 
array\n\t}\n\n\tinserted := append(array[0:i], value)\n\n\treturn append(inserted, array[i:]...)\n\n}\n\n//AddPolicy adds a policy to the database\nfunc (m *PolicyDB) AddPolicy(selector policy.TagSelector) (policyID int) {\n\n\t// Create a new policy object\n\te := ForwardingPolicy{\n\t\tcount:   0,\n\t\ttags:    selector.Clause,\n\t\tactions: selector.Policy,\n\t}\n\n\t// For each tag of the incoming policy add a mapping between the map tables\n\t// and the structure that represents the policy\n\tfor _, keyValueOp := range selector.Clause {\n\n\t\tswitch keyValueOp.Operator {\n\n\t\tcase policy.KeyExists:\n\t\t\tm.equalPrefixes[keyValueOp.Key] = m.equalPrefixes[keyValueOp.Key].sortedInsert(0)\n\t\t\tif _, ok := m.equalMapTable[keyValueOp.Key]; !ok {\n\t\t\t\tm.equalMapTable[keyValueOp.Key] = map[string][]*ForwardingPolicy{}\n\t\t\t}\n\t\t\tm.equalMapTable[keyValueOp.Key][\"\"] = append(m.equalMapTable[keyValueOp.Key][\"\"], &e)\n\t\t\te.count++\n\n\t\tcase policy.KeyNotExists:\n\t\t\tm.notStarTable[keyValueOp.Key] = append(m.notStarTable[keyValueOp.Key], &e)\n\t\t\tif len(selector.Clause) == 1 {\n\t\t\t\tm.defaultNotExistsPolicy = &e\n\t\t\t}\n\n\t\tcase policy.Equal:\n\t\t\tif _, ok := m.equalMapTable[keyValueOp.Key]; !ok {\n\t\t\t\tm.equalMapTable[keyValueOp.Key] = map[string][]*ForwardingPolicy{}\n\t\t\t}\n\t\t\tfor _, v := range keyValueOp.Value {\n\t\t\t\tif end := len(v) - 1; v[end] == '*' {\n\t\t\t\t\tm.equalPrefixes[keyValueOp.Key] = m.equalPrefixes[keyValueOp.Key].sortedInsert(end)\n\t\t\t\t\tm.equalMapTable[keyValueOp.Key][v[:end]] = append(m.equalMapTable[keyValueOp.Key][v[:end]], &e)\n\t\t\t\t} else {\n\t\t\t\t\tm.equalMapTable[keyValueOp.Key][v] = append(m.equalMapTable[keyValueOp.Key][v], &e)\n\t\t\t\t}\n\t\t\t}\n\t\t\tif keyValueOp.ID != \"\" {\n\t\t\t\tif _, ok := m.equalIDMapTable[keyValueOp.ID]; !ok {\n\t\t\t\t\tm.equalIDMapTable[keyValueOp.ID] = []*ForwardingPolicy{}\n\t\t\t\t}\n\t\t\t\tm.equalIDMapTable[keyValueOp.ID] = 
append(m.equalIDMapTable[keyValueOp.ID], &e)\n\t\t\t}\n\t\t\te.count++\n\n\t\tdefault: // policy.NotEqual\n\t\t\tif _, ok := m.notEqualMapTable[keyValueOp.Key]; !ok {\n\t\t\t\tm.notEqualMapTable[keyValueOp.Key] = map[string][]*ForwardingPolicy{}\n\t\t\t}\n\t\t\tfor _, v := range keyValueOp.Value {\n\t\t\t\tm.notEqualMapTable[keyValueOp.Key][v] = append(m.notEqualMapTable[keyValueOp.Key][v], &e)\n\t\t\t\te.count++\n\t\t\t}\n\t\t}\n\t}\n\n\t// Increase the number of policies\n\tm.numberOfPolicies++\n\n\t// Give the policy an index\n\te.index = m.numberOfPolicies\n\n\t// Return the ID\n\treturn e.index\n\n}\n\nvar (\n\terrInvalidTag = errors.New(\"tag must be k=v\")\n)\n\n// Custom implementation for splitting strings. Gives significant performance\n// improvement. Do not allocate new strings\nfunc (m *PolicyDB) tagSplit(tag string, k *string, v *string) error {\n\tl := len(tag)\n\tif l < 3 {\n\t\treturn errInvalidTag\n\t}\n\n\tif tag[0] == '=' {\n\t\treturn errInvalidTag\n\t}\n\n\tfor i := 0; i < l; i++ {\n\t\tif tag[i] == '=' {\n\t\t\tif i+1 >= l {\n\t\t\t\treturn errInvalidTag\n\t\t\t}\n\t\t\t*k = tag[:i]\n\t\t\t*v = tag[i+1:]\n\t\t\treturn nil\n\t\t}\n\t}\n\n\treturn errInvalidTag\n}\n\n// Search searches for a set of tags in the database to find a policy match\nfunc (m *PolicyDB) Search(tags *policy.TagStore) (int, interface{}) {\n\n\tcount := make([]int, m.numberOfPolicies+1)\n\n\tskip := make([]bool, m.numberOfPolicies+1)\n\n\t// Disable all policies that fail the not key exists\n\tcopiedTags := tags.GetSlice()\n\tvar k, v string\n\n\tfor _, t := range copiedTags {\n\t\tif err := m.tagSplit(t, &k, &v); err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tfor _, policy := range m.notStarTable[k] {\n\t\t\tskip[policy.index] = true\n\t\t}\n\t}\n\n\t// Go through the list of tags\n\tfor _, t := range copiedTags {\n\n\t\t// Search for matches of t (tag id)\n\t\tif index, action := searchInMapTable(m.equalIDMapTable[t], nil, count, skip); index >= 0 {\n\t\t\treturn index, 
action\n\t\t}\n\n\t\tif err := m.tagSplit(t, &k, &v); err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tvar ports *portspec.PortSpec\n\t\tif k == constants.PortNumberLabelString {\n\t\t\t// We should get a range here\n\t\t\ttagValue, servicePorts, err := parseTagValueRange(v)\n\t\t\tif err != nil || servicePorts == nil {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tv = tagValue\n\t\t\tports = servicePorts\n\t\t}\n\n\t\t// Search for matches of k=v\n\t\tif index, action := searchInMapTable(m.equalMapTable[k][v], ports, count, skip); index >= 0 {\n\t\t\treturn index, action\n\t\t}\n\n\t\t// Search for matches in prefixes\n\t\tfor _, i := range m.equalPrefixes[k] {\n\t\t\tif i <= len(v) {\n\t\t\t\tif index, action := searchInMapTable(m.equalMapTable[k][v[:i]], nil, count, skip); index >= 0 {\n\t\t\t\t\treturn index, action\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t// Go through all of the policies that have a not-equal clause on the incoming\n\t\t// tag key; any stored value other than the incoming one counts as a match\n\t\tfor value, policies := range m.notEqualMapTable[k] {\n\t\t\tif v == value {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tif index, action := searchInMapTable(policies, nil, count, skip); index >= 0 {\n\t\t\t\treturn index, action\n\t\t\t}\n\t\t}\n\t}\n\n\tif m.defaultNotExistsPolicy != nil && !skip[m.defaultNotExistsPolicy.index] {\n\t\treturn m.defaultNotExistsPolicy.index, m.defaultNotExistsPolicy.actions\n\t}\n\n\treturn -1, nil\n}\n\nfunc searchInMapTable(table []*ForwardingPolicy, ports *portspec.PortSpec, count []int, skip []bool) (int, interface{}) {\n\tfor _, policy := range table {\n\n\t\t// Skip the policy if we have marked it\n\t\tif skip[policy.index] {\n\t\t\tcontinue\n\t\t}\n\n\t\tif ports != nil {\n\t\t\tfor _, tag := range policy.tags {\n\t\t\t\tif tag.PortRange != nil && tag.PortRange.Intersects(ports) {\n\t\t\t\t\tcount[policy.index]++\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\t// The policy was hit by this tag, so increment its matched-tag count\n\t\t\tcount[policy.index]++\n\t\t}\n\n\t\t// If all tags of the policy have been hit, there is a match\n\t\tif count[policy.index] == policy.count {\n\t\t\treturn policy.index, policy.actions\n\t\t}\n\n\t}\n\n\treturn -1, nil\n}\n\n// PrintPolicyDB is a debugging function to dump the map\nfunc (m *PolicyDB) PrintPolicyDB() {\n\n\tzap.L().Debug(\"Print Policy DB: equal table\")\n\n\tfor key, values := range m.equalMapTable {\n\t\tfor value, policies := range values {\n\t\t\tzap.L().Debug(\"Print Policy DB\",\n\t\t\t\tzap.String(\"policies\", fmt.Sprintf(\"%#v\", policies)),\n\t\t\t\tzap.String(\"key\", key),\n\t\t\t\tzap.String(\"value\", value),\n\t\t\t)\n\t\t}\n\t}\n\n\tzap.L().Debug(\"Print Policy DB: equal id table\")\n\n\tfor key, values := range m.equalIDMapTable {\n\t\tfor _, policies := range values {\n\t\t\tzap.L().Debug(\"Print Policy DB\",\n\t\t\t\tzap.String(\"policies\", fmt.Sprintf(\"%#v\", policies)),\n\t\t\t\tzap.String(\"key\", key),\n\t\t\t)\n\t\t}\n\t}\n\n\tzap.L().Debug(\"Print Policy DB - not equal table\")\n\n\tfor key, values := range m.notEqualMapTable {\n\t\tfor value, policies := range values {\n\t\t\tzap.L().Debug(\"Print Policy DB\",\n\t\t\t\tzap.String(\"policies\", fmt.Sprintf(\"%#v\", policies)),\n\t\t\t\tzap.String(\"key\", key),\n\t\t\t\tzap.String(\"value\", value),\n\t\t\t)\n\t\t}\n\t}\n\n}\n\nfunc parseTagValueRange(value string) (string, *portspec.PortSpec, error) {\n\tindex := strings.Index(value, \"/\")\n\tif index == -1 {\n\t\t// means there was no range\n\t\treturn value, nil, nil\n\t}\n\trangeSpec, err := portspec.NewPortSpecFromString(value[index+1:], nil)\n\tif err != nil {\n\t\treturn \"\", nil, err\n\t}\n\treturn value[:index], rangeSpec, nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/lookup/lookup_test.go",
    "content": "// +build !windows\n\npackage lookup\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nvar (\n\tappEqWeb = policy.KeyValueOperator{\n\t\tKey:      \"app\",\n\t\tValue:    []string{\"web\"},\n\t\tOperator: policy.Equal,\n\t\tID:       \"1\",\n\t}\n\tenvEqDemo = policy.KeyValueOperator{\n\t\tKey:      \"env\",\n\t\tValue:    []string{\"demo\"},\n\t\tOperator: policy.Equal,\n\t\tID:       \"2\",\n\t}\n\n\tenvEqDemoOrQa = policy.KeyValueOperator{\n\t\tKey:      \"env\",\n\t\tValue:    []string{\"demo\", \"qa\"},\n\t\tOperator: policy.Equal,\n\t\tID:       \"3\",\n\t}\n\n\tdcKeyExists = policy.KeyValueOperator{\n\t\tKey:      \"dc\",\n\t\tOperator: policy.KeyExists,\n\t}\n\n\tlangNotJava = policy.KeyValueOperator{\n\t\tKey:      \"lang\",\n\t\tValue:    []string{\"java\"},\n\t\tOperator: policy.NotEqual,\n\t}\n\n\tenvNotDemoOrQA = policy.KeyValueOperator{\n\t\tKey:      \"env\",\n\t\tValue:    []string{\"demo\", \"qa\"},\n\t\tOperator: policy.NotEqual,\n\t}\n\n\tenvKeyNotExists = policy.KeyValueOperator{\n\t\tKey:      \"env\",\n\t\tOperator: policy.KeyNotExists,\n\t}\n\n\tvulnerKey = policy.KeyValueOperator{\n\t\tKey:      \"vulnerability\",\n\t\tValue:    []string{\"high\"},\n\t\tOperator: policy.Equal,\n\t}\n\n\tvulnerLowKey = policy.KeyValueOperator{\n\t\tKey:      \"vulnerability\",\n\t\tValue:    []string{\"low\"},\n\t\tOperator: policy.Equal,\n\t}\n\n\tnamespaceKey = policy.KeyValueOperator{\n\t\tKey:      \"namespace\",\n\t\tValue:    []string{\"/a/b/*\"},\n\t\tOperator: policy.Equal,\n\t}\n\n\tappEqWebAndenvEqDemo = policy.TagSelector{\n\t\tClause: []policy.KeyValueOperator{appEqWeb, envEqDemo},\n\t\tPolicy: &policy.FlowPolicy{Action: policy.Accept},\n\t}\n\n\tappEqWebAndEnvEqDemoOrQa = policy.TagSelector{\n\t\tClause: []policy.KeyValueOperator{appEqWeb, envEqDemoOrQa},\n\t\tPolicy: &policy.FlowPolicy{Action: policy.Accept},\n\t}\n\n\tdcTagExists = 
policy.TagSelector{\n\t\tClause: []policy.KeyValueOperator{dcKeyExists},\n\t\tPolicy: &policy.FlowPolicy{Action: policy.Accept},\n\t}\n\n\tpolicylangNotJava = policy.TagSelector{\n\t\tClause: []policy.KeyValueOperator{langNotJava},\n\t\tPolicy: &policy.FlowPolicy{Action: policy.Accept},\n\t}\n\n\tappEqWebAndenvNotDemoOrQA = policy.TagSelector{\n\t\tClause: []policy.KeyValueOperator{appEqWeb, envNotDemoOrQA},\n\t\tPolicy: &policy.FlowPolicy{Action: policy.Accept},\n\t}\n\n\tenvKeyNotExistsAndAppEqWeb = policy.TagSelector{\n\t\tClause: []policy.KeyValueOperator{envKeyNotExists, appEqWeb},\n\t\tPolicy: &policy.FlowPolicy{Action: policy.Accept},\n\t}\n\n\tvulnTagPolicy = policy.TagSelector{\n\t\tClause: []policy.KeyValueOperator{vulnerKey},\n\t\tPolicy: &policy.FlowPolicy{Action: policy.Accept},\n\t}\n\n\tpolicyNamespace = policy.TagSelector{\n\t\tClause: []policy.KeyValueOperator{namespaceKey, vulnerLowKey},\n\t\tPolicy: &policy.FlowPolicy{Action: policy.Accept},\n\t}\n\n\tdomainParent = policy.KeyValueOperator{\n\t\tKey:      \"domain\",\n\t\tValue:    []string{\"com.example.*\", \"com.*\", \"com.longexample.*\", \"com.ex.*\"},\n\t\tOperator: policy.Equal,\n\t}\n\n\tdomainFull = policy.KeyValueOperator{\n\t\tKey:      \"domain\",\n\t\tValue:    []string{\"com.example.web\"},\n\t\tOperator: policy.Equal,\n\t}\n\n\tpolicyDomainParent = policy.TagSelector{\n\t\tClause: []policy.KeyValueOperator{domainParent},\n\t\tPolicy: &policy.FlowPolicy{Action: policy.Accept},\n\t}\n\n\tpolicyDomainFull = policy.TagSelector{\n\t\tClause: []policy.KeyValueOperator{domainFull},\n\t\tPolicy: &policy.FlowPolicy{Action: policy.Accept},\n\t}\n\n\tpolicyEnvDoesNotExist = policy.TagSelector{\n\t\tClause: []policy.KeyValueOperator{envKeyNotExists},\n\t\tPolicy: &policy.FlowPolicy{Action: policy.Accept},\n\t}\n)\n\n// TestConstructorNewPolicyDB tests the NewPolicyDB constructor\nfunc TestConstructorNewPolicyDB(t *testing.T) {\n\tConvey(\"Given that I instantiate a new policy DB, I should not 
get nil\", t, func() {\n\n\t\tp := &PolicyDB{}\n\n\t\tpolicyDB := NewPolicyDB()\n\n\t\tSo(policyDB, ShouldHaveSameTypeAs, p)\n\t})\n}\n\n// TestFuncAddPolicy tests the add policy function\nfunc TestFuncAddPolicy(t *testing.T) {\n\n\tConvey(\"Given an empty policy DB\", t, func() {\n\t\tpolicyDB := NewPolicyDB()\n\n\t\tConvey(\"When I add a single policy it should be associated with all the tags\", func() {\n\t\t\tindex := policyDB.AddPolicy(appEqWebAndenvEqDemo)\n\n\t\t\tSo(policyDB.numberOfPolicies, ShouldEqual, 1)\n\t\t\tSo(index, ShouldEqual, 1)\n\t\t\tfor _, c := range appEqWebAndenvEqDemo.Clause {\n\t\t\t\tSo(policyDB.equalMapTable[c.Key][c.Value[0]], ShouldNotBeNil)\n\t\t\t\tSo(policyDB.equalMapTable[c.Key][c.Value[0]][0].index, ShouldEqual, index)\n\t\t\t\tSo(policyDB.equalPrefixes[c.Key], ShouldNotContain, c.Key)\n\t\t\t}\n\t\t})\n\n\t\tConvey(\"When I add a policy with the not equal operator, it should be added to the notEqual db\", func() {\n\t\t\tindex := policyDB.AddPolicy(policylangNotJava)\n\n\t\t\tSo(policyDB.numberOfPolicies, ShouldEqual, 1)\n\t\t\tSo(index, ShouldEqual, 1)\n\t\t\tfor _, c := range policylangNotJava.Clause {\n\t\t\t\tSo(policyDB.notEqualMapTable[c.Key][c.Value[0]], ShouldNotBeNil)\n\t\t\t\tSo(policyDB.notEqualMapTable[c.Key][c.Value[0]][0].index, ShouldEqual, index)\n\t\t\t\tSo(policyDB.equalPrefixes, ShouldNotContainKey, c.Key)\n\t\t\t}\n\t\t})\n\n\t\tConvey(\"When I add a policy with the KeyExists operator, it should be added as a prefix of 0\", func() {\n\t\t\tindex := policyDB.AddPolicy(dcTagExists)\n\n\t\t\tkey := dcTagExists.Clause[0].Key\n\t\t\tSo(policyDB.numberOfPolicies, ShouldEqual, 1)\n\t\t\tSo(index, ShouldEqual, 1)\n\t\t\tSo(policyDB.equalPrefixes, ShouldContainKey, key)\n\t\t\tSo(policyDB.equalPrefixes[key], ShouldContain, 0)\n\t\t\tSo(policyDB.equalMapTable[key], ShouldHaveLength, 1)\n\t\t\tSo(policyDB.equalMapTable[key], ShouldContainKey, \"\")\n\t\t\tSo(policyDB.equalPrefixes[key], ShouldHaveLength, 
1)\n\t\t})\n\n\t\tConvey(\"When I add a policy with prefixes, it should be associated with the right prefixes\", func() {\n\t\t\tindex := policyDB.AddPolicy(policyDomainParent)\n\n\t\t\tkey := policyDomainParent.Clause[0].Key\n\t\t\tvalue0 := policyDomainParent.Clause[0].Value[0]\n\t\t\tvalue1 := policyDomainParent.Clause[0].Value[1]\n\t\t\tvalue2 := policyDomainParent.Clause[0].Value[2]\n\t\t\tvalue3 := policyDomainParent.Clause[0].Value[3]\n\t\t\tSo(policyDB.numberOfPolicies, ShouldEqual, 1)\n\t\t\tSo(index, ShouldEqual, 1)\n\t\t\tSo(policyDB.equalMapTable[key], ShouldHaveLength, 4)\n\t\t\tSo(policyDB.equalMapTable[key], ShouldContainKey, value0[:len(value0)-1])\n\t\t\tSo(policyDB.equalMapTable[key], ShouldContainKey, value1[:len(value1)-1])\n\t\t\tSo(policyDB.equalMapTable[key], ShouldContainKey, value2[:len(value2)-1])\n\t\t\tSo(policyDB.equalMapTable[key], ShouldContainKey, value3[:len(value3)-1])\n\t\t\tSo(policyDB.equalPrefixes[key], ShouldHaveLength, 4)\n\t\t\tSo(policyDB.equalPrefixes[key], ShouldContain, len(value0)-1)\n\t\t\tSo(policyDB.equalPrefixes[key], ShouldContain, len(value1)-1)\n\t\t\tSo(policyDB.equalPrefixes[key], ShouldContain, len(value2)-1)\n\t\t\tSo(policyDB.equalPrefixes[key], ShouldContain, len(value3)-1)\n\t\t})\n\n\t})\n}\n\n// TestFuncSearch tests the search function of the lookup\nfunc TestFuncSearch(t *testing.T) {\n\t// policy1 : app=web and env=demo\n\t// policy2 : lang != java\n\t// policy3 : dc=*\n\t// policy4: app=web and env IN (demo, qa)\n\t// policy5: app=web and env NotIN (demo, qa)\n\t// policy6: app=web and env key doesn't exist\n\t// policy7: domain IN (\"com.*\", \"com.example.*\")\n\t// policy8: domain=com.example.web\n\t// policy9: env doesn't exist\n\t// policy10: vulnerability=high\n\t// policy11: namespace=/a/b/* and vulnerability=low\n\n\tConvey(\"Given an empty policyDB\", t, func() {\n\t\tpolicyDB := NewPolicyDB()\n\t\tConvey(\"Given that I add eleven policy rules\", func() {\n\t\t\tindex1 := 
policyDB.AddPolicy(appEqWebAndenvEqDemo)\n\t\t\tindex2 := policyDB.AddPolicy(policylangNotJava)\n\t\t\tindex3 := policyDB.AddPolicy(dcTagExists)\n\t\t\tindex4 := policyDB.AddPolicy(appEqWebAndEnvEqDemoOrQa)\n\t\t\tindex5 := policyDB.AddPolicy(appEqWebAndenvNotDemoOrQA)\n\t\t\tindex6 := policyDB.AddPolicy(envKeyNotExistsAndAppEqWeb)\n\t\t\tindex7 := policyDB.AddPolicy(policyDomainParent)\n\t\t\tindex8 := policyDB.AddPolicy(policyDomainFull)\n\t\t\tindex9 := policyDB.AddPolicy(policyEnvDoesNotExist)\n\t\t\tindex10 := policyDB.AddPolicy(vulnTagPolicy)\n\t\t\tindex11 := policyDB.AddPolicy(policyNamespace)\n\n\t\t\tSo(index1, ShouldEqual, 1)\n\t\t\tSo(index2, ShouldEqual, 2)\n\t\t\tSo(index3, ShouldEqual, 3)\n\t\t\tSo(index4, ShouldEqual, 4)\n\t\t\tSo(index5, ShouldEqual, 5)\n\t\t\tSo(index6, ShouldEqual, 6)\n\t\t\tSo(index9, ShouldEqual, 9)\n\n\t\t\tConvey(\"The vulnerability tag policy should match\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"vulnerability\", \"high\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, ShouldEqual, index10)\n\t\t\t\tSo(action.(*policy.FlowPolicy).Action, ShouldEqual, policy.Accept)\n\t\t\t})\n\n\t\t\tConvey(\"A policy that matches ID should match\", func() {\n\t\t\t\ttags := policy.NewTagStoreFromSlice([]string{\"1\"})\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(action, ShouldNotBeNil)\n\t\t\t\tSo(index, ShouldEqual, index6)\n\t\t\t})\n\n\t\t\tConvey(\"A policy that matches only the namespace, should not match\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"namespace\", \"/a/b/c/d\")\n\t\t\t\ttags.AppendKeyValue(\"env\", \"privatedemo\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(action, ShouldBeNil)\n\t\t\t\tSo(index, ShouldEqual, -1)\n\t\t\t})\n\n\t\t\tConvey(\"A policy that matches namespace and vulnerability low, should match\", func() {\n\t\t\t\ttags := 
policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"namespace\", \"/a/b/c/d\")\n\t\t\t\ttags.AppendKeyValue(\"vulnerability\", \"low\")\n\t\t\t\ttags.AppendKeyValue(\"env\", \"privatedemo\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(action, ShouldNotBeNil)\n\t\t\t\tSo(index, ShouldEqual, index11)\n\t\t\t})\n\n\t\t\tConvey(\"Given that I search for a set of tags that matches the equal rules, it should return the correct index,\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"app\", \"web\")\n\t\t\t\ttags.AppendKeyValue(\"env\", \"demo\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, ShouldEqual, index1)\n\t\t\t\tSo(action.(*policy.FlowPolicy).Action, ShouldEqual, policy.Accept)\n\t\t\t})\n\n\t\t\tConvey(\"Given that I search for a set of tags that matches the not equal rules, it should return the right index,\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"lang\", \"go\")\n\t\t\t\ttags.AppendKeyValue(\"env\", \"demo\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, ShouldEqual, index2)\n\t\t\t\tSo(action.(*policy.FlowPolicy).Action, ShouldEqual, policy.Accept)\n\t\t\t})\n\n\t\t\tConvey(\"Given that I search for rules that match the KeyExists policy, it should return the right index\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"dc\", \"EAST\")\n\t\t\t\ttags.AppendKeyValue(\"env\", \"demo\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, ShouldEqual, index3)\n\t\t\t\tSo(action.(*policy.FlowPolicy).Action, ShouldEqual, policy.Accept)\n\t\t\t})\n\n\t\t\tConvey(\"Given that I search for a set of tags that matches the Or rules, it should return the right index,\", func() {\n\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"app\", \"web\")\n\t\t\t\ttags.AppendKeyValue(\"env\", \"qa\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, 
ShouldEqual, index4)\n\t\t\t\tSo(action.(*policy.FlowPolicy).Action, ShouldEqual, policy.Accept)\n\t\t\t})\n\n\t\t\tConvey(\"Given that I search for a set of tags that matches the NOT Or rules, it should return the right index,\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"app\", \"web\")\n\t\t\t\ttags.AppendKeyValue(\"env\", \"prod\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, ShouldEqual, index5)\n\t\t\t\tSo(action.(*policy.FlowPolicy).Action, ShouldEqual, policy.Accept)\n\t\t\t})\n\n\t\t\tConvey(\"Given that I search for a single clause that fails in the Not OR operator, it should fail,\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"lang\", \"java\")\n\t\t\t\ttags.AppendKeyValue(\"env\", \"demo\")\n\t\t\t\ttags.AppendKeyValue(\"app\", \"db\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, ShouldEqual, -1)\n\t\t\t\tSo(action, ShouldBeNil)\n\t\t\t})\n\n\t\t\tConvey(\"Given that I search for rules that do not match, it should return an error\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"tag\", \"node\")\n\t\t\t\ttags.AppendKeyValue(\"env\", \"node\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, ShouldEqual, -1)\n\t\t\t\tSo(action, ShouldBeNil)\n\t\t\t})\n\n\t\t\tConvey(\"Given that I search for a single clause that succeeds in the Not Key operator, it should succeed,\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"app\", \"web\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, ShouldEqual, index6)\n\t\t\t\tSo(action.(*policy.FlowPolicy).Action, ShouldEqual, policy.Accept)\n\t\t\t})\n\n\t\t\tConvey(\"Given that I search for a value that matches a prefix\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"domain\", \"com.example.db\")\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, 
ShouldEqual, index7)\n\t\t\t\tSo(action.(*policy.FlowPolicy).Action, ShouldEqual, policy.Accept)\n\t\t\t})\n\n\t\t\tConvey(\"Given that I search for a value that matches a complete value\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"domain\", \"com.example.web\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, ShouldEqual, index8)\n\t\t\t\tSo(action.(*policy.FlowPolicy).Action, ShouldEqual, policy.Accept)\n\t\t\t})\n\n\t\t\tConvey(\"Given that I search for a value that matches some of the prefix, it should return err\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"domain\", \"co\")\n\t\t\t\ttags.AppendKeyValue(\"env\", \"node\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, ShouldEqual, -1)\n\t\t\t\tSo(action, ShouldBeNil)\n\t\t\t})\n\n\t\t\tConvey(\"Given that I search for a value that matches only the env not exists policy\", func() {\n\t\t\t\ttags := policy.NewTagStore()\n\t\t\t\ttags.AppendKeyValue(\"sometag\", \"nomatch\")\n\n\t\t\t\tindex, action := policyDB.Search(tags)\n\t\t\t\tSo(index, ShouldEqual, index9)\n\t\t\t\tSo(action.(*policy.FlowPolicy).Action, ShouldEqual, policy.Accept)\n\t\t\t})\n\n\t\t})\n\n\t})\n}\n\n// TestFuncDumpDB is a mock test for the print function\nfunc TestFuncDumpDB(t *testing.T) {\n\tConvey(\"Given an empty policy DB\", t, func() {\n\t\tpolicyDB := NewPolicyDB()\n\n\t\tConvey(\"Given that I add two policy rules, I should be able to print the db \", func() {\n\t\t\tindex1 := policyDB.AddPolicy(appEqWebAndenvEqDemo)\n\t\t\tindex2 := policyDB.AddPolicy(policylangNotJava)\n\t\t\tSo(index1, ShouldEqual, 1)\n\t\t\tSo(index2, ShouldEqual, 2)\n\n\t\t\tpolicyDB.PrintPolicyDB()\n\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/metadata/metadata.go",
    "content": "package metadata\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/apiauth\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/serviceregistry\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// Client is a metadata client.\ntype Client struct {\n\tpuContext   string\n\ttokenIssuer common.ServiceTokenIssuer\n\tcertPEM     []byte\n\tkeyPEM      []byte\n\n\tsync.RWMutex\n}\n\n// NewClient returns a new metadata client\nfunc NewClient(puContext string, t common.ServiceTokenIssuer) *Client {\n\treturn &Client{\n\t\tpuContext:   puContext,\n\t\ttokenIssuer: t,\n\t}\n}\n\n// UpdateSecrets updates the secrets of the client.\nfunc (c *Client) UpdateSecrets(cert, key []byte) {\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tc.certPEM = cert\n\tc.keyPEM = key\n}\n\n// GetCertificate returns back the certificate.\nfunc (c *Client) GetCertificate() []byte {\n\tc.RLock()\n\tdefer c.RUnlock()\n\n\treturn c.certPEM\n}\n\n// GetPrivateKey returns the private key associated with this service.\nfunc (c *Client) GetPrivateKey() []byte {\n\tc.RLock()\n\tdefer c.RUnlock()\n\n\treturn c.keyPEM\n}\n\n// GetCurrentPolicy returns the current policy of the datapath. 
It returns\n// the marshalled policy as well as the original object for any further processing.\nfunc (c *Client) GetCurrentPolicy() ([]byte, *policy.PUPolicyPublic, error) {\n\n\tsctx, err := serviceregistry.Instance().RetrieveServiceByID(c.puContext)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tplc := sctx.PU.Policy.ToPublicPolicy()\n\tplc.ServicesCertificate = \"\"\n\tplc.ServicesPrivateKey = \"\"\n\tdata, err := json.MarshalIndent(plc, \"  \", \"  \")\n\tif err != nil {\n\t\tdata = []byte(\"Internal Server Error\")\n\t}\n\n\treturn data, plc, nil\n}\n\n// IssueToken issues an OAuth token for this PU for the desired audience\n// and validity. The request will use the token issuer to contact the OIDC\n// provider and issue the token.\nfunc (c *Client) IssueToken(ctx context.Context, stype common.ServiceTokenType, audience string, validity time.Duration) (string, error) {\n\treturn c.tokenIssuer.Issue(ctx, c.puContext, stype, audience, validity)\n}\n\n// Authorize will use the enforcerd databases and context to authorize\n// an http request given the provided credentials.\nfunc (c *Client) Authorize(request *apiauth.Request) error {\n\n\t// TODO\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/mockenforcer/mockenforcer.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/internal/enforcer/enforcer.go\n\n// Package mockenforcer is a generated GoMock package.\npackage mockenforcer\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\ttime \"time\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\tebpf \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ebpf\"\n\tfqconfig \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\tpackettracing \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packettracing\"\n\tsecrets \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\truntime \"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\tpolicy \"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// MockEnforcer is a mock of Enforcer interface\n// nolint\ntype MockEnforcer struct {\n\tctrl     *gomock.Controller\n\trecorder *MockEnforcerMockRecorder\n}\n\n// MockEnforcerMockRecorder is the mock recorder for MockEnforcer\n// nolint\ntype MockEnforcerMockRecorder struct {\n\tmock *MockEnforcer\n}\n\n// NewMockEnforcer creates a new mock instance\n// nolint\nfunc NewMockEnforcer(ctrl *gomock.Controller) *MockEnforcer {\n\tmock := &MockEnforcer{ctrl: ctrl}\n\tmock.recorder = &MockEnforcerMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockEnforcer) EXPECT() *MockEnforcerMockRecorder {\n\treturn m.recorder\n}\n\n// Enforce mocks base method\n// nolint\nfunc (m *MockEnforcer) Enforce(ctx context.Context, contextID string, puInfo *policy.PUInfo) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Enforce\", ctx, contextID, puInfo)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Enforce indicates an expected call of Enforce\n// nolint\nfunc (mr *MockEnforcerMockRecorder) Enforce(ctx, contextID, puInfo interface{}) *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Enforce\", reflect.TypeOf((*MockEnforcer)(nil).Enforce), ctx, contextID, puInfo)\n}\n\n// Unenforce mocks base method\n// nolint\nfunc (m *MockEnforcer) Unenforce(ctx context.Context, contextID string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Unenforce\", ctx, contextID)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Unenforce indicates an expected call of Unenforce\n// nolint\nfunc (mr *MockEnforcerMockRecorder) Unenforce(ctx, contextID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Unenforce\", reflect.TypeOf((*MockEnforcer)(nil).Unenforce), ctx, contextID)\n}\n\n// GetFilterQueue mocks base method\n// nolint\nfunc (m *MockEnforcer) GetFilterQueue() fqconfig.FilterQueue {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetFilterQueue\")\n\tret0, _ := ret[0].(fqconfig.FilterQueue)\n\treturn ret0\n}\n\n// GetFilterQueue indicates an expected call of GetFilterQueue\n// nolint\nfunc (mr *MockEnforcerMockRecorder) GetFilterQueue() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetFilterQueue\", reflect.TypeOf((*MockEnforcer)(nil).GetFilterQueue))\n}\n\n// GetBPFObject mocks base method\n// nolint\nfunc (m *MockEnforcer) GetBPFObject() ebpf.BPFModule {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetBPFObject\")\n\tret0, _ := ret[0].(ebpf.BPFModule)\n\treturn ret0\n}\n\n// GetBPFObject indicates an expected call of GetBPFObject\n// nolint\nfunc (mr *MockEnforcerMockRecorder) GetBPFObject() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetBPFObject\", reflect.TypeOf((*MockEnforcer)(nil).GetBPFObject))\n}\n\n// Run mocks base method\n// nolint\nfunc (m *MockEnforcer) Run(ctx context.Context) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Run\", ctx)\n\tret0, _ := 
ret[0].(error)\n\treturn ret0\n}\n\n// Run indicates an expected call of Run\n// nolint\nfunc (mr *MockEnforcerMockRecorder) Run(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Run\", reflect.TypeOf((*MockEnforcer)(nil).Run), ctx)\n}\n\n// UpdateSecrets mocks base method\n// nolint\nfunc (m *MockEnforcer) UpdateSecrets(secrets secrets.Secrets) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UpdateSecrets\", secrets)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UpdateSecrets indicates an expected call of UpdateSecrets\n// nolint\nfunc (mr *MockEnforcerMockRecorder) UpdateSecrets(secrets interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateSecrets\", reflect.TypeOf((*MockEnforcer)(nil).UpdateSecrets), secrets)\n}\n\n// SetTargetNetworks mocks base method\n// nolint\nfunc (m *MockEnforcer) SetTargetNetworks(cfg *runtime.Configuration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SetTargetNetworks\", cfg)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SetTargetNetworks indicates an expected call of SetTargetNetworks\n// nolint\nfunc (mr *MockEnforcerMockRecorder) SetTargetNetworks(cfg interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SetTargetNetworks\", reflect.TypeOf((*MockEnforcer)(nil).SetTargetNetworks), cfg)\n}\n\n// SetLogLevel mocks base method\n// nolint\nfunc (m *MockEnforcer) SetLogLevel(level constants.LogLevel) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SetLogLevel\", level)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SetLogLevel indicates an expected call of SetLogLevel\n// nolint\nfunc (mr *MockEnforcerMockRecorder) SetLogLevel(level interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SetLogLevel\", 
reflect.TypeOf((*MockEnforcer)(nil).SetLogLevel), level)\n}\n\n// CleanUp mocks base method\n// nolint\nfunc (m *MockEnforcer) CleanUp() error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CleanUp\")\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// CleanUp indicates an expected call of CleanUp\n// nolint\nfunc (mr *MockEnforcerMockRecorder) CleanUp() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CleanUp\", reflect.TypeOf((*MockEnforcer)(nil).CleanUp))\n}\n\n// GetServiceMeshType mocks base method\n// nolint\nfunc (m *MockEnforcer) GetServiceMeshType() policy.ServiceMesh {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetServiceMeshType\")\n\tret0, _ := ret[0].(policy.ServiceMesh)\n\treturn ret0\n}\n\n// GetServiceMeshType indicates an expected call of GetServiceMeshType\n// nolint\nfunc (mr *MockEnforcerMockRecorder) GetServiceMeshType() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetServiceMeshType\", reflect.TypeOf((*MockEnforcer)(nil).GetServiceMeshType))\n}\n\n// EnableDatapathPacketTracing mocks base method\n// nolint\nfunc (m *MockEnforcer) EnableDatapathPacketTracing(ctx context.Context, contextID string, direction packettracing.TracingDirection, interval time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"EnableDatapathPacketTracing\", ctx, contextID, direction, interval)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// EnableDatapathPacketTracing indicates an expected call of EnableDatapathPacketTracing\n// nolint\nfunc (mr *MockEnforcerMockRecorder) EnableDatapathPacketTracing(ctx, contextID, direction, interval interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"EnableDatapathPacketTracing\", reflect.TypeOf((*MockEnforcer)(nil).EnableDatapathPacketTracing), ctx, contextID, direction, interval)\n}\n\n// EnableIPTablesPacketTracing mocks base method\n// 
nolint\nfunc (m *MockEnforcer) EnableIPTablesPacketTracing(ctx context.Context, contextID string, interval time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"EnableIPTablesPacketTracing\", ctx, contextID, interval)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// EnableIPTablesPacketTracing indicates an expected call of EnableIPTablesPacketTracing\n// nolint\nfunc (mr *MockEnforcerMockRecorder) EnableIPTablesPacketTracing(ctx, contextID, interval interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"EnableIPTablesPacketTracing\", reflect.TypeOf((*MockEnforcer)(nil).EnableIPTablesPacketTracing), ctx, contextID, interval)\n}\n\n// Ping mocks base method\n// nolint\nfunc (m *MockEnforcer) Ping(ctx context.Context, contextID string, pingConfig *policy.PingConfig) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Ping\", ctx, contextID, pingConfig)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Ping indicates an expected call of Ping\n// nolint\nfunc (mr *MockEnforcerMockRecorder) Ping(ctx, contextID, pingConfig interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Ping\", reflect.TypeOf((*MockEnforcer)(nil).Ping), ctx, contextID, pingConfig)\n}\n\n// DebugCollect mocks base method\n// nolint\nfunc (m *MockEnforcer) DebugCollect(ctx context.Context, contextID string, debugConfig *policy.DebugConfig) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DebugCollect\", ctx, contextID, debugConfig)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// DebugCollect indicates an expected call of DebugCollect\n// nolint\nfunc (mr *MockEnforcerMockRecorder) DebugCollect(ctx, contextID, debugConfig interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DebugCollect\", reflect.TypeOf((*MockEnforcer)(nil).DebugCollect), ctx, contextID, debugConfig)\n}\n\n// 
MockDebugInfo is a mock of DebugInfo interface\n// nolint\ntype MockDebugInfo struct {\n\tctrl     *gomock.Controller\n\trecorder *MockDebugInfoMockRecorder\n}\n\n// MockDebugInfoMockRecorder is the mock recorder for MockDebugInfo\n// nolint\ntype MockDebugInfoMockRecorder struct {\n\tmock *MockDebugInfo\n}\n\n// NewMockDebugInfo creates a new mock instance\n// nolint\nfunc NewMockDebugInfo(ctrl *gomock.Controller) *MockDebugInfo {\n\tmock := &MockDebugInfo{ctrl: ctrl}\n\tmock.recorder = &MockDebugInfoMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockDebugInfo) EXPECT() *MockDebugInfoMockRecorder {\n\treturn m.recorder\n}\n\n// EnableDatapathPacketTracing mocks base method\n// nolint\nfunc (m *MockDebugInfo) EnableDatapathPacketTracing(ctx context.Context, contextID string, direction packettracing.TracingDirection, interval time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"EnableDatapathPacketTracing\", ctx, contextID, direction, interval)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// EnableDatapathPacketTracing indicates an expected call of EnableDatapathPacketTracing\n// nolint\nfunc (mr *MockDebugInfoMockRecorder) EnableDatapathPacketTracing(ctx, contextID, direction, interval interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"EnableDatapathPacketTracing\", reflect.TypeOf((*MockDebugInfo)(nil).EnableDatapathPacketTracing), ctx, contextID, direction, interval)\n}\n\n// EnableIPTablesPacketTracing mocks base method\n// nolint\nfunc (m *MockDebugInfo) EnableIPTablesPacketTracing(ctx context.Context, contextID string, interval time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"EnableIPTablesPacketTracing\", ctx, contextID, interval)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// EnableIPTablesPacketTracing indicates an expected call of 
EnableIPTablesPacketTracing\n// nolint\nfunc (mr *MockDebugInfoMockRecorder) EnableIPTablesPacketTracing(ctx, contextID, interval interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"EnableIPTablesPacketTracing\", reflect.TypeOf((*MockDebugInfo)(nil).EnableIPTablesPacketTracing), ctx, contextID, interval)\n}\n\n// Ping mocks base method\n// nolint\nfunc (m *MockDebugInfo) Ping(ctx context.Context, contextID string, pingConfig *policy.PingConfig) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Ping\", ctx, contextID, pingConfig)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Ping indicates an expected call of Ping\n// nolint\nfunc (mr *MockDebugInfoMockRecorder) Ping(ctx, contextID, pingConfig interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Ping\", reflect.TypeOf((*MockDebugInfo)(nil).Ping), ctx, contextID, pingConfig)\n}\n\n// DebugCollect mocks base method\n// nolint\nfunc (m *MockDebugInfo) DebugCollect(ctx context.Context, contextID string, debugConfig *policy.DebugConfig) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DebugCollect\", ctx, contextID, debugConfig)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// DebugCollect indicates an expected call of DebugCollect\n// nolint\nfunc (mr *MockDebugInfoMockRecorder) DebugCollect(ctx, contextID, debugConfig interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DebugCollect\", reflect.TypeOf((*MockDebugInfo)(nil).DebugCollect), ctx, contextID, debugConfig)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/afinetrawsocket/afinetrawsocket.go",
    "content": "// +build linux\n\npackage afinetrawsocket\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"strconv\"\n\t\"strings\"\n\t\"syscall\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n)\n\ntype socketv4 struct {\n\tfd     int\n\tinsock *syscall.SockaddrInet4\n}\n\ntype socketv6 struct {\n\tfd     int\n\tinsock *syscall.SockaddrInet6\n}\n\ntype rawsocket struct {\n\tinsockv4 *socketv4\n\tinsockv6 *socketv6\n}\n\nconst (\n\t// RawSocketMark is the mark asserted on all packet sent out of this socket\n\tRawSocketMark = 0x63\n\t// NetworkRawSocketMark is the mark on packet egressing\n\t//the raw socket coming in from network\n\tNetworkRawSocketMark = 0x40000063\n\t//ApplicationRawSocketMark is the mark on packet egressing\n\t//the raw socket coming from application\n\tApplicationRawSocketMark = 0x40000062\n)\n\n// SocketWriter interface exposes an interface to write and close sockets\ntype SocketWriter interface {\n\tWriteSocket(buf []byte, version packet.IPver, data packet.PlatformMetadata) error\n}\n\n// CreateSocket returns a handle to SocketWriter interface\nfunc CreateSocket(mark int, deviceName string) (SocketWriter, error) {\n\tvar sockv6 *socketv6\n\tvar sockv4 *socketv4\n\tvar err error\n\tcreateSocketv4 := func() (*socketv4, error) {\n\n\t\tfd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_RAW, syscall.IPPROTO_UDP)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"received error %s while open ipv4 socket\", err)\n\t\t}\n\n\t\tif err := syscall.SetsockoptInt(fd, syscall.SOL_SOCKET, syscall.SO_MARK, mark); err != nil {\n\t\t\tsyscall.Close(fd) // nolint: errcheck\n\t\t\treturn nil, fmt.Errorf(\"received error %s while setting socket Option SO_MARK\", err)\n\t\t}\n\n\t\tif err := syscall.SetsockoptInt(fd, syscall.IPPROTO_IP, syscall.IP_HDRINCL, 0); err != nil {\n\t\t\tsyscall.Close(fd) // nolint: errcheck\n\t\t\treturn nil, fmt.Errorf(\"received error %s while setting socket Option IP_HDRINCL\", err)\n\t\t}\n\n\t\tif err := 
syscall.SetsockoptInt(fd, syscall.IPPROTO_IP, syscall.IP_MTU_DISCOVER, syscall.IP_PMTUDISC_DONT); err != nil {\n\t\t\tsyscall.Close(fd) // nolint: errcheck\n\t\t\treturn nil, fmt.Errorf(\"received error %s while setting socket Option IP_PMTUDISC_DONT\", err)\n\t\t}\n\n\t\treturn &socketv4{\n\t\t\tfd: fd,\n\t\t\tinsock: &syscall.SockaddrInet4{\n\t\t\t\tPort: 0,\n\t\t\t},\n\t\t}, nil\n\t}\n\n\tcreateSocketv6 := func() (*socketv6, error) {\n\n\t\tfd, err := syscall.Socket(syscall.AF_INET6, syscall.SOCK_RAW, syscall.IPPROTO_UDP)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"received error %s while open ipv6 socket\", err)\n\t\t}\n\n\t\tif err := syscall.SetsockoptInt(fd, syscall.SOL_SOCKET, syscall.SO_MARK, mark); err != nil {\n\t\t\tsyscall.Close(fd) // nolint: errcheck\n\t\t\treturn nil, fmt.Errorf(\"received error %s while setting socket Option SO_MARK\", err)\n\t\t}\n\n\t\tif err := syscall.SetsockoptInt(fd, syscall.IPPROTO_IPV6, syscall.IP_HDRINCL, 0); err != nil {\n\t\t\tsyscall.Close(fd) // nolint: errcheck\n\t\t\treturn nil, fmt.Errorf(\"received error %s while setting socket Option IP_HDRINCL\", err)\n\t\t}\n\n\t\tif err := syscall.SetsockoptInt(fd, syscall.IPPROTO_IPV6, syscall.IPV6_MTU_DISCOVER, syscall.IPV6_PMTUDISC_DONT); err != nil {\n\t\t\tsyscall.Close(fd) // nolint: errcheck\n\t\t\treturn nil, fmt.Errorf(\"received error %s while setting socket Option IP_PMTUDISC_DONT ipv6\", err)\n\t\t}\n\n\t\treturn &socketv6{\n\t\t\tfd: fd,\n\t\t\tinsock: &syscall.SockaddrInet6{\n\t\t\t\tPort: 0,\n\t\t\t},\n\t\t}, nil\n\t}\n\n\tsockv4, err = createSocketv4()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif IsIpv6Supported() {\n\t\tsockv6, err = createSocketv6()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn &rawsocket{\n\t\tinsockv4: sockv4,\n\t\tinsockv6: sockv6,\n\t}, nil\n}\n\nfunc (sock *rawsocket) WriteSocket(buf []byte, version packet.IPver, data packet.PlatformMetadata) error {\n\t// copy the dest addr\n\tif version == packet.V4 
{\n\t\tcopy(sock.insockv4.insock.Addr[:], buf[16:20])\n\t\tif err := syscall.Sendto(sock.insockv4.fd, buf[20:], 0, sock.insockv4.insock); err != nil {\n\t\t\treturn fmt.Errorf(\"received error %s while sending to socket\", err)\n\t\t}\n\t} else if sock.insockv6 != nil {\n\n\t\tcopy(sock.insockv6.insock.Addr[:], buf[24:40])\n\t\tif err := syscall.Sendto(sock.insockv6.fd, buf[40:], 0, sock.insockv6.insock); err != nil {\n\t\t\treturn fmt.Errorf(\"received error %s while sending to socket\", err)\n\t\t}\n\n\t}\n\n\treturn nil\n}\n\n// IsIpv6Supported returns true if the system supports ipv6 else returns false\nfunc IsIpv6Supported() bool {\n\tipv6ConfPath := \"/proc/sys/net/ipv6/conf/all/disable_ipv6\"\n\tdata, err := ioutil.ReadFile(ipv6ConfPath)\n\tif err != nil {\n\t\treturn false\n\t}\n\tval, err := strconv.Atoi(strings.Trim(string(data), \"\\n\"))\n\tif err != nil {\n\t\treturn false\n\t}\n\tif val == 1 {\n\t\treturn false\n\t}\n\treturn true\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/afinetrawsocket/afinetrawsocket_osx.go",
    "content": "// +build darwin\n\npackage afinetrawsocket\n\nimport \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\nconst (\n\t// RawSocketMark is the mark asserted on all packet sent out of this socket\n\tRawSocketMark = 0x63\n\t// NetworkRawSocketMark is the mark on packet egressing\n\t//the raw socket coming in from network\n\tNetworkRawSocketMark = 0x40000063\n\t//ApplicationRawSocketMark is the mark on packet egressing\n\t//the raw socket coming from application\n\tApplicationRawSocketMark = 0x40000062\n)\n\n// SocketWriter interface exposes an interface to write and close sockets\ntype SocketWriter interface {\n\tWriteSocket(buf []byte, version packet.IPver, data packet.PlatformMetadata) error\n\tCloseSocket() error\n}\n\ntype rawsocket struct { // nolint\n}\n\n// CreateSocket returns a handle to SocketWriter interface\nfunc CreateSocket(mark int, deviceName string) (SocketWriter, error) {\n\treturn nil, nil\n}\n\n// WriteSocket writes data into raw socket.\nfunc (sock *rawsocket) WriteSocket(buf []byte, version packet.IPver, data packet.PlatformMetadata) error {\n\t//This is an IP frame dest address at byte[16]\n\n\treturn nil\n}\n\n// CloseSocket closes the raw socket.\nfunc (sock *rawsocket) CloseSocket() error {\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/afinetrawsocket/afinetrawsocket_windows.go",
    "content": "// +build windows\n\npackage afinetrawsocket\n\nimport (\n\t\"errors\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n)\n\ntype rawsocket struct {\n}\n\n// WindowPlatformMetadata is platform-specific data about the packet\ntype WindowPlatformMetadata struct {\n\tPacketInfo frontman.PacketInfo\n\tIgnoreFlow bool\n\tDropFlow   bool\n\tDrop       bool\n\tSetMark    uint32\n}\n\nconst (\n\t// RawSocketMark is the mark asserted on all packet sent out of this socket\n\tRawSocketMark = 0x63\n\t// NetworkRawSocketMark is the mark on packet egressing\n\t//the raw socket coming in from network\n\tNetworkRawSocketMark = 0x40000063\n\t//ApplicationRawSocketMark is the mark on packet egressing\n\t//the raw socket coming from application\n\tApplicationRawSocketMark = 0x40000062\n)\n\n// SocketWriter interface exposes an interface to write and close sockets\ntype SocketWriter interface {\n\tWriteSocket(buf []byte, version packet.IPver, data packet.PlatformMetadata) error\n}\n\n// CreateSocket returns a handle to SocketWriter interface\nfunc CreateSocket(mark int, deviceName string) (SocketWriter, error) {\n\treturn &rawsocket{}, nil\n}\n\n// WriteSocket on Windows calls into the driver to forward the packet\nfunc (sock *rawsocket) WriteSocket(buf []byte, version packet.IPver, data packet.PlatformMetadata) error {\n\tif data == nil {\n\t\treturn errors.New(\"no PlatformMetadata for WriteSocket\")\n\t}\n\twindata, ok := data.(*WindowPlatformMetadata)\n\tif !ok {\n\t\treturn errors.New(\"no WindowPlatformMetadata for WriteSocket\")\n\t}\n\treturn windata.forwardPacket(buf, version)\n}\n\n// Clone the WindowPlatformMetadata structure\nfunc (w *WindowPlatformMetadata) Clone() packet.PlatformMetadata {\n\tplatformMetadata := &WindowPlatformMetadata{\n\t\tPacketInfo: w.PacketInfo,\n\t\tIgnoreFlow: w.IgnoreFlow,\n\t\tDrop:       w.Drop,\n\t}\n\treturn platformMetadata\n}\n\n// forwardPacket 
takes a raw packet and sends it to the driver to be sent on the network\nfunc (w *WindowPlatformMetadata) forwardPacket(buf []byte, version packet.IPver) error {\n\n\tif w.IgnoreFlow && w.DropFlow {\n\t\treturn errors.New(\"ignoreFlow and dropFlow cannot both be true\")\n\t}\n\n\t// Could set port/addr in packet info but not required by the driver for forwarding of the packet.\n\t// Create a copy of the packet info so that these changes don't modifiy the current PacketInfo\n\tpacketInfo := w.PacketInfo\n\tpacketInfo.Outbound = 1\n\tpacketInfo.NewPacket = 1\n\tpacketInfo.Drop = 0\n\tpacketInfo.IgnoreFlow = 0\n\tif version == packet.V4 {\n\t\tpacketInfo.Ipv4 = 1\n\t} else {\n\t\tpacketInfo.Ipv4 = 0\n\t}\n\tif w.Drop {\n\t\tpacketInfo.Drop = 1\n\t}\n\tif w.IgnoreFlow {\n\t\tpacketInfo.IgnoreFlow = 1\n\t}\n\tif w.DropFlow {\n\t\tpacketInfo.DropFlow = 1\n\t}\n\tpacketInfo.PacketSize = uint32(len(buf))\n\tif err := frontman.Wrapper.PacketFilterForward(&packetInfo, buf); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/autoport.go",
    "content": "package nfqdatapath\n\nimport (\n\t\"path/filepath\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n\t\"go.uber.org/zap\"\n)\n\ntype readSystemFiles interface {\n\treadOpenSockFD(pid string) []string\n\treadProcNetTCP() (inodeMap map[string]string, userMap map[string]map[string]bool, err error)\n\tgetCgroupList() []string\n\tlistCgroupProcesses(cgroupname string) ([]string, error)\n}\n\ntype defaultRead struct{}\n\nvar readFiles readSystemFiles\nvar d *defaultRead\nvar lock sync.RWMutex\n\nfunc init() {\n\tlock.Lock()\n\treadFiles = d\n\tlock.Unlock()\n}\n\nfunc (d *Datapath) autoPortDiscovery() {\n\tfor {\n\t\td.findPorts()\n\t\ttime.Sleep(2 * time.Second)\n\t}\n}\n\n// resync adds new port for the PU and removes the stale ports\nfunc (d *Datapath) resync(newPortMap map[string]map[string]bool) {\n\n\tfor k, vs := range d.puToPortsMap {\n\t\tm := newPortMap[k]\n\n\t\tfor v := range vs {\n\t\t\tif m == nil || !m[v] {\n\t\t\t\terr := ipsetmanager.V4().DeletePortFromServerPortSet(k, v)\n\t\t\t\tif err != nil {\n\t\t\t\t\tzap.L().Debug(\"autoPortDiscovery: Delete port set returned error\", zap.Error(err))\n\t\t\t\t}\n\t\t\t\terr = ipsetmanager.V6().DeletePortFromServerPortSet(k, v)\n\t\t\t\tif err != nil {\n\t\t\t\t\tzap.L().Debug(\"autoPortDiscovery: Delete port set returned error\", zap.Error(err))\n\t\t\t\t}\n\t\t\t\t// delete the port from contextIDFromTCPPort cache\n\t\t\t\terr = d.contextIDFromTCPPort.RemoveStringPorts(v)\n\t\t\t\tif err != nil {\n\t\t\t\t\tzap.L().Debug(\"autoPortDiscovery: can not remove port from cache\", zap.Error(err))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tfor k, vs := range newPortMap {\n\t\tm := d.puToPortsMap[k]\n\t\tfor v := range vs {\n\t\t\tif m == nil || !m[v] {\n\t\t\t\tportSpec, err := portspec.NewPortSpecFromString(v, k)\n\t\t\t\tif err != nil 
{\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\td.contextIDFromTCPPort.AddPortSpec(portSpec)\n\t\t\t\terr = ipsetmanager.V4().AddPortToServerPortSet(k, v)\n\t\t\t\tif err != nil {\n\t\t\t\t\tzap.L().Error(\"autoPortDiscovery: Failed to add port to portset\", zap.String(\"context\", k), zap.String(\"port\", v))\n\t\t\t\t}\n\t\t\t\terr = ipsetmanager.V6().AddPortToServerPortSet(k, v)\n\t\t\t\tif err != nil {\n\t\t\t\t\tzap.L().Error(\"autoPortDiscovery: Failed to add port to portset\", zap.String(\"context\", k), zap.String(\"port\", v))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\td.puToPortsMap = newPortMap\n}\n\nvar lastRun time.Time\n\nfunc (d *Datapath) findPorts() {\n\tlock.Lock()\n\tdefer lock.Unlock()\n\n\t// Rate-limit this function to run at most once every 5 milliseconds\n\tif time.Since(lastRun) <= 5*time.Millisecond {\n\t\treturn\n\t}\n\n\tlastRun = time.Now()\n\n\tcgroupList := readFiles.getCgroupList()\n\n\tnewPUToPortsMap := map[string]map[string]bool{}\n\tinodeMap, _, err := readFiles.readProcNetTCP()\n\tif err != nil {\n\t\tzap.L().Error(\"autoPortDiscovery: /proc/net/tcp read failed with error\", zap.Error(err))\n\t\treturn\n\t}\n\n\tfor _, cgroupPath := range cgroupList {\n\t\t/* cgroup is also the contextID */\n\t\tnewMap := map[string]bool{}\n\n\t\tcgroup := filepath.Base(cgroupPath)\n\n\t\t// check if a PU exists with that contextID and is marked with auto port\n\t\tpu, err := d.puFromContextID.Get(cgroup)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tp := pu.(*pucontext.PUContext)\n\n\t\t// we skip AutoPort discovery if it is not enabled\n\t\tif !p.Autoport() {\n\t\t\tcontinue\n\t\t}\n\n\t\tprocs, err := readFiles.listCgroupProcesses(cgroupPath)\n\t\tif err != nil {\n\t\t\tzap.L().Warn(\"autoPortDiscovery: Cgroup processes could not be retrieved\", zap.String(\"cgroupPath\", cgroupPath), zap.String(\"cgroup\", cgroup), zap.Error(err))\n\t\t\tcontinue\n\t\t}\n\t\tzap.L().Debug(\"autoPortDiscovery: processes for cgroup detected\", zap.String(\"cgroupPath\", 
cgroupPath), zap.String(\"cgroup\", cgroup), zap.String(\"id\", p.ID()), zap.Strings(\"procs\", procs))\n\n\t\tfor _, proc := range procs {\n\t\t\topenSockFDs := readFiles.readOpenSockFD(proc)\n\t\t\tfor _, sock := range openSockFDs {\n\t\t\t\tif inodeMap[sock] != \"\" {\n\t\t\t\t\tnewMap[inodeMap[sock]] = true\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tnewPUToPortsMap[cgroup] = newMap\n\t}\n\n\td.resync(newPUToPortsMap)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/autoport_nonwindows.go",
    "content": "// +build !windows\n\npackage nfqdatapath\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"os/user\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cgnetcls\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\tprocNetTCPFile     = \"/proc/net/tcp\"\n\tuidFieldOffset     = 7\n\tinodeFieldOffset   = 9\n\tprocHeaderLineNum  = 0\n\tportOffset         = 1\n\tipPortOffset       = 1\n\tsockStateOffset    = 3\n\tsockListeningState = \"0A\"\n\thexFormat          = 16\n\tintegerSize        = 64\n\tminimumFields      = 2\n)\n\nfunc getUserName(uid string) (string, error) {\n\n\tu, err := user.LookupId(uid)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn u.Username, nil\n}\n\nfunc (d *defaultRead) readProcNetTCP() (inodeMap map[string]string, userMap map[string]map[string]bool, err error) {\n\n\tbuffer, err := ioutil.ReadFile(procNetTCPFile)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"Failed to read /proc/net/tcp file %s\", err)\n\t}\n\n\tinodeMap = map[string]string{}\n\tuserMap = map[string]map[string]bool{}\n\n\ts := string(buffer)\n\n\tfor cnt, line := range strings.Split(s, \"\\n\") {\n\n\t\tline := strings.Fields(line)\n\t\t// continue if not a valid line\n\t\tif len(line) < uidFieldOffset {\n\t\t\tcontinue\n\t\t}\n\n\t\t/* Look at socket which are in listening state only */\n\t\tif (cnt == procHeaderLineNum) || (line[sockStateOffset] != sockListeningState) {\n\t\t\tcontinue\n\t\t}\n\n\t\t/* Get the UID */\n\t\tuid := line[uidFieldOffset]\n\t\tinode := line[inodeFieldOffset]\n\n\t\tportString := \"\"\n\t\t{\n\t\t\tipPort := strings.Split(line[ipPortOffset], \":\")\n\n\t\t\tif len(ipPort) < minimumFields {\n\t\t\t\tzap.L().Warn(\"Failed to extract port\")\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tportNum, err := strconv.ParseInt(ipPort[portOffset], hexFormat, integerSize)\n\t\t\tif err != nil {\n\t\t\t\tzap.L().Warn(\"failed to parse port\", zap.String(\"port\", 
ipPort[portOffset]))\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tportString = strconv.Itoa(int(portNum))\n\t\t}\n\n\t\tinodeMap[inode] = portString\n\n\t\t// /proc/net/tcp file contains uid. Conversion to\n\t\t// userName is required as they are keys to lookup tables.\n\t\tuserName, err := getUserName(uid)\n\t\tif err != nil {\n\t\t\tzap.L().Debug(\"Error converting to username\", zap.Error(err))\n\t\t\tcontinue\n\t\t}\n\n\t\tportMap := userMap[userName]\n\t\tif portMap == nil {\n\t\t\tportMap = map[string]bool{}\n\t\t}\n\t\tportMap[portString] = true\n\t\tuserMap[userName] = portMap\n\t}\n\n\treturn inodeMap, userMap, nil\n}\n\nfunc (d *defaultRead) readOpenSockFD(pid string) []string {\n\tvar inodes []string\n\tfdPath := \"/proc/\" + pid + \"/fd/\"\n\n\tbuffer, err := ioutil.ReadDir(fdPath)\n\tif err != nil {\n\t\tzap.L().Warn(\"Failed to read\", zap.String(\"file\", fdPath), zap.Error(err))\n\t\treturn nil\n\t}\n\n\tfor _, f := range buffer {\n\t\tlink, err := os.Readlink(fdPath + f.Name())\n\n\t\tif err != nil {\n\t\t\tzap.L().Warn(\"Failed to read\", zap.String(\"file\", fdPath+f.Name()))\n\t\t\tcontinue\n\t\t}\n\t\tif strings.Contains(link, \"socket:\") {\n\t\t\tsocketInode := strings.Split(link, \":\")\n\n\t\t\tif len(socketInode) < minimumFields {\n\t\t\t\tzap.L().Warn(\"Failed to parse socket inodes\")\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tinodeString := socketInode[1]\n\t\t\tinodeString = strings.TrimSuffix(inodeString, \"]\")\n\t\t\tinodeString = strings.TrimPrefix(inodeString, \"[\")\n\n\t\t\tinodes = append(inodes, inodeString)\n\t\t}\n\t}\n\treturn inodes\n}\n\nfunc (d *defaultRead) getCgroupList() []string {\n\treturn cgnetcls.GetCgroupList()\n}\n\nfunc (d *defaultRead) listCgroupProcesses(cgroupname string) ([]string, error) {\n\treturn cgnetcls.ListCgroupProcesses(cgroupname)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/autoport_windows.go",
    "content": "// +build windows\n\npackage nfqdatapath\n\nfunc (d *defaultRead) readProcNetTCP() (inodeMap map[string]string, userMap map[string]map[string]bool, err error) {\n\n\tinodeMap = map[string]string{}\n\tuserMap = map[string]map[string]bool{}\n\n\treturn inodeMap, userMap, nil\n}\n\nfunc (d *defaultRead) readOpenSockFD(pid string) []string {\n\tvar inodes []string\n\n\treturn inodes\n}\n\nfunc (d *defaultRead) getCgroupList() []string {\n\treturn []string{}\n}\n\nfunc (d *defaultRead) listCgroupProcesses(cgroupname string) ([]string, error) {\n\treturn []string{}, nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/countererrors.go",
    "content": "package nfqdatapath\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n)\n\nfunc appSynCounterFromError(err error) counters.CounterType {\n\n\tswitch err {\n\tcase tokens.ErrTokenEncodeFailed:\n\t\treturn counters.ErrSynTokenEncodeFailed\n\tcase tokens.ErrTokenHashFailed:\n\t\treturn counters.ErrSynTokenHashFailed\n\tcase tokens.ErrTokenSignFailed:\n\t\treturn counters.ErrSynTokenSignFailed\n\tcase tokens.ErrSharedSecretMissing:\n\t\treturn counters.ErrSynSharedSecretMissing\n\tcase tokens.ErrInvalidSecret:\n\t\treturn counters.ErrSynInvalidSecret\n\tcase tokens.ErrInvalidSignature:\n\t\treturn counters.ErrSynInvalidSignature\n\tdefault:\n\t\treturn counters.ErrSynTokenFailed\n\t}\n}\n\nfunc appSynAckCounterFromError(err error) counters.CounterType {\n\n\tswitch err {\n\tcase tokens.ErrTokenEncodeFailed:\n\t\treturn counters.ErrSynAckTokenEncodeFailed\n\tcase tokens.ErrTokenHashFailed:\n\t\treturn counters.ErrSynAckTokenHashFailed\n\tcase tokens.ErrTokenSignFailed:\n\t\treturn counters.ErrSynAckTokenSignFailed\n\tcase tokens.ErrSharedSecretMissing:\n\t\treturn counters.ErrSynAckSharedSecretMissing\n\tcase tokens.ErrInvalidSecret:\n\t\treturn counters.ErrSynAckInvalidSecret\n\tcase tokens.ErrInvalidSignature:\n\t\treturn counters.ErrSynAckInvalidSignature\n\tdefault:\n\t\treturn counters.ErrSynAckTokenFailed\n\t}\n}\n\nfunc appAckCounterFromError(err error) counters.CounterType {\n\n\tswitch err {\n\tcase tokens.ErrTokenEncodeFailed:\n\t\treturn counters.ErrAckTokenEncodeFailed\n\tcase tokens.ErrTokenHashFailed:\n\t\treturn counters.ErrAckTokenHashFailed\n\tcase tokens.ErrInvalidSecret:\n\t\treturn counters.ErrAckInvalidSecret\n\tcase tokens.ErrSharedSecretMissing:\n\t\treturn counters.ErrAckSharedSecretMissing\n\tdefault:\n\t\treturn counters.ErrAckTokenFailed\n\t}\n}\n\nfunc netSynCounterFromError(err error) counters.CounterType {\n\n\tswitch err {\n\tcase 
tokens.ErrInvalidTokenLength:\n\t\treturn counters.ErrSynInvalidTokenLength\n\tcase tokens.ErrMissingSignature:\n\t\treturn counters.ErrSynMissingSignature\n\tcase tokens.ErrCompressedTagMismatch:\n\t\treturn counters.ErrSynCompressedTagMismatch\n\tcase tokens.ErrDatapathVersionMismatch:\n\t\treturn counters.ErrSynDatapathVersionMismatch\n\tcase tokens.ErrTokenDecodeFailed:\n\t\treturn counters.ErrSynTokenDecodeFailed\n\tcase tokens.ErrTokenExpired:\n\t\treturn counters.ErrSynTokenExpired\n\tcase tokens.ErrPublicKeyFailed:\n\t\treturn counters.ErrSynPublicKeyFailed\n\tcase tokens.ErrSharedKeyHashFailed:\n\t\treturn counters.ErrSynSharedKeyHashFailed\n\tdefault:\n\t\treturn counters.ErrSynDroppedInvalidToken\n\t}\n}\n\nfunc netSynAckCounterFromError(err error) counters.CounterType {\n\n\tswitch err {\n\tcase tokens.ErrInvalidTokenLength:\n\t\treturn counters.ErrSynAckInvalidTokenLength\n\tcase tokens.ErrMissingSignature:\n\t\treturn counters.ErrSynAckMissingSignature\n\tcase tokens.ErrCompressedTagMismatch:\n\t\treturn counters.ErrSynAckCompressedTagMismatch\n\tcase tokens.ErrDatapathVersionMismatch:\n\t\treturn counters.ErrSynAckDatapathVersionMismatch\n\tcase tokens.ErrTokenDecodeFailed:\n\t\treturn counters.ErrSynAckTokenDecodeFailed\n\tcase tokens.ErrTokenExpired:\n\t\treturn counters.ErrSynAckTokenExpired\n\tcase tokens.ErrPublicKeyFailed:\n\t\treturn counters.ErrSynAckPublicKeyFailed\n\tcase tokens.ErrSharedKeyHashFailed:\n\t\treturn counters.ErrSynAckSharedKeyHashFailed\n\tdefault:\n\t\treturn counters.ErrSynAckInvalidToken\n\t}\n}\n\nfunc netAckCounterFromError(err error) counters.CounterType {\n\tswitch err {\n\tcase tokens.ErrInvalidTokenLength:\n\t\treturn counters.ErrAckInvalidTokenLength\n\tcase tokens.ErrMissingSignature:\n\t\treturn counters.ErrAckMissingSignature\n\tcase tokens.ErrCompressedTagMismatch:\n\t\treturn counters.ErrAckCompressedTagMismatch\n\tcase tokens.ErrDatapathVersionMismatch:\n\t\treturn 
counters.ErrAckDatapathVersionMismatch\n\tcase tokens.ErrTokenDecodeFailed:\n\t\treturn counters.ErrAckTokenDecodeFailed\n\tcase tokens.ErrTokenExpired:\n\t\treturn counters.ErrAckTokenExpired\n\tcase tokens.ErrSharedSecretMissing:\n\t\treturn counters.ErrAckSharedSecretMissing\n\tcase tokens.ErrTokenHashFailed:\n\t\treturn counters.ErrAckTokenHashFailed\n\tcase tokens.ErrSignatureMismatch:\n\t\treturn counters.ErrAckSignatureMismatch\n\tdefault:\n\t\treturn counters.ErrAckInvalidToken\n\t}\n}\n\nfunc appUDPSynCounterFromError(err error) counters.CounterType {\n\n\tswitch err {\n\tcase tokens.ErrTokenEncodeFailed:\n\t\treturn counters.ErrUDPSynTokenEncodeFailed\n\tcase tokens.ErrTokenHashFailed:\n\t\treturn counters.ErrUDPSynTokenHashFailed\n\tcase tokens.ErrTokenSignFailed:\n\t\treturn counters.ErrUDPSynTokenSignFailed\n\tcase tokens.ErrSharedSecretMissing:\n\t\treturn counters.ErrUDPSynSharedSecretMissing\n\tcase tokens.ErrInvalidSecret:\n\t\treturn counters.ErrUDPSynInvalidSecret\n\tcase tokens.ErrInvalidSignature:\n\t\treturn counters.ErrUDPSynInvalidSignature\n\tdefault:\n\t\treturn counters.ErrUDPSynTokenFailed\n\t}\n}\n\nfunc appUDPSynAckCounterFromError(err error) counters.CounterType {\n\n\tswitch err {\n\tcase tokens.ErrTokenEncodeFailed:\n\t\treturn counters.ErrUDPSynAckTokenEncodeFailed\n\tcase tokens.ErrTokenHashFailed:\n\t\treturn counters.ErrUDPSynAckTokenHashFailed\n\tcase tokens.ErrTokenSignFailed:\n\t\treturn counters.ErrUDPSynAckTokenSignFailed\n\tcase tokens.ErrSharedSecretMissing:\n\t\treturn counters.ErrUDPSynAckSharedSecretMissing\n\tcase tokens.ErrInvalidSecret:\n\t\treturn counters.ErrUDPSynAckInvalidSecret\n\tcase tokens.ErrInvalidSignature:\n\t\treturn counters.ErrUDPSynAckInvalidSignature\n\tdefault:\n\t\treturn counters.ErrUDPSynAckTokenFailed\n\t}\n}\n\nfunc appUDPAckCounterFromError(err error) counters.CounterType {\n\n\tswitch err {\n\tcase tokens.ErrTokenEncodeFailed:\n\t\treturn counters.ErrUDPAckTokenEncodeFailed\n\tcase 
tokens.ErrTokenHashFailed:\n\t\treturn counters.ErrUDPAckTokenHashFailed\n\tcase tokens.ErrInvalidSecret:\n\t\treturn counters.ErrUDPAckInvalidSecret\n\tcase tokens.ErrSharedSecretMissing:\n\t\treturn counters.ErrUDPAckSharedSecretMissing\n\tdefault:\n\t\treturn counters.ErrUDPAckTokenFailed\n\t}\n}\n\nfunc netUDPSynCounterFromError(err error) counters.CounterType {\n\n\tswitch err {\n\tcase tokens.ErrInvalidTokenLength:\n\t\treturn counters.ErrUDPSynInvalidTokenLength\n\tcase tokens.ErrMissingSignature:\n\t\treturn counters.ErrUDPSynMissingSignature\n\tcase tokens.ErrCompressedTagMismatch:\n\t\treturn counters.ErrUDPSynCompressedTagMismatch\n\tcase tokens.ErrDatapathVersionMismatch:\n\t\treturn counters.ErrUDPSynDatapathVersionMismatch\n\tcase tokens.ErrTokenDecodeFailed:\n\t\treturn counters.ErrUDPSynTokenDecodeFailed\n\tcase tokens.ErrTokenExpired:\n\t\treturn counters.ErrUDPSynTokenExpired\n\tcase tokens.ErrPublicKeyFailed:\n\t\treturn counters.ErrUDPSynPublicKeyFailed\n\tcase tokens.ErrSharedKeyHashFailed:\n\t\treturn counters.ErrUDPSynSharedKeyHashFailed\n\tdefault:\n\t\treturn counters.ErrUDPSynDroppedInvalidToken\n\t}\n}\n\nfunc netUDPSynAckCounterFromError(err error) counters.CounterType {\n\n\tswitch err {\n\tcase tokens.ErrInvalidTokenLength:\n\t\treturn counters.ErrUDPSynAckInvalidTokenLength\n\tcase tokens.ErrMissingSignature:\n\t\treturn counters.ErrUDPSynAckMissingSignature\n\tcase tokens.ErrCompressedTagMismatch:\n\t\treturn counters.ErrUDPSynAckCompressedTagMismatch\n\tcase tokens.ErrDatapathVersionMismatch:\n\t\treturn counters.ErrUDPSynAckDatapathVersionMismatch\n\tcase tokens.ErrTokenDecodeFailed:\n\t\treturn counters.ErrUDPSynAckTokenDecodeFailed\n\tcase tokens.ErrTokenExpired:\n\t\treturn counters.ErrUDPSynAckTokenExpired\n\tcase tokens.ErrPublicKeyFailed:\n\t\treturn counters.ErrUDPSynAckPublicKeyFailed\n\tcase tokens.ErrSharedKeyHashFailed:\n\t\treturn counters.ErrUDPSynAckSharedKeyHashFailed\n\tdefault:\n\t\treturn 
counters.ErrUDPSynAckInvalidToken\n\t}\n}\n\nfunc netUDPAckCounterFromError(err error) counters.CounterType {\n\tswitch err {\n\tcase tokens.ErrInvalidTokenLength:\n\t\treturn counters.ErrUDPAckInvalidTokenLength\n\tcase tokens.ErrMissingSignature:\n\t\treturn counters.ErrUDPAckMissingSignature\n\tcase tokens.ErrCompressedTagMismatch:\n\t\treturn counters.ErrUDPAckCompressedTagMismatch\n\tcase tokens.ErrDatapathVersionMismatch:\n\t\treturn counters.ErrUDPAckDatapathVersionMismatch\n\tcase tokens.ErrTokenDecodeFailed:\n\t\treturn counters.ErrUDPAckTokenDecodeFailed\n\tcase tokens.ErrTokenExpired:\n\t\treturn counters.ErrUDPAckTokenExpired\n\tcase tokens.ErrSharedSecretMissing:\n\t\treturn counters.ErrUDPAckSharedSecretMissing\n\tcase tokens.ErrTokenHashFailed:\n\t\treturn counters.ErrUDPAckTokenHashFailed\n\tcase tokens.ErrSignatureMismatch:\n\t\treturn counters.ErrUDPAckSignatureMismatch\n\tdefault:\n\t\treturn counters.ErrUDPAckInvalidToken\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/datapath.go",
    "content": "package nfqdatapath\n\n// Go libraries\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os/exec\"\n\t\"strconv\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/blang/semver\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/acls\"\n\tenforcerconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/dnsproxy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/afinetrawsocket\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/nflog\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/tokenaccessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ebpf\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/flowtracking\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\ttpacket 
\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packetprocessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packettracing\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portcache\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n\t\"go.uber.org/zap\"\n)\n\n// DefaultExternalIPTimeout is the default used for the cache for External IPTimeout.\nconst DefaultExternalIPTimeout = \"500ms\"\n\nvar collectCounterInterval = 30 * time.Second\n\n// GetUDPRawSocket is placeholder for createSocket function. It is useful to mock tcp unit tests.\nvar GetUDPRawSocket = afinetrawsocket.CreateSocket\n\ntype debugpacketmessage struct {\n\tMark    int\n\tp       *packet.Packet\n\ttcpConn *connection.TCPConnection\n\tudpConn *connection.UDPConnection\n\terr     error\n\tnetwork bool\n}\n\n// Datapath is the structure holding all information about a connection filter\ntype Datapath struct {\n\n\t// Configuration parameters\n\tfilterQueue    fqconfig.FilterQueue\n\tcollector      collector.EventCollector\n\ttokenAccessor  tokenaccessor.TokenAccessor\n\tservice        packetprocessor.PacketProcessor\n\tscrts          secrets.Secrets\n\tnflogger       nflog.NFLogger\n\tprocMountPoint string\n\n\ttargetNetworks *acls.ACLCache\n\t// Internal structures and caches\n\t// Key=ContextId Value=puContext\n\tpuFromContextID cache.DataStore\n\tpuFromMark      cache.DataStore\n\tpuFromHash      cache.DataStore\n\t// hostPU is the host PU context associated with the datapath.\n\t// There can not be more than one host PU.\n\thostPU *pucontext.PUContext\n\n\tcontextIDFromTCPPort 
*portcache.PortCache\n\tcontextIDFromUDPPort *portcache.PortCache\n\t// For remotes this is a reverse link to the context\n\tpuFromIP *pucontext.PUContext\n\n\t// tcpClient and tcpServer are connection caches with the key being the flow hash\n\t// and the value being the connection object.\n\ttcpClient connection.TCPCache\n\ttcpServer connection.TCPCache\n\n\ttcpConnectionExpirationNotifier func(*connection.TCPConnection)\n\n\tudpSourcePortConnectionCache cache.DataStore\n\n\t// Hash on the full five-tuple and return the connection.\n\t// These connections are auto-expired after 60 seconds of inactivity.\n\tudpAppOrigConnectionTracker  cache.DataStore\n\tudpAppReplyConnectionTracker cache.DataStore\n\tudpNetOrigConnectionTracker  cache.DataStore\n\tudpNetReplyConnectionTracker cache.DataStore\n\tudpNatConnectionTracker      cache.DataStore\n\tudpFinPacketTracker          cache.DataStore\n\t// CacheTimeout used for Trireme auto-detection\n\tExternalIPCacheTimeout time.Duration\n\n\t// Packet tracing cache: we don't keep this in pucontext since pucontext gets recreated on every policy update and we need to persist across updates\n\tpacketTracingCache cache.DataStore\n\n\t// mode captures the mode of the enforcer\n\tmode constants.ModeType\n\n\t// ack size\n\tackSize uint32\n\n\t// conntrack is the conntrack client\n\tconntrack flowtracking.FlowClient\n\tdnsProxy  dnsproxy.DNSProxy\n\n\tmutualAuthorization bool\n\tpacketLogs          bool\n\n\t// udp socket fd for application.\n\tudpSocketWriter afinetrawsocket.SocketWriter\n\n\tpuToPortsMap map[string]map[string]bool\n\t// bpf module\n\tbpf ebpf.BPFModule\n\n\tagentVersion semver.Version\n\n\tsecretsLock        sync.RWMutex\n\tlogLevelLock       sync.RWMutex\n\ttargetNetworksLock sync.RWMutex\n\n\t// defines if serviceMesh is enabled and tells which type of serviceMesh is enabled\n\tserviceMeshType policy.ServiceMesh\n}\n\ntype tracingCacheEntry struct {\n\tdirection packettracing.TracingDirection\n}\n\nfunc createPolicy(networks 
[]string) policy.IPRuleList {\n\tvar rules policy.IPRuleList\n\n\tf := policy.FlowPolicy{\n\t\tAction: policy.Accept,\n\t}\n\n\taddresses := []string{}\n\n\taddresses = append(addresses, networks...)\n\n\tiprule := policy.IPRule{\n\t\tAddresses: addresses,\n\t\tPorts:     []string{\"0:65535\"},\n\t\tProtocols: []string{constants.TCPProtoNum},\n\t\tPolicy:    &f,\n\t}\n\n\trules = append(rules, iprule)\n\treturn rules\n}\n\nfunc (d *Datapath) cachePut(cache connection.TCPCache, key string, conn *connection.TCPConnection) {\n\tcache.Put(key, conn)\n\tconn.StartTimer(func() {\n\t\tcache.Remove(key)\n\t\td.tcpConnectionExpirationNotifier(conn)\n\t})\n}\n\nfunc (d *Datapath) cacheGet(cache connection.TCPCache, key string) (*connection.TCPConnection, bool) {\n\treturn cache.Get(key)\n}\n\nfunc (d *Datapath) cacheRemove(cache connection.TCPCache, key string) {\n\tconn, exists := cache.Get(key)\n\tif exists {\n\t\tconn.StopTimer()\n\t\tcache.Remove(key)\n\t}\n}\n\nconst waitBeforeRemovingConn = 5 * time.Second\n\n// New will create a new data path structure. It instantiates the data stores\n// needed to track sessions. The data path is started with a different call.\n// Only required parameters must be provided. 
The rest are pre-populated with defaults.\nfunc New(\n\tmutualAuth bool,\n\tfilterQueue fqconfig.FilterQueue,\n\tcollector collector.EventCollector,\n\tserverID string,\n\tvalidity time.Duration,\n\tsecrets secrets.Secrets,\n\tmode constants.ModeType,\n\tprocMountPoint string,\n\tExternalIPCacheTimeout time.Duration,\n\tpacketLogs bool,\n\ttokenaccessor tokenaccessor.TokenAccessor,\n\tpuFromContextID cache.DataStore,\n\tcfg *runtime.Configuration,\n\tisBPFEnabled bool,\n\tagentVersion semver.Version,\n\tserviceMeshType policy.ServiceMesh,\n) *Datapath {\n\n\tif ExternalIPCacheTimeout <= 0 {\n\t\tvar err error\n\t\tExternalIPCacheTimeout, err = time.ParseDuration(enforcerconstants.DefaultExternalIPTimeout)\n\t\tif err != nil {\n\t\t\tExternalIPCacheTimeout = time.Second\n\t\t}\n\t}\n\n\tvar bpf ebpf.BPFModule\n\n\tif isBPFEnabled {\n\t\tif bpf = ebpf.LoadBPF(); bpf != nil {\n\t\t\tzap.L().Info(\"eBPF is enabled in the system\")\n\n\t\t\tcmd := exec.Command(\"aporeto-conntrack\", \"-F\")\n\t\t\tif err := cmd.Run(); err != nil {\n\t\t\t\tzap.L().Error(\"Failed to flush conntrack\", zap.Error(err))\n\t\t\t}\n\t\t} else {\n\t\t\tzap.L().Info(\"eBPF is disabled as it is not supported\")\n\t\t}\n\t} else {\n\t\tzap.L().Info(\"eBPF is disabled in the configuration\")\n\t}\n\n\tif mode == constants.RemoteContainer || mode == constants.LocalServer {\n\t\t// Make conntrack liberal for TCP\n\t\tadjustConntrack(mode)\n\t}\n\n\tcontextIDFromTCPPort := portcache.NewPortCache(\"contextIDFromTCPPort\")\n\tcontextIDFromUDPPort := portcache.NewPortCache(\"contextIDFromUDPPort\")\n\n\tudpSocketWriter, err := GetUDPRawSocket(afinetrawsocket.ApplicationRawSocketMark, \"udp\")\n\n\tif err != nil {\n\t\tzap.L().Error(\"Unable to create raw socket for udp packet transmission\", zap.Error(err))\n\t}\n\n\td := &Datapath{}\n\td.puFromMark = cache.NewCache(\"puFromMark\")\n\td.puFromHash = cache.NewCache(\"puFromHash\")\n\td.contextIDFromTCPPort = contextIDFromTCPPort\n\td.contextIDFromUDPPort = 
contextIDFromUDPPort\n\n\td.puFromContextID = puFromContextID\n\td.tcpClient = connection.NewTCPConnectionCache()\n\td.tcpServer = connection.NewTCPConnectionCache()\n\td.tcpConnectionExpirationNotifier = d.tcpConnectionExpirationFunc\n\n\td.udpSourcePortConnectionCache = cache.NewCacheWithExpiration(\"udpSourcePortConnectionCache\", time.Second*60)\n\td.udpAppOrigConnectionTracker = cache.NewCacheWithExpiration(\"udpAppOrigConnectionTracker\", time.Second*60)\n\td.udpAppReplyConnectionTracker = cache.NewCacheWithExpiration(\"udpAppReplyConnectionTracker\", time.Second*60)\n\td.udpNetOrigConnectionTracker = cache.NewCacheWithExpiration(\"udpNetOrigConnectionTracker\", time.Second*60)\n\td.udpNetReplyConnectionTracker = cache.NewCacheWithExpiration(\"udpNetReplyConnectionTracker\", time.Second*60)\n\td.udpNatConnectionTracker = cache.NewCacheWithExpiration(\"udpNatConnectionTracker\", time.Second*60)\n\td.udpFinPacketTracker = cache.NewCacheWithExpiration(\"udpFinPacketTracker\", time.Second*60)\n\td.packetTracingCache = cache.NewCache(\"PacketTracingCache\")\n\td.targetNetworks = acls.NewACLCache()\n\td.ExternalIPCacheTimeout = ExternalIPCacheTimeout\n\td.filterQueue = filterQueue\n\td.mutualAuthorization = mutualAuth\n\td.collector = collector\n\td.tokenAccessor = tokenaccessor\n\td.scrts = secrets\n\td.ackSize = secrets.AckSize()\n\td.mode = mode\n\td.procMountPoint = procMountPoint\n\td.packetLogs = packetLogs\n\td.udpSocketWriter = udpSocketWriter\n\td.puToPortsMap = map[string]map[string]bool{}\n\td.bpf = bpf\n\td.agentVersion = agentVersion\n\td.serviceMeshType = serviceMeshType\n\n\tif err = d.SetTargetNetworks(cfg); err != nil {\n\t\tzap.L().Error(\"Error adding target networks to the ACLs\", zap.Error(err))\n\t}\n\n\td.nflogger = nflog.NewNFLogger(11, 10, d.puContextDelegate, collector)\n\n\tephemeralkeys.UpdateDatapathSecrets(secrets)\n\n\tif mode != constants.RemoteContainer {\n\t\tgo d.autoPortDiscovery()\n\t}\n\n\treturn d\n}\n\nfunc (d *Datapath) 
collectCounters() {\n\n\tkeysList := d.puFromContextID.KeyList()\n\tfor _, keys := range keysList {\n\t\tval, err := d.puFromContextID.Get(keys)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tcounters := val.(*pucontext.PUContext).Counters().GetErrorCounters()\n\t\td.collector.CollectCounterEvent(\n\t\t\t&collector.CounterReport{\n\t\t\t\tPUID:      val.(*pucontext.PUContext).ManagementID(),\n\t\t\t\tCounters:  counters,\n\t\t\t\tNamespace: val.(*pucontext.PUContext).ManagementNamespace(),\n\t\t\t})\n\t}\n\n\tcounters := counters.GetErrorCounters()\n\td.collector.CollectCounterEvent(\n\t\t&collector.CounterReport{\n\t\t\tPUID:      \"\",\n\t\t\tCounters:  counters,\n\t\t\tNamespace: \"\",\n\t\t})\n}\n\nfunc (d *Datapath) counterCollector(ctx context.Context) {\n\n\tfor {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\td.collectCounters()\n\t\t\treturn\n\t\tcase <-time.After(collectCounterInterval):\n\t\t\td.collectCounters()\n\t\t}\n\t}\n}\n\nfunc (d *Datapath) reportErrorCounters(pu *pucontext.PUContext) {\n\n\tcounters := pu.Counters().GetErrorCounters()\n\td.collector.CollectCounterEvent(&collector.CounterReport{\n\t\tPUID:      pu.ManagementID(),\n\t\tCounters:  counters,\n\t\tNamespace: pu.ManagementNamespace(),\n\t})\n}\n\n// Enforce implements the Enforce interface method and configures the data path for a new PU\nfunc (d *Datapath) Enforce(ctx context.Context, contextID string, puInfo *policy.PUInfo) error {\n\t// Always create a new PU context\n\tpu, err := pucontext.NewPU(contextID, puInfo, d.tokenAccessor, d.ExternalIPCacheTimeout)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"error creating new pu: %s\", err)\n\t}\n\n\t// Cache PUs for retrieval based on packet information\n\tif pu.Type() != common.ContainerPU {\n\n\t\tmark, tcpPorts, udpPorts := pu.GetProcessKeys()\n\t\td.puFromMark.AddOrUpdate(mark, pu)\n\n\t\tfor _, port := range tcpPorts {\n\t\t\tif port == \"0\" {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tportSpec, err := 
portspec.NewPortSpecFromString(port, contextID)\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tif puInfo.Runtime.PUType() == common.HostPU {\n\t\t\t\td.contextIDFromTCPPort.AddPortSpecToEnd(portSpec)\n\t\t\t} else {\n\t\t\t\td.contextIDFromTCPPort.AddPortSpec(portSpec)\n\t\t\t}\n\t\t}\n\n\t\tfor _, port := range udpPorts {\n\n\t\t\tportSpec, err := portspec.NewPortSpecFromString(port, contextID)\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// check for host pu and add its ports to the end.\n\t\t\tif puInfo.Runtime.PUType() == common.HostPU {\n\t\t\t\td.contextIDFromUDPPort.AddPortSpecToEnd(portSpec)\n\t\t\t\td.hostPU = pu\n\t\t\t} else {\n\t\t\t\td.contextIDFromUDPPort.AddPortSpec(portSpec)\n\t\t\t}\n\t\t}\n\n\t} else {\n\t\td.puFromIP = pu\n\t}\n\n\toldPU, err := d.puFromContextID.Get(contextID)\n\tif err != nil {\n\t\t// start the dns proxy server for the first time.\n\t\tif err := d.dnsProxy.StartDNSServer(ctx, contextID, puInfo.Policy.DNSProxyPort()); err != nil {\n\t\t\tzap.L().Error(\"could not start dns server for PU\", zap.String(\"contextID\", contextID), zap.Error(err))\n\t\t}\n\t} else {\n\t\told := oldPU.(*pucontext.PUContext)\n\t\told.StopProcessing()\n\t\td.reportErrorCounters(old)\n\t}\n\tif err := d.dnsProxy.Enforce(ctx, contextID, puInfo); err != nil {\n\t\tzap.L().Error(\"Unable to update dns proxy config\", zap.Error(err))\n\t}\n\t// Cache PU to its contextID hash.\n\td.puFromHash.AddOrUpdate(pu.HashID(), pu)\n\n\t// Cache PU from contextID for management and policy updates\n\td.puFromContextID.AddOrUpdate(contextID, pu)\n\n\tif d.dnsProxy != nil {\n\t\tif err := d.dnsProxy.SyncWithPlatformCache(ctx, pu); err != nil {\n\t\t\tzap.L().Warn(\"error syncing with DNS cache\", zap.Error(err))\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// Unenforce removes the configuration for the given PU\nfunc (d *Datapath) Unenforce(ctx context.Context, contextID string) error {\n\n\tvar err error\n\n\tpuContext, err := 
d.puFromContextID.Get(contextID)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"contextID not found in enforcer: %s\", err)\n\t}\n\t// The PU is being unenforced. Collect its counters\n\tpu := puContext.(*pucontext.PUContext)\n\t// This context pointer is about to get lost. Reclaim its counters\n\td.reportErrorCounters(pu)\n\n\t// Cleanup the mark information\n\tif pu.Mark() != \"\" {\n\t\tif err = d.puFromMark.Remove(pu.Mark()); err != nil {\n\t\t\tzap.L().Debug(\"Unable to remove cache entry during unenforcement\",\n\t\t\t\tzap.String(\"Mark\", pu.Mark()),\n\t\t\t\tzap.Error(err),\n\t\t\t)\n\t\t}\n\t}\n\n\t// Cleanup the port cache\n\tfor _, port := range pu.TCPPorts() {\n\t\tif port == \"0\" {\n\t\t\tcontinue\n\t\t}\n\n\t\tif err := d.contextIDFromTCPPort.RemoveStringPorts(port); err != nil {\n\t\t\tzap.L().Debug(\"Unable to remove cache entry during unenforcement\",\n\t\t\t\tzap.String(\"TCPPort\", port),\n\t\t\t\tzap.Error(err),\n\t\t\t)\n\t\t}\n\t}\n\n\tfor _, port := range pu.UDPPorts() {\n\t\tif port == \"0\" {\n\t\t\tcontinue\n\t\t}\n\n\t\tif err := d.contextIDFromUDPPort.RemoveStringPorts(port); err != nil {\n\t\t\tzap.L().Debug(\"Unable to remove cache entry during unenforcement\",\n\t\t\t\tzap.String(\"UDPPort\", port),\n\t\t\t\tzap.Error(err),\n\t\t\t)\n\t\t}\n\t}\n\n\t// Cleanup the contextID hash cache.\n\tif err := d.puFromHash.RemoveWithDelay(pu.HashID(), 10*time.Second); err != nil {\n\t\tzap.L().Warn(\"unable to remove pucontext from hash cache\",\n\t\t\tzap.String(\"hash\", pu.HashID()),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\t// Cleanup the contextID cache\n\tif err := d.puFromContextID.RemoveWithDelay(contextID, 10*time.Second); err != nil {\n\t\tzap.L().Warn(\"Unable to remove context from cache\",\n\t\t\tzap.String(\"contextID\", contextID),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\tif err := d.dnsProxy.Unenforce(ctx, contextID); err != nil {\n\t\tzap.L().Warn(\"Unable to unenforce dnsproxy\",\n\t\t\tzap.String(\"contextID\", 
contextID),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\treturn nil\n}\n\n// SetTargetNetworks sets new target networks used by datapath\nfunc (d *Datapath) SetTargetNetworks(cfg *runtime.Configuration) error {\n\n\tvar err error\n\tnetworks := cfg.TCPTargetNetworks\n\n\tif len(networks) == 0 {\n\t\tnetworks = []string{\"0.0.0.0/1\", \"128.0.0.0/1\", \"::/0\"}\n\t}\n\n\ttargetNetworks := acls.NewACLCache()\n\ttargetacl := createPolicy(networks)\n\n\tif err = targetNetworks.AddRuleList(targetacl); err == nil {\n\t\td.targetNetworksLock.Lock()\n\t\td.targetNetworks = targetNetworks\n\t\td.targetNetworksLock.Unlock()\n\t\treturn nil\n\t}\n\n\treturn err\n}\n\n// GetBPFObject returns the bpf object\nfunc (d *Datapath) GetBPFObject() ebpf.BPFModule {\n\treturn d.bpf\n}\n\n// GetFilterQueue returns the filter queues used by the data path\nfunc (d *Datapath) GetFilterQueue() fqconfig.FilterQueue {\n\n\treturn d.filterQueue\n}\n\n// Run starts the application and network interceptors\nfunc (d *Datapath) Run(ctx context.Context) error {\n\n\tzap.L().Debug(\"Start datapath tracking and network interceptor\", zap.Int(\"mode\", int(d.mode)))\n\n\tif d.conntrack == nil {\n\t\tconntrackClient, err := flowtracking.NewClient(ctx)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\td.conntrack = conntrackClient\n\t}\n\n\tif d.dnsProxy == nil {\n\t\td.dnsProxy = dnsproxy.New(ctx, d.puFromContextID, d.conntrack, d.collector)\n\t}\n\n\td.startInterceptors(ctx)\n\tgo d.nflogger.Run(ctx)\n\tgo d.counterCollector(ctx)\n\treturn nil\n}\n\n// UpdateSecrets updates the secrets used for signing communication between trireme instances\nfunc (d *Datapath) UpdateSecrets(s secrets.Secrets) error {\n\n\td.secretsLock.Lock()\n\td.scrts = s\n\td.secretsLock.Unlock()\n\n\tephemeralkeys.UpdateDatapathSecrets(s)\n\treturn nil\n}\n\nfunc (d *Datapath) secrets() secrets.Secrets {\n\n\td.secretsLock.RLock()\n\tdefer d.secretsLock.RUnlock()\n\n\treturn d.scrts\n}\n\n// PacketLogsEnabled returns true if the 
packet logs are enabled.\nfunc (d *Datapath) PacketLogsEnabled() bool {\n\td.logLevelLock.RLock()\n\tdefer d.logLevelLock.RUnlock()\n\n\treturn d.packetLogs\n}\n\n// SetLogLevel sets log level.\nfunc (d *Datapath) SetLogLevel(level constants.LogLevel) error {\n\n\td.logLevelLock.Lock()\n\tdefer d.logLevelLock.Unlock()\n\n\td.packetLogs = false\n\tif level == constants.Trace {\n\t\td.packetLogs = true\n\t}\n\n\treturn nil\n}\n\n// CleanUp implements the cleanup interface.\nfunc (d *Datapath) CleanUp() error {\n\n\tif d.bpf != nil {\n\t\td.bpf.Cleanup()\n\t}\n\td.cleanupPlatform()\n\n\treturn nil\n}\n\nfunc (d *Datapath) puContextDelegate(hash string) (*pucontext.PUContext, error) {\n\n\tpu, err := d.puFromHash.Get(hash)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to find pucontext in cache with hash %s: %v\", hash, err)\n\t}\n\n\treturn pu.(*pucontext.PUContext), nil\n}\n\nfunc (d *Datapath) reportFlow(p *packet.Packet, src, dst *collector.EndPoint, context *pucontext.PUContext,\n\tmode string, report *policy.FlowPolicy, actual *policy.FlowPolicy,\n\tsourceController string, destinationController string) {\n\n\tc := &collector.FlowRecord{\n\t\tContextID:   context.ID(),\n\t\tSource:      *src,\n\t\tDestination: *dst,\n\t\t//\n\t\tAction:                actual.Action,\n\t\tDropReason:            mode,\n\t\tPolicyID:              actual.PolicyID,\n\t\tL4Protocol:            p.IPProto(),\n\t\tNamespace:             context.ManagementNamespace(),\n\t\tCount:                 1,\n\t\tSourceController:      sourceController,\n\t\tDestinationController: destinationController,\n\t\tRuleName:              actual.RuleName,\n\t}\n\n\tif context.Annotations() != nil {\n\t\tc.Tags = context.Annotations().GetSlice()\n\t}\n\n\tif report.ObserveAction.Observed() {\n\t\tc.ObservedAction = report.Action\n\t\tc.ObservedPolicyID = report.PolicyID\n\t\tc.ObservedActionType = report.ObserveAction\n\t}\n\n\td.collector.CollectFlowEvent(c)\n}\n\n// contextFromIP returns the PU 
context from the default IP if remote. Otherwise\n// it returns the context from the port or mark values of the packet. Synack\n// packets are again special and the flow is reversed. If a container doesn't supply\n// its IP information, we use the default IP. This will only work with remotes\n// and Linux processes.\nfunc (d *Datapath) contextFromIP(app bool, mark string, port uint16, protocol uint8) (*pucontext.PUContext, error) {\n\n\tif d.puFromIP != nil {\n\t\treturn d.puFromIP, nil\n\t}\n\n\tif protocol == packet.IPProtocolICMP {\n\t\tif d.hostPU != nil {\n\t\t\treturn d.hostPU, nil\n\t\t}\n\t}\n\n\tif app {\n\t\tpu, err := d.puFromMark.Get(mark)\n\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"Unable to find context for application flow with mark\",\n\t\t\t\tzap.String(\"mark\", mark),\n\t\t\t\tzap.Int(\"protocol\", int(protocol)),\n\t\t\t\tzap.Int(\"port\", int(port)),\n\t\t\t)\n\t\t\treturn nil, counters.CounterError(counters.ErrMarkNotFound, errors.New(\"Mark Not Found\"))\n\t\t}\n\t\treturn pu.(*pucontext.PUContext), nil\n\t}\n\n\t// Network packets for non container traffic\n\tif protocol == packet.IPProtocolTCP {\n\t\tcontextID, err := d.contextIDFromTCPPort.GetSpecValueFromPort(port)\n\t\tif err != nil {\n\t\t\tzap.L().Debug(\"Could not find PU context for TCP server port\", zap.Uint16(\"port\", port))\n\t\t\treturn nil, counters.CounterError(counters.ErrPortNotFound, fmt.Errorf(\" TCP Port Not Found %v\", port))\n\t\t}\n\n\t\tpu, err := d.puFromContextID.Get(contextID)\n\t\tif err != nil {\n\t\t\treturn nil, counters.CounterError(counters.ErrContextIDNotFound, err)\n\t\t}\n\t\treturn pu.(*pucontext.PUContext), nil\n\t}\n\n\t// This is the UDP case\n\tcontextID, err := d.contextIDFromUDPPort.GetSpecValueFromPort(port)\n\tif err != nil {\n\t\tzap.L().Debug(\"Could not find PU context for UDP server port\", zap.Uint16(\"port\", port))\n\t\treturn nil, counters.CounterError(counters.ErrPortNotFound, fmt.Errorf(\"UDP Port Not Found %v\", port))\n\t}\n\n\tpu, 
err := d.puFromContextID.Get(contextID)\n\tif err != nil {\n\t\treturn nil, counters.CounterError(counters.ErrContextIDNotFound, fmt.Errorf(\"contextID %s not found\", contextID))\n\t}\n\n\treturn pu.(*pucontext.PUContext), nil\n}\n\n// EnableDatapathPacketTracing enables nfq datapath packet tracing\nfunc (d *Datapath) EnableDatapathPacketTracing(ctx context.Context, contextID string, direction packettracing.TracingDirection, interval time.Duration) error {\n\n\tif _, err := d.puFromContextID.Get(contextID); err != nil {\n\t\treturn fmt.Errorf(\"contextID %s does not exist\", contextID)\n\t}\n\td.packetTracingCache.AddOrUpdate(contextID, &tracingCacheEntry{\n\t\tdirection: direction,\n\t})\n\tgo func() {\n\t\t<-time.After(interval)\n\t\td.packetTracingCache.Remove(contextID) // nolint\n\t}()\n\n\treturn nil\n}\n\n// EnableIPTablesPacketTracing enables iptables -j TRACE for the particular PU, which is a much wider packet stream.\nfunc (d *Datapath) EnableIPTablesPacketTracing(ctx context.Context, contextID string, interval time.Duration) error {\n\treturn nil\n}\n\n// DebugCollect collects debug information for remote enforcers\nfunc (d *Datapath) DebugCollect(ctx context.Context, contextID string, debugConfig *policy.DebugConfig) error {\n\t// this is handled in remoteenforcer\n\treturn nil\n}\n\nfunc (d *Datapath) collectUDPPacket(msg *debugpacketmessage) {\n\tvar value interface{}\n\tvar err error\n\treport := &collector.PacketReport{\n\t\tPayload: make([]byte, 64),\n\t}\n\tif msg.udpConn == nil {\n\t\tif d.puFromIP == nil {\n\t\t\treturn\n\t\t}\n\t\tif value, err = d.packetTracingCache.Get(d.puFromIP.ID()); err != nil {\n\t\t\t// not being traced, return\n\t\t\treturn\n\t\t}\n\n\t\treport.Claims = d.puFromIP.Identity().GetSlice()\n\t\treport.PUID = d.puFromIP.ManagementID()\n\t\treport.Namespace = d.puFromIP.ManagementNamespace()\n\t\treport.Encrypt = false\n\n\t} else {\n\t\t// udpConn is not nil\n\t\tif value, err = d.packetTracingCache.Get(msg.udpConn.Context.ID()); 
err != nil {\n\t\t\treturn\n\t\t}\n\t\treport.Encrypt = msg.udpConn.ServiceConnection\n\t\treport.Claims = msg.udpConn.Context.Identity().GetSlice()\n\t\treport.PUID = msg.udpConn.Context.ManagementID()\n\t\treport.Namespace = msg.udpConn.Context.ManagementNamespace()\n\t}\n\n\tif msg.network && !packettracing.IsNetworkPacketTraced(value.(*tracingCacheEntry).direction) {\n\t\treturn\n\t} else if !msg.network && !packettracing.IsApplicationPacketTraced(value.(*tracingCacheEntry).direction) {\n\t\treturn\n\t}\n\treport.Protocol = int(packet.IPProtocolUDP)\n\treport.DestinationIP = msg.p.DestinationAddress().String()\n\treport.SourceIP = msg.p.SourceAddress().String()\n\treport.DestinationPort = int(msg.p.DestPort())\n\treport.SourcePort = int(msg.p.SourcePort())\n\tif msg.err != nil {\n\t\treport.DropReason = msg.err.Error()\n\t\treport.Event = packettracing.PacketDropped\n\t} else {\n\t\treport.DropReason = \"\"\n\t\treport.Event = packettracing.PacketReceived\n\t}\n\treport.Length = int(msg.p.IPTotalLen())\n\treport.Mark = msg.Mark\n\treport.PacketID, _ = strconv.Atoi(msg.p.ID())\n\treport.TriremePacket = true\n\tbuf := msg.p.GetBuffer(0)\n\tif len(buf) > 64 {\n\t\tcopy(report.Payload, msg.p.GetBuffer(0)[0:64])\n\t} else {\n\t\tcopy(report.Payload, msg.p.GetBuffer(0))\n\t}\n\n\td.collector.CollectPacketEvent(report)\n}\n\nfunc (d *Datapath) collectTCPPacket(msg *debugpacketmessage) {\n\tvar value interface{}\n\tvar err error\n\tvar report *collector.PacketReport\n\n\tif msg.tcpConn == nil {\n\t\tif d.puFromIP == nil {\n\t\t\treturn\n\t\t}\n\n\t\tif value, err = d.packetTracingCache.Get(d.puFromIP.ID()); err != nil {\n\t\t\t//not being traced return\n\t\t\treturn\n\t\t}\n\n\t\treport = &collector.PacketReport{}\n\t\treport.Claims = d.puFromIP.Identity().GetSlice()\n\t\treport.PUID = d.puFromIP.ManagementID()\n\t\treport.Encrypt = false\n\t\treport.Namespace = d.puFromIP.ManagementNamespace()\n\n\t} else {\n\n\t\tif value, err = 
d.packetTracingCache.Get(msg.tcpConn.Context.ID()); err != nil {\n\t\t\t// not being traced, return\n\t\t\treturn\n\t\t}\n\n\t\treport = &collector.PacketReport{}\n\t\treport.Encrypt = msg.tcpConn.ServiceConnection\n\t\treport.Claims = msg.tcpConn.Context.Identity().GetSlice()\n\t\treport.PUID = msg.tcpConn.Context.ManagementID()\n\t\treport.Namespace = msg.tcpConn.Context.ManagementNamespace()\n\t}\n\n\tif msg.network && !packettracing.IsNetworkPacketTraced(value.(*tracingCacheEntry).direction) {\n\t\treturn\n\t} else if !msg.network && !packettracing.IsApplicationPacketTraced(value.(*tracingCacheEntry).direction) {\n\t\treturn\n\t}\n\n\treport.TCPFlags = int(msg.p.GetTCPFlags())\n\treport.Protocol = int(packet.IPProtocolTCP)\n\treport.DestinationIP = msg.p.DestinationAddress().String()\n\treport.SourceIP = msg.p.SourceAddress().String()\n\treport.DestinationPort = int(msg.p.DestPort())\n\treport.SourcePort = int(msg.p.SourcePort())\n\tif msg.err != nil {\n\t\treport.DropReason = msg.err.Error()\n\t\treport.Event = packettracing.PacketDropped\n\t} else {\n\t\treport.DropReason = \"\"\n\t\treport.Event = packettracing.PacketReceived\n\t}\n\treport.Length = int(msg.p.IPTotalLen())\n\treport.Mark = msg.Mark\n\treport.PacketID, _ = strconv.Atoi(msg.p.ID())\n\treport.TriremePacket = true\n\t// Memory allocation must be done only if we are sure we are transmitting\n\t// the report. 
Otherwise it leads to unnecessary\n\t// memory operations that affect performance\n\treport.Payload = make([]byte, 64)\n\tbuf := msg.p.GetBuffer(0)\n\tif len(buf) > 64 {\n\t\tcopy(report.Payload, msg.p.GetBuffer(0)[0:64])\n\t} else {\n\t\tcopy(report.Payload, msg.p.GetBuffer(0))\n\t}\n\n\td.collector.CollectPacketEvent(report)\n}\n\n// Ping runs a ping to the given config.\nfunc (d *Datapath) Ping(ctx context.Context, contextID string, pingConfig *policy.PingConfig) error {\n\n\tif pingConfig == nil {\n\t\treturn nil\n\t}\n\n\titem, err := d.puFromContextID.Get(contextID)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to find context with ID %s in cache: %v\", contextID, err)\n\t}\n\n\tcontext, ok := item.(*pucontext.PUContext)\n\tif !ok {\n\t\treturn fmt.Errorf(\"invalid pu context: %v\", contextID)\n\t}\n\n\treturn d.initiatePingHandshake(ctx, context, pingConfig)\n}\n\n// tcpConnectionExpirationFunc handles processing the expiration of an element\nfunc (d *Datapath) tcpConnectionExpirationFunc(conn *connection.TCPConnection) {\n\n\tif conn.PingEnabled() {\n\n\t\tif !conn.PingConfig.SocketClosed() {\n\t\t\tif err := close(conn); err != nil {\n\t\t\t\tzap.L().Warn(\"unable to close socket\", zap.Reflect(\"fd\", conn.PingConfig.SocketFd()), zap.Error(err))\n\t\t\t}\n\t\t}\n\n\t\tif d.collector != nil && conn.PingConfig.PingReport() != nil {\n\t\t\td.collector.CollectPingEvent(conn.PingConfig.PingReport())\n\t\t}\n\n\t\treturn\n\t}\n\n\tif conn.GetState() == connection.TCPSynSend || conn.GetState() == connection.TCPSynAckSend {\n\n\t\treason := conn.GetReportReason()\n\t\tif reason == \"\" {\n\t\t\treason = \"expired\"\n\t\t}\n\n\t\tconnectionReport := &collector.ConnectionExceptionReport{\n\t\t\tTimestamp:       time.Now(),\n\t\t\tPUID:            conn.Context.ManagementID(),\n\t\t\tNamespace:       conn.Context.ManagementNamespace(),\n\t\t\tProtocol:        tpacket.IPProtocolTCP,\n\t\t\tSourceIP:        
conn.TCPtuple.SourceAddress.String(),\n\t\t\tDestinationIP:   conn.TCPtuple.DestinationAddress.String(),\n\t\t\tDestinationPort: conn.TCPtuple.DestinationPort,\n\t\t\tReason:          reason,\n\t\t\tValue:           conn.GetCounterAndReset(),\n\t\t\tState:           conn.GetStateString(),\n\t\t}\n\n\t\td.collector.CollectConnectionExceptionReport(connectionReport)\n\t}\n\n\tconn.Cleanup()\n}\n\n// GetServiceMeshType gets the service mesh that is enabled on this datapath\nfunc (d *Datapath) GetServiceMeshType() policy.ServiceMesh {\n\treturn d.serviceMeshType\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/datapath_common_test.go",
    "content": "package nfqdatapath\n\nimport (\n\t\"context\"\n\t\"crypto/ecdsa\"\n\t\"time\"\n\n\t\"github.com/blang/semver\"\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector/mockcollector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\tenforcerconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/dnsproxy/mockdnsproxy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/afinetrawsocket\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/tokenaccessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/tokenaccessor/mocktokenaccessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/flowtracking/mockflowclient\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/mocksecrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\nconst (\n\ttestSrcIP = \"10.1.10.76\"\n\ttestDstIP = \"164.67.228.152\"\n)\n\nvar (\n\tdebug bool\n)\n\nfunc procSetValueMock(procName string, value int) error {\n\treturn nil\n}\n\n// newWithDefaults creates a new data path with most 
things used by default\nfunc newWithDefaults(\n\tctrl *gomock.Controller,\n\tserverID string,\n\tcollector collector.EventCollector,\n\tsecrets secrets.Secrets,\n\tmode constants.ModeType,\n\ttargetNetworks []string,\n\ttestExpirationNotifier bool,\n) *Datapath {\n\n\t// Override so that you don't have to run as root\n\tprocSetValuePtr = procSetValueMock\n\n\tmockTokenAccessor := mocktokenaccessor.NewMockTokenAccessor(ctrl)\n\tflowclient := mockflowclient.NewMockFlowClient(ctrl)\n\tpuFromContextID := cache.NewCache(\"puFromContextID\")\n\tmockDNS := mockdnsproxy.NewMockDNSProxy(ctrl)\n\n\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().CreateSynAckPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().Randomize(gomock.Any(), gomock.Any()).AnyTimes()\n\tmockTokenAccessor.EXPECT().ParseAckToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().ParsePacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Do(\n\t\tfunc(privateKey, data, secrets, c, b interface{}) interface{} {\n\n\t\t\tclaims := c.(*tokens.ConnectionClaims)\n\t\t\tclaims.T = policy.NewTagStore()\n\t\t\tclaims.T.AppendKeyValue(enforcerconstants.TransmitterLabel, \"value\")\n\t\t\treturn nil\n\t\t},\n\t).Return(nil, &claimsheader.ClaimsHeader{}, &pkiverifier.PKIControllerInfo{}, []byte(\"remoteNonce\"), \"\", false, nil).AnyTimes()\n\n\tmockDNS.EXPECT().StartDNSServer(gomock.Any(), gomock.Any(), gomock.Any()).AnyTimes()\n\tmockDNS.EXPECT().Enforce(gomock.Any(), gomock.Any(), 
gomock.Any()).AnyTimes()\n\tmockDNS.EXPECT().SyncWithPlatformCache(gomock.Any(), gomock.Any()).AnyTimes()\n\n\te := New(\n\t\tfalse,\n\t\tnil,\n\t\tcollector,\n\t\tserverID,\n\t\t10*time.Minute,\n\t\tsecrets,\n\t\tmode,\n\t\t\"/proc\",\n\t\t500*time.Millisecond,\n\t\tfalse,\n\t\tmockTokenAccessor,\n\t\tpuFromContextID,\n\t\t&runtime.Configuration{TCPTargetNetworks: targetNetworks},\n\t\tfalse,\n\t\tsemver.Version{},\n\t\tpolicy.None,\n\t)\n\n\te.conntrack = flowclient\n\te.dnsProxy = mockDNS\n\n\tif testExpirationNotifier {\n\t\te.tcpConnectionExpirationNotifier = testConnectionExpirationNotifier\n\t}\n\n\treturn e\n}\n\n// NewWithMocks creates a new data path using mock objects\nfunc NewWithMocks(\n\tctrl *gomock.Controller,\n\tserverID string,\n\tmode constants.ModeType,\n\ttargetNetworks []string,\n\ttestExpirationNotifier bool,\n) (*Datapath, *mocksecrets.MockSecrets, *mocktokenaccessor.MockTokenAccessor,\n\t*mockcollector.MockEventCollector, *mockdnsproxy.MockDNSProxy) {\n\n\t// Override so that you don't have to run as root\n\tprocSetValuePtr = procSetValueMock\n\n\tsecrets := mocksecrets.NewMockSecrets(ctrl)\n\ttokenAccessor := mocktokenaccessor.NewMockTokenAccessor(ctrl)\n\tcollector := mockcollector.NewMockEventCollector(ctrl)\n\tflowclient := mockflowclient.NewMockFlowClient(ctrl)\n\tpuFromContextID := cache.NewCache(\"puFromContextID\")\n\tdnsproxy := mockdnsproxy.NewMockDNSProxy(ctrl)\n\n\tsecrets.EXPECT().AckSize().Return(uint32(300)).Times(1)\n\n\te := New(\n\t\tfalse,\n\t\tnil,\n\t\tcollector,\n\t\tserverID,\n\t\t10*time.Minute,\n\t\tsecrets,\n\t\tmode,\n\t\t\"/proc\",\n\t\t500*time.Millisecond,\n\t\tfalse,\n\t\ttokenAccessor,\n\t\tpuFromContextID,\n\t\t&runtime.Configuration{TCPTargetNetworks: targetNetworks},\n\t\tfalse,\n\t\tsemver.Version{},\n\t\tpolicy.None,\n\t)\n\n\te.conntrack = flowclient\n\te.dnsProxy = dnsproxy\n\n\tif testExpirationNotifier {\n\t\te.tcpConnectionExpirationNotifier = testConnectionExpirationNotifier\n\t}\n\n\treturn e, 
secrets, tokenAccessor, collector, dnsproxy\n}\n\nfunc testConnectionExpirationNotifier(conn *connection.TCPConnection) {\n\n\tconn.Cleanup()\n}\n\n// MockGetUDPRawSocket mocks the GetUDPRawSocket function. Usage \"defer MockGetUDPRawSocket()()\"\nfunc MockGetUDPRawSocket() func() {\n\tprevRawSocket := GetUDPRawSocket\n\tGetUDPRawSocket = func(mark int, device string) (afinetrawsocket.SocketWriter, error) {\n\t\treturn nil, nil\n\t}\n\treturn func() {\n\t\tGetUDPRawSocket = prevRawSocket\n\t}\n}\n\n// CreatePUContext creates a PU context and adds it to the enforcer cache\nfunc CreatePUContext(enforcer *Datapath, contextID, namespace string, puType common.PUType, tokenAccessor tokenaccessor.TokenAccessor) (*pucontext.PUContext, error) {\n\tpuInfo := policy.NewPUInfo(contextID, namespace, puType)\n\tcontext, err := pucontext.NewPU(contextID, puInfo, tokenAccessor, 10*time.Second)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tenforcer.puFromContextID.AddOrUpdate(contextID, context) // nolint\n\treturn context, nil\n}\n\n// CreatePortPolicy creates a port range policy\nfunc CreatePortPolicy(enforcer *Datapath, contextID, namespace string, puType common.PUType, tokenAccessor tokenaccessor.TokenAccessor, mark string, portMin, portMax uint16) error {\n\n\tcontext, err := CreatePUContext(enforcer, contextID, namespace, puType, tokenAccessor)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = enforcer.puFromMark.Add(mark, context)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tportspec, err := portspec.NewPortSpec(portMin, portMax, contextID)\n\tif err != nil {\n\t\treturn err\n\t}\n\tenforcer.contextIDFromTCPPort.AddPortSpec(portspec)\n\treturn nil\n}\n\n// CreateFlowRecord creates a basic flow report\nfunc CreateFlowRecord(count int, srcIP, destIP string, srcPort, destPort uint16, action policy.ActionType, dropReason string) collector.FlowRecord {\n\tvar flowRecord collector.FlowRecord\n\tvar srcEndPoint collector.EndPoint\n\tvar dstEndPoint collector.EndPoint\n\n\tsrcEndPoint.IP = srcIP\n\tsrcEndPoint.Port = 
srcPort\n\n\tdstEndPoint.IP = destIP\n\tdstEndPoint.Port = destPort\n\n\tflowRecord.Count = count\n\tflowRecord.Source = srcEndPoint\n\tflowRecord.Destination = dstEndPoint\n\tflowRecord.Action = action\n\tflowRecord.DropReason = dropReason\n\treturn flowRecord\n}\n\nfunc createEnforcerWithPolicy(ctrl *gomock.Controller, mode constants.ModeType) (*Datapath, *mockcollector.MockEventCollector) {\n\n\tpuInfo1, puInfo2 := createPolicies(testSrcIP, testDstIP)\n\tSo(puInfo1, ShouldNotBeNil)\n\tSo(puInfo2, ShouldNotBeNil)\n\n\tenforcer, mockTokenAccessor := createEnforcer(ctrl, mode)\n\n\terr := enforcer.Enforce(context.Background(), puInfo1.ContextID, puInfo1)\n\tSo(err, ShouldBeNil)\n\n\terr = enforcer.Enforce(context.Background(), puInfo2.ContextID, puInfo2)\n\tSo(err, ShouldBeNil)\n\n\treturn enforcer, mockTokenAccessor\n}\n\nfunc createEnforcer(ctrl *gomock.Controller, mode constants.ModeType) (*Datapath, *mockcollector.MockEventCollector) {\n\n\tenforcer, secrets, mockTokenAccessor, mockCollector, mockDNS := NewWithMocks(ctrl, \"serverID\", mode, []string{\"0.0.0.0/0\"}, true)\n\tSo(enforcer != nil, ShouldBeTrue)\n\n\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().CreateSynAckPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().Randomize(gomock.Any(), gomock.Any()).AnyTimes()\n\tmockTokenAccessor.EXPECT().ParseAckToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().ParsePacketToken(gomock.Any(), gomock.Any(), gomock.Any(), 
gomock.Any(), gomock.Any()).Do(\n\t\tfunc(privateKey, data, secrets, c, b interface{}) interface{} {\n\n\t\t\tclaims := c.(*tokens.ConnectionClaims)\n\t\t\tclaims.T = policy.NewTagStore()\n\t\t\tclaims.T.AppendKeyValue(enforcerconstants.TransmitterLabel, \"value\")\n\t\t\treturn nil\n\t\t},\n\t).Return(nil, &claimsheader.ClaimsHeader{}, &pkiverifier.PKIControllerInfo{}, []byte(\"remoteNonce\"), \"\", false, nil).AnyTimes()\n\n\tmockDNS.EXPECT().StartDNSServer(gomock.Any(), gomock.Any(), gomock.Any()).Times(2)\n\tmockDNS.EXPECT().Enforce(gomock.Any(), gomock.Any(), gomock.Any()).Times(2)\n\tmockDNS.EXPECT().SyncWithPlatformCache(gomock.Any(), gomock.Any()).Times(2)\n\treturn enforcer, mockCollector\n}\n\nfunc createPolicies(srcIP, dstIP string) (*policy.PUInfo, *policy.PUInfo) {\n\ttagSelector := policy.TagSelector{\n\t\tClause: []policy.KeyValueOperator{\n\t\t\t{\n\t\t\t\tKey:      enforcerconstants.TransmitterLabel,\n\t\t\t\tValue:    []string{\"value\"},\n\t\t\t\tOperator: policy.Equal,\n\t\t\t},\n\t\t},\n\t\tPolicy: &policy.FlowPolicy{Action: policy.Accept},\n\t}\n\n\tpuID1 := \"SomeProcessingUnitId1\"\n\tpuID2 := \"SomeProcessingUnitId2\"\n\n\tpuIP1 := dstIP\n\tpuIP2 := srcIP\n\n\t// Create ProcessingUnit 1\n\tpuInfo1 := policy.NewPUInfo(puID1, \"/ns1\", common.ContainerPU)\n\n\tip1 := policy.ExtendedMap{}\n\tip1[\"bridge\"] = puIP1\n\tpuInfo1.Runtime.SetIPAddresses(ip1)\n\tipl1 := policy.ExtendedMap{policy.DefaultNamespace: puIP1}\n\tpuInfo1.Policy.SetIPAddresses(ipl1)\n\tpuInfo1.Policy.AddIdentityTag(enforcerconstants.TransmitterLabel, \"value\")\n\tpuInfo1.Policy.AddReceiverRules(tagSelector)\n\n\t// Create processing unit 2\n\tpuInfo2 := policy.NewPUInfo(puID2, \"/ns2\", common.ContainerPU)\n\tip2 := policy.ExtendedMap{\"bridge\": puIP2}\n\tpuInfo2.Runtime.SetIPAddresses(ip2)\n\tipl2 := policy.ExtendedMap{policy.DefaultNamespace: puIP2}\n\tpuInfo2.Policy.SetIPAddresses(ipl2)\n\tpuInfo2.Policy.AddIdentityTag(enforcerconstants.TransmitterLabel, 
\"value\")\n\tpuInfo2.Policy.AddReceiverRules(tagSelector)\n\n\treturn puInfo1, puInfo2\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/datapath_darwin.go",
    "content": "// +build darwin\n\npackage nfqdatapath\n\nimport (\n\t\"context\"\n\t\"net\"\n\t\"syscall\"\n\n\tgpacket \"github.com/ghedo/go.pkt/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n)\n\nfunc adjustConntrack(mode constants.ModeType) {\n}\n\nfunc (d *Datapath) setMark(pkt *packet.Packet, mark uint32) error {\n\treturn nil\n}\n\nfunc (d *Datapath) reverseFlow(pkt *packet.Packet) error {\n\treturn nil\n}\n\nfunc (d *Datapath) drop(pkt *packet.Packet) error {\n\treturn nil\n}\n\nfunc (d *Datapath) dropFlow(pkt *packet.Packet) error {\n\treturn nil\n}\n\nfunc (d *Datapath) ignoreFlow(pkt *packet.Packet) error {\n\treturn nil\n}\n\nfunc (d *Datapath) setFlowState(pkt *packet.Packet, accepted bool) error {\n\treturn nil\n}\n\nfunc (d *Datapath) startInterceptors(ctx context.Context) {\n}\n\ntype pingConn struct {\n}\n\nfunc dialIP(srcIP, dstIP net.IP) (PingConn, error) {\n\n\treturn &pingConn{}, nil\n}\n\n// Close not implemented.\nfunc (p *pingConn) Close() error {\n\treturn nil\n}\n\n// Write not implemented.\nfunc (p *pingConn) Write(data []byte) (int, error) {\n\treturn 0, nil\n}\n\n// ConstructWirePacket not implemented.\nfunc (p *pingConn) ConstructWirePacket(srcIP, dstIP net.IP, transport gpacket.Packet, payload gpacket.Packet) ([]byte, error) {\n\treturn nil, nil\n}\n\nfunc bindRandomPort(tcpConn *connection.TCPConnection) (uint16, error) {\n\treturn 0, nil\n}\n\nfunc closeRandomPort(tcpConn *connection.TCPConnection) error {\n\treturn nil\n}\n\nfunc isAddrInUseErrno(errNo syscall.Errno) bool {\n\treturn false\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/datapath_icmp.go",
    "content": "// +build linux\n\npackage nfqdatapath\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n)\n\ntype icmpActionType int\n\nconst (\n\ticmpAccept icmpActionType = iota\n\ticmpDrop\n)\n\nfunc (d *Datapath) processNetworkICMPPacket(context *pucontext.PUContext, packet *packet.Packet, icmpType int8, icmpCode int8) icmpActionType {\n\n\tsrcAddr := packet.SourceAddress()\n\tdstAddr := packet.DestinationAddress()\n\n\treport, pkt, err := context.NetworkICMPACLPolicy(srcAddr, icmpType, icmpCode)\n\n\td.reportExternalServiceFlowCommon(context, report, pkt, false, packet, &collector.EndPoint{IP: srcAddr.String()}, &collector.EndPoint{IP: dstAddr.String()})\n\tif err != nil || pkt.Action.Rejected() {\n\t\treturn icmpDrop\n\t}\n\n\treturn icmpAccept\n}\n\nfunc (d *Datapath) processApplicationICMPPacket(context *pucontext.PUContext, packet *packet.Packet, icmpType int8, icmpCode int8) icmpActionType {\n\n\tsrcAddr := packet.SourceAddress()\n\tdstAddr := packet.DestinationAddress()\n\n\treport, pkt, err := context.ApplicationICMPACLPolicy(dstAddr, icmpType, icmpCode)\n\n\td.reportExternalServiceFlowCommon(context, report, pkt, true, packet, &collector.EndPoint{IP: srcAddr.String()}, &collector.EndPoint{IP: dstAddr.String()})\n\n\tif err != nil || pkt.Action.Rejected() {\n\t\treturn icmpDrop\n\t}\n\n\treturn icmpAccept\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/datapath_linux.go",
    "content": "// +build linux\n\npackage nfqdatapath\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"os\"\n\t\"strconv\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"github.com/ghedo/go.pkt/layers\"\n\tgpacket \"github.com/ghedo/go.pkt/packet\"\n\t\"github.com/ghedo/go.pkt/packet/ipv4\"\n\t\"go.aporeto.io/enforcerd/internal/utils\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/buildflags\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/applicationproxy/markedconn\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.uber.org/zap\"\n\t\"golang.org/x/sys/unix\"\n)\n\nfunc procSetValue(procName string, value int) error {\n\tfile, err := os.OpenFile(procName, os.O_RDWR|os.O_TRUNC, 0644)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer file.Close() // nolint: errcheck\n\t_, err = file.WriteString(strconv.Itoa(value))\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Declare function pointer so that it can be overridden by unit test\nvar procSetValuePtr func(procName string, value int) error = procSetValue\n\nfunc adjustConntrack(mode constants.ModeType) {\n\t// As the pods in k8s is RO, we need to use the Host Proc to write into the proc FS.\n\terr := procSetValuePtr(utils.GetPathOnHostViaProcRoot(\"/proc/sys/net/netfilter/nf_conntrack_tcp_be_liberal\"), 1)\n\tif err != nil {\n\t\tzap.L().Fatal(\"Failed to set conntrack options\", zap.Error(err))\n\t}\n\n\tif mode == constants.LocalServer && !buildflags.IsLegacyKernel() {\n\t\terr := procSetValuePtr(utils.GetPathOnHostViaProcRoot(\"/proc/sys/net/ipv4/ip_early_demux\"), 0)\n\t\tif err != nil {\n\t\t\tzap.L().Fatal(\"Failed to set early demux options\", zap.Error(err))\n\t\t}\n\t}\n}\n\nfunc (d *Datapath) setMark(pkt *packet.Packet, mark uint32) error {\n\treturn nil\n}\n\nfunc (d *Datapath) reverseFlow(pkt *packet.Packet) error {\n\treturn 
nil\n}\n\nfunc (d *Datapath) drop(pkt *packet.Packet) error {\n\treturn nil\n}\n\nfunc (d *Datapath) dropFlow(pkt *packet.Packet) error {\n\treturn nil\n}\n\nfunc (d *Datapath) ignoreFlow(pkt *packet.Packet) error {\n\treturn nil\n}\n\nfunc (d *Datapath) setFlowState(pkt *packet.Packet, accepted bool) error {\n\treturn nil\n}\n\nfunc (d *Datapath) startInterceptors(ctx context.Context) {\n\td.startInterceptor(ctx)\n}\n\ntype pingConn struct {\n\tconn net.Conn\n}\n\nfunc dialIP(srcIP, dstIP net.IP) (PingConn, error) {\n\n\td := net.Dialer{\n\t\tTimeout:   5 * time.Second,\n\t\tKeepAlive: -1, // keepalive disabled.\n\t\tLocalAddr: &net.IPAddr{IP: srcIP},\n\t\tControl:   markedconn.ControlFunc(constants.ProxyMarkInt, false, nil),\n\t}\n\n\tconn, err := d.Dial(\"ip4:tcp\", dstIP.String())\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &pingConn{conn: conn}, nil\n}\n\n// Close closes the connection.\nfunc (p *pingConn) Close() error {\n\treturn p.conn.Close()\n}\n\n// Write writes to the connection.\nfunc (p *pingConn) Write(data []byte) (int, error) {\n\n\tn, err := p.conn.Write(data)\n\tif err != nil {\n\t\treturn n, err\n\t}\n\n\tif n != len(data) {\n\t\treturn n, fmt.Errorf(\"partial data written, total: %v, written: %v\", len(data), n)\n\t}\n\n\treturn n, nil\n}\n\n// ConstructWirePacket returns TCP packet with the given payload in wire format.\nfunc (p *pingConn) ConstructWirePacket(srcIP, dstIP net.IP, transport gpacket.Packet, payload gpacket.Packet) ([]byte, error) {\n\treturn packLayers(srcIP, dstIP, transport, payload)\n}\n\nfunc bindRandomPort(tcpConn *connection.TCPConnection) (uint16, error) {\n\n\tfd, err := unix.Socket(unix.AF_INET, unix.SOCK_STREAM, unix.IPPROTO_TCP)\n\tif err != nil || fd <= -1 {\n\t\treturn 0, fmt.Errorf(\"unable to open socket, fd: %d : %s\", fd, err)\n\t}\n\n\taddr := unix.SockaddrInet4{Port: 0}\n\tcopy(addr.Addr[:], net.ParseIP(\"127.0.0.1\").To4())\n\tif err = unix.Bind(fd, &addr); err != nil {\n\t\tunix.Close(fd) // 
nolint: errcheck\n\t\treturn 0, fmt.Errorf(\"unable to bind socket: %s\", err)\n\t}\n\n\tsockAddr, err := unix.Getsockname(fd)\n\tif err != nil {\n\t\tunix.Close(fd) // nolint: errcheck\n\t\treturn 0, fmt.Errorf(\"unable to get socket address: %s\", err)\n\t}\n\n\tip4Addr, ok := sockAddr.(*unix.SockaddrInet4)\n\tif !ok {\n\t\tunix.Close(fd) // nolint: errcheck\n\t\treturn 0, fmt.Errorf(\"invalid socket address: %T\", sockAddr)\n\t}\n\n\ttcpConn.PingConfig.SetSocketFd(uintptr(fd))\n\treturn uint16(ip4Addr.Port), nil\n}\n\nfunc closeRandomPort(tcpConn *connection.TCPConnection) error {\n\n\tfd := tcpConn.PingConfig.SocketFd()\n\ttcpConn.PingConfig.SetSocketClosed(true)\n\n\treturn unix.Close(int(fd))\n}\n\nfunc packLayers(srcIP, dstIP net.IP, transport gpacket.Packet, payload gpacket.Packet) ([]byte, error) {\n\n\t// pseudo header.\n\tipPacket := ipv4.Make()\n\tipPacket.SrcAddr = srcIP\n\tipPacket.DstAddr = dstIP\n\tipPacket.Protocol = ipv4.TCP\n\n\ttransport.SetPayload(payload)  // nolint:errcheck\n\tipPacket.SetPayload(transport) // nolint:errcheck\n\n\t// pack the layers together.\n\tbuf, err := layers.Pack(transport, payload)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to encode packet to wire format: %v\", err)\n\t}\n\n\treturn buf, nil\n}\n\nfunc isAddrInUseErrno(errNo syscall.Errno) bool {\n\treturn errNo == unix.EADDRINUSE\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/datapath_tcp.go",
    "content": "package nfqdatapath\n\n// Go libraries\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"strconv\"\n\n\t\"github.com/pkg/errors\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\tenforcerconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\nvar (\n\terrNonPUTraffic     = errors.New(\"not a pu traffic\")\n\terrNonPUUDPTraffic  = errors.New(\"not a pu udp traffic\")\n\terrOutOfOrderSynAck = errors.New(\"out of order syn ack packet\")\n\terrRstPacket        = errors.New(\"rst packet\")\n\terrNoConnection     = errors.New(\"no connection found\")\n\n\t// Custom ping error types\n\terrDropPingNetSynAck = errors.New(\"net synack dropped\")\n\terrDropPingNetSyn    = errors.New(\"net syn dropped\") // nolint: varcheck\n\n\trstIdentity = []byte(\"enforcerrstidentity\")\n)\n\n// processNetworkTCPPackets processes packets arriving from the network that are destined to the application\nfunc (d *Datapath) processNetworkTCPPackets(p *packet.Packet) (*connection.TCPConnection, func(), error) {\n\tvar conn *connection.TCPConnection\n\tvar err error\n\tvar f func()\n\n\tdebugLogs := func(debugString string) {\n\n\t\tif d.PacketLogsEnabled() {\n\t\t\tzap.L().Debug(debugString,\n\t\t\t\tzap.String(\"flow\", p.L4FlowHash()),\n\t\t\t\tzap.String(\"Flags\", packet.TCPFlagsToStr(p.GetTCPFlags())),\n\t\t\t\tzap.Error(err))\n\t\t}\n\t}\n\n\t// Retrieve connection state of SynAck packets and\n\t// skip processing 
for SynAck packets that we don't have state\n\tswitch p.GetTCPFlags() & packet.TCPSynAckMask {\n\tcase packet.TCPSynMask:\n\t\tconn, err = d.netSynRetrieveState(p)\n\t\tif err != nil {\n\t\t\tswitch err {\n\t\t\t// Non-PU traffic, let it through\n\t\t\tcase errNonPUTraffic:\n\t\t\t\treturn conn, nil, nil\n\t\t\tdefault:\n\t\t\t\tdebugLogs(\"Packet rejected\")\n\t\t\t\treturn conn, nil, err\n\t\t\t}\n\t\t}\n\n\tcase packet.TCPSynAckMask:\n\t\tconn, err = d.netSynAckRetrieveState(p)\n\t\tif err != nil {\n\t\t\tswitch err {\n\t\t\tcase errOutOfOrderSynAck:\n\t\t\t\t// Drop this synack; it is for a known flow that is marked for deletion.\n\t\t\t\t// We saw a FINACK, and this synack arrived without us seeing an app syn for this flow again.\n\t\t\t\treturn conn, nil, counters.CounterError(counters.ErrOutOfOrderSynAck, fmt.Errorf(\"ErrOutOfOrderSynAck\"))\n\t\t\tdefault:\n\t\t\t\td.releaseUnmonitoredFlow(p)\n\t\t\t\treturn conn, nil, nil\n\t\t\t}\n\t\t}\n\n\tdefault:\n\t\tconn, err = d.netRetrieveState(p)\n\t\tswitch err {\n\t\tcase nil:\n\t\t\t// Do nothing.\n\t\tcase errRstPacket:\n\t\t\treturn conn, nil, nil\n\t\tdefault:\n\t\t\tdebugLogs(\"Packet rejected\")\n\t\t\treturn conn, nil, err\n\t\t}\n\t}\n\n\tconn.Lock()\n\tdefer conn.Unlock()\n\n\tif conn.GetState() == connection.TCPSynSend && p.GetTCPFlags()&packet.TCPRstMask != 0 && !conn.PingEnabled() {\n\t\tp.TCPDataDetach(0)\n\t\td.cacheRemove(d.tcpClient, p.L4ReverseFlowHash())\n\t\treturn conn, f, nil\n\t}\n\n\tf, err = d.processNetworkTCPPacket(p, conn.Context, conn)\n\tif err != nil {\n\t\tdebugLogs(\"Rejecting packet\")\n\t\treturn conn, nil, err\n\t}\n\treturn conn, f, nil\n}\n\n// processApplicationTCPPackets processes packets arriving from an application that are destined to the network\nfunc (d *Datapath) processApplicationTCPPackets(p *packet.Packet) (conn *connection.TCPConnection, err error) {\n\n\tdebugLogs := func(debugString string) {\n\t\tif d.PacketLogsEnabled() 
{\n\t\t\tzap.L().Debug(debugString,\n\t\t\t\tzap.String(\"flow\", p.L4FlowHash()),\n\t\t\t\tzap.String(\"Flags\", packet.TCPFlagsToStr(p.GetTCPFlags())),\n\t\t\t\tzap.Error(err),\n\t\t\t)\n\t\t}\n\t}\n\n\tswitch p.GetTCPFlags() & packet.TCPSynAckMask {\n\tcase packet.TCPSynMask:\n\t\tconn, err = d.appSynRetrieveState(p)\n\t\tif err != nil {\n\t\t\tdebugLogs(\"Packet rejected\")\n\t\t\treturn conn, err\n\t\t}\n\tcase packet.TCPSynAckMask:\n\t\tconn, err = d.appSynAckRetrieveState(p)\n\t\tif err != nil {\n\t\t\tdebugLogs(\"SynAckPacket Ignored\")\n\t\t\tcid, err := d.contextIDFromTCPPort.GetSpecValueFromPort(p.SourcePort())\n\n\t\t\tif err == nil {\n\t\t\t\titem, err := d.puFromContextID.Get(cid.(string))\n\t\t\t\tif err != nil {\n\t\t\t\t\t// Let the packet through if the context is not found\n\t\t\t\t\treturn conn, nil\n\t\t\t\t}\n\n\t\t\t\tctx := item.(*pucontext.PUContext)\n\n\t\t\t\t// Syn was not seen and this synack packet is coming from a PU\n\t\t\t\t// we monitor. This is possible only if IP is in the external\n\t\t\t\t// networks or excluded networks. Let this packet go through\n\t\t\t\t// for any of these cases. Drop for everything else.\n\t\t\t\t_, policy, perr := ctx.NetworkACLPolicyFromAddr(p.DestinationAddress(), p.SourcePort(), p.IPProto())\n\t\t\t\tif perr == nil && policy.Action.Accepted() {\n\t\t\t\t\tctx.Counters().IncrementCounter(counters.ErrSynAckToExtNetAccept)\n\t\t\t\t\treturn conn, nil\n\t\t\t\t}\n\n\t\t\t\t// Drop this synack as it belongs to PU\n\t\t\t\t// for which we didn't see syn\n\n\t\t\t\t// FYI.  This can happen when the enforcer is starting. The syn packet gets by the enforcer but is then caught here.\n\t\t\t\tzap.L().Debug(\"Network Syn was not seen, and we are monitoring this PU. 
Dropping the syn ack packet\", zap.String(\"contextID\", cid.(string)), zap.Uint16(\"port\", p.SourcePort()))\n\t\t\t\treturn conn, counters.CounterError(counters.ErrNetSynNotSeen, fmt.Errorf(\"Network Syn was not seen\"))\n\t\t\t}\n\n\t\t\t// syn ack for non aporeto traffic can be let through\n\t\t\treturn conn, nil\n\t\t}\n\tdefault:\n\t\tconn, err = d.appRetrieveState(p)\n\t\tif err == errRstPacket {\n\t\t\treturn nil, nil\n\t\t}\n\n\t\tif err != nil {\n\t\t\tdebugLogs(\"Packet rejected\")\n\t\t\treturn conn, err\n\t\t}\n\t}\n\n\tconn.Lock()\n\tdefer conn.Unlock()\n\n\tif conn.GetState() == connection.TCPSynReceived && p.GetTCPFlags()&packet.TCPRstMask != 0 && !conn.PingEnabled() {\n\t\t// Seen a RST packet. Remove cache entries related to this connection\n\t\tp.TCPDataDetach(0)\n\t\td.cacheRemove(d.tcpServer, p.L4ReverseFlowHash())\n\t\treturn conn, nil\n\t}\n\n\terr = d.processApplicationTCPPacket(p, conn.Context, conn)\n\tif err != nil {\n\t\tdebugLogs(\"Dropping packet\")\n\t\treturn conn, err\n\t}\n\n\treturn conn, nil\n}\n\n// processApplicationTCPPacket processes a TCP packet and dispatches it to other methods based on the flags\nfunc (d *Datapath) processApplicationTCPPacket(tcpPacket *packet.Packet, context *pucontext.PUContext, conn *connection.TCPConnection) error {\n\n\t// State machine based on the flags\n\tswitch tcpPacket.GetTCPFlags() & packet.TCPSynAckMask {\n\tcase packet.TCPSynMask: //Processing SYN packet from Application\n\t\treturn d.processApplicationSynPacket(tcpPacket, context, conn)\n\n\tcase packet.TCPAckMask:\n\t\tif tcpPacket.GetTCPFlags()&packet.TCPFinMask != 0 {\n\t\t\tconn.MarkForDeletion = true\n\t\t}\n\t\treturn d.processApplicationAckPacket(tcpPacket, context, conn)\n\n\tcase packet.TCPSynAckMask:\n\t\treturn d.processApplicationSynAckPacket(tcpPacket, context, conn)\n\tdefault:\n\t\treturn nil\n\t}\n}\n\n// processApplicationSynPacket processes a single Syn Packet\nfunc (d *Datapath) processApplicationSynPacket(tcpPacket 
*packet.Packet, context *pucontext.PUContext, conn *connection.TCPConnection) error {\n\t// Increment the counter.\n\tconn.IncrementCounter()\n\n\tif err := tcpPacket.CheckTCPAuthenticationOption(enforcerconstants.TCPAuthenticationOptionBaseLen); err == nil {\n\t\tconn.Context.Counters().IncrementCounter(counters.ErrAppSynAuthOptionSet)\n\t}\n\n\tvar tcpData []byte\n\n\tconn.Secrets, conn.Auth.LocalDatapathPrivateKey, tcpData = context.GetSynToken(nil, conn.Auth.Nonce, nil)\n\n\tbuffer := append(tcpPacket.GetBuffer(0), []byte{packet.TCPAuthenticationOption, enforcerconstants.TCPAuthenticationOptionBaseLen, 0, 0}...)\n\tbuffer = append(buffer, tcpData...)\n\t// Attach the tags to the packet and accept the packet\n\tif err := tcpPacket.UpdatePacketBuffer(buffer, enforcerconstants.TCPAuthenticationOptionBaseLen); err != nil {\n\t\treturn err\n\t}\n\n\t// Set the state indicating that we send out a Syn packet\n\tconn.SetState(connection.TCPSynSend)\n\td.cachePut(d.tcpClient, tcpPacket.L4FlowHash(), conn)\n\n\t// Attach the tags to the packet and accept the packet\n\treturn nil\n}\n\n// processApplicationSynAckPacket processes an application SynAck packet\nfunc (d *Datapath) processApplicationSynAckPacket(tcpPacket *packet.Packet, _ *pucontext.PUContext, conn *connection.TCPConnection) error {\n\t// if the traffic belongs to the same pu, let it go\n\tif conn.GetState() == connection.TCPData && conn.IsLoopbackConnection() {\n\t\treturn nil\n\t}\n\n\t// If we are already in the connection.TCPData, it means that this is an external flow\n\t// At this point we can release the flow to the kernel by updating conntrack\n\t// We can also clean up the state since we are not going to see any more\n\t// packets from this connection.\n\tif conn.GetState() == connection.TCPData {\n\t\t// remove from our tcp server cache\n\t\td.cacheRemove(d.tcpServer, tcpPacket.L4ReverseFlowHash())\n\n\t\tif err := d.ignoreFlow(tcpPacket); err != nil {\n\t\t\tzap.L().Error(\"Failed to ignore flow\", 
zap.Error(err))\n\t\t}\n\t\ttcpPacket.SetConnmark = true\n\t\treturn nil\n\t}\n\n\tif err := tcpPacket.CheckTCPAuthenticationOption(enforcerconstants.TCPAuthenticationOptionBaseLen); err == nil {\n\t\tconn.Context.Counters().IncrementCounter(counters.ErrAppSynAckAuthOptionSet)\n\t}\n\n\tbuffer := append(tcpPacket.GetBuffer(0), []byte{packet.TCPAuthenticationOption, enforcerconstants.TCPAuthenticationOptionBaseLen, 0, 0}...)\n\tbuffer = append(buffer, conn.Auth.SynAckToken...)\n\tif err := tcpPacket.UpdatePacketBuffer(buffer, enforcerconstants.TCPAuthenticationOptionBaseLen); err != nil {\n\t\treturn err\n\t}\n\n\tconn.SetState(connection.TCPSynAckSend)\n\treturn nil\n}\n\n// processApplicationAckPacket processes an application ack packet\nfunc (d *Datapath) processApplicationAckPacket(tcpPacket *packet.Packet, context *pucontext.PUContext, conn *connection.TCPConnection) error {\n\t// Only process the first Ack of a connection. This means that we have received\n\t// a SynAck packet and we can now process the ACK.\n\tif conn.GetState() == connection.TCPSynAckReceived {\n\n\t\t// Special case. We are handling an AP packet with data, but the ACK has been lost\n\t\t// somewhere. 
In this case, we drop the payload and send our authorization data.\n\t\t// The TCP stack will try again.\n\t\tif !tcpPacket.IsEmptyTCPPayload() {\n\t\t\ttcpPacket.TCPDataDetach(0)\n\t\t}\n\n\t\tbuffer := append(tcpPacket.GetBuffer(0), []byte{packet.TCPAuthenticationOption, enforcerconstants.TCPAuthenticationOptionBaseLen, 0, 0}...)\n\t\tbuffer = append(buffer, conn.Auth.AckToken...)\n\t\tif err := tcpPacket.UpdatePacketBuffer(buffer, enforcerconstants.TCPAuthenticationOptionBaseLen); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tconn.SetState(connection.TCPAckSend)\n\n\t\treturn nil\n\t}\n\n\t// If we are already in the connection.TCPData connection just forward the packet\n\tif conn.GetState() == connection.TCPData {\n\t\treturn nil\n\t}\n\n\tif conn.GetState() == connection.UnknownState {\n\t\t// Check if the destination is in the external services approved cache\n\t\t// and if yes, allow the packet to go and release the flow.\n\t\t_, policy, perr := context.ApplicationACLPolicyFromAddr(tcpPacket.DestinationAddress(), tcpPacket.DestPort(), tcpPacket.IPProto())\n\n\t\tif perr != nil {\n\t\t\tconn.Context.Counters().CounterError(counters.ErrAckInUnknownState, nil) //nolint\n\t\t\tzap.L().Debug(\"converting to rst app\",\n\t\t\t\tzap.String(\"SourceIP\", tcpPacket.SourceAddress().String()),\n\t\t\t\tzap.String(\"DestinationIP\", tcpPacket.DestinationAddress().String()),\n\t\t\t\tzap.Int(\"SourcePort\", int(tcpPacket.SourcePort())),\n\t\t\t\tzap.Int(\"DestinationPort\", int(tcpPacket.DestPort())),\n\t\t\t\tzap.String(\"Flags\", packet.TCPFlagsToStr(tcpPacket.GetTCPFlags())),\n\t\t\t)\n\t\t\ttcpPacket.ConvertToRst()\n\t\t\ttcpPacket.SetConnmark = true\n\t\t\treturn nil\n\t\t}\n\n\t\tif policy.Action.Rejected() {\n\t\t\treturn conn.Context.Counters().CounterError(counters.ErrRejectPacket, fmt.Errorf(\"Rejected due to policy %s\", policy.PolicyID))\n\t\t}\n\t\tif err := d.ignoreFlow(tcpPacket); err != nil {\n\t\t\tzap.L().Error(\"Failed to ignore flow\", 
zap.Error(err))\n\t\t}\n\t\ttcpPacket.SetConnmark = true\n\t\treturn nil\n\t}\n\n\t// Here we capture the first data packet after an ACK packet by modifying the\n\t// state. We will not release the caches though to deal with re-transmissions.\n\t// We will let the caches expire.\n\tif conn.GetState() == connection.TCPAckSend {\n\t\tif tcpPacket.SourceAddress().String() != tcpPacket.DestinationAddress().String() &&\n\t\t\t!(tcpPacket.SourceAddress().IsLoopback() && tcpPacket.DestinationAddress().IsLoopback()) {\n\n\t\t\tif err := d.ignoreFlow(tcpPacket); err != nil {\n\t\t\t\tzap.L().Error(\"Failed to ignore flow\", zap.Error(err))\n\t\t\t}\n\n\t\t\tconn.ResetTimer(waitBeforeRemovingConn)\n\t\t\ttcpPacket.SetConnmark = true\n\t\t\tcounters.IncrementCounter(counters.ErrConnectionsProcessed)\n\t\t}\n\t\tconn.SetState(connection.TCPData)\n\t\treturn nil\n\t}\n\n\treturn fmt.Errorf(\"received application ack packet in the wrong state: %d\", conn.GetState())\n}\n\n// processNetworkTCPPacket processes a network TCP packet and dispatches it to different methods based on the flags\nfunc (d *Datapath) processNetworkTCPPacket(tcpPacket *packet.Packet, context *pucontext.PUContext, conn *connection.TCPConnection) (func(), error) {\n\n\t// Update connection state in the internal state machine tracker\n\tswitch tcpPacket.GetTCPFlags() & packet.TCPSynAckMask {\n\n\tcase packet.TCPSynMask:\n\t\treturn d.processNetworkSynPacket(context, conn, tcpPacket)\n\n\tcase packet.TCPAckMask:\n\t\tif tcpPacket.GetTCPFlags()&packet.TCPFinMask == packet.TCPFinMask {\n\t\t\tconn.MarkForDeletion = true\n\t\t}\n\t\treturn nil, d.processNetworkAckPacket(context, conn, tcpPacket)\n\tcase packet.TCPSynAckMask:\n\t\treturn d.processNetworkSynAckPacket(context, conn, tcpPacket)\n\n\tdefault: // Ignore any other packet\n\t\treturn nil, nil\n\t}\n}\n\nfunc (d *Datapath) clientIdentityAllowed(context *pucontext.PUContext, token []byte, tcpPacket *packet.Packet, conn *connection.TCPConnection, networkReport 
*policy.FlowPolicy) error {\n\n\tclaims := &conn.Auth.ConnectionClaims\n\tsecretKey, claimsHeader, controller, remoteNonce, remoteContextID, proto314, err := d.tokenAccessor.ParsePacketToken(conn.Auth.LocalDatapathPrivateKey, token, conn.Secrets, claims, false)\n\n\tif err != nil {\n\t\tzap.L().Error(\"Syn token Parse Error\", zap.String(\"flow\", tcpPacket.L4FlowHash()), zap.Error(err))\n\t\td.reportRejectedFlow(tcpPacket, conn, collector.DefaultEndPoint, context.ManagementID(), context, collector.InvalidToken, nil, nil, false)\n\t\treturn conn.Context.Counters().CounterError(netSynCounterFromError(err), err)\n\t}\n\n\tconn.Auth.SecretKey = secretKey\n\tconn.Auth.RemoteNonce = remoteNonce\n\tconn.Auth.RemoteContextID = remoteContextID\n\tconn.Auth.Proto314 = proto314\n\n\tif controller != nil &&\n\t\t((!controller.SameController) ||\n\t\t\t(claimsHeader != nil && claimsHeader.Ping())) {\n\t\tconn.SourceController = controller.Controller\n\t}\n\n\ttxLabel, ok := claims.T.Get(enforcerconstants.TransmitterLabel)\n\tif !ok {\n\t\td.reportRejectedFlow(tcpPacket, conn, txLabel, context.ManagementID(), context, collector.InvalidFormat, nil, nil, false)\n\t\treturn conn.Context.Counters().CounterError(counters.ErrSynDroppedTCPOption, fmt.Errorf(\"ErrSynDroppedTCPOption\"))\n\t}\n\n\t// Add the port as a label with an @ prefix. 
These labels are invalid otherwise\n\t// If all policies are restricted by port numbers this will allow port-specific policies\n\ttags := claims.T.Copy()\n\ttags.AppendKeyValue(constants.PortNumberLabelString, fmt.Sprintf(\"%s/%s\", constants.TCPProtoString, strconv.Itoa(int(tcpPacket.DestPort()))))\n\n\t// Add the controller to the claims\n\tif controller != nil && len(controller.Controller) > 0 {\n\t\ttags.AppendKeyValue(constants.ControllerLabelString, controller.Controller)\n\t}\n\n\treport, pkt := context.SearchRcvRules(tags)\n\n\t// If we have an ObserveContinue Rejected ACL, then report this as the observed flow.\n\tif networkReport != nil && networkReport.Action.Rejected() && networkReport.ObserveAction.ObserveContinue() {\n\t\treport = networkReport\n\t}\n\n\tconn.ReportFlowPolicy = report\n\tconn.PacketFlowPolicy = pkt\n\n\tif claimsHeader != nil && claimsHeader.Ping() && claims.P != nil {\n\t\terr := d.processPingNetSynPacket(context, conn, tcpPacket, len(token), pkt, claims)\n\t\tif err != nil && err != errDropPingNetSyn {\n\t\t\tzap.L().Error(\"unable to process ping network syn\", zap.Error(err))\n\t\t}\n\t\treturn err\n\t}\n\n\tallow := false\n\tif txLabel == context.ManagementID() {\n\t\tzap.L().Debug(\"Traffic to the same pu\", zap.String(\"flow\", tcpPacket.L4FlowHash()))\n\t\tconn.SetLoopbackConnection(true)\n\t\tallow = true\n\t}\n\n\tif !pkt.Action.Rejected() || allow {\n\t\treturn nil\n\t}\n\n\t// TODO: Support ipv6\n\tif tcpPacket.IPversion() == packet.V4 {\n\t\tgo func() {\n\t\t\tif err := respondWithRstPacket(tcpPacket, rstIdentity); err != nil {\n\t\t\t\tzap.L().Warn(\"unable to send rst packet\", zap.Error(err))\n\t\t\t}\n\t\t}()\n\t}\n\n\td.reportRejectedFlow(tcpPacket, conn, txLabel, context.ManagementID(), context, collector.PolicyDrop, report, pkt, false)\n\treturn conn.Context.Counters().CounterError(counters.ErrSynRejectPacket, fmt.Errorf(\"PolicyDrop %s\", pkt.PolicyID))\n}\n\n// processNetworkSynPacket processes a syn packet 
arriving from the network\nfunc (d *Datapath) processNetworkSynPacket(context *pucontext.PUContext, conn *connection.TCPConnection, tcpPacket *packet.Packet) (func(), error) {\n\tvar err error\n\tconn.IncrementCounter()\n\n\tcreateSynAckToken := func() {\n\t\tvar pingPayload *policy.PingPayload\n\t\tclaimsHeader := claimsheader.NewClaimsHeader()\n\n\t\t// This means we got syn with ping header set and passthrough enabled.\n\t\t// The application responds with synack.\n\t\tif conn.PingEnabled() {\n\t\t\tpingPayload = &policy.PingPayload{}\n\t\t\tconn.PingConfig.SetApplicationListening(true)\n\t\t\tpingPayload.PingID = conn.PingConfig.PingID()\n\t\t\tpingPayload.IterationID = conn.PingConfig.IterationID()\n\t\t\tpingPayload.ApplicationListening = true\n\t\t\tpingPayload.NamespaceHash = context.ManagementNamespaceHash()\n\t\t\tclaimsHeader.SetPing(true)\n\t\t}\n\n\t\tclaims := &tokens.ConnectionClaims{\n\t\t\tCT:       context.CompressedTags(),\n\t\t\tLCL:      conn.Auth.Nonce[:],\n\t\t\tRMT:      conn.Auth.RemoteNonce,\n\t\t\tDEKV1:    conn.Auth.LocalDatapathPublicKeyV1,\n\t\t\tSDEKV1:   conn.Auth.LocalDatapathPublicKeySignV1,\n\t\t\tDEKV2:    conn.Auth.LocalDatapathPublicKeyV2,\n\t\t\tSDEKV2:   conn.Auth.LocalDatapathPublicKeySignV2,\n\t\t\tID:       context.ManagementID(),\n\t\t\tRemoteID: conn.Auth.RemoteContextID,\n\t\t\tP:        pingPayload,\n\t\t}\n\n\t\tif conn.Auth.SynAckToken, err = d.tokenAccessor.CreateSynAckPacketToken(conn.Auth.Proto314, claims, conn.EncodedBuf[:], conn.Auth.Nonce[:], claimsHeader, conn.Secrets, conn.Auth.SecretKey); err != nil {\n\t\t\tzap.L().Error(\"Syn/Ack token create failed\", zap.String(\"flow\", tcpPacket.L4FlowHash()), zap.Error(err))\n\t\t\tconn.Context.Counters().CounterError(appSynCounterFromError(err), err) //nolint\n\t\t}\n\t}\n\n\tallowPkt := func() {\n\t\ttcpPacket.TCPDataDetach(enforcerconstants.TCPAuthenticationOptionBaseLen) //nolint\n\t\tconn.SetState(connection.TCPSynReceived)\n\t\td.cachePut(d.tcpServer, 
tcpPacket.L4FlowHash(), conn)\n\t}\n\n\t// We should only be here if we have identity\n\tif err = tcpPacket.CheckTCPAuthenticationOption(enforcerconstants.TCPAuthenticationOptionBaseLen); err != nil || (err == nil && tcpPacket.IsEmptyTCPPayload()) {\n\t\t// This is not a normal case and should never happen because Linux/Windows rules are checking for this before sending the packet to NFQ.\n\t\tif err == nil {\n\t\t\terr = fmt.Errorf(\"identity payload empty: incoming connection dropped\")\n\t\t} else {\n\t\t\terr = fmt.Errorf(\"invalid identity: incoming connection dropped: %s\", err)\n\t\t}\n\t\td.reportRejectedFlow(tcpPacket, conn, collector.DefaultEndPoint, context.ManagementID(), context, collector.MissingToken, nil, nil, false)\n\t\treturn nil, context.Counters().CounterError(counters.ErrSynMissingTCPOption, err)\n\t}\n\n\trejected := false\n\tnetworkReport, pkt, perr := context.NetworkACLPolicy(tcpPacket)\n\tif perr == nil {\n\t\trejected = pkt.Action.Rejected()\n\t\tif rejected {\n\t\t\tperr = fmt.Errorf(\"rejected by ACL policy %s\", pkt.PolicyID)\n\t\t}\n\t} else {\n\t\t// We got an error, but ensure it isn't the catch all policy\n\t\tif !(pkt != nil && pkt.Action.Rejected() && pkt.PolicyID == \"default\") {\n\t\t\trejected = true\n\t\t}\n\t}\n\n\tif rejected {\n\t\td.reportExternalServiceFlow(context, networkReport, pkt, false, tcpPacket)\n\t\treturn nil, context.Counters().CounterError(counters.ErrSynFromExtNetReject, fmt.Errorf(\"packet had identity: incoming connection dropped: %s\", perr))\n\t}\n\n\ttoken := tcpPacket.ReadTCPData()\n\n\tif err = d.clientIdentityAllowed(context, token, tcpPacket, conn, networkReport); err == nil {\n\t\tprocessAfterVerdict := func() {\n\t\t\tcreateSynAckToken()\n\t\t}\n\n\t\tallowPkt()\n\t\treturn processAfterVerdict, nil\n\t}\n\n\treturn nil, err\n}\n\n// policyPair stores both reporting and actual action taken on packet.\ntype policyPair struct {\n\treport *policy.FlowPolicy\n\tpacket *policy.FlowPolicy\n}\n\n// 
processNetworkSynAckPacket processes a SynAck packet arriving from the network\nfunc (d *Datapath) processNetworkSynAckPacket(context *pucontext.PUContext, conn *connection.TCPConnection, tcpPacket *packet.Packet) (func(), error) {\n\tvar err error\n\n\tallowPkt := func() {\n\t\t// Remove any of our data\n\t\tconn.SetState(connection.TCPSynAckReceived)\n\t\ttcpPacket.TCPDataDetach(enforcerconstants.TCPAuthenticationOptionBaseLen) //nolint\n\t}\n\n\tcreateAckToken := func() {\n\t\tclaims := &tokens.ConnectionClaims{\n\t\t\tID:       context.ManagementID(),\n\t\t\tRMT:      conn.Auth.RemoteNonce,\n\t\t\tRemoteID: conn.Auth.RemoteContextID,\n\t\t}\n\n\t\t// Create a new token that includes the source and destination nonce\n\t\t// These are both challenges signed by the secret key and random for every\n\t\t// connection, minimizing the chances of a replay attack\n\t\tif conn.Auth.AckToken, err = d.tokenAccessor.CreateAckPacketToken(conn.Auth.Proto314, conn.Auth.SecretKey, claims, conn.EncodedBuf[:]); err != nil {\n\t\t\tzap.L().Error(\"Ack token create failed\", zap.String(\"flow\", tcpPacket.L4FlowHash()), zap.Error(err))\n\t\t\tconn.Context.Counters().CounterError(appAckCounterFromError(err), err) //nolint\n\t\t}\n\t}\n\n\t// Packets with no authorization are processed as external services based on the ACLs\n\tif err = tcpPacket.CheckTCPAuthenticationOption(enforcerconstants.TCPAuthenticationOptionBaseLen); err != nil || (err == nil && tcpPacket.IsEmptyTCPPayload()) {\n\n\t\tif _, err := d.puFromContextID.Get(conn.Context.ID()); err != nil {\n\t\t\t// PU has been deleted. 
Ignore these packets\n\t\t\treturn nil, conn.Context.Counters().CounterError(counters.ErrInvalidSynAck, fmt.Errorf(\"PU with ID %s deleted\", conn.Context.ID()))\n\t\t}\n\n\t\tflowHash := tcpPacket.SourceAddress().String() + \":\" + strconv.Itoa(int(tcpPacket.SourcePort()))\n\t\tif plci, plerr := context.RetrieveCachedExternalFlowPolicy(flowHash); plerr == nil {\n\t\t\tplc := plci.(*policyPair)\n\t\t\td.releaseExternalFlow(context, plc.report, plc.packet, tcpPacket)\n\t\t\tconn.Context.Counters().IncrementCounter(counters.ErrSynAckFromExtNetAccept)\n\t\t\treturn nil, nil\n\t\t}\n\n\t\t// Never seen this IP before, let's parse it.\n\t\treport, pkt, perr := context.ApplicationACLPolicyFromAddr(tcpPacket.SourceAddress(), tcpPacket.SourcePort(), tcpPacket.IPProto())\n\n\t\t// Ping packet from an external network.\n\t\tif conn.PingEnabled() {\n\t\t\terr := d.processPingNetSynAckPacket(context, conn, tcpPacket, 0, pkt, nil, true)\n\t\t\tif err != nil && err != errDropPingNetSynAck {\n\t\t\t\tzap.L().Error(\"unable to process ping network synack (externalnetwork)\", zap.Error(err))\n\t\t\t}\n\t\t\treturn nil, err\n\t\t}\n\n\t\tif perr != nil || pkt.Action.Rejected() {\n\t\t\td.reportReverseExternalServiceFlow(context, report, pkt, true, tcpPacket)\n\t\t\treturn nil, conn.Context.Counters().CounterError(counters.ErrSynAckFromExtNetReject, fmt.Errorf(\"ErrSynAckFromExtNetReject\"))\n\t\t}\n\n\t\t// Add it to the cache if we can accept it\n\t\tcontext.CacheExternalFlowPolicy(\n\t\t\ttcpPacket,\n\t\t\t&policyPair{\n\t\t\t\treport: report,\n\t\t\t\tpacket: pkt,\n\t\t\t},\n\t\t)\n\n\t\t// Set the state to Data so the other state machines ignore subsequent packets\n\t\tconn.SetState(connection.TCPData)\n\t\td.releaseExternalFlow(context, report, pkt, tcpPacket)\n\t\tconn.Context.Counters().IncrementCounter(counters.ErrSynAckFromExtNetAccept)\n\n\t\treturn nil, nil\n\t}\n\n\t// This is a corner condition. 
We are receiving a SynAck packet and we are in\n\t// a state that indicates that we have already processed one. This means that\n\t// our ack packet was lost. We need to revert conntrack in this case and get\n\t// back into the picture.\n\tif conn.GetState() != connection.TCPSynSend {\n\t\t// Revert the connmarks - dealing with retransmissions\n\t\tif cerr := d.conntrack.UpdateApplicationFlowMark(\n\t\t\ttcpPacket.DestinationAddress(),\n\t\t\ttcpPacket.SourceAddress(),\n\t\t\ttcpPacket.IPProto(),\n\t\t\ttcpPacket.DestPort(),\n\t\t\ttcpPacket.SourcePort(),\n\t\t\tuint32(1), // We cannot put it back to zero. We need some other value.\n\t\t); cerr != nil {\n\t\t\tzap.L().Debug(\"Failed to update conntrack table for flow after synack packet\",\n\t\t\t\tzap.String(\"app-conn\", tcpPacket.L4ReverseFlowHash()),\n\t\t\t\tzap.String(\"state\", fmt.Sprintf(\"%d\", conn.GetState())),\n\t\t\t\tzap.Error(cerr),\n\t\t\t)\n\t\t}\n\n\t\tconn.SetState(connection.TCPSynAckReceived)\n\t}\n\n\tif !d.mutualAuthorization {\n\t\tallowPkt()\n\t\treturn nil, nil\n\t}\n\n\ttoken := tcpPacket.ReadTCPData()\n\tif err = d.serverIdentityAllowed(context, token, tcpPacket, conn); err == nil {\n\t\tprocessAfterVerdict := func() {\n\t\t\tcreateAckToken()\n\t\t}\n\t\tallowPkt()\n\t\treturn processAfterVerdict, nil\n\t}\n\n\treturn nil, err\n}\n\nfunc (d *Datapath) serverIdentityAllowed(context *pucontext.PUContext, token []byte, tcpPacket *packet.Packet, conn *connection.TCPConnection) error {\n\n\tclaims := &conn.Auth.ConnectionClaims\n\tsecretKey, claimsHeader, controller, remoteNonce, remoteContextID, proto314, err := d.tokenAccessor.ParsePacketToken(conn.Auth.LocalDatapathPrivateKey, token, conn.Secrets, claims, true)\n\n\tif err != nil {\n\t\tzap.L().Error(\"Syn/Ack token parse error\", zap.String(\"flow\", tcpPacket.L4FlowHash()), zap.Error(err))\n\t\td.reportRejectedFlow(tcpPacket, conn, collector.DefaultEndPoint, context.ManagementID(), context, collector.InvalidToken, nil, nil, 
true)\n\t\treturn context.Counters().CounterError(netSynAckCounterFromError(err), err)\n\t}\n\n\tconn.Auth.SecretKey = secretKey\n\tconn.Auth.RemoteNonce = remoteNonce\n\tconn.Auth.RemoteContextID = remoteContextID\n\tconn.Auth.Proto314 = proto314\n\n\tif controller != nil && ((conn.PingEnabled()) || (!controller.SameController)) {\n\t\tconn.DestinationController = controller.Controller\n\t}\n\n\t// Add the port as a label with an @ prefix. These labels are invalid otherwise\n\t// If all policies are restricted by port numbers this will allow port-specific policies\n\ttags := claims.T.Copy()\n\ttags.AppendKeyValue(constants.PortNumberLabelString, constants.TCPProtoString+\"/\"+strconv.Itoa(int(tcpPacket.SourcePort())))\n\n\t// Add the controller to the claims\n\tif controller != nil && len(controller.Controller) > 0 {\n\t\ttags.AppendKeyValue(constants.ControllerLabelString, controller.Controller)\n\t}\n\n\treport, pkt := context.SearchTxtRules(tags, !d.mutualAuthorization)\n\n\t// Ping packet from remote enforcer.\n\tif claimsHeader != nil {\n\t\tif claimsHeader.Ping() && claims.P != nil {\n\t\t\tpayloadSize := len(token)\n\t\t\terr := d.processPingNetSynAckPacket(context, conn, tcpPacket, payloadSize, pkt, claims, false)\n\t\t\tif err != nil && err != errDropPingNetSynAck {\n\t\t\t\tzap.L().Error(\"unable to process ping network synack\", zap.Error(err))\n\t\t\t}\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// Report and release traffic belonging to the same pu\n\tif conn.Auth.RemoteContextID == context.ManagementID() {\n\t\tconn.SetState(connection.TCPData)\n\t\tconn.SetLoopbackConnection(true)\n\t\td.reportAcceptedFlow(tcpPacket, conn, conn.Auth.RemoteContextID, context.ManagementID(), context, nil, nil, true)\n\t\td.releaseUnmonitoredFlow(tcpPacket)\n\t\treturn nil\n\t}\n\n\tif pkt.Action.Rejected() {\n\t\td.reportRejectedFlow(tcpPacket, conn, conn.Auth.RemoteContextID, context.ManagementID(), context, collector.PolicyDrop, report, pkt, true)\n\t\treturn 
context.Counters().CounterError(counters.ErrSynAckRejected, fmt.Errorf(\"ErrSynAckRejected\"))\n\t}\n\n\treturn nil\n}\n\n// processNetworkAckPacket processes an Ack packet arriving from the network\nfunc (d *Datapath) processNetworkAckPacket(context *pucontext.PUContext, conn *connection.TCPConnection, tcpPacket *packet.Packet) error {\n\n\tvar err error\n\n\tif conn.GetState() == connection.TCPData || conn.GetState() == connection.TCPAckSend {\n\n\t\t// This rule is required as network packets are being duplicated by the middle box in Google. Our ack packets contain payload and that will be sent to the TCP stack,\n\t\t// if we don't drop them here.\n\t\tif err := tcpPacket.CheckTCPAuthenticationOption(enforcerconstants.TCPAuthenticationOptionBaseLen); err == nil {\n\t\t\treturn conn.Context.Counters().CounterError(counters.ErrDuplicateAckDrop, fmt.Errorf(\"ErrDuplicateAckDrop\"))\n\t\t}\n\n\t\tconn.ResetTimer(waitBeforeRemovingConn)\n\t\ttcpPacket.SetConnmark = true\n\t\treturn nil\n\t}\n\n\tif conn.IsLoopbackConnection() {\n\t\tconn.SetState(connection.TCPData)\n\t\td.releaseUnmonitoredFlow(tcpPacket)\n\t\treturn nil\n\t}\n\n\t// Validate that the source/destination nonce matches. 
The signature has validated both directions\n\tif conn.GetState() == connection.TCPSynAckSend || conn.GetState() == connection.TCPSynReceived {\n\n\t\tif err := tcpPacket.CheckTCPAuthenticationOption(enforcerconstants.TCPAuthenticationOptionBaseLen); err != nil {\n\t\t\treturn conn.Context.Counters().CounterError(counters.ErrAckTCPNoTCPAuthOption, fmt.Errorf(\"ErrAckTCPNoTCPAuthOption\"))\n\t\t}\n\n\t\tif err = d.tokenAccessor.ParseAckToken(conn.Auth.Proto314, conn.Auth.SecretKey, conn.Auth.Nonce[:], tcpPacket.ReadTCPData(), &conn.Auth.ConnectionClaims); err != nil {\n\t\t\tzap.L().Error(\"Ack Packet dropped because signature validation failed\", zap.String(\"flow\", tcpPacket.L4FlowHash()), zap.Error(err))\n\t\t\td.reportRejectedFlow(tcpPacket, conn, collector.DefaultEndPoint, context.ManagementID(), context, collector.InvalidToken, nil, nil, false)\n\t\t\treturn conn.Context.Counters().CounterError(netAckCounterFromError(err), err)\n\t\t}\n\n\t\ttcpPacket.TCPDataDetach(enforcerconstants.TCPAuthenticationOptionBaseLen)\n\n\t\tif conn.PacketFlowPolicy != nil && conn.PacketFlowPolicy.Action.Rejected() {\n\t\t\tif !conn.PacketFlowPolicy.ObserveAction.Observed() {\n\t\t\t\tzap.L().Error(\"Flow rejected but not observed\", zap.String(\"conn\", context.ManagementID()))\n\t\t\t}\n\t\t\t// Flow has been allowed because we are observing a deny rule's impact on the system. 
Packets are forwarded, reported as dropped + observed.\n\t\t\td.reportRejectedFlow(tcpPacket, conn, conn.Auth.RemoteContextID, context.ManagementID(), context, collector.PolicyDrop, conn.ReportFlowPolicy, conn.PacketFlowPolicy, false)\n\t\t} else {\n\t\t\t// We accept the packet as a new flow\n\t\t\td.reportAcceptedFlow(tcpPacket, conn, conn.Auth.RemoteContextID, context.ManagementID(), context, conn.ReportFlowPolicy, conn.PacketFlowPolicy, false)\n\t\t}\n\n\t\tconn.SetState(connection.TCPData)\n\n\t\tif err := d.ignoreFlow(tcpPacket); err != nil {\n\t\t\tzap.L().Error(\"Failed to ignore flow\", zap.Error(err))\n\t\t}\n\t\tconn.Context.Counters().IncrementCounter(counters.ErrConnectionsProcessed)\n\t\t// Accept the packet\n\t\treturn nil\n\t}\n\n\tif conn.GetState() == connection.UnknownState {\n\t\t// Check if the destination is in the external services approved cache\n\t\t// and if yes, allow the packet to go and release the flow.\n\t\t_, plcy, perr := context.NetworkACLPolicy(tcpPacket)\n\n\t\t// Ignore FIN packets. 
Let them go through.\n\t\tif tcpPacket.GetTCPFlags()&packet.TCPFinMask != 0 {\n\t\t\tconn.Context.Counters().IncrementCounter(counters.ErrIgnoreFin)\n\t\t\treturn nil\n\t\t}\n\n\t\tif perr != nil {\n\t\t\tconn.Context.Counters().CounterError(counters.ErrAckInUnknownState, nil) //nolint\n\t\t\tzap.L().Debug(\"converting to rst network\",\n\t\t\t\tzap.String(\"SourceIP\", tcpPacket.SourceAddress().String()),\n\t\t\t\tzap.String(\"DestinationIP\", tcpPacket.DestinationAddress().String()),\n\t\t\t\tzap.Int(\"SourcePort\", int(tcpPacket.SourcePort())),\n\t\t\t\tzap.Int(\"DestinationPort\", int(tcpPacket.DestPort())),\n\t\t\t\tzap.String(\"Flags\", packet.TCPFlagsToStr(tcpPacket.GetTCPFlags())),\n\t\t\t)\n\t\t\ttcpPacket.ConvertToRst()\n\n\t\t\ttcpPacket.SetConnmark = true\n\t\t\treturn nil\n\t\t}\n\n\t\tif plcy.Action.Rejected() {\n\t\t\treturn conn.Context.Counters().CounterError(counters.ErrAckFromExtNetReject, fmt.Errorf(\"ErrAckFromExtNetReject\"))\n\t\t}\n\n\t\tif err := d.ignoreFlow(tcpPacket); err != nil {\n\t\t\tzap.L().Error(\"Failed to ignore flow\", zap.Error(err))\n\t\t}\n\n\t\ttcpPacket.SetConnmark = true\n\n\t\tconn.Context.Counters().IncrementCounter(counters.ErrAckFromExtNetAccept)\n\t\treturn nil\n\t}\n\n\thash := tcpPacket.L4FlowHash()\n\n\t// Everything else is dropped - ACK received in the Syn state without a SynAck\n\tzap.L().Debug(\"Invalid state reached\",\n\t\tzap.String(\"state\", fmt.Sprintf(\"%d\", conn.GetState())),\n\t\tzap.String(\"context\", context.ManagementID()),\n\t\tzap.String(\"net-conn\", hash),\n\t)\n\n\treturn conn.Context.Counters().CounterError(counters.ErrInvalidNetAckState, fmt.Errorf(\"ErrInvalidNetAckState\"))\n}\n\n// appSynRetrieveState retrieves state for the application Syn packet.\n// It creates a new connection by default\nfunc (d *Datapath) appSynRetrieveState(p *packet.Packet) (*connection.TCPConnection, error) {\n\tvar err error\n\tvar context *pucontext.PUContext\n\n\t// If PU context doesn't exist for this 
syn, return error.\n\tif context, err = d.contextFromIP(true, p.Mark, p.SourcePort(), packet.IPProtocolTCP); err != nil {\n\t\treturn nil, counters.CounterError(counters.ErrSynUnexpectedPacket, err)\n\t}\n\n\t// Check if this app syn has already been seen\n\tif conn, exists := d.cacheGet(d.tcpClient, p.L4FlowHash()); exists {\n\t\tif !conn.GetMarkForDeletion() && conn.GetInitialSequenceNumber() == p.TCPSequenceNumber() {\n\t\t\t// return this connection only if we are not deleting this\n\t\t\t// this is marked only when we see a FINACK for this l4flowhash\n\t\t\t// this should not have happened for a connection while we are processing an appSyn for this connection\n\t\t\t// The addorupdate for this cache will happen outside in processtcppacket\n\t\t\treturn conn, nil\n\t\t} else { //nolint\n\t\t\t// Stale app syn; remove it from the cache\n\t\t\td.cacheRemove(d.tcpClient, p.L4FlowHash())\n\t\t}\n\t}\n\treturn connection.NewTCPConnection(context, p), nil\n}\n\n// appSynAckRetrieveState retrieves the state for the application syn/ack packet.\nfunc (d *Datapath) appSynAckRetrieveState(p *packet.Packet) (*connection.TCPConnection, error) {\n\thash := p.L4ReverseFlowHash()\n\n\t// We must have seen a network syn.\n\tif conn, exists := d.cacheGet(d.tcpServer, hash); exists {\n\t\treturn conn, nil\n\t}\n\n\treturn nil, counters.CounterError(counters.ErrNetSynNotSeen, errors.New(\"Network Syn not seen\"))\n}\n\n// appRetrieveState retrieves the state for the rest of the application packets. 
It\n// returns an error if it cannot find the state\nfunc (d *Datapath) appRetrieveState(p *packet.Packet) (*connection.TCPConnection, error) {\n\n\t// Was this ack generated by a PU acting as a TCP client?\n\tif conn, exists := d.cacheGet(d.tcpClient, p.L4FlowHash()); exists {\n\t\treturn conn, nil\n\t}\n\n\t// Was this ack generated by a PU acting as a TCP server?\n\tif conn, exists := d.cacheGet(d.tcpServer, p.L4ReverseFlowHash()); exists {\n\t\treturn conn, nil\n\t}\n\n\tcounters.CounterError(counters.ErrNoConnFound, nil) //nolint\n\n\tzap.L().Debug(\"Application ACK Packet received with no state\",\n\t\tzap.String(\"flow\", p.L4FlowHash()),\n\t\tzap.String(\"Flags\", packet.TCPFlagsToStr(p.GetTCPFlags())))\n\n\tif p.GetTCPFlags()&packet.TCPSynAckMask == packet.TCPAckMask {\n\t\t// Check whether it's an existing connection\n\t\tcontext, err := d.contextFromIP(true, p.Mark, p.SourcePort(), packet.IPProtocolTCP)\n\t\tif err != nil {\n\t\t\treturn nil, errors.New(\"No context in app processing\")\n\t\t}\n\t\tconn := connection.NewTCPConnection(context, p)\n\t\tconn.SetState(connection.UnknownState)\n\t\treturn conn, nil\n\t}\n\n\tif p.GetTCPFlags()&packet.TCPRstMask != 0 && p.GetTCPFlags()&packet.TCPAckMask == 0 {\n\t\treturn nil, errRstPacket\n\t}\n\n\treturn nil, errNoConnection\n}\n\n// netSynRetrieveState retrieves the state for the Syn packets on the network.\n// If no state is found, it generates a new connection record.\nfunc (d *Datapath) netSynRetrieveState(p *packet.Packet) (*connection.TCPConnection, error) {\n\n\tvar conn *connection.TCPConnection\n\tvar context *pucontext.PUContext\n\tvar err error\n\n\tif context, err = d.contextFromIP(false, p.Mark, p.DestPort(), packet.IPProtocolTCP); err != nil {\n\t\treturn nil, counters.CounterError(counters.ErrInvalidNetSynState, err)\n\t}\n\n\tif conn, exists := d.cacheGet(d.tcpServer, p.L4FlowHash()); exists {\n\t\tif !conn.GetMarkForDeletion() && conn.GetInitialSequenceNumber() == p.TCPSequenceNumber() {\n\t\t\t// 
Only if we haven't seen a FINACK on this connection\n\t\t\treturn conn, nil\n\t\t} else { //nolint\n\t\t\t// remove stale net syn entry\n\t\t\td.cacheRemove(d.tcpServer, p.L4FlowHash())\n\t\t}\n\t}\n\n\tconn = connection.NewTCPConnection(context, p)\n\n\tconn.Secrets, conn.Auth.LocalDatapathPrivateKey, conn.Auth.LocalDatapathPublicKeyV1, conn.Auth.LocalDatapathPublicKeySignV1, conn.Auth.LocalDatapathPublicKeyV2, conn.Auth.LocalDatapathPublicKeySignV2 = context.GetSecrets()\n\treturn conn, nil\n}\n\n// netSynAckRetrieveState retrieves the state for SynAck packets at the network\n// It relies on the source port cache for that\nfunc (d *Datapath) netSynAckRetrieveState(p *packet.Packet) (*connection.TCPConnection, error) {\n\n\t// We must have seen App Syn\n\tif conn, exists := d.cacheGet(d.tcpClient, p.L4ReverseFlowHash()); exists {\n\t\tif conn.GetMarkForDeletion() {\n\t\t\treturn nil, errOutOfOrderSynAck\n\t\t}\n\t\treturn conn, nil\n\t}\n\n\treturn nil, counters.CounterError(counters.ErrNonPUTraffic, errNonPUTraffic)\n}\n\n// netRetrieveState retrieves the state of a network connection. 
Use the flow caches for that\nfunc (d *Datapath) netRetrieveState(p *packet.Packet) (*connection.TCPConnection, error) {\n\t// Is the ack received by a TCP client\n\tif conn, exists := d.cacheGet(d.tcpClient, p.L4ReverseFlowHash()); exists {\n\t\tif p.GetTCPFlags()&packet.TCPRstMask != 0 && p.GetTCPFlags()&packet.TCPAckMask == 0 {\n\t\t\tif !bytes.Equal(p.ReadTCPData(), rstIdentity) {\n\t\t\t\tconn.SetReportReason(\"reset\")\n\t\t\t}\n\t\t\treturn conn, errRstPacket\n\t\t}\n\t\treturn conn, nil\n\t}\n\n\t// Is the ack received by a TCP server\n\tif conn, exists := d.cacheGet(d.tcpServer, p.L4FlowHash()); exists {\n\t\treturn conn, nil\n\t}\n\n\tif p.GetTCPFlags()&packet.TCPSynAckMask == packet.TCPAckMask {\n\t\t// Check whether it's an existing connection\n\t\tcontext, cerr := d.contextFromIP(false, p.Mark, p.DestPort(), packet.IPProtocolTCP)\n\t\tif cerr != nil {\n\t\t\treturn nil, cerr\n\t\t}\n\t\tconn := connection.NewTCPConnection(context, p)\n\t\tconn.SetState(connection.UnknownState)\n\t\treturn conn, nil\n\t}\n\n\tif p.GetTCPFlags()&packet.TCPRstMask != 0 && p.GetTCPFlags()&packet.TCPAckMask == 0 {\n\t\treturn nil, errRstPacket\n\t}\n\n\treturn nil, errNoConnection\n}\n\n// releaseExternalFlow releases the flow and updates the conntrack table\nfunc (d *Datapath) releaseExternalFlow(context *pucontext.PUContext, report *policy.FlowPolicy, action *policy.FlowPolicy, tcpPacket *packet.Packet) {\n\n\td.cacheRemove(d.tcpClient, tcpPacket.L4ReverseFlowHash())\n\n\tif err := d.ignoreFlow(tcpPacket); err != nil {\n\t\tzap.L().Error(\"Failed to ignore flow\", zap.Error(err))\n\t}\n\n\ttcpPacket.SetConnmark = true\n\td.reportReverseExternalServiceFlow(context, report, action, true, tcpPacket)\n}\n\n// releaseUnmonitoredFlow releases the flow and updates the conntrack table\nfunc (d *Datapath) releaseUnmonitoredFlow(tcpPacket *packet.Packet) {\n\n\tif err := d.ignoreFlow(tcpPacket); err != nil {\n\t\tzap.L().Error(\"Failed to ignore flow\", 
zap.Error(err))\n\t}\n\n\ttcpPacket.SetConnmark = true\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/datapath_test.go",
    "content": "// +build linux\n\npackage nfqdatapath\n\nimport (\n\t\"context\"\n\t\"crypto/ecdsa\"\n\t\"encoding/binary\"\n\t\"errors\"\n\t\"fmt\"\n\t\"math/rand\"\n\t\"net\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\tenforcerconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/packetgen\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/flowtracking/mockflowclient\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packettracing\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n\t\"gotest.tools/assert\"\n)\n\nfunc TestEnforcerExternalNetworks(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\ttestThePackets := func(enforcer *Datapath) {\n\n\t\tPacketFlow := packetgen.NewTemplateFlow()\n\n\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\tSo(err, ShouldBeNil)\n\n\t\tsynackPacket, err := PacketFlow.GetFirstSynAckPacket().ToBytes()\n\t\tSo(err, ShouldBeNil)\n\n\t\ttcpPacket, _ := packet.New(0, synackPacket, \"0\", true)\n\t\t_, err1 := enforcer.processApplicationTCPPackets(tcpPacket)\n\t\tSo(err1, ShouldBeNil)\n\n\t}\n\n\tConvey(\"When the mode is RemoteContainer\", t, func() {\n\n\t\tenforcer, secrets, mockTokenAccessor, _, _ := NewWithMocks(ctrl, \"serverID1\", 
constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tsecrets.EXPECT().TransmittedKey().Return([]byte(\"dummy\")).AnyTimes()\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\t\tiprules := policy.IPRuleList{policy.IPRule{\n\t\t\tAddresses: []string{\"10.1.10.76/32\"},\n\t\t\tPorts:     []string{\"80\"},\n\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\tAction:   policy.Accept,\n\t\t\t\tPolicyID: \"tcp172/8\"},\n\t\t}}\n\n\t\tcontextID := \"123456\"\n\t\tpuInfo := policy.NewPUInfo(contextID, \"/ns1\", common.LinuxProcessPU)\n\n\t\tcontext, err := pucontext.NewPU(contextID, puInfo, mockTokenAccessor, 10*time.Second)\n\t\tSo(err, ShouldBeNil)\n\t\tenforcer.puFromContextID.AddOrUpdate(contextID, context)\n\t\ts, _ := portspec.NewPortSpec(80, 80, contextID)\n\t\tenforcer.contextIDFromTCPPort.AddPortSpec(s)\n\n\t\terr = context.UpdateNetworkACLs(iprules)\n\t\tSo(err, ShouldBeNil)\n\n\t\ttestThePackets(enforcer)\n\t})\n}\n\nfunc TestInvalidContext(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tdefer MockGetUDPRawSocket()()\n\n\tConvey(\"Given I create a new enforcer instance\", t, func() {\n\n\t\tenforcer, _, _, _, _ := NewWithMocks(ctrl, \"serverID\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\tSo(err, ShouldBeNil)\n\t\tsynPacket, err := PacketFlow.GetFirstSynPacket().ToBytes()\n\t\tSo(err, ShouldBeNil)\n\t\ttcpPacket, err := packet.New(0, synPacket, \"0\", true)\n\t\tConvey(\"When I run a TCP Syn packet through a non existing 
context\", func() {\n\n\t\t\t_, err1 := enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\t_, _, err2 := enforcer.processNetworkTCPPackets(tcpPacket)\n\n\t\t\tConvey(\"Then I should see an error for non existing context\", func() {\n\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(err1, ShouldNotBeNil)\n\t\t\t\tSo(err2, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestPacketHandlingFirstThreePacketsHavePayload(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\ttestThePackets := func(enforcer *Datapath) {\n\t\tSIP := net.IPv4zero\n\t\tfirstSynAckProcessed := false\n\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\tSo(err, ShouldBeNil)\n\t\tfor i := 0; i < PacketFlow.GetNumPackets(); i++ {\n\t\t\toldPacketFromFlow, err := PacketFlow.GetNthPacket(i).ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\t\t\toldPacket, err := packet.New(0, oldPacketFromFlow, \"0\", true)\n\t\t\tif err == nil && oldPacket != nil {\n\t\t\t\toldPacket.UpdateIPv4Checksum()\n\t\t\t\toldPacket.UpdateTCPChecksum()\n\t\t\t}\n\t\t\ttcpPacketFromFlow, err := PacketFlow.GetNthPacket(i).ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\t\t\ttcpPacket, err := packet.New(0, tcpPacketFromFlow, \"0\", true)\n\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t}\n\t\t\tif debug {\n\t\t\t\tfmt.Println(\"Input packet\", i)\n\t\t\t\ttcpPacket.Print(0, false)\n\t\t\t}\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(tcpPacket, ShouldNotBeNil)\n\n\t\t\tif reflect.DeepEqual(SIP, net.IPv4zero) {\n\t\t\t\tSIP = tcpPacket.SourceAddress()\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(SIP, tcpPacket.DestinationAddress()) &&\n\t\t\t\t!reflect.DeepEqual(SIP, tcpPacket.SourceAddress()) {\n\t\t\t\tt.Error(\"Invalid Test Packet\")\n\t\t\t}\n\n\t\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tif debug 
{\n\t\t\t\tfmt.Println(\"Intermediate packet\", i)\n\t\t\t\ttcpPacket.Print(0, false)\n\t\t\t}\n\n\t\t\tif tcpPacket.GetTCPFlags()&packet.TCPSynMask != 0 {\n\t\t\t\tConvey(\"When I pass a packet with SYN or SYN/ACK flags for packet \"+strconv.Itoa(i), func() {\n\t\t\t\t\tConvey(\"Then I expect some data payload to exist on the packet \"+strconv.Itoa(i), func() {\n\t\t\t\t\t\t// In our 3 way security handshake syn and syn-ack packet should grow in length\n\t\t\t\t\t\tSo(tcpPacket.IPTotalLen(), ShouldBeGreaterThan, oldPacket.IPTotalLen())\n\t\t\t\t\t})\n\t\t\t\t})\n\t\t\t}\n\n\t\t\tif !firstSynAckProcessed && tcpPacket.GetTCPFlags()&packet.TCPSynAckMask == packet.TCPAckMask {\n\t\t\t\tfirstSynAckProcessed = true\n\t\t\t\tConvey(\"When I pass the first packet with ACK flag for packet \"+strconv.Itoa(i), func() {\n\t\t\t\t\tConvey(\"Then I expect some data payload to exist on the packet \"+strconv.Itoa(i), func() {\n\t\t\t\t\t\t// In our 3 way security handshake first ack packet should grow in length\n\t\t\t\t\t\tSo(tcpPacket.IPTotalLen(), ShouldBeGreaterThan, oldPacket.IPTotalLen())\n\t\t\t\t\t})\n\t\t\t\t})\n\t\t\t}\n\n\t\t\toutput := make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\t\toutPacket, errp := packet.New(0, output, \"0\", true)\n\t\t\tSo(len(tcpPacket.GetTCPBytes()), ShouldBeLessThanOrEqualTo, len(outPacket.GetTCPBytes()))\n\t\t\tSo(errp, ShouldBeNil)\n\n\t\t\t_, f, err := enforcer.processNetworkTCPPackets(outPacket)\n\t\t\tif f != nil {\n\t\t\t\tf()\n\t\t\t}\n\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tif debug {\n\t\t\t\tfmt.Println(\"Output packet\", i)\n\t\t\t\toutPacket.Print(0, false)\n\t\t\t}\n\t\t}\n\t}\n\n\tflowRecord := CreateFlowRecord(1, testSrcIP, testDstIP, 666, 80, policy.Accept, \"\")\n\n\tConvey(\"When the mode is RemoteContainer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, 
constants.RemoteContainer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\t\ttestThePackets(enforcer)\n\n\t})\n\n\tConvey(\"When the mode is LocalServer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.LocalServer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\t\ttestThePackets(enforcer)\n\n\t})\n}\n\nfunc TestInvalidIPContext(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tdefer MockGetUDPRawSocket()()\n\n\tConvey(\"Given I create a new enforcer instance\", t, func() {\n\n\t\tenforcer, secrets, mockTokenAccessor, mockCollector, mockDNS := NewWithMocks(ctrl, \"serverID\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tsecrets.EXPECT().TransmittedKey().Return([]byte(\"dummy\")).AnyTimes()\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\t\tmockDNS.EXPECT().StartDNSServer(gomock.Any(), gomock.Any(), gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().Enforce(gomock.Any(), gomock.Any(), gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().Unenforce(gomock.Any(), gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().SyncWithPlatformCache(gomock.Any(), gomock.Any()).Times(1)\n\n\t\tpuInfo := policy.NewPUInfo(\"SomeProcessingUnitId\", \"/ns2\", common.LinuxProcessPU)\n\n\t\tCounterReport := &collector.CounterReport{\n\t\t\tPUID:      puInfo.Policy.ManagementID(),\n\t\t\tNamespace: puInfo.Policy.ManagementNamespace(),\n\t\t}\n\t\tmockCollector.EXPECT().CollectCounterEvent(MyCounterMatcher(CounterReport)).MinTimes(1)\n\n\t\tenforcer.Enforce(context.Background(), \"serverID\", puInfo) // nolint\n\t\tdefer func() {\n\t\t\tif err := 
enforcer.Unenforce(context.Background(), \"serverID\"); err != nil {\n\t\t\t\tfmt.Println(\"Error\", err.Error())\n\t\t\t}\n\t\t}()\n\n\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeMultipleGoodFlow)\n\t\tSo(err, ShouldBeNil)\n\t\tsynPacket, err := PacketFlow.GetFirstSynPacket().ToBytes()\n\t\tSo(err, ShouldBeNil)\n\t\ttcpPacket, err := packet.New(0, synPacket, \"0\", true)\n\n\t\tConvey(\"When I run a TCP Syn packet through an invalid existing context (missing IP)\", func() {\n\n\t\t\t_, err1 := enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\t_, _, err2 := enforcer.processNetworkTCPPackets(tcpPacket)\n\n\t\t\tConvey(\"Then I should see an error for missing IP\", func() {\n\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(err1, ShouldNotBeNil)\n\t\t\t\tSo(err2, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\n// TestEnforcerConnUnknownState test ensures that enforcer closes the\n// connection by converting packets to rst when it finds connection\n// to be in unknown state. 
This happens when the enforcer has not seen the\n// 3-way handshake for a connection.\nfunc TestEnforcerConnUnknownState(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\ttestThePackets := func(enforcer *Datapath) {\n\t\tConvey(\"If I send an ack packet from either PU to the other, it is converted into an RST\", func() {\n\t\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tinput, err := PacketFlow.GetFirstAckPacket().ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\ttcpPacket, err := packet.New(0, input, \"0\", true)\n\t\t\t// Create a copy of the ACK packet\n\t\t\ttcpPacketCopy := *tcpPacket\n\n\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t}\n\n\t\t\t_, err1 := enforcer.processApplicationTCPPackets(tcpPacket)\n\n\t\t\t// Test whether the packet was converted to an RST (flag 0x04)\n\t\t\tif tcpPacket.GetTCPFlags() != 0x04 {\n\t\t\t\tt.Fail()\n\t\t\t}\n\n\t\t\t_, _, err2 := enforcer.processNetworkTCPPackets(&tcpPacketCopy)\n\n\t\t\tif tcpPacket.GetTCPFlags() != 0x04 {\n\t\t\t\tt.Fail()\n\t\t\t}\n\n\t\t\tSo(err1, ShouldBeNil)\n\t\t\tSo(err2, ShouldBeNil)\n\t\t})\n\t}\n\n\tConvey(\"When the mode is RemoteContainer\", t, func() {\n\n\t\tenforcer, _ := createEnforcerWithPolicy(ctrl, constants.RemoteContainer)\n\t\ttestThePackets(enforcer)\n\n\t})\n\n\tConvey(\"When the mode is LocalServer\", t, func() {\n\n\t\tenforcer, _ := createEnforcerWithPolicy(ctrl, constants.LocalServer)\n\t\ttestThePackets(enforcer)\n\n\t})\n}\n\nfunc TestInvalidTokenContext(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tdefer MockGetUDPRawSocket()()\n\n\ttestThePackets := func(enforcer *Datapath) {\n\n\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\tSo(err, 
ShouldBeNil)\n\t\tsynPacket, err := PacketFlow.GetFirstSynPacket().ToBytes()\n\t\tSo(err, ShouldBeNil)\n\t\ttcpPacket, err := packet.New(0, synPacket, \"0\", true)\n\n\t\tConvey(\"When I run a TCP Syn packet through an existing context with an invalid token\", func() {\n\n\t\t\t_, err1 := enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\t_, _, err2 := enforcer.processNetworkTCPPackets(tcpPacket)\n\n\t\t\tConvey(\"Then I should see an error for missing Token\", func() {\n\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(err1, ShouldNotBeNil)\n\t\t\t\tSo(err2, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t}\n\n\tConvey(\"Given I create a new enforcer instance\", t, func() {\n\n\t\tpuInfo := policy.NewPUInfo(\"SomeProcessingUnitId\", \"/ns2\", common.LinuxProcessPU)\n\n\t\tip := policy.ExtendedMap{\n\t\t\t\"bridge\": testDstIP,\n\t\t}\n\t\tpuInfo.Runtime.SetIPAddresses(ip)\n\n\t\tenforcer, secrets, mockTokenAccessor, _, mockDNS := NewWithMocks(ctrl, \"serverID\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tsecrets.EXPECT().TransmittedKey().Return([]byte(\"dummy\")).AnyTimes()\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\t\tmockDNS.EXPECT().StartDNSServer(gomock.Any(), gomock.Any(), gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().Enforce(gomock.Any(), gomock.Any(), gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().SyncWithPlatformCache(gomock.Any(), gomock.Any()).Times(1)\n\n\t\tenforcer.Enforce(context.Background(), \"serverID\", puInfo) // nolint\n\n\t\ttestThePackets(enforcer)\n\t})\n}\n\nfunc TestPacketHandlingDstPortCacheBehavior(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\ttestThePackets := func(enforcer *Datapath) 
{\n\n\t\tSIP := net.IPv4zero\n\n\t\tConvey(\"When I pass multiple packets through the enforcer\", func() {\n\n\t\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tfor i := 0; i < PacketFlow.GetNumPackets(); i++ {\n\t\t\t\toldPacketFromFlow, err := PacketFlow.GetNthPacket(i).ToBytes()\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\toldPacket, err := packet.New(0, oldPacketFromFlow, \"0\", true)\n\t\t\t\tif err == nil && oldPacket != nil {\n\t\t\t\t\toldPacket.UpdateIPv4Checksum()\n\t\t\t\t\toldPacket.UpdateTCPChecksum()\n\t\t\t\t}\n\t\t\t\ttcpPacketFromFlow, err := PacketFlow.GetNthPacket(i).ToBytes()\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\ttcpPacket, err := packet.New(0, tcpPacketFromFlow, \"0\", true)\n\t\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t\t}\n\n\t\t\t\tif debug {\n\t\t\t\t\tfmt.Println(\"Input packet\", i)\n\t\t\t\t\ttcpPacket.Print(0, false)\n\t\t\t\t}\n\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(tcpPacket, ShouldNotBeNil)\n\n\t\t\t\tif reflect.DeepEqual(SIP, net.IPv4zero) {\n\t\t\t\t\tSIP = tcpPacket.SourceAddress()\n\t\t\t\t}\n\t\t\t\tif !reflect.DeepEqual(SIP, tcpPacket.DestinationAddress()) &&\n\t\t\t\t\t!reflect.DeepEqual(SIP, tcpPacket.SourceAddress()) {\n\t\t\t\t\tt.Error(\"Invalid Test Packet\")\n\t\t\t\t}\n\n\t\t\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\tif debug {\n\t\t\t\t\tfmt.Println(\"Intermediate packet\", i)\n\t\t\t\t\ttcpPacket.Print(0, false)\n\t\t\t\t}\n\n\t\t\t\toutput := make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\t\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\t\t\toutPacket, errp := packet.New(0, output, \"0\", true)\n\t\t\t\tSo(len(tcpPacket.GetTCPBytes()), ShouldBeLessThanOrEqualTo, len(outPacket.GetTCPBytes()))\n\t\t\t\tSo(errp, ShouldBeNil)\n\t\t\t\t_, f, err := 
enforcer.processNetworkTCPPackets(outPacket)\n\t\t\t\tif f != nil {\n\t\t\t\t\tf()\n\t\t\t\t}\n\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\tif debug {\n\t\t\t\t\tfmt.Println(\"Output packet\", i)\n\t\t\t\t\toutPacket.Print(0, false)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n\n\tflowRecord := CreateFlowRecord(1, testSrcIP, testDstIP, 666, 80, policy.Accept, \"\")\n\n\tConvey(\"When the mode is RemoteContainer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.RemoteContainer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\t\ttestThePackets(enforcer)\n\n\t})\n\n\tConvey(\"When the mode is LocalServer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.LocalServer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\t\ttestThePackets(enforcer)\n\n\t})\n}\n\nfunc TestAckLost(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\ttestThePackets := func(enforcer *Datapath) {\n\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\tSo(err, ShouldBeNil)\n\n\t\tsynPacket, err := PacketFlow.GetFirstSynPacket().ToBytes()\n\t\tSo(err, ShouldBeNil)\n\t\ttcpPacket, err := packet.New(0, synPacket, \"0\", true)\n\t\tif err == nil && tcpPacket != nil {\n\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t}\n\n\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\tSo(err, ShouldBeNil)\n\n\t\toutput := make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\toutPacket, errp := packet.New(0, output, \"0\", true)\n\t\tSo(len(tcpPacket.GetTCPBytes()), ShouldBeLessThanOrEqualTo, len(outPacket.GetTCPBytes()))\n\t\tSo(errp, ShouldBeNil)\n\n\t\t_, f, err := enforcer.processNetworkTCPPackets(outPacket)\n\t\tif f != nil {\n\t\t\tf()\n\t\t}\n\n\t\tSo(err, ShouldBeNil)\n\n\t\tinput, _ := 
PacketFlow.GetFirstSynAckPacket().ToBytes()\n\n\t\ttcpPacket, _ = packet.New(0, input, \"0\", true)\n\t\tif tcpPacket != nil {\n\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t}\n\n\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\tSo(err, ShouldBeNil)\n\n\t\toutput = make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\toutPacket, _ = packet.New(0, output, \"0\", true)\n\t\t_, f, err = enforcer.processNetworkTCPPackets(outPacket)\n\t\tif f != nil {\n\t\t\tf()\n\t\t}\n\t\tSo(err, ShouldBeNil)\n\n\t\tinput, _ = PacketFlow.GetFirstAckPacket().ToBytes()\n\t\ttcpPacket, _ = packet.New(0, input, \"0\", true)\n\t\tif tcpPacket != nil {\n\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t}\n\n\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\tSo(err, ShouldBeNil)\n\t\t//simulate drop, and re-transmit packets.\n\n\t\tinput, _ = PacketFlow.GetFirstSynAckPacket().ToBytes()\n\n\t\ttcpPacket, _ = packet.New(0, input, \"0\", true)\n\t\tif tcpPacket != nil {\n\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t}\n\n\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\tassert.Equal(t, err, nil, \"error should be nil\")\n\n\t\toutput = make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\toutPacket, _ = packet.New(0, output, \"0\", true)\n\t\t_, f, err = enforcer.processNetworkTCPPackets(outPacket)\n\t\tif f != nil {\n\t\t\tf()\n\t\t}\n\t\tassert.Equal(t, err, nil, \"error should be nil\")\n\n\t\tinput, _ = PacketFlow.GetFirstAckPacket().ToBytes()\n\n\t\ttcpPacket, _ = packet.New(0, input, \"0\", true)\n\t\tif tcpPacket != nil {\n\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t}\n\n\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\tassert.Equal(t, err, nil, \"error should be nil\")\n\n\t\toutput = make([]byte, 
len(tcpPacket.GetTCPBytes()))\n\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\toutPacket, _ = packet.New(0, output, \"0\", true)\n\n\t\t_, f, err = enforcer.processNetworkTCPPackets(outPacket)\n\t\tif f != nil {\n\t\t\tf()\n\t\t}\n\n\t\tassert.Equal(t, err, nil, \"error should be nil\")\n\n\t}\n\n\tflowRecord := CreateFlowRecord(1, testSrcIP, testDstIP, 666, 80, policy.Accept, \"\")\n\n\tConvey(\"When the mode is RemoteContainer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.RemoteContainer)\n\t\tflowclient := mockflowclient.NewMockFlowClient(ctrl)\n\t\tflowclient.EXPECT().UpdateApplicationFlowMark(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return(nil).AnyTimes()\n\t\tenforcer.conntrack = flowclient\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\ttestThePackets(enforcer)\n\n\t})\n}\n\nfunc TestConnectionTrackerStateLocalContainer(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\ttestThePackets := func(enforcer *Datapath) {\n\n\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\tSo(err, ShouldBeNil)\n\t\t/* The first packet in the TCP flow slice is a SYN packet */\n\t\tConvey(\"When I pass a SYN packet through the enforcer\", func() {\n\n\t\t\tinput, err := PacketFlow.GetFirstSynPacket().ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\ttcpPacket, err := packet.New(0, input, \"0\", true)\n\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t}\n\n\t\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\t// Check state after sending the SYN packet\n\t\t\tCheckAfterAppSynPacket(enforcer, tcpPacket)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\toutput := make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\t\toutPacket, err := packet.New(0, 
output, \"0\", true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\t_, f, err := enforcer.processNetworkTCPPackets(outPacket)\n\t\t\tif f != nil {\n\t\t\t\tf()\n\t\t\t}\n\t\t\tSo(err, ShouldBeNil)\n\t\t\t// Check state after processing the network SYN packet\n\t\t\tCheckAfterNetSynPacket(enforcer, tcpPacket, outPacket)\n\n\t\t})\n\t\tConvey(\"When I pass a SYN and SYN/ACK packet through the enforcer\", func() {\n\n\t\t\tinput, err := PacketFlow.GetFirstSynPacket().ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\ttcpPacket, err := packet.New(0, input, \"0\", true)\n\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t}\n\t\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\toutput := make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\t\toutPacket, err := packet.New(0, output, \"0\", true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\toutPacket.Print(0, false)\n\t\t\t_, f, err := enforcer.processNetworkTCPPackets(outPacket)\n\t\t\tif f != nil {\n\t\t\t\tf()\n\t\t\t}\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t// Now let's send the SYN/ACK packet from the server in response\n\t\t\tinput, err = PacketFlow.GetFirstSynAckPacket().ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\ttcpPacket, err = packet.New(0, input, \"0\", true)\n\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t}\n\t\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\toutput = make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\t\toutPacket, err = packet.New(0, output, \"0\", true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\toutPacketcopy, _ := packet.New(0, output, \"0\", true)\n\t\t\t_, f, err = enforcer.processNetworkTCPPackets(outPacket)\n\t\t\tif f != nil {\n\t\t\t\tf()\n\t\t\t}\n\t\t\tSo(err, 
ShouldBeNil)\n\n\t\t\tCheckAfterNetSynAckPacket(t, enforcer, outPacketcopy, outPacket)\n\t\t})\n\n\t\tConvey(\"When i pass a SYN and SYNACK and another ACK packet through the enforcer\", func() {\n\n\t\t\tinput, err := PacketFlow.GetFirstSynPacket().ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\t\t\ttcpPacket, err := packet.New(0, input, \"0\", true)\n\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t}\n\t\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\toutput := make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\t\toutPacket, err := packet.New(0, output, \"0\", true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\t_, f, err := enforcer.processNetworkTCPPackets(outPacket)\n\t\t\tif f != nil {\n\t\t\t\tf()\n\t\t\t}\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t//Now lets send the synack packet from the server in response\n\t\t\tinput, err = PacketFlow.GetFirstSynAckPacket().ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\ttcpPacket, err = packet.New(0, input, \"0\", true)\n\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t}\n\t\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\toutput = make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\t\toutPacket, err = packet.New(0, output, \"0\", true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\t_, f, err = enforcer.processNetworkTCPPackets(outPacket)\n\t\t\tif f != nil {\n\t\t\t\tf()\n\t\t\t}\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tinput, err = PacketFlow.GetFirstAckPacket().ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\ttcpPacket, err = packet.New(0, input, \"0\", true)\n\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t}\n\t\t\t_, err = 
enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\tCheckAfterAppAckPacket(enforcer, tcpPacket)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\toutput = make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\t\toutPacket, err = packet.New(0, output, \"0\", true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tCheckBeforeNetAckPacket(enforcer, tcpPacket, outPacket, false)\n\t\t\t_, f, err = enforcer.processNetworkTCPPackets(outPacket)\n\t\t\tif f != nil {\n\t\t\t\tf()\n\t\t\t}\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t}\n\n\tflowRecord := CreateFlowRecord(1, testSrcIP, testDstIP, 666, 80, policy.Accept, \"\")\n\n\tConvey(\"When the mode is RemoteContainer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.RemoteContainer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).AnyTimes()\n\t\ttestThePackets(enforcer)\n\n\t})\n\n\tConvey(\"When the mode is LocalServer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.LocalServer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).AnyTimes()\n\t\ttestThePackets(enforcer)\n\n\t})\n}\n\nfunc CheckAfterAppSynPacket(enforcer *Datapath, tcpPacket *packet.Packet) {\n\n\tappConn, _ := enforcer.tcpClient.Get(tcpPacket.L4FlowHash())\n\tSo(appConn.GetState(), ShouldEqual, connection.TCPSynSend)\n}\n\nfunc CheckAfterNetSynPacket(enforcer *Datapath, tcpPacket, outPacket *packet.Packet) {\n\n\tappConn, _ := enforcer.tcpServer.Get(tcpPacket.L4FlowHash())\n\tSo(appConn.GetState(), ShouldEqual, connection.TCPSynReceived)\n}\n\nfunc CheckAfterNetSynAckPacket(t *testing.T, enforcer *Datapath, tcpPacket, outPacket *packet.Packet) {\n\n\tnetconn, _ := enforcer.tcpClient.Get(outPacket.L4ReverseFlowHash())\n\tSo(netconn.GetState(), ShouldEqual, connection.TCPSynAckReceived)\n}\n\nfunc CheckAfterAppAckPacket(enforcer *Datapath, tcpPacket *packet.Packet) {\n\n\tappConn, _ := 
enforcer.tcpClient.Get(tcpPacket.L4FlowHash())\n\tSo(appConn.GetState(), ShouldEqual, connection.TCPAckSend)\n}\n\nfunc CheckBeforeNetAckPacket(enforcer *Datapath, tcpPacket, outPacket *packet.Packet, isReplay bool) {\n\n\tappConn, _ := enforcer.tcpServer.Get(tcpPacket.L4FlowHash())\n\tif !isReplay {\n\t\tSo(appConn.GetState(), ShouldEqual, connection.TCPSynAckSend)\n\t} else {\n\t\tSo(appConn.GetState(), ShouldBeGreaterThan, connection.TCPSynAckSend)\n\t}\n}\n\nfunc TestCacheState(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tdefer MockGetUDPRawSocket()()\n\n\tConvey(\"Given I create a new enforcer instance\", t, func() {\n\n\t\tenforcer, secrets, mockTokenAccessor, mockCollector, mockDNS := NewWithMocks(ctrl, \"serverID\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(2).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).Times(2)\n\t\tmockDNS.EXPECT().StartDNSServer(gomock.Any(), gomock.Any(), gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().Enforce(gomock.Any(), gomock.Any(), gomock.Any()).Times(2)\n\t\tmockDNS.EXPECT().Unenforce(gomock.Any(), gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().SyncWithPlatformCache(gomock.Any(), gomock.Any()).Times(2)\n\n\t\tcontextID := \"123\"\n\n\t\tpuInfo := policy.NewPUInfo(contextID, \"/ns1\", common.ContainerPU)\n\n\t\tCounterReport := &collector.CounterReport{\n\t\t\tPUID:      puInfo.Policy.ManagementID(),\n\t\t\tNamespace: puInfo.Policy.ManagementNamespace(),\n\t\t}\n\t\tmockCollector.EXPECT().CollectCounterEvent(MyCounterMatcher(CounterReport)).Times(2)\n\n\t\t// Should fail: Not in cache\n\t\terr := enforcer.Unenforce(context.Background(), contextID)\n\t\tif err == nil 
{\n\t\t\tt.Errorf(\"Expected failure, no contextID in cache\")\n\t\t}\n\n\t\tip := policy.ExtendedMap{\"bridge\": \"127.0.0.1\"}\n\t\tpuInfo.Runtime.SetIPAddresses(ip)\n\n\t\tipl := policy.ExtendedMap{\"bridge\": \"127.0.0.1\"}\n\t\tpuInfo.Policy.SetIPAddresses(ipl)\n\n\t\t// Should not fail: IP is valid\n\t\terr = enforcer.Enforce(context.Background(), contextID, puInfo)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Expected no failure %s\", err)\n\t\t}\n\n\t\t// Should not fail: update of an existing PU\n\t\terr = enforcer.Enforce(context.Background(), contextID, puInfo)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Expected no failure %s\", err)\n\t\t}\n\n\t\t// Should not fail: contextID is now in the cache\n\t\terr = enforcer.Unenforce(context.Background(), contextID)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Expected no failure %s\", err)\n\t\t}\n\t})\n}\n\nfunc TestDoCreatePU(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tdefer MockGetUDPRawSocket()()\n\n\tConvey(\"Given an initialized enforcer for Linux Processes\", t, func() {\n\n\t\tdefer MockGetUDPRawSocket()()\n\n\t\tenforcer, secrets, mockTokenAccessor, _, mockDNS := NewWithMocks(ctrl, \"serverID\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\t\tmockDNS.EXPECT().StartDNSServer(gomock.Any(), gomock.Any(), gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().Enforce(gomock.Any(), gomock.Any(), 
gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().SyncWithPlatformCache(gomock.Any(), gomock.Any()).Times(1)\n\n\t\tcontextID := \"124\"\n\t\tpuInfo := policy.NewPUInfo(contextID, \"/ns1\", common.LinuxProcessPU)\n\n\t\tspec, _ := portspec.NewPortSpecFromString(\"80\", nil)\n\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: \"100\",\n\t\t\tServices: []common.Service{\n\t\t\t\t{\n\t\t\t\t\tProtocol: uint8(6),\n\t\t\t\t\tPorts:    spec,\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\n\t\tConvey(\"When I create a new PU\", func() {\n\t\t\terr := enforcer.Enforce(context.Background(), contextID, puInfo)\n\n\t\t\tConvey(\"It should succeed\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t_, err := enforcer.puFromContextID.Get(contextID)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t_, err1 := enforcer.puFromMark.Get(\"100\")\n\t\t\t\tSo(err1, ShouldBeNil)\n\t\t\t\t_, err2 := enforcer.contextIDFromTCPPort.GetSpecValueFromPort(80)\n\t\t\t\tSo(err2, ShouldBeNil)\n\t\t\t\tSo(enforcer.puFromIP, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n\n\tConvey(\"Given an initialized enforcer for Linux Processes\", t, func() {\n\n\t\tenforcer, secrets, mockTokenAccessor, _, mockDNS := NewWithMocks(ctrl, \"serverID\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\t\tmockDNS.EXPECT().StartDNSServer(gomock.Any(), gomock.Any(), gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().Enforce(gomock.Any(), gomock.Any(), gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().SyncWithPlatformCache(gomock.Any(), gomock.Any()).Times(1)\n\n\t\tcontextID := \"125\"\n\t\tpuInfo := policy.NewPUInfo(contextID, \"/ns1\", common.LinuxProcessPU)\n\n\t\tConvey(\"When I create a 
new PU without ports or mark\", func() {\n\t\t\terr := enforcer.Enforce(context.Background(), contextID, puInfo)\n\n\t\t\tConvey(\"It should succeed\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t_, err := enforcer.puFromContextID.Get(contextID)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(enforcer.puFromIP, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n\n\tConvey(\"Given an initialized enforcer for remote Linux Containers\", t, func() {\n\n\t\tenforcer, secrets, mockTokenAccessor, _, mockDNS := NewWithMocks(ctrl, \"serverID\", constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\t\tmockDNS.EXPECT().StartDNSServer(gomock.Any(), gomock.Any(), gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().Enforce(gomock.Any(), gomock.Any(), gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().SyncWithPlatformCache(gomock.Any(), gomock.Any()).Times(1)\n\n\t\tcontextID := \"126\"\n\t\tpuInfo := policy.NewPUInfo(contextID, \"/ns1\", common.ContainerPU)\n\n\t\tConvey(\"When I create a new PU without an IP\", func() {\n\t\t\terr := enforcer.Enforce(context.Background(), contextID, puInfo)\n\n\t\t\tConvey(\"It should succeed \", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(enforcer.puFromIP, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestContextFromIP(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given an initialized enforcer for Linux Processes\", t, func() {\n\n\t\tenforcer, _, _, _, _ := NewWithMocks(ctrl, \"serverID\", constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tpuInfo := policy.NewPUInfo(\"SomePU\", \"/ns\", common.ContainerPU)\n\n\t\tcontext, err := 
pucontext.NewPU(\"SomePU\", puInfo, nil, 10*time.Second)\n\t\tcontextID := \"AporetoContext\"\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"If I try to get context based on IP and its  not there and its a local container it should fail \", func() {\n\t\t\t_, err := enforcer.contextFromIP(true, \"\", 0, packet.IPProtocolTCP)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"If there is no IP match, it should try the mark for app packets \", func() {\n\t\t\tenforcer.puFromMark.AddOrUpdate(\"100\", context)\n\t\t\tenforcer.mode = constants.LocalServer\n\t\t\tConvey(\"If the mark exists\", func() {\n\t\t\t\tmarkVal := strconv.Itoa(100)\n\t\t\t\tctx, err := enforcer.contextFromIP(true, markVal, 0, packet.IPProtocolTCP)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(ctx, ShouldNotBeNil)\n\t\t\t\tSo(ctx, ShouldEqual, context)\n\t\t\t})\n\n\t\t\tConvey(\"If the mark doesn't exist\", func() {\n\t\t\t\t_, err := enforcer.contextFromIP(true, \"2000\", 0, packet.IPProtocolTCP)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"If there is no IP match, it should try the port for net packets \", func() {\n\t\t\ts, _ := portspec.NewPortSpec(8000, 8000, contextID)\n\t\t\tenforcer.contextIDFromTCPPort.AddPortSpec(s)\n\t\t\tenforcer.puFromContextID.AddOrUpdate(contextID, context)\n\t\t\tenforcer.mode = constants.LocalServer\n\n\t\t\tConvey(\"If the port exists\", func() {\n\t\t\t\tctx, err := enforcer.contextFromIP(false, \"\", 8000, packet.IPProtocolTCP)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(ctx, ShouldNotBeNil)\n\t\t\t\tSo(ctx, ShouldEqual, context)\n\t\t\t})\n\n\t\t\tConvey(\"If the port doesn't exist\", func() {\n\t\t\t\t_, err := enforcer.contextFromIP(false, \"\", 9000, packet.IPProtocolTCP)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t})\n\n\tConvey(\"Given an initialized enforcer for HostPU\", t, func() {\n\n\t\tenforcer, _, _, _, _ := NewWithMocks(ctrl, \"serverID\", constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tpuInfo := 
policy.NewPUInfo(\"SomeHostPU\", \"/ns\", common.HostPU)\n\n\t\tcontext, err := pucontext.NewPU(\"SomeHostPU\", puInfo, nil, 10*time.Second)\n\t\tSo(err, ShouldBeNil)\n\n\t\tenforcer.hostPU = context\n\n\t\tConvey(\"If I try to get context for app ICMP for HostPU it should succeed \", func() {\n\t\t\tctx, err := enforcer.contextFromIP(true, \"\", 0, packet.IPProtocolICMP)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(ctx, ShouldNotBeNil)\n\t\t\tSo(ctx, ShouldEqual, context)\n\t\t})\n\t\tConvey(\"If I try to get context for net ICMP for HostPU it should succeed \", func() {\n\t\t\tctx, err := enforcer.contextFromIP(false, \"\", 0, packet.IPProtocolICMP)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(ctx, ShouldNotBeNil)\n\t\t\tSo(ctx, ShouldEqual, context)\n\t\t})\n\t\tConvey(\"If I try to get context for another protocol it should not return host context \", func() {\n\t\t\t_, err := enforcer.contextFromIP(true, \"\", 0, packet.IPProtocolTCP)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t})\n}\n\nfunc TestInvalidPacket(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\ttestThePackets := func(enforcer *Datapath) {\n\n\t\tInvalidTCPFlow := [][]byte{\n\t\t\t{ /*0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00,*/ 0x45, 0x00, 0x00, 0x40, 0xf4, 0x1f, 0x44, 0x00, 0x40, 0x06, 0xa9, 0x6f, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xac, 0x48, 0x00, 0x00, 0x00, 0x00, 0xb0, 0x02, 0xff, 0xff, 0x6b, 0x6c, 0x00, 0x00, 0x02, 0x04, 0x05, 0xb4, 0x01, 0x03, 0x03, 0x05, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0x38, 0x00, 0x00, 0x00, 0x00, 0x04, 0x02, 0x00, 0x00, 0x4a, 0x1d, 0x70, 0xcf},\n\t\t}\n\n\t\tfor _, p := range InvalidTCPFlow {\n\t\t\ttcpPacket, err := packet.New(0, p, \"0\", true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\toutput := make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\t\tcopy(output, 
tcpPacket.GetTCPBytes())\n\t\t\toutpacket, err := packet.New(0, output, \"0\", true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\t// Detach the data; parsing the token should then fail\n\t\t\toutpacket.TCPDataDetach(binary.BigEndian.Uint16([]byte{0x0, p[32]})/4 - 20)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\t_, _, err = enforcer.processNetworkTCPPackets(outpacket)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t}\n\n\t}\n\n\tflowRecord := CreateFlowRecord(1, testSrcIP, testDstIP, 666, 80, policy.Reject|policy.Log, collector.MissingToken)\n\n\tConvey(\"When the mode is RemoteContainer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.RemoteContainer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\t\ttestThePackets(enforcer)\n\n\t})\n\n\tConvey(\"When the mode is LocalServer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.LocalServer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\t\ttestThePackets(enforcer)\n\n\t})\n}\n\nfunc TestFlowReportingInvalidSyn(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\ttestThePackets := func(enforcer *Datapath) {\n\n\t\tSIP := net.IPv4zero\n\t\tpacketDiffers := false\n\n\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\tSo(err, ShouldBeNil)\n\t\tfor i := 0; i < PacketFlow.GetSynPackets().GetNumPackets(); i++ {\n\n\t\t\tstart, err := PacketFlow.GetSynPackets().GetNthPacket(i).ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\t\t\toldPacket, err := packet.New(0, start, \"0\", true)\n\t\t\tif err == nil && oldPacket != nil {\n\t\t\t\toldPacket.UpdateIPv4Checksum()\n\t\t\t\toldPacket.UpdateTCPChecksum()\n\t\t\t}\n\n\t\t\tinput, err := PacketFlow.GetSynPackets().GetNthPacket(i).ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\t\t\ttcpPacket, err := packet.New(0, input, \"0\", true)\n\t\t\tif err == nil && tcpPacket != nil 
{\n\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t}\n\n\t\t\tif debug {\n\t\t\t\tfmt.Println(\"Input packet\", i)\n\t\t\t\ttcpPacket.Print(0, false)\n\t\t\t}\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(tcpPacket, ShouldNotBeNil)\n\n\t\t\tif reflect.DeepEqual(SIP, net.IPv4zero) {\n\t\t\t\tSIP = tcpPacket.SourceAddress()\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(SIP, tcpPacket.DestinationAddress()) &&\n\t\t\t\t!reflect.DeepEqual(SIP, tcpPacket.SourceAddress()) {\n\t\t\t\tt.Error(\"Invalid Test Packet\")\n\t\t\t}\n\n\t\t\tif debug {\n\t\t\t\tfmt.Println(\"Intermediate packet\", i)\n\t\t\t\ttcpPacket.Print(0, false)\n\t\t\t}\n\n\t\t\toutput := make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\t\toutPacket, errp := packet.New(0, output, \"0\", true)\n\t\t\tSo(len(tcpPacket.GetTCPBytes()), ShouldBeLessThanOrEqualTo, len(outPacket.GetTCPBytes()))\n\t\t\tSo(errp, ShouldBeNil)\n\t\t\t_, _, err = enforcer.processNetworkTCPPackets(outPacket)\n\t\t\tSo(err, ShouldNotBeNil)\n\n\t\t\tif debug {\n\t\t\t\tfmt.Println(\"Output packet\", i)\n\t\t\t\toutPacket.Print(0, false)\n\t\t\t}\n\n\t\t\tif !reflect.DeepEqual(oldPacket.GetTCPBytes(), outPacket.GetTCPBytes()) {\n\t\t\t\tpacketDiffers = true\n\t\t\t\tfmt.Println(\"Error: packets don't match\")\n\t\t\t\tfmt.Println(\"Input Packet\")\n\t\t\t\toldPacket.Print(0, false)\n\t\t\t\tfmt.Println(\"Output Packet\")\n\t\t\t\toutPacket.Print(0, false)\n\t\t\t\tt.Errorf(\"Packet %d Input and output packet do not match\", i)\n\t\t\t\tt.FailNow()\n\t\t\t}\n\t\t}\n\n\t\tConvey(\"Then I expect all the input and output packets (after encoding and decoding) to be the same\", func() {\n\n\t\t\tSo(packetDiffers, ShouldEqual, false)\n\t\t})\n\t}\n\n\tflowRecord := CreateFlowRecord(1, testSrcIP, testDstIP, 666, 80, policy.Reject|policy.Log, collector.MissingToken)\n\n\tConvey(\"When the mode is RemoteContainer\", t, func() {\n\n\t\tenforcer, mockCollector := 
createEnforcerWithPolicy(ctrl, constants.RemoteContainer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\t\ttestThePackets(enforcer)\n\n\t})\n\n\tConvey(\"When the mode is LocalServer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.LocalServer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\t\ttestThePackets(enforcer)\n\n\t})\n}\n\nfunc TestFlowReportingUptoInvalidSynAck(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\ttestThePackets := func(enforcer *Datapath) {\n\n\t\tSIP := net.IPv4zero\n\t\tpacketDiffers := false\n\n\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\tSo(err, ShouldBeNil)\n\t\tfor i := 0; i < PacketFlow.GetUptoFirstSynAckPacket().GetNumPackets(); i++ {\n\t\t\tstart, err := PacketFlow.GetUptoFirstSynAckPacket().GetNthPacket(i).ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\toldPacket, err := packet.New(0, start, \"0\", true)\n\t\t\tif err == nil && oldPacket != nil {\n\t\t\t\toldPacket.UpdateIPv4Checksum()\n\t\t\t\toldPacket.UpdateTCPChecksum()\n\t\t\t}\n\t\t\tinput, err := PacketFlow.GetUptoFirstSynAckPacket().GetNthPacket(i).ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\t\t\ttcpPacket, err := packet.New(0, input, \"0\", true)\n\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t}\n\n\t\t\tif debug {\n\t\t\t\tfmt.Println(\"Input packet\", i)\n\t\t\t\ttcpPacket.Print(0, false)\n\t\t\t}\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(tcpPacket, ShouldNotBeNil)\n\n\t\t\tif reflect.DeepEqual(SIP, net.IPv4zero) {\n\t\t\t\tSIP = tcpPacket.SourceAddress()\n\t\t\t}\n\n\t\t\tif !reflect.DeepEqual(SIP, tcpPacket.DestinationAddress()) &&\n\t\t\t\t!reflect.DeepEqual(SIP, tcpPacket.SourceAddress()) {\n\t\t\t\tt.Error(\"Invalid Test Packet\")\n\t\t\t}\n\t\t\tif 
PacketFlow.GetNthPacket(i).GetTCPSyn() && !PacketFlow.GetNthPacket(i).GetTCPAck() {\n\t\t\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t}\n\n\t\t\tif debug {\n\t\t\t\tfmt.Println(\"Intermediate packet\", i)\n\t\t\t\ttcpPacket.Print(0, false)\n\t\t\t}\n\n\t\t\toutput := make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\t\toutPacket, errp := packet.New(0, output, \"0\", true)\n\t\t\tSo(len(tcpPacket.GetTCPBytes()), ShouldBeLessThanOrEqualTo, len(outPacket.GetTCPBytes()))\n\t\t\tSo(errp, ShouldBeNil)\n\n\t\t\tif PacketFlow.GetNthPacket(i).GetTCPSyn() && !PacketFlow.GetNthPacket(i).GetTCPAck() {\n\t\t\t\t_, _, err = enforcer.processNetworkTCPPackets(outPacket)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t}\n\t\t\tif PacketFlow.GetNthPacket(i).GetTCPSyn() && PacketFlow.GetNthPacket(i).GetTCPAck() {\n\t\t\t\t_, _, err = enforcer.processNetworkTCPPackets(outPacket)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t}\n\n\t\t\tif debug {\n\t\t\t\tfmt.Println(\"Output packet\", i)\n\t\t\t\toutPacket.Print(0, false)\n\t\t\t}\n\n\t\t\tif !reflect.DeepEqual(oldPacket.GetTCPBytes(), outPacket.GetTCPBytes()) {\n\t\t\t\tpacketDiffers = true\n\t\t\t\tfmt.Println(\"Error: packets don't match\")\n\t\t\t\tfmt.Println(\"Input Packet\")\n\t\t\t\toldPacket.Print(0, false)\n\t\t\t\tfmt.Println(\"Output Packet\")\n\t\t\t\toutPacket.Print(0, false)\n\t\t\t\tt.Errorf(\"Packet %d Input and output packet do not match\", i)\n\t\t\t\tt.FailNow()\n\t\t\t}\n\t\t}\n\n\t\tConvey(\"Then I expect all the input and output packets (after encoding and decoding) to be the same\", func() {\n\n\t\t\tSo(packetDiffers, ShouldEqual, false)\n\t\t})\n\t}\n\n\tflowRecord := CreateFlowRecord(1, testSrcIP, testDstIP, 666, 80, policy.Reject|policy.Log, \"policy\")\n\n\tConvey(\"When the mode is RemoteContainer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, 
constants.RemoteContainer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\t\ttestThePackets(enforcer)\n\n\t})\n\n\tConvey(\"When the mode is LocalServer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.LocalServer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\t\ttestThePackets(enforcer)\n\n\t})\n}\n\nfunc TestForPacketsWithRandomFlags(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tdebug = true\n\n\tdefer MockGetUDPRawSocket()()\n\n\ttestThePackets := func(enforcer *Datapath) {\n\n\t\tPacketFlow := packetgen.NewPacketFlow(\"aa:ff:aa:ff:aa:ff\", \"ff:aa:ff:aa:ff:aa\", testSrcIP, testDstIP, 666, 80)\n\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGenerateGoodFlow)\n\t\tSo(err, ShouldBeNil)\n\n\t\tcount := PacketFlow.GetNumPackets()\n\t\tfor i := 0; i < count; i++ {\n\t\t\t//Setting random TCP flags for all the packets\n\t\t\tPacketFlow.GetNthPacket(i).SetTCPCwr()\n\t\t\tPacketFlow.GetNthPacket(i).SetTCPPsh()\n\t\t\tPacketFlow.GetNthPacket(i).SetTCPEce()\n\t\t\tinput, err := PacketFlow.GetNthPacket(i).ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\t\t\ttcpPacket, err := packet.New(0, input, \"0\", true)\n\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t}\n\n\t\t\tif debug {\n\t\t\t\tfmt.Println(\"Input packet\", i)\n\t\t\t\ttcpPacket.Print(0, false)\n\t\t\t}\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(tcpPacket, ShouldNotBeNil)\n\n\t\t\tSIP := tcpPacket.SourceAddress()\n\n\t\t\tif !reflect.DeepEqual(SIP, tcpPacket.DestinationAddress()) &&\n\t\t\t\t!reflect.DeepEqual(SIP, tcpPacket.SourceAddress()) {\n\t\t\t\tt.Error(\"Invalid Test Packet\")\n\t\t\t}\n\n\t\t\t_, err = enforcer.processApplicationTCPPackets(tcpPacket)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tif debug {\n\t\t\t\tfmt.Println(\"Intermediate packet\", i)\n\t\t\t\ttcpPacket.Print(0, 
false)\n\t\t\t}\n\n\t\t\toutput := make([]byte, len(tcpPacket.GetTCPBytes()))\n\t\t\tcopy(output, tcpPacket.GetTCPBytes())\n\n\t\t\toutPacket, errp := packet.New(0, output, \"0\", true)\n\t\t\tSo(len(tcpPacket.GetTCPBytes()), ShouldBeLessThanOrEqualTo, len(outPacket.GetTCPBytes()))\n\t\t\tSo(errp, ShouldBeNil)\n\n\t\t\t_, f, err := enforcer.processNetworkTCPPackets(outPacket)\n\t\t\tif f != nil {\n\t\t\t\tf()\n\t\t\t}\n\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tif debug {\n\t\t\t\tfmt.Println(\"Output packet\", i)\n\t\t\t\toutPacket.Print(0, false)\n\t\t\t}\n\t\t}\n\t}\n\n\tflowRecord := CreateFlowRecord(1, testSrcIP, testDstIP, 666, 80, policy.Accept, \"\")\n\n\tConvey(\"When the mode is RemoteContainer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.RemoteContainer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\t\ttestThePackets(enforcer)\n\n\t})\n\n\tConvey(\"When the mode is LocalServer\", t, func() {\n\n\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.LocalServer)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\t\ttestThePackets(enforcer)\n\t})\n}\n\nfunc TestPUPortCreation(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tdefer MockGetUDPRawSocket()()\n\n\t\tenforcer, secrets, mockTokenAccessor, _, mockDNS := NewWithMocks(ctrl, \"serverID1\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\t\tif enforcer == nil { // This avoids lint error SA5011: possible nil pointer dereference (staticcheck)\n\t\t\tSo(enforcer != nil, ShouldBeTrue)\n\t\t\treturn\n\t\t}\n\n\t\tenforcer.packetLogs = true\n\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), 
nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\t\tcontextID := \"1001\"\n\t\tpuInfo := policy.NewPUInfo(contextID, \"/ns1\", common.LinuxProcessPU)\n\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: \"100\",\n\t\t})\n\n\t\tmockDNS.EXPECT().StartDNSServer(gomock.Any(), contextID, gomock.Any()).Times(1)\n\t\tmockDNS.EXPECT().Enforce(gomock.Any(), contextID, puInfo)\n\t\tmockDNS.EXPECT().SyncWithPlatformCache(gomock.Any(), gomock.Any()).Times(1)\n\n\t\tenforcer.Enforce(context.Background(), contextID, puInfo) // nolint\n\t})\n}\n\nfunc TestCollectTCPPacket(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tenforcer, secrets, mockTokenAccessor, mockCollector, _ := NewWithMocks(ctrl, \"serverID1\", constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\t\tSo(enforcer != nil, ShouldBeTrue)\n\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\t\tcontextID := \"dummy\"\n\t\t_, err := CreatePUContext(enforcer, contextID, \"/ns1\", common.ContainerPU, mockTokenAccessor)\n\t\tSo(err, ShouldBeNil)\n\n\t\ttcpPacket, err := newPacket(1, packet.TCPSynMask, testSrcIP, testDstIP, srcPort, dstPort, true, false)\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"We setup tcp network packet tracing for this pu with incomplete state\", func() {\n\t\t\tinterval := 10 * time.Second\n\t\t\terr := enforcer.EnableDatapathPacketTracing(context.TODO(), contextID, packettracing.NetworkOnly, interval)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tpacketreport := 
collector.PacketReport{\n\t\t\t\tDestinationIP: tcpPacket.DestinationAddress().String(),\n\t\t\t\tSourceIP:      tcpPacket.SourceAddress().String(),\n\t\t\t}\n\t\t\tmockCollector.EXPECT().CollectPacketEvent(PacketEventMatcher(&packetreport)).Times(0)\n\t\t\tenforcer.collectTCPPacket(&debugpacketmessage{\n\t\t\t\tMark:    10,\n\t\t\t\tp:       tcpPacket,\n\t\t\t\ttcpConn: nil,\n\t\t\t\tudpConn: nil,\n\t\t\t\terr:     nil,\n\t\t\t\tnetwork: true,\n\t\t\t})\n\t\t})\n\t\tConvey(\"We setup tcp network packet tracing for this pu with tcpConn != nil state\", func() {\n\t\t\tinterval := 10 * time.Second\n\t\t\terr := enforcer.EnableDatapathPacketTracing(context.TODO(), contextID, packettracing.NetworkOnly, interval)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tpacketreport := collector.PacketReport{\n\t\t\t\tDestinationIP: tcpPacket.DestinationAddress().String(),\n\t\t\t\tSourceIP:      tcpPacket.SourceAddress().String(),\n\t\t\t}\n\t\t\tcontext, _ := enforcer.puFromContextID.Get(contextID)\n\t\t\ttcpConn := connection.NewTCPConnection(context.(*pucontext.PUContext), nil)\n\n\t\t\tmockCollector.EXPECT().CollectPacketEvent(PacketEventMatcher(&packetreport)).Times(1)\n\t\t\tenforcer.collectTCPPacket(&debugpacketmessage{\n\t\t\t\tMark:    10,\n\t\t\t\tp:       tcpPacket,\n\t\t\t\ttcpConn: tcpConn,\n\t\t\t\tudpConn: nil,\n\t\t\t\terr:     nil,\n\t\t\t\tnetwork: true,\n\t\t\t})\n\t\t})\n\t\tConvey(\"We setup tcp network packet tracing for this pu with tcpConn != nil and inject application packet\", func() {\n\t\t\tinterval := 10 * time.Second\n\t\t\terr := enforcer.EnableDatapathPacketTracing(context.TODO(), contextID, packettracing.NetworkOnly, interval)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tpacketreport := collector.PacketReport{\n\t\t\t\tDestinationIP: tcpPacket.DestinationAddress().String(),\n\t\t\t\tSourceIP:      tcpPacket.SourceAddress().String(),\n\t\t\t}\n\t\t\tcontext, _ := enforcer.puFromContextID.Get(contextID)\n\t\t\ttcpConn := 
connection.NewTCPConnection(context.(*pucontext.PUContext), nil)\n\t\t\tmockCollector.EXPECT().CollectPacketEvent(PacketEventMatcher(&packetreport)).Times(0)\n\t\t\tenforcer.collectTCPPacket(&debugpacketmessage{\n\t\t\t\tMark:    10,\n\t\t\t\tp:       tcpPacket,\n\t\t\t\ttcpConn: tcpConn,\n\t\t\t\tudpConn: nil,\n\t\t\t\terr:     nil,\n\t\t\t\tnetwork: false,\n\t\t\t})\n\t\t})\n\n\t})\n}\n\nfunc TestEnableDatapathPacketTracing(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tenforcer, secrets, mockTokenAccessor, _, _ := NewWithMocks(ctrl, \"serverID1\", constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\t\tif enforcer == nil { // This avoids lint error SA5011: possible nil pointer dereference (staticcheck)\n\t\t\tSo(enforcer != nil, ShouldBeTrue)\n\t\t\treturn\n\t\t}\n\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\t\tcontextID := \"dummy\"\n\t\t_, err := CreatePUContext(enforcer, contextID, \"/ns1\", common.ContainerPU, mockTokenAccessor)\n\t\tSo(err, ShouldBeNil)\n\n\t\terr = enforcer.EnableDatapathPacketTracing(context.TODO(), contextID, packettracing.ApplicationOnly, 10*time.Second)\n\t\tSo(err, ShouldBeNil)\n\t\t_, err = enforcer.packetTracingCache.Get(contextID)\n\t\tSo(err, ShouldBeNil)\n\t})\n}\n\nfunc Test_CheckCounterCollection(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\tcollectCounterInterval = 1 * time.Second\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tConvey(\"So When enforcer exits\", func() {\n\n\t\t\tenforcer, secrets, mockTokenAccessor, mockCollector, _ := NewWithMocks(ctrl, \"serverID1\", 
constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\t\t\tSo(enforcer != nil, ShouldBeTrue)\n\n\t\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\t\t\tpuContext, err := CreatePUContext(enforcer, \"dummy\", \"/ns1\", common.ContainerPU, mockTokenAccessor)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tCounterReport := &collector.CounterReport{\n\t\t\t\tPUID:      puContext.ManagementID(),\n\t\t\t\tNamespace: puContext.ManagementNamespace(),\n\t\t\t}\n\t\t\tmockCollector.EXPECT().CollectCounterEvent(MyCounterMatcher(CounterReport)).MinTimes(1)\n\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tgo enforcer.counterCollector(ctx)\n\n\t\t\tpuErr := puContext.Counters().CounterError(counters.ErrNonPUTraffic, fmt.Errorf(\"error\"))\n\n\t\t\tSo(puErr, ShouldNotBeNil)\n\t\t\tcancel()\n\t\t})\n\n\t\tConvey(\"So When the enforcer exits and waits for the collector goroutine to exit\", func() {\n\t\t\tenforcer, secrets, mockTokenAccessor, mockCollector, _ := NewWithMocks(ctrl, \"serverID1\", constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\t\t\tSo(enforcer != nil, ShouldBeTrue)\n\n\t\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\t\t\tpuContext, err := CreatePUContext(enforcer, \"dummy\", \"/ns1\", common.ContainerPU, mockTokenAccessor)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tc := &collector.CounterReport{\n\t\t\t\tPUID:      
puContext.ManagementID(),\n\t\t\t\tNamespace: puContext.ManagementNamespace(),\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectCounterEvent(MyCounterMatcher(c)).MinTimes(1)\n\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tgo enforcer.counterCollector(ctx)\n\n\t\t\tpuErr := puContext.Counters().CounterError(counters.ErrNonPUTraffic, fmt.Errorf(\"error\"))\n\n\t\t\tSo(puErr, ShouldNotBeNil)\n\t\t\tcancel()\n\t\t\t<-time.After(5 * time.Second)\n\n\t\t})\n\t\tConvey(\"So When an error is reported and the enforcer waits for collection interval\", func() {\n\t\t\tenforcer, secrets, mockTokenAccessor, mockCollector, _ := NewWithMocks(ctrl, \"serverID1\", constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\t\t\tSo(enforcer != nil, ShouldBeTrue)\n\n\t\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\t\t\tpuContext, err := CreatePUContext(enforcer, \"dummy\", \"/ns1\", common.ContainerPU, mockTokenAccessor)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tc := &collector.CounterReport{\n\t\t\t\tPUID:      puContext.ManagementID(),\n\t\t\t\tNamespace: puContext.ManagementNamespace(),\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectCounterEvent(MyCounterMatcher(c)).MinTimes(1)\n\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tgo enforcer.counterCollector(ctx)\n\t\t\tpuErr := puContext.Counters().CounterError(counters.ErrNonPUTraffic, fmt.Errorf(\"error\"))\n\t\t\tSo(puErr, ShouldNotBeNil)\n\t\t\t<-time.After(5 * collectCounterInterval)\n\t\t\tcancel()\n\n\t\t})\n\n\t})\n}\n\nfunc Test_CounterReportedOnAuthSetAppSyn(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given I setup an enforcer\", 
t, func() {\n\n\t\tenforcer, secrets, mockTokenAccessor, _, _ := NewWithMocks(ctrl, \"serverID1\", constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\t\tSo(enforcer != nil, ShouldBeTrue)\n\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\t\tmockTokenAccessor.EXPECT().Randomize(gomock.Any(), gomock.Any()).Times(2)\n\n\t\tcontext, err := CreatePUContext(enforcer, \"dummy\", \"/ns1\", common.ContainerPU, mockTokenAccessor)\n\t\tSo(err, ShouldBeNil)\n\n\t\tp, err := newPacket(packet.PacketTypeApplication, packet.TCPSynMask, \"1.1.1.1\", \"2.2.2.2\", srcPort, dstPort, false, false)\n\t\tSo(err, ShouldBeNil)\n\t\tconn := connection.NewTCPConnection(context, p)\n\t\terr = enforcer.processApplicationSynPacket(p, context, conn)\n\t\tSo(err, ShouldBeNil)\n\n\t\tc := conn.Context.Counters().GetErrorCounters()\n\t\tSo(c[counters.ErrAppSynAuthOptionSet], ShouldBeZeroValue)\n\n\t\tp, err = newPacket(packet.PacketTypeApplication, packet.TCPSynMask, \"1.1.1.1\", \"2.2.2.2\", srcPort, dstPort, true, false)\n\t\tSo(err, ShouldBeNil)\n\t\tconn = connection.NewTCPConnection(context, p)\n\t\terr = enforcer.processApplicationSynPacket(p, context, conn)\n\t\tSo(err, ShouldBeNil)\n\n\t\tc = conn.Context.Counters().GetErrorCounters()\n\t\tSo(c[counters.ErrAppSynAuthOptionSet], ShouldEqual, 1)\n\t})\n}\n\nfunc Test_CounterOnSynCacheTimeout(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tenforcer, secrets, mockTokenAccessor, _, _ := NewWithMocks(ctrl, \"serverID1\", constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\t\tif enforcer == nil { // This avoids lint error SA5011: possible 
nil pointer dereference (staticcheck)\n\t\t\tSo(enforcer != nil, ShouldBeTrue)\n\t\t\treturn\n\t\t}\n\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\t\tmockTokenAccessor.EXPECT().Randomize(gomock.Any(), gomock.Any()).Times(1)\n\n\t\tcontext, err := CreatePUContext(enforcer, \"dummy\", \"/ns1\", common.ContainerPU, mockTokenAccessor)\n\t\tSo(err, ShouldBeNil)\n\n\t\tp, err := newPacket(packet.PacketTypeApplication, packet.TCPSynMask, \"1.1.1.1\", \"2.2.2.2\", srcPort, dstPort, false, false)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// Update the connection timer for testing.\n\t\tconn := connection.NewTCPConnection(context, p)\n\t\tconn.ChangeConnectionTimeout(2 * time.Second)\n\n\t\terr = enforcer.processApplicationSynPacket(p, context, conn)\n\t\tSo(err, ShouldBeNil)\n\n\t\tc := conn.Context.Counters().GetErrorCounters()\n\t\tSo(c[counters.ErrTCPConnectionsExpired], ShouldBeZeroValue)\n\n\t\t// Wait for the connection to expire.\n\t\ttime.Sleep(3 * time.Second)\n\t\t_, exists := enforcer.tcpClient.Get(p.L4FlowHash())\n\t\tif exists {\n\t\t\tt.Fail()\n\t\t}\n\n\t\tc = conn.Context.Counters().GetErrorCounters()\n\t\tSo(c[counters.ErrTCPConnectionsExpired], ShouldEqual, 1)\n\t})\n}\n\nfunc Test_NOClaims(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tenforcer, _, _, mockCollector, _ := NewWithMocks(ctrl, \"serverID1\", constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\t\tSo(enforcer != nil, ShouldBeTrue)\n\n\t\tflowRecord := CreateFlowRecord(1, \"1.1.1.1\", \"2.2.2.2\", 2000, 80, policy.Reject|policy.Log, 
collector.PolicyDrop)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\tcontext, err := CreatePUContext(enforcer, \"dummy\", \"/ns1\", common.ContainerPU, nil)\n\t\tSo(err, ShouldBeNil)\n\n\t\tp, err := newPacket(packet.PacketTypeNetwork, packet.TCPSynAckMask, \"2.2.2.2\", \"1.1.1.1\", dstPort, srcPort, true, false)\n\t\tSo(err, ShouldBeNil)\n\n\t\tconn := connection.NewTCPConnection(context, p)\n\n\t\t_, err = enforcer.processNetworkSynAckPacket(context, conn, p)\n\t\tSo(err, ShouldNotBeNil)\n\t})\n}\n\nfunc newPacket(context uint64, tcpFlags uint8, src, dst string, srcPort, desPort uint16, addOptions bool, addPayload bool) (*packet.Packet, error) { //nolint\n\n\t// Use the desPort parameter, not the package-level dstPort\n\tp, err := packet.NewIpv4TCPPacket(context, tcpFlags, src, dst, srcPort, desPort)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tp.SetTCPSeq(rand.Uint32())\n\n\tif addOptions {\n\t\toptions := []byte{2 /*Maximum Segment Size*/, 4, 0x05, 0x8C, 34, enforcerconstants.TCPAuthenticationOptionBaseLen, 0, 0}\n\t\tbuffer := append(p.GetBuffer(0), options...)\n\t\terr = p.UpdatePacketBuffer(buffer, uint16(len(options)))\n\t}\n\n\tif addPayload {\n\t\tbuffer := append(p.GetBuffer(0), []byte(\"dummy payload\")...)\n\t\terr = p.UpdatePacketBuffer(buffer, 0)\n\t}\n\n\treturn p, err\n}\n\nfunc TestCheckConnectionDeletion(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tenforcer, secrets, mockTokenAccessor, _, _ := NewWithMocks(ctrl, \"serverID1\", constants.RemoteContainer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tsecrets.EXPECT().TransmittedKey().Return([]byte(\"dummy\")).AnyTimes()\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), 
gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\t\terr := CreatePortPolicy(enforcer, \"dummy\", \"/ns1\", common.ContainerPU, mockTokenAccessor, \"2\", dstPort, dstPort)\n\t\tSo(err, ShouldBeNil)\n\n\t\ttcpPacket, err := newPacket(1, packet.TCPSynMask, testSrcIP, testDstIP, srcPort, dstPort, true, false)\n\t\tSo(err, ShouldBeNil)\n\n\t\tconn := &connection.TCPConnection{\n\t\t\tServiceConnection: true,\n\t\t\tMarkForDeletion:   true,\n\t\t}\n\n\t\thash := tcpPacket.L4FlowHash()\n\t\tenforcer.tcpClient.Put(hash, conn)\n\n\t\ttcpPacket.Mark = \"2\"\n\n\t\tconn1, err := enforcer.appSynRetrieveState(tcpPacket)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(conn1.MarkForDeletion, ShouldBeFalse)\n\n\t\tenforcer.tcpServer.Put(hash, conn)\n\t\t_, err = enforcer.netSynRetrieveState(tcpPacket)\n\t\tSo(err, ShouldBeNil)\n\n\t\ttcpSynAckPacket, err := newPacket(1, packet.TCPSynAckMask, testDstIP, testSrcIP, dstPort, srcPort, true, false)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// The stale SYN/ACK must be rejected\n\t\t_, err = enforcer.netSynAckRetrieveState(tcpSynAckPacket)\n\t\tSo(err, ShouldNotBeNil)\n\t})\n}\n\nfunc TestNetSynRetrieveState(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\t// Testing datapath.netSynRetrieveState\n\t// There are 4 different code branches in this function\n\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tdefer MockGetUDPRawSocket()()\n\n\t\tenforcer, secrets, mockTokenAccessor, _, _ := NewWithMocks(ctrl, \"serverID1\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tsecrets.EXPECT().TransmittedKey().Return([]byte(\"dummy\")).AnyTimes()\n\t\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil).AnyTimes()\n\t\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\t\terr := 
CreatePortPolicy(enforcer, \"123456\", \"/ns1\", common.LinuxProcessPU, mockTokenAccessor, \"2\", 9000, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// Test the error case\n\t\tp, err := packet.NewIpv4TCPPacket(1, 0x2, \"127.0.0.1\", \"127.0.0.1\", 43758, 8000)\n\t\tSo(err, ShouldBeNil)\n\t\t_, err = enforcer.netSynRetrieveState(p)\n\t\tSo(err, ShouldNotBeNil)\n\n\t\tp, err = packet.NewIpv4TCPPacket(1, 0x2, \"127.0.0.1\", \"127.0.0.1\", 43758, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\tconn, err := enforcer.netSynRetrieveState(p)\n\t\tSo(err, ShouldBeNil)\n\n\t\tenforcer.tcpServer.Put(p.L4FlowHash(), conn)\n\n\t\tSo(conn.GetInitialSequenceNumber(), ShouldEqual, p.TCPSequenceNumber())\n\t\tConvey(\"I retry the same packet\", func() {\n\t\t\tretryconn, err := enforcer.netSynRetrieveState(p)\n\t\t\tassert.Equal(t, err, nil, \"error should be nil\")\n\t\t\tassert.Equal(t, retryconn, conn, \"connection should be the same\")\n\t\t})\n\t\tConvey(\"Then I modify the sequence number and retry the packet\", func() {\n\t\t\tp.IncreaseTCPSeq(10)\n\t\t\tconn1, err := enforcer.netSynRetrieveState(p)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(conn1.GetInitialSequenceNumber(), ShouldNotEqual, conn.GetInitialSequenceNumber())\n\t\t\t_, exists := enforcer.tcpServer.Get(p.L4FlowHash())\n\t\t\tif exists {\n\t\t\t\tt.Fail()\n\t\t\t}\n\t\t})\n\n\t})\n}\n\nfunc TestAppSynRetrieveState(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\t// Testing datapath.appSynRetrieveState\n\t// There are 4 different code branches in the function\n\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tdefer MockGetUDPRawSocket()()\n\n\t\tenforcer, _, _, _, _ := NewWithMocks(ctrl, \"serverID1\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\terr := CreatePortPolicy(enforcer, \"testContextID\", \"/ns1\", common.LinuxProcessPU, nil, \"2\", 9000, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// Create a Syn packet\n\t\tp, err := packet.NewIpv4TCPPacket(1, 0x2, \"127.0.0.1\", 
\"127.0.0.1\", 43758, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// The error case \"PU context doesn't exist for this syn, return error\"\n\t\t_, err = enforcer.appSynRetrieveState(p)\n\t\tSo(err, ShouldNotBeNil)\n\n\t\tp.Mark = \"2\"\n\n\t\tconn, err := enforcer.appSynRetrieveState(p)\n\t\tSo(err, ShouldBeNil)\n\n\t\tenforcer.tcpClient.Put(p.L4FlowHash(), conn)\n\n\t\tConvey(\"I replay the same packet\", func() {\n\t\t\tretryconn, err := enforcer.appSynRetrieveState(p)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(retryconn, ShouldNotBeNil)\n\n\t\t})\n\t\tConvey(\"I modify the sequence number and retransmit the packet\", func() {\n\t\t\tp.IncreaseTCPSeq(10)\n\t\t\tretryconn, err := enforcer.appSynRetrieveState(p)\n\t\t\tSo(retryconn, ShouldNotBeNil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\t_, exists := enforcer.tcpClient.Get(p.L4FlowHash())\n\t\t\tif exists {\n\t\t\t\tt.Fail()\n\t\t\t}\n\t\t})\n\t})\n}\n\nfunc TestAppSynAckRetrieveState(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\t// Testing datapath.appSynAckRetrieveState\n\t// There are 2 different code branches in this function\n\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tdefer MockGetUDPRawSocket()()\n\n\t\tenforcer, _, _, _, _ := NewWithMocks(ctrl, \"serverID1\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\t// Create a SynAck packet\n\t\tp, err := packet.NewIpv4TCPPacket(1, packet.TCPSynAckMask, \"127.0.0.1\", \"127.0.0.1\", 43758, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// The error case when nothing is in the cache\n\t\t_, err = enforcer.appSynAckRetrieveState(p)\n\t\tSo(err, ShouldNotBeNil)\n\n\t\t// add connection to the cache\n\t\tenforcer.tcpServer.Put(p.L4ReverseFlowHash(), &connection.TCPConnection{})\n\n\t\t// Should be in the cache\n\t\tconn, err := enforcer.appSynAckRetrieveState(p)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(conn, ShouldNotBeNil)\n\t})\n}\n\nfunc TestNetSynAckRetrieveState(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer 
ctrl.Finish()\n\n\t// Testing datapath.netSynAckRetrieveState\n\t// There are 3 different code branches in this function\n\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tdefer MockGetUDPRawSocket()()\n\n\t\tenforcer, _, _, _, _ := NewWithMocks(ctrl, \"serverID1\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\t// Create a SynAck packet\n\t\tp, err := packet.NewIpv4TCPPacket(1, packet.TCPSynAckMask, \"127.0.0.1\", \"127.0.0.1\", 43758, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// The error case when nothing is in the cache\n\t\t_, err = enforcer.netSynAckRetrieveState(p)\n\t\tSo(err, ShouldEqual, errNonPUTraffic)\n\n\t\t// add connection to the cache\n\t\tenforcer.tcpClient.Put(p.L4ReverseFlowHash(), &connection.TCPConnection{})\n\n\t\t// Should be in the cache\n\t\tconn, err := enforcer.netSynAckRetrieveState(p)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(conn, ShouldNotBeNil)\n\n\t\t// Mark the connection as deleted\n\t\tconn.MarkForDeletion = true\n\n\t\t// We should get an error\n\t\t_, err = enforcer.netSynAckRetrieveState(p)\n\t\tSo(err, ShouldEqual, errOutOfOrderSynAck)\n\t})\n}\n\nfunc TestAppRetrieveState(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\t// Testing datapath.appRetrieveState\n\t// There are 6 branch conditions in this function.\n\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tdefer MockGetUDPRawSocket()()\n\n\t\tenforcer, _, _, _, _ := NewWithMocks(ctrl, \"serverID1\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\t// Create a Rst packet\n\t\tp, err := packet.NewIpv4TCPPacket(1, packet.TCPRstMask, \"127.0.0.1\", \"127.0.0.1\", 43758, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// 1. We should get the errRstPacket error\n\t\t_, err = enforcer.appRetrieveState(p)\n\t\tSo(err, ShouldEqual, errRstPacket)\n\n\t\t// Create a Syn packet\n\t\tp, err = packet.NewIpv4TCPPacket(1, packet.TCPSynMask, \"127.0.0.1\", \"127.0.0.1\", 43758, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// 2. 
We should get the errNoConnection error\n\t\t_, err = enforcer.appRetrieveState(p)\n\t\tSo(err, ShouldEqual, errNoConnection)\n\n\t\t// Create an Ack packet\n\t\tp, err = packet.NewIpv4TCPPacket(1, packet.TCPAckMask, \"127.0.0.1\", \"127.0.0.1\", 43758, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// 3. We should get error \"No context in app processing\"\n\t\t_, err = enforcer.appRetrieveState(p)\n\t\tSo(err, ShouldResemble, errors.New(\"No context in app processing\"))\n\n\t\t// Create port policy\n\t\terr = CreatePortPolicy(enforcer, \"testContextID\", \"/ns1\", common.LinuxProcessPU, nil, \"2\", 43758, 43758)\n\t\tSo(err, ShouldBeNil)\n\n\t\tp.Mark = \"2\"\n\n\t\t// 4. We should get a connection object with UnknownState\n\t\tconn, err := enforcer.appRetrieveState(p)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(conn, ShouldNotBeNil)\n\t\tSo(conn.GetState(), ShouldEqual, connection.UnknownState)\n\n\t\t// add connection to the server cache\n\t\tconnServer := &connection.TCPConnection{}\n\t\tenforcer.tcpServer.Put(p.L4ReverseFlowHash(), connServer)\n\n\t\t// 5. Should be in the cache\n\t\tconn, err = enforcer.appRetrieveState(p)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(conn, ShouldEqual, connServer)\n\n\t\t// add connection to the client cache\n\t\tconnClient := &connection.TCPConnection{}\n\t\tenforcer.tcpClient.Put(p.L4FlowHash(), connClient)\n\n\t\t// 6. 
Should be in the cache\n\t\tconn, err = enforcer.appRetrieveState(p)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(conn, ShouldEqual, connClient)\n\t})\n}\n\nfunc TestNetRetrieveState(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\t// Testing datapath.netRetrieveState\n\t// There are 7 branch conditions in this function.\n\n\tConvey(\"Given I setup an enforcer\", t, func() {\n\n\t\tdefer MockGetUDPRawSocket()()\n\n\t\tenforcer, _, _, _, _ := NewWithMocks(ctrl, \"serverID1\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\t// Create a Rst packet\n\t\tp, err := packet.NewIpv4TCPPacket(1, packet.TCPRstMask, \"127.0.0.1\", \"127.0.0.1\", 43758, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// 1. We should get the errRstPacket error\n\t\t_, err = enforcer.netRetrieveState(p)\n\t\tSo(err, ShouldEqual, errRstPacket)\n\n\t\t// Create a Syn packet\n\t\tp, err = packet.NewIpv4TCPPacket(1, packet.TCPSynMask, \"127.0.0.1\", \"127.0.0.1\", 43758, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// 2. We should get the errNoConnection error\n\t\t_, err = enforcer.netRetrieveState(p)\n\t\tSo(err, ShouldEqual, errNoConnection)\n\n\t\t// Create an Ack packet\n\t\tp, err = packet.NewIpv4TCPPacket(1, packet.TCPAckMask, \"127.0.0.1\", \"127.0.0.1\", 43758, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// 3. We should get error \" TCP Port Not Found 9000\"\n\t\t_, err = enforcer.netRetrieveState(p)\n\t\tSo(err, ShouldResemble, errors.New(\" TCP Port Not Found 9000\"))\n\n\t\t// Create port policy\n\t\terr = CreatePortPolicy(enforcer, \"testContextID\", \"/ns1\", common.LinuxProcessPU, nil, \"2\", 9000, 9000)\n\t\tSo(err, ShouldBeNil)\n\n\t\tp.Mark = \"2\"\n\n\t\t// 4. 
We should get a connection object with UnknownState\n\t\tconn, err := enforcer.netRetrieveState(p)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(conn, ShouldNotBeNil)\n\t\tSo(conn.GetState(), ShouldEqual, connection.UnknownState)\n\n\t\t// add connection to the server cache\n\t\tconnServer := &connection.TCPConnection{}\n\t\tenforcer.tcpServer.Put(p.L4FlowHash(), connServer)\n\n\t\t// 5. Should be in the cache\n\t\tconn, err = enforcer.netRetrieveState(p)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(conn, ShouldEqual, connServer)\n\n\t\t// add connection to the client cache\n\t\tconnClient := &connection.TCPConnection{}\n\t\tenforcer.tcpClient.Put(p.L4ReverseFlowHash(), connClient)\n\n\t\t// 6. Should be in the cache\n\t\tconn, err = enforcer.netRetrieveState(p)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(conn, ShouldEqual, connClient)\n\n\t\t// Change to a Rst packet\n\t\tp.SetTCPFlags(packet.TCPRstMask)\n\n\t\t// 7. Should be in the cache, but we should get the errRstPacket error\n\t\t_, err = enforcer.netRetrieveState(p)\n\t\tSo(err, ShouldNotBeNil)\n\t\tSo(err, ShouldEqual, errRstPacket)\n\t})\n}\n\n// This is to ensure that if we get a tcp fast open packet with no identity payload, we drop the packet\nfunc TestProcessNetworkSynPacket(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I setup an enforcer\", t, func() {\n\n\t\tdefer MockGetUDPRawSocket()()\n\n\t\tenforcer, _, _, mockCollector, _ := NewWithMocks(ctrl, \"serverID1\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tflowRecord := CreateFlowRecord(1, testSrcIP, testDstIP, 43758, 80, policy.Reject|policy.Log, collector.MissingToken)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\tConvey(\"So I received a packet with tcp fast open option set but no payload\", func() {\n\n\t\t\tp, err := packet.NewIpv4TCPPacket(1, 0x2, testSrcIP, testDstIP, 43758, 80)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p, ShouldNotBeNil)\n\n\t\t\t// Add the fast open option\n\t\t\tbuffer := 
append(p.GetBuffer(0), []byte{packet.TCPAuthenticationOption, enforcerconstants.TCPAuthenticationOptionBaseLen, 0, 0}...)\n\t\t\terr = p.UpdatePacketBuffer(buffer, 4)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\terr = p.CheckTCPAuthenticationOption(enforcerconstants.TCPAuthenticationOptionBaseLen)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.IsEmptyTCPPayload(), ShouldBeTrue)\n\n\t\t\tcontext, err := CreatePUContext(enforcer, \"dummyContext\", \"/ns1\", common.LinuxProcessPU, nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(context, ShouldNotBeNil)\n\n\t\t\t_, err = enforcer.processNetworkSynPacket(context, connection.NewTCPConnection(context, p), p)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t})\n}\n\nfunc TestProcessNetworkSynAckPacket(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I setup an enforcer\", t, func() {\n\n\t\tdefer MockGetUDPRawSocket()()\n\n\t\tenforcer, _, _, mockCollector, _ := NewWithMocks(ctrl, \"serverID1\", constants.LocalServer, []string{\"0.0.0.0/0\"}, true)\n\n\t\tflowRecord1 := CreateFlowRecord(1, testDstIP, testSrcIP, 80, 43758, policy.Reject|policy.Log, collector.PolicyDrop)\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord1)).Times(1)\n\n\t\tflowRecord2 := CreateFlowRecord(1, testDstIP, testSrcIP, 80, 43758, policy.Accept, \"\")\n\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord2)).Times(1)\n\n\t\tConvey(\"So I received a packet with tcp fast open option set but no payload\", func() {\n\n\t\t\tp, err := packet.NewIpv4TCPPacket(1, 0x2, testSrcIP, testDstIP, 43758, 80)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p, ShouldNotBeNil)\n\n\t\t\t// Add the fast open option\n\t\t\tbuffer := append(p.GetBuffer(0), []byte{packet.TCPAuthenticationOption, enforcerconstants.TCPAuthenticationOptionBaseLen, 0, 0}...)\n\n\t\t\terr = p.UpdatePacketBuffer(buffer, 4)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\terr = 
p.CheckTCPAuthenticationOption(enforcerconstants.TCPAuthenticationOptionBaseLen)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.IsEmptyTCPPayload(), ShouldBeTrue)\n\n\t\t\tcontext, err := CreatePUContext(enforcer, \"dummyContext\", \"/ns1\", common.LinuxProcessPU, nil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(context, ShouldNotBeNil)\n\n\t\t\t_, err = enforcer.processNetworkSynAckPacket(context, connection.NewTCPConnection(context, p), p)\n\t\t\tSo(err, ShouldNotBeNil)\n\n\t\t\tConvey(\"Then i add ip acl rule.\", func() {\n\t\t\t\tiprules := policy.IPRuleList{policy.IPRule{\n\t\t\t\t\tAddresses: []string{\"10.1.10.76/32\"},\n\t\t\t\t\tPorts:     []string{\"43758\"},\n\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\t\tPolicyID: \"tcp172/8\"},\n\t\t\t\t}}\n\t\t\t\terr = context.UpdateApplicationACLs(iprules)\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\t_, err = enforcer.processNetworkSynAckPacket(context, connection.NewTCPConnection(context, p), p)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/datapath_udp.go",
"content": "package nfqdatapath\n\n// Go libraries\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n\tmarkconstants \"go.aporeto.io/enforcerd/trireme-lib/utils/constants\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\t// retransmitDelay is the default retransmit delay for the first packet\n\tretransmitDelay = 200\n\t// retransmitRetries is the number of times we will retry\n\tretransmitRetries = 3\n\t// ACLCheckMultipler is the multiplier on the retransmit delay after which we fall back to ACL checks\n\tACLCheckMultipler = retransmitDelay * 12\n)\n\nvar errHandshakePacket = errors.New(\"handshake packet\")\nvar errDropQueuedPacket = errors.New(\"dropping queued packet\")\n\nfunc calculatedelay(retransmitDelay uint32, multiplier uint32) time.Duration {\n\treturn time.Duration(retransmitDelay * (multiplier + 1))\n}\n\n// ProcessNetworkUDPPacket processes packets that arrive from the network and are destined to the application.\nfunc (d *Datapath) ProcessNetworkUDPPacket(p *packet.Packet) (conn *connection.UDPConnection, err error) {\n\n\tif d.PacketLogsEnabled() {\n\t\tzap.L().Debug(\"Processing network packet \",\n\t\t\tzap.String(\"flow\", p.L4FlowHash()),\n\t\t)\n\t\tdefer zap.L().Debug(\"Finished 
Processing network packet \",\n\t\t\tzap.String(\"flow\", p.L4FlowHash()),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\tudpPacketType := p.GetUDPType()\n\n\tswitch udpPacketType {\n\tcase packet.UDPSynMask:\n\t\tconn, err = d.netSynUDPRetrieveState(p)\n\t\tif err != nil {\n\t\t\tif d.PacketLogsEnabled() {\n\t\t\t\tzap.L().Debug(\"Packet rejected\",\n\t\t\t\t\tzap.String(\"flow\", p.L4FlowHash()),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t\treturn nil, err\n\t\t}\n\tcase packet.UDPSynAckMask, packet.UDPPolicyRejectMask:\n\t\tconn, err = d.netSynAckUDPRetrieveState(p)\n\t\tif err != nil {\n\t\t\tif d.PacketLogsEnabled() {\n\t\t\t\tzap.L().Debug(\"Syn ack Packet Rejected/ignored\",\n\t\t\t\t\tzap.String(\"flow\", p.L4FlowHash()),\n\t\t\t\t)\n\t\t\t}\n\t\t\treturn nil, err\n\t\t}\n\n\tcase packet.UDPFinAckMask:\n\t\tif err := d.processUDPFinPacket(p); err != nil {\n\t\t\tzap.L().Debug(\"unable to process udp fin ack\",\n\t\t\t\tzap.String(\"flowhash\", p.L4FlowHash()), zap.Error(err))\n\t\t\treturn nil, err\n\t\t}\n\t\t// drop control packets\n\t\treturn conn, fmt.Errorf(\"dropping udp fin ack control packet\")\n\n\tdefault:\n\t\t// Process packets that don't have the control header. 
These are data packets.\n\t\tconn, err = d.netUDPAckRetrieveState(p)\n\t\tif err != nil {\n\t\t\t// Retrieve the context from the packet information.\n\t\t\tcontext, err := d.contextFromIP(false, p.Mark, p.DestPort(), packet.IPProtocolUDP)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, counters.CounterError(counters.ErrNonPUUDPTraffic, errNonPUUDPTraffic)\n\t\t\t}\n\t\t\t// Check if a network ACL allows this traffic coming from an external network\n\t\t\t_, packetPolicy, err := context.NetworkACLPolicy(p)\n\n\t\t\tif err == nil && packetPolicy.Action.Accepted() {\n\t\t\t\tcontext.Counters().IncrementCounter(counters.ErrSynAckToExtNetAccept)\n\t\t\t\tif err = d.conntrack.UpdateApplicationFlowMark(\n\t\t\t\t\tp.SourceAddress(),\n\t\t\t\t\tp.DestinationAddress(),\n\t\t\t\t\tp.IPProto(),\n\t\t\t\t\tp.SourcePort(),\n\t\t\t\t\tp.DestPort(),\n\t\t\t\t\tmarkconstants.DefaultConnMark,\n\t\t\t\t); err != nil {\n\t\t\t\t\tzap.L().Error(\"Failed to update conntrack table for UDP flow at transmitter\",\n\t\t\t\t\t\tzap.String(\"net-data-acl\", p.L4FlowHash()),\n\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t)\n\n\t\t\t\t}\n\t\t\t\treturn conn, nil\n\t\t\t}\n\n\t\t\tif err := d.sendUDPFinPacket(p); err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"net state not found, unable to send fin ack packets: %s\", err)\n\t\t\t}\n\t\t\tif d.PacketLogsEnabled() {\n\t\t\t\tzap.L().Debug(\"No connection found for the flow, dropping it\",\n\t\t\t\t\tzap.String(\"flow\", p.L4FlowHash()),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\t// We are processing only one connection at a time.\n\tconn.Lock()\n\tdefer conn.Unlock()\n\n\tp.Print(packet.PacketStageIncoming, d.PacketLogsEnabled())\n\n\tif d.service != nil {\n\t\tif !d.service.PreProcessUDPNetPacket(p, conn.Context, conn) {\n\t\t\tp.Print(packet.PacketFailureService, d.PacketLogsEnabled())\n\t\t\treturn conn, conn.Context.Counters().CounterError(counters.ErrUDPNetPreProcessingFailed, errors.New(\"pre 
processing failed for network packet\"))\n\t\t}\n\t}\n\n\t// handle handshake packets and do not deliver to application.\n\taction, claims, err := d.processNetUDPPacket(p, conn.Context, conn)\n\tif err != nil && err != errHandshakePacket && err != errDropQueuedPacket {\n\t\tzap.L().Debug(\"Rejecting packet because of policy decision\",\n\t\t\tzap.String(\"flow\", p.L4FlowHash()),\n\t\t\tzap.Error(err),\n\t\t)\n\t\treturn conn, fmt.Errorf(\"packet processing failed for network packet: %s\", err)\n\t}\n\n\t// Process the packet by any external services.\n\tif d.service != nil {\n\t\tif !d.service.PostProcessUDPNetPacket(p, action, claims, conn.Context, conn) {\n\t\t\tp.Print(packet.PacketFailureService, d.PacketLogsEnabled())\n\t\t\treturn conn, conn.Context.Counters().CounterError(counters.ErrUDPNetPostProcessingFailed, errors.New(\"post service processing failed for network packet\"))\n\t\t}\n\t}\n\n\t// If reached the final state, drain the queue.\n\tif conn.GetState() == connection.UDPClientSendAck {\n\t\tconn.SetState(connection.UDPData)\n\t\tfor udpPacket := conn.ReadPacket(); udpPacket != nil; udpPacket = conn.ReadPacket() {\n\t\t\tif d.service != nil {\n\t\t\t\t// PostProcessServiceInterface\n\t\t\t\t// We call it for all outgoing packets.\n\t\t\t\tif !d.service.PostProcessUDPAppPacket(udpPacket, nil, conn.Context, conn) {\n\t\t\t\t\tudpPacket.Print(packet.PacketFailureService, d.PacketLogsEnabled())\n\t\t\t\t\tzap.L().Error(\"Failed to encrypt queued packet\")\n\t\t\t\t}\n\t\t\t}\n\n\t\t\terr = d.ignoreFlow(udpPacket)\n\t\t\tif err != nil {\n\t\t\t\tzap.L().Error(\"Unable to ignore the flow\", zap.Error(err))\n\t\t\t}\n\n\t\t\terr = d.writeUDPSocket(udpPacket.GetBuffer(0), udpPacket)\n\t\t\tif err != nil {\n\t\t\t\tzap.L().Error(\"Unable to transmit Queued UDP packets\", zap.Error(err))\n\t\t\t}\n\t\t}\n\t\treturn conn, fmt.Errorf(\"Drop the packet\")\n\t}\n\n\tif conn.GetState() != connection.UDPData {\n\t\t// handshake packets are not to be delivered to 
application.\n\n\t\treturn conn, errHandshakePacket\n\n\t}\n\n\treturn conn, nil\n}\n\nfunc (d *Datapath) netSynUDPRetrieveState(p *packet.Packet) (*connection.UDPConnection, error) {\n\n\t// Retrieve the context from the packet information.\n\tcontext, err := d.contextFromIP(false, p.Mark, p.DestPort(), packet.IPProtocolUDP)\n\tif err != nil {\n\t\treturn nil, counters.CounterError(counters.ErrNonPUTraffic, errNonPUTraffic)\n\t}\n\n\t// Check if a connection already exists for this flow. This can happen\n\t// in the case of retransmissions. If there is no connection, create\n\t// a new one.\n\tconn, cerr := d.udpNetOrigConnectionTracker.Get(p.L4FlowHash())\n\tif cerr != nil {\n\t\tconn := connection.NewUDPConnection(context, d.udpSocketWriter)\n\t\tconn.Secrets, conn.Auth.LocalDatapathPrivateKey, conn.Auth.LocalDatapathPublicKeyV1, conn.Auth.LocalDatapathPublicKeySignV1, conn.Auth.LocalDatapathPublicKeyV2, conn.Auth.LocalDatapathPublicKeySignV2 = context.GetSecrets()\n\t\treturn conn, nil\n\t}\n\treturn conn.(*connection.UDPConnection), nil\n}\n\nfunc (d *Datapath) netSynAckUDPRetrieveState(p *packet.Packet) (*connection.UDPConnection, error) {\n\tconn, err := d.udpSourcePortConnectionCache.GetReset(p.SourcePortHash(packet.PacketTypeNetwork), 0)\n\tif err != nil {\n\t\treturn nil, counters.CounterError(counters.ErrUDPSynAckNoConnection, errors.New(\"No connection.Drop the syn ack packet\"))\n\t}\n\n\treturn conn.(*connection.UDPConnection), nil\n}\n\nfunc (d *Datapath) netUDPAckRetrieveState(p *packet.Packet) (*connection.UDPConnection, error) {\n\n\thash := p.L4FlowHash()\n\tconn, err := d.udpNetReplyConnectionTracker.GetReset(hash, 0)\n\tif err != nil {\n\t\tconn, err = d.udpNetOrigConnectionTracker.GetReset(hash, 0)\n\t\tif err != nil {\n\t\t\t// This might be an existing udp connection.\n\t\t\t// Send FinAck to reauthorize the connection.\n\n\t\t\treturn nil, fmt.Errorf(\"net state not found: %s\", err)\n\t\t}\n\t}\n\treturn conn.(*connection.UDPConnection), 
nil\n}\n\n// processNetUDPPacket processes a network UDP packet and dispatches it to different methods based on the flags.\n// This applies only to control packets.\nfunc (d *Datapath) processNetUDPPacket(udpPacket *packet.Packet, context *pucontext.PUContext, conn *connection.UDPConnection) (action interface{}, claims *tokens.ConnectionClaims, err error) {\n\n\t// Extra check, just in case the caller didn't provide a connection.\n\tif conn == nil {\n\t\treturn nil, nil, fmt.Errorf(\"no connection provided\")\n\t}\n\n\tudpPacketType := udpPacket.GetUDPType()\n\n\t// Update connection state in the internal state machine tracker\n\tswitch udpPacketType {\n\tcase packet.UDPSynMask:\n\n\t\t// Parse the packet for the identity information.\n\t\taction, claims, err = d.processNetworkUDPSynPacket(context, conn, udpPacket)\n\t\tif err != nil {\n\t\t\tif err = d.sendUDPRstPacket(udpPacket, conn); err != nil {\n\t\t\t\tzap.L().Error(\"Unable to send rst packet\", zap.Error(err), zap.String(\"FlowHash\", udpPacket.L4FlowHash()))\n\t\t\t}\n\n\t\t\treturn nil, nil, err\n\t\t}\n\t\t// Send the return packet.\n\t\tif err = d.sendUDPSynAckPacket(udpPacket, context, conn); err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\n\t\t// Mark the state that we have transmitted a SynAck packet.\n\t\tconn.SetState(connection.UDPReceiverSendSynAck)\n\t\treturn action, claims, errHandshakePacket\n\n\tcase packet.UDPAckMask:\n\t\t// Retrieve the header and parse the signatures.\n\t\tif err = d.processNetworkUDPAckPacket(udpPacket, context, conn); err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\n\t\t// Set the connection state to indicate the Ack has been processed.\n\t\tconn.SetState(connection.UDPReceiverProcessedAck)\n\t\treturn nil, nil, errHandshakePacket\n\n\tcase packet.UDPSynAckMask:\n\t\t// Process the synack header and claims of the other side.\n\t\taction, claims, err = d.processNetworkUDPSynAckPacket(udpPacket, context, conn)\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t\t// Send back the 
acknowledgement.\n\t\terr = d.sendUDPAckPacket(udpPacket, context, conn)\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\n\t\tconn.SetState(connection.UDPClientSendAck)\n\n\t\treturn action, claims, errHandshakePacket\n\tcase packet.UDPPolicyRejectMask:\n\n\t\tif err := d.processUDPPolicyRstPacket(udpPacket, context, conn); err != nil {\n\t\t\tzap.L().Debug(\"unable to process udp policy rst\",\n\t\t\t\tzap.String(\"flowhash\", udpPacket.L4FlowHash()), zap.Error(err))\n\t\t\treturn conn, nil, err\n\t\t}\n\t\treturn conn, nil, fmt.Errorf(\"dropping udp rst control packet\")\n\tdefault:\n\t\tstate := conn.GetState()\n\t\tif state == connection.UDPReceiverProcessedAck || state == connection.UDPClientSendAck || state == connection.UDPData {\n\t\t\tconn.SetState(connection.UDPData)\n\t\t\treturn nil, nil, nil\n\t\t}\n\t\treturn nil, nil, fmt.Errorf(\"invalid packet at state: %d\", state)\n\t}\n}\n\n// ProcessApplicationUDPPacket processes packets arriving from an application and are destined to the network\nfunc (d *Datapath) ProcessApplicationUDPPacket(p *packet.Packet) (conn *connection.UDPConnection, err error) {\n\n\tif d.PacketLogsEnabled() {\n\t\tzap.L().Debug(\"Processing application UDP packet \",\n\t\t\tzap.String(\"flow\", p.L4FlowHash()),\n\t\t)\n\t\tdefer zap.L().Debug(\"Finished Processing UDP application packet \",\n\t\t\tzap.String(\"flow\", p.L4FlowHash()),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\t// First retrieve the connection state.\n\tconn, err = d.appUDPRetrieveState(p)\n\tif err != nil {\n\t\tzap.L().Debug(\"Connection not found\", zap.Error(err))\n\t\treturn nil, counters.CounterError(counters.ErrNonPUTraffic, errNonPUTraffic)\n\t}\n\n\t// We are processing only one packet from a given connection at a time.\n\tconn.Lock()\n\tdefer conn.Unlock()\n\n\t// do some pre processing.\n\tif d.service != nil {\n\t\t// PreProcessServiceInterface\n\t\tif !d.service.PreProcessUDPAppPacket(p, conn.Context, conn, packet.UDPSynMask) 
{\n\t\t\tp.Print(packet.PacketFailureService, d.PacketLogsEnabled())\n\t\t\treturn nil, conn.Context.Counters().CounterError(counters.ErrUDPAppPreProcessingFailed, errors.New(\"pre service processing failed for UDP application packet\"))\n\t\t}\n\t}\n\n\ttriggerControlProtocol := false\n\tswitch conn.GetState() {\n\tcase connection.UDPStart:\n\t\t// Queue the packet. We will send it after we authorize the session.\n\t\tif err = conn.QueuePackets(p); err != nil {\n\t\t\t// unable to queue packets, perhaps queue is full. if start\n\t\t\t// machine is still in start state, we can start authorisation\n\t\t\t// again. A drop counter is incremented.\n\t\t\tzap.L().Debug(\"udp queue full for connection\", zap.String(\"flow\", p.L4FlowHash()))\n\t\t}\n\n\t\t// Set the state indicating that we send out a Syn packet\n\t\tconn.SetState(connection.UDPClientSendSyn)\n\t\t// Drop the packet. We stored it in the queue.\n\t\ttriggerControlProtocol = true\n\n\tcase connection.UDPReceiverProcessedAck, connection.UDPClientSendAck, connection.UDPData:\n\t\tconn.SetState(connection.UDPData)\n\n\tdefault:\n\t\tif err = conn.QueuePackets(p); err != nil {\n\t\t\treturn conn, conn.Context.Counters().CounterError(counters.ErrUDPDropQueueFull, fmt.Errorf(\"Unable to queue packets:%s\", err))\n\t\t}\n\t\treturn conn, conn.Context.Counters().CounterError(counters.ErrUDPDropInNfQueue, errDropQueuedPacket)\n\t}\n\n\tif d.service != nil {\n\t\t// PostProcessServiceInterface\n\t\tif !d.service.PostProcessUDPAppPacket(p, nil, conn.Context, conn) {\n\t\t\tp.Print(packet.PacketFailureService, d.PacketLogsEnabled())\n\t\t\treturn conn, conn.Context.Counters().CounterError(counters.ErrUDPAppPostProcessingFailed, errors.New(\"Encryption failed for application packet\"))\n\t\t}\n\t}\n\n\tif triggerControlProtocol {\n\t\terr = d.triggerNegotiation(p, conn.Context, conn)\n\t\tif err != nil {\n\t\t\treturn conn, conn.Context.Counters().CounterError(counters.ErrUDPDropInNfQueue, 
errDropQueuedPacket)\n\t\t}\n\t\treturn conn, errDropQueuedPacket\n\t}\n\n\treturn conn, nil\n}\n\nfunc (d *Datapath) appUDPRetrieveState(p *packet.Packet) (*connection.UDPConnection, error) {\n\n\thash := p.L4FlowHash()\n\n\tif conn, err := d.udpAppReplyConnectionTracker.GetReset(hash, 0); err == nil {\n\t\treturn conn.(*connection.UDPConnection), nil\n\t}\n\n\tif conn, err := d.udpAppOrigConnectionTracker.GetReset(hash, 0); err == nil {\n\t\treturn conn.(*connection.UDPConnection), nil\n\t}\n\n\tcontext, err := d.contextFromIP(true, p.Mark, p.SourcePort(), packet.IPProtocolUDP)\n\tif err != nil {\n\t\treturn nil, counters.CounterError(counters.ErrNonPUTraffic, errors.New(\"No context in app processing\"))\n\t}\n\n\treturn connection.NewUDPConnection(context, d.udpSocketWriter), nil\n}\n\n// triggerNegotiation processes a single Syn packet and starts the authorization handshake\nfunc (d *Datapath) triggerNegotiation(udpPacket *packet.Packet, context *pucontext.PUContext, conn *connection.UDPConnection) (err error) {\n\tnewPacket, err := d.clonePacketHeaders(udpPacket)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Unable to clone packet: %s\", err)\n\t}\n\tvar udpData []byte\n\tconn.Secrets, conn.Auth.LocalDatapathPrivateKey, udpData = context.GetSynToken(nil, conn.Auth.Nonce, nil)\n\tudpOptions := packet.CreateUDPAuthMarker(packet.UDPSynMask, uint16(len(udpData)))\n\t// Attach the UDP data and token\n\tnewPacket.UDPTokenAttach(udpOptions, udpData)\n\tif udpPacket.PlatformMetadata != nil {\n\t\tnewPacket.PlatformMetadata = udpPacket.PlatformMetadata.Clone()\n\t}\n\tstatusChannel := make(chan bool)\n\n\tgo func() {\n\t\t// We started a handshake; reverse packets are dropped automatically\n\t\t// Assert the connmark before releasing packets if a response is received\n\t\tif err = 
d.conntrack.UpdateApplicationFlowMark(\n\t\t\tudpPacket.SourceAddress(),\n\t\t\tudpPacket.DestinationAddress(),\n\t\t\tudpPacket.IPProto(),\n\t\t\tudpPacket.SourcePort(),\n\t\t\tudpPacket.DestPort(),\n\t\t\tmarkconstants.HandshakeConnmark,\n\t\t); err != nil {\n\t\t\tzap.L().Error(\"Failed to update conntrack table for UDP flow at transmitter\",\n\t\t\t\tzap.String(\"app-conn\", udpPacket.L4FlowHash()),\n\t\t\t\tzap.String(\"state\", fmt.Sprintf(\"%d\", conn.GetState())),\n\t\t\t\tzap.Error(err),\n\t\t\t)\n\n\t\t}\n\tloop:\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-statusChannel:\n\t\t\t\tbreak loop\n\t\t\tcase <-time.After(ACLCheckMultipler * time.Millisecond):\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t\tconn.Lock()\n\t\tdefer conn.Unlock()\n\t\tif conn.GetState() == connection.UDPStart {\n\t\t\t// We did not receive any response from the remote.\n\t\t\t// It is most likely an external network; evaluate ACLs at this point to see if we are allowed to talk to this IP\n\t\t\treport, pkt, perr := context.ApplicationACLPolicyFromAddr(udpPacket.DestinationAddress(), udpPacket.DestPort(), udpPacket.IPProto())\n\t\t\tif perr != nil && pkt.Action.Rejected() {\n\t\t\t\td.reportExternalServiceFlow(context, report, pkt, true, udpPacket)\n\t\t\t\treturn\n\t\t\t}\n\t\t\t<-time.After(50 * time.Millisecond) // Arbitrary delay to ensure the last handshake packet is dropped in our tables\n\t\t\t// Assert the connmark before releasing packets if a response is received\n\t\t\tif err = d.conntrack.UpdateApplicationFlowMark(\n\t\t\t\tudpPacket.SourceAddress(),\n\t\t\t\tudpPacket.DestinationAddress(),\n\t\t\t\tudpPacket.IPProto(),\n\t\t\t\tudpPacket.SourcePort(),\n\t\t\t\tudpPacket.DestPort(),\n\t\t\t\tmarkconstants.DefaultExternalConnMark,\n\t\t\t); err != nil {\n\t\t\t\tzap.L().Error(\"Failed to update conntrack table for UDP flow at transmitter\",\n\t\t\t\t\tzap.String(\"app-conn\", udpPacket.L4FlowHash()),\n\t\t\t\t\tzap.String(\"state\", fmt.Sprintf(\"%d\", 
conn.GetState())),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\n\t\t\t}\n\t\t\tfor udpPacket := conn.ReadPacket(); udpPacket != nil; udpPacket = conn.ReadPacket() {\n\t\t\t\tif d.service != nil {\n\t\t\t\t\t// PostProcessServiceInterface\n\t\t\t\t\t// We call it for all outgoing packets.\n\t\t\t\t\tif !d.service.PostProcessUDPAppPacket(udpPacket, nil, conn.Context, conn) {\n\t\t\t\t\t\tudpPacket.Print(packet.PacketFailureService, d.PacketLogsEnabled())\n\t\t\t\t\t\tzap.L().Error(\"Failed to encrypt queued packet\")\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\terr = d.ignoreFlow(udpPacket)\n\t\t\t\tif err != nil {\n\t\t\t\t\tzap.L().Error(\"Unable to ignore the flow\", zap.Error(err))\n\t\t\t\t}\n\n\t\t\t\terr = d.writeUDPSocket(udpPacket.GetBuffer(0), udpPacket)\n\t\t\t\tif err != nil {\n\t\t\t\t\tzap.L().Error(\"Unable to transmit queued UDP packets\", zap.Error(err))\n\t\t\t\t}\n\t\t\t}\n\t\t\tconn.SetState(connection.UDPData)\n\t\t\td.reportExternalServiceFlow(context, report, pkt, true, udpPacket)\n\t\t\treturn\n\t\t}\n\n\t}()\n\n\t// send packet\n\terr = d.writeWithRetransmit(newPacket, conn, conn.SynChannel(), statusChannel)\n\tif err != nil {\n\t\tzap.L().Error(\"Unable to send syn token on raw socket\", zap.Error(err), zap.Time(\"time\", time.Now()))\n\t\treturn fmt.Errorf(\"unable to transmit syn packet\")\n\t}\n\n\t// Populate the caches to track the connection\n\thash := udpPacket.L4FlowHash()\n\td.udpAppOrigConnectionTracker.AddOrUpdate(hash, conn)\n\td.udpSourcePortConnectionCache.AddOrUpdate(newPacket.SourcePortHash(packet.PacketTypeApplication), conn)\n\n\treturn nil\n\n}\n\nfunc (d *Datapath) writeWithRetransmit(udpPacket *packet.Packet, conn *connection.UDPConnection, stop chan bool, statusChan chan bool) error {\n\tbuffer := udpPacket.GetBuffer(0)\n\tlocalBuffer := make([]byte, len(buffer))\n\tcopy(localBuffer, buffer)\n\tzap.L().Debug(\"Trying to send control packet\", zap.String(\"FlowHash\", udpPacket.L4FlowHash()))\n\tif err := 
d.writeUDPSocket(localBuffer, udpPacket); err != nil {\n\t\tzap.L().Error(\"Failed to write control packet to socket\", zap.Error(err), zap.String(\"FlowHash\", udpPacket.L4FlowHash()))\n\t\treturn err\n\t}\n\n\tgo func() {\n\n\t\tfor retries := 0; retries < retransmitRetries; retries++ {\n\t\t\tdelay := time.Millisecond * time.Duration((retransmitDelay * (retries + 1)))\n\t\t\tselect {\n\t\t\tcase <-stop:\n\t\t\t\treturn\n\t\t\tcase <-time.After(delay):\n\t\t\t\tif err := d.writeUDPSocket(localBuffer, udpPacket); err != nil {\n\t\t\t\t\tzap.L().Error(\"Failed to write control packet to socket\", zap.Error(err), zap.String(\"FlowHash\", udpPacket.L4FlowHash()))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t// We did not get a synack; maybe this destination is an external network\n\t\tif statusChan != nil {\n\t\t\tzap.L().Debug(\"Timed out, should start acl check\")\n\t\t\tstatusChan <- true\n\t\t}\n\t\t// Retransmits did not succeed. Reset the state machine so that\n\t\t// the next packet can try again.\n\t\tconn.SetState(connection.UDPStart)\n\n\t}()\n\treturn nil\n}\n\nfunc (d *Datapath) clonePacketHeaders(p *packet.Packet) (*packet.Packet, error) {\n\t// copy the ip and udp headers.\n\tnewSize := uint16(p.IPHeaderLen() + packet.UDPDataPos)\n\tnewPacket := make([]byte, newSize)\n\tp.FixupIPHdrOnDataModify(p.IPTotalLen(), newSize)\n\n\torigBuffer := p.GetBuffer(0)\n\t_ = copy(newPacket, origBuffer[:newSize])\n\n\treturn packet.New(packet.PacketTypeApplication, newPacket, p.Mark, true)\n}\n\n// sendUDPSynAckPacket creates and sends a UDP SynAck packet\nfunc (d *Datapath) sendUDPSynAckPacket(udpPacket *packet.Packet, context *pucontext.PUContext, conn *connection.UDPConnection) (err error) {\n\n\tclaimsHeader := claimsheader.NewClaimsHeader()\n\tclaims := &tokens.ConnectionClaims{\n\t\tCT:       context.CompressedTags(),\n\t\tLCL:      conn.Auth.Nonce[:],\n\t\tRMT:      conn.Auth.RemoteNonce,\n\t\tDEKV1:    conn.Auth.LocalDatapathPublicKeyV1,\n\t\tSDEKV1:   
conn.Auth.LocalDatapathPublicKeySignV1,\n\t\tDEKV2:    conn.Auth.LocalDatapathPublicKeyV2,\n\t\tSDEKV2:   conn.Auth.LocalDatapathPublicKeySignV2,\n\t\tID:       context.ManagementID(),\n\t\tRemoteID: conn.Auth.RemoteContextID,\n\t}\n\n\tvar udpData []byte\n\n\tudpData, err = d.tokenAccessor.CreateSynAckPacketToken(conn.Auth.Proto314, claims, conn.EncodedBuf[:], conn.Auth.Nonce[:], claimsHeader, conn.Secrets, conn.Auth.SecretKey)\n\tif err != nil {\n\t\treturn counters.CounterError(appUDPSynAckCounterFromError(err), err)\n\t}\n\n\t// Create UDP Option\n\n\tudpPacket.CreateReverseFlowPacket()\n\n\t// This is for Windows and isn't necessary, but it helps when the driver is logging\n\terr = d.reverseFlow(udpPacket)\n\tif err != nil {\n\t\treturn counters.CounterError(appUDPSynAckCounterFromError(err), err)\n\t}\n\t// Create UDP Option\n\tudpOptions := packet.CreateUDPAuthMarker(packet.UDPSynAckMask, uint16(len(udpData)))\n\t// Attach the UDP data and token\n\tudpPacket.UDPTokenAttach(udpOptions, udpData)\n\n\t// If we already have a background re-transmit session, stop it at this point. We will\n\t// start from the beginning.\n\tif conn.GetState() == connection.UDPReceiverSendSynAck {\n\t\tconn.SynAckStop()\n\t}\n\n\t// Only start the retransmission timer once. 
Not on every packet.\n\tif err := d.writeWithRetransmit(udpPacket, conn, conn.SynAckChannel(), nil); err != nil {\n\t\tzap.L().Debug(\"Unable to send synack token on raw socket\", zap.Error(err))\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc (d *Datapath) sendUDPAckPacket(udpPacket *packet.Packet, context *pucontext.PUContext, conn *connection.UDPConnection) (err error) {\n\t// This is for Windows and isn't necessary, but helps when the driver is logging\n\terr = d.reverseFlow(udpPacket)\n\tif err != nil {\n\t\treturn counters.CounterError(appUDPAckCounterFromError(err), err)\n\t}\n\n\tudpPacket.CreateReverseFlowPacket()\n\n\tclaims := &tokens.ConnectionClaims{\n\t\tID:       context.ManagementID(),\n\t\tRMT:      conn.Auth.RemoteNonce,\n\t\tRemoteID: conn.Auth.RemoteContextID,\n\t}\n\n\tudpData, err := d.tokenAccessor.CreateAckPacketToken(conn.Auth.Proto314, conn.Auth.SecretKey, claims, conn.EncodedBuf[:])\n\tif err != nil {\n\t\treturn counters.CounterError(appUDPAckCounterFromError(err), err)\n\t}\n\t// Create UDP Option\n\tudpOptions := packet.CreateUDPAuthMarker(packet.UDPAckMask, uint16(len(udpData)))\n\t// Attach the UDP data and token\n\tudpPacket.UDPTokenAttach(udpOptions, udpData)\n\n\t// send packet\n\terr = d.writeUDPSocket(udpPacket.GetBuffer(0), udpPacket)\n\tif err != nil {\n\t\treturn err\n\t}\n\t// We reached the final state; drain the queue here.\n\n\t<-time.After(40 * time.Millisecond) // Arbitrary delay to give the receiver a chance to plumb conntrack\n\tfor udpPacket := conn.ReadPacket(); udpPacket != nil; udpPacket = conn.ReadPacket() {\n\t\tif d.service != nil {\n\t\t\t// PostProcessServiceInterface\n\t\t\t// We call it for all outgoing packets.\n\t\t\tif !d.service.PostProcessUDPAppPacket(udpPacket, nil, conn.Context, conn) {\n\t\t\t\tudpPacket.Print(packet.PacketFailureService, d.PacketLogsEnabled())\n\t\t\t\tzap.L().Error(\"Failed to encrypt queued packet\")\n\t\t\t}\n\t\t}\n\n\t\terr = d.ignoreFlow(udpPacket)\n\t\tif err != nil 
{\n\t\t\tzap.L().Error(\"Unable to ignore the flow\", zap.Error(err))\n\t\t}\n\n\t\terr = d.writeUDPSocket(udpPacket.GetBuffer(0), udpPacket)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"Unable to transmit Queued UDP packets\", zap.Error(err))\n\t\t}\n\t}\n\n\t// When server and client are the same machine, we can't ignore the\n\t// flow until the server side receives the Ack packet\n\tif !udpPacket.SourceAddress().Equal(udpPacket.DestinationAddress()) {\n\t\tif err := d.ignoreFlow(udpPacket); err != nil {\n\t\t\tzap.L().Error(\"Failed to ignore flow\", zap.Error(err))\n\t\t}\n\t}\n\tif err = d.conntrack.UpdateApplicationFlowMark(\n\t\tudpPacket.SourceAddress(),\n\t\tudpPacket.DestinationAddress(),\n\t\tudpPacket.IPProto(),\n\t\tudpPacket.SourcePort(),\n\t\tudpPacket.DestPort(),\n\t\tmarkconstants.DefaultConnMark,\n\t); err != nil {\n\t\tzap.L().Error(\"Failed to update conntrack table for UDP flow at transmitter\",\n\t\t\tzap.String(\"app-conn\", udpPacket.L4FlowHash()),\n\t\t\tzap.String(\"state\", fmt.Sprintf(\"%d\", conn.GetState())),\n\t\t\tzap.Error(err),\n\t\t)\n\t\treturn err\n\t}\n\n\tconn.SetState(connection.UDPData)\n\tzap.L().Debug(\"Clearing fin packet entry in cache\", zap.String(\"flowhash\", udpPacket.L4FlowHash()))\n\tif err := d.udpFinPacketTracker.Remove(udpPacket.L4FlowHash()); err != nil {\n\t\tzap.L().Debug(\"Unable to remove entry from udp finack cache\")\n\t}\n\treturn nil\n}\n\n// processNetworkUDPSynPacket processes a syn packet arriving from the network\nfunc (d *Datapath) processNetworkUDPSynPacket(context *pucontext.PUContext, conn *connection.UDPConnection, udpPacket *packet.Packet) (action interface{}, claims *tokens.ConnectionClaims, err error) {\n\n\trejected := false\n\tnetworkReport, pkt, perr := context.NetworkACLPolicy(udpPacket)\n\tif perr == nil {\n\t\trejected = pkt.Action.Rejected()\n\t\tif rejected {\n\t\t\tperr = fmt.Errorf(\"rejected by ACL policy %s\", pkt.PolicyID)\n\t\t}\n\t} else {\n\t\t// We got an error, but ensure 
it isn't the catch-all policy\n\t\tif !(pkt != nil && pkt.Action.Rejected() && pkt.PolicyID == \"default\") {\n\t\t\trejected = true\n\t\t}\n\t}\n\n\tif rejected {\n\t\td.reportExternalServiceFlow(context, networkReport, pkt, false, udpPacket)\n\t\treturn nil, nil, context.Counters().CounterError(counters.ErrUDPSynDroppedPolicy, fmt.Errorf(\"packet had identity: incoming connection dropped due to reject acl %s\", perr))\n\t}\n\tclaims = &conn.Auth.ConnectionClaims\n\tsecretKey, _, controller, remoteNonce, remoteContextID, proto314, err := d.tokenAccessor.ParsePacketToken(conn.Auth.LocalDatapathPrivateKey, udpPacket.ReadUDPToken(), conn.Secrets, claims, false)\n\n\tif err != nil {\n\t\td.reportUDPRejectedFlow(udpPacket, conn, collector.DefaultEndPoint, context.ManagementID(), context, collector.InvalidToken, nil, nil, false)\n\t\treturn nil, nil, conn.Context.Counters().CounterError(netUDPSynCounterFromError(err), fmt.Errorf(\"UDP Syn packet dropped because of invalid token: %s\", err))\n\t}\n\n\tif controller != nil && !controller.SameController {\n\t\tconn.SourceController = controller.Controller\n\t}\n\n\t// TODO: Why is this required? Take a look.\n\t//txLabel, _ := claims.T.Get(enforcerconstants.TransmitterLabel)\n\n\t// Add the port as a label with an @ prefix. 
These labels are invalid otherwise\n\t// If all policies are restricted by port numbers this will allow port-specific policies\n\ttags := claims.T.Copy()\n\ttags.AppendKeyValue(constants.PortNumberLabelString, fmt.Sprintf(\"%s/%s\", constants.UDPProtoString, strconv.Itoa(int(udpPacket.DestPort()))))\n\n\t// Add the controller to the claims\n\tif controller != nil && len(controller.Controller) > 0 {\n\t\ttags.AppendKeyValue(constants.ControllerLabelString, controller.Controller)\n\t}\n\n\treport, pkt := context.SearchRcvRules(tags)\n\tif pkt.Action.Rejected() {\n\t\td.reportUDPRejectedFlow(udpPacket, conn, remoteContextID, context.ManagementID(), context, collector.PolicyDrop, report, pkt, false)\n\t\treturn nil, nil, conn.Context.Counters().CounterError(counters.ErrUDPSynDroppedPolicy, fmt.Errorf(\"connection rejected because of policy: %s\", claims.T.String()))\n\t}\n\n\thash := udpPacket.L4FlowHash()\n\n\t// conntrack\n\td.udpNetOrigConnectionTracker.AddOrUpdate(hash, conn)\n\td.udpAppReplyConnectionTracker.AddOrUpdate(udpPacket.L4ReverseFlowHash(), conn)\n\n\tconn.Auth.SecretKey = secretKey\n\tconn.Auth.RemoteNonce = remoteNonce\n\tconn.Auth.RemoteContextID = remoteContextID\n\tconn.Auth.Proto314 = proto314\n\n\t// Record actions\n\tconn.ReportFlowPolicy = report\n\tconn.PacketFlowPolicy = pkt\n\n\treturn pkt, claims, nil\n}\n\nfunc (d *Datapath) processNetworkUDPSynAckPacket(udpPacket *packet.Packet, context *pucontext.PUContext, conn *connection.UDPConnection) (action interface{}, claims *tokens.ConnectionClaims, err error) {\n\tconn.SynStop()\n\tclaims = &conn.Auth.ConnectionClaims\n\tsecretKey, _, controller, remoteNonce, remoteContextID, proto314, err := d.tokenAccessor.ParsePacketToken(conn.Auth.LocalDatapathPrivateKey, udpPacket.ReadUDPToken(), conn.Secrets, claims, true)\n\tif err != nil {\n\t\td.reportUDPRejectedFlow(udpPacket, conn, context.ManagementID(), collector.DefaultEndPoint, context, collector.MissingToken, nil, nil, true)\n\t\treturn nil, nil, 
conn.Context.Counters().CounterError(netUDPSynAckCounterFromError(err), errors.New(\"SynAck packet dropped because of bad claims\"))\n\t}\n\n\tif controller != nil && !controller.SameController {\n\t\tconn.DestinationController = controller.Controller\n\t}\n\t// Add the port as a label with an @ prefix. These labels are invalid otherwise\n\t// If all policies are restricted by port numbers this will allow port-specific policies\n\ttags := claims.T.Copy()\n\ttags.AppendKeyValue(constants.PortNumberLabelString, fmt.Sprintf(\"%s/%s\", constants.UDPProtoString, strconv.Itoa(int(udpPacket.SourcePort()))))\n\n\t// Add the controller to the claims\n\tif controller != nil && len(controller.Controller) > 0 {\n\t\ttags.AppendKeyValue(constants.ControllerLabelString, controller.Controller)\n\t}\n\n\treport, pkt := context.SearchTxtRules(tags, !d.mutualAuthorization)\n\tif pkt.Action.Rejected() {\n\t\td.reportUDPRejectedFlow(udpPacket, conn, remoteContextID, context.ManagementID(), context, collector.PolicyDrop, report, pkt, true)\n\t\treturn nil, nil, conn.Context.Counters().CounterError(counters.ErrUDPSynAckPolicy, fmt.Errorf(\"dropping because of reject rule on transmitter: %s\", claims.T.String()))\n\t}\n\n\t// conntrack\n\td.udpNetReplyConnectionTracker.AddOrUpdate(udpPacket.L4FlowHash(), conn)\n\tconn.Auth.SecretKey = secretKey\n\tconn.Auth.RemoteNonce = remoteNonce\n\tconn.Auth.RemoteContextID = remoteContextID\n\tconn.Auth.Proto314 = proto314\n\n\treturn pkt, claims, nil\n}\n\nfunc (d *Datapath) processNetworkUDPAckPacket(udpPacket *packet.Packet, context *pucontext.PUContext, conn *connection.UDPConnection) (err error) {\n\tconn.SynAckStop()\n\tif err = d.tokenAccessor.ParseAckToken(conn.Auth.Proto314, conn.Auth.SecretKey, conn.Auth.Nonce[:], udpPacket.ReadUDPToken(), &conn.Auth.ConnectionClaims); err != nil {\n\t\td.reportUDPRejectedFlow(udpPacket, conn, conn.Auth.RemoteContextID, context.ManagementID(), context, collector.InvalidToken, conn.ReportFlowPolicy, 
conn.PacketFlowPolicy, false)\n\t\treturn conn.Context.Counters().CounterError(netUDPAckCounterFromError(err), fmt.Errorf(\"ack packet dropped because signature validation failed: %s\", err))\n\t}\n\n\t// For Windows, we allow the flow\n\tif err := d.setFlowState(udpPacket, true); err != nil {\n\t\tzap.L().Error(\"Failed to ignore flow\", zap.Error(err))\n\t}\n\n\t// Plumb connmark rule here.\n\tif err := d.conntrack.UpdateNetworkFlowMark(\n\t\tudpPacket.SourceAddress(),\n\t\tudpPacket.DestinationAddress(),\n\t\tudpPacket.IPProto(),\n\t\tudpPacket.SourcePort(),\n\t\tudpPacket.DestPort(),\n\t\tmarkconstants.DefaultConnMark,\n\t); err != nil {\n\t\tzap.L().Error(\"Failed to update conntrack table after ack packet\")\n\t}\n\n\td.reportUDPAcceptedFlow(udpPacket, conn, conn.Auth.RemoteContextID, context.ManagementID(), context, conn.ReportFlowPolicy, conn.PacketFlowPolicy, false)\n\n\tconn.Context.Counters().IncrementCounter(counters.ErrUDPConnectionsProcessed)\n\treturn nil\n}\n\n// sendUDPFinPacket sends a Fin packet to the peer.\nfunc (d *Datapath) sendUDPFinPacket(udpPacket *packet.Packet) (err error) {\n\t// Create UDP Option\n\tudpOptions := packet.CreateUDPAuthMarker(packet.UDPFinAckMask, 0)\n\tudpPacket.CreateReverseFlowPacket()\n\n\terr = d.reverseFlow(udpPacket)\n\tif err != nil {\n\t\treturn counters.CounterError(counters.ErrUDPDropFin, err)\n\t}\n\t// Attach the UDP data and token\n\tudpPacket.UDPTokenAttach(udpOptions, []byte{})\n\n\t// no need for retransmits here.\n\terr = d.writeUDPSocket(udpPacket.GetBuffer(0), udpPacket)\n\tif err != nil {\n\t\tzap.L().Debug(\"Unable to send fin packet on raw socket\", zap.Error(err))\n\t\treturn counters.CounterError(counters.ErrUDPDropFin, fmt.Errorf(\"Unable to send fin packet on raw socket: %s\", err.Error()))\n\t}\n\n\treturn nil\n}\n\n// sendUDPRstPacket sends a rst packet to the peer.\nfunc (d *Datapath) sendUDPRstPacket(udpPacket *packet.Packet, conn *connection.UDPConnection) (err error) {\n\t// Create UDP 
Option\n\tudpOptions := packet.CreateUDPAuthMarker(packet.UDPPolicyRejectMask, 0)\n\tudpPacket.CreateReverseFlowPacket()\n\t// TODO ::: Have a signed payload; these packets will force the remote end to process acls,\n\t// so we have to be sure that someone we trust sent this\n\terr = d.reverseFlow(udpPacket)\n\tif err != nil {\n\t\treturn conn.Context.Counters().CounterError(counters.ErrUDPDropRst, err)\n\t}\n\n\t// Attach the UDP data and token\n\tudpPacket.UDPTokenAttach(udpOptions, []byte{})\n\n\t// For Windows, this marks the packet so that when writeUDPSocket is called,\n\t// it will send the packet but will drop additional packets for this flow.\n\tif err := d.dropFlow(udpPacket); err != nil {\n\t\tzap.L().Error(\"Failed to drop flow\", zap.Error(err))\n\t}\n\n\t// no need for retransmits here.\n\terr = d.writeUDPSocket(udpPacket.GetBuffer(0), udpPacket)\n\tif err != nil {\n\t\tzap.L().Debug(\"Unable to send rst packet on raw socket\", zap.Error(err))\n\t\treturn conn.Context.Counters().CounterError(counters.ErrUDPDropRst, fmt.Errorf(\"Unable to send rst packet on raw socket: %s\", err.Error()))\n\t}\n\n\t// conn.SynStop()\n\t// conn.SynAckStop()\n\t// Plumb connmark rule here. Drop packets on this flow. 
Until we see an acceptable handshake packet again.\n\tif err := d.conntrack.UpdateNetworkFlowMark(\n\t\tudpPacket.SourceAddress(),\n\t\tudpPacket.DestinationAddress(),\n\t\tudpPacket.IPProto(),\n\t\tudpPacket.SourcePort(),\n\t\tudpPacket.DestPort(),\n\t\tmarkconstants.DropConnmark,\n\t); err != nil {\n\t\tzap.L().Error(\"Failed to update conntrack table after rst packet\")\n\t}\n\treturn nil\n}\n\nfunc (d *Datapath) processUDPPolicyRstPacket(udpPacket *packet.Packet, context *pucontext.PUContext, conn *connection.UDPConnection) (err error) { // nolint\n\tconn.SetState(connection.UDPRST)\n\tconn.SynStop()\n\tconn.SynAckStop()\n\tif err := d.udpAppOrigConnectionTracker.Remove(udpPacket.L4ReverseFlowHash()); err != nil {\n\t\tzap.L().Debug(\"Failed to clean cache udpappOrigConnectionTracker\", zap.Error(err))\n\t}\n\tif err := d.udpSourcePortConnectionCache.Remove(udpPacket.SourcePortHash(packet.PacketTypeNetwork)); err != nil {\n\t\tzap.L().Debug(\"Failed to clean cache udpsourcePortConnectionCache\", zap.Error(err))\n\t}\n\tif err := d.setFlowState(udpPacket, false); err != nil {\n\t\tzap.L().Error(\"Failed to drop flow\", zap.Error(err))\n\t}\n\tif err := d.conntrack.UpdateNetworkFlowMark(\n\t\tudpPacket.SourceAddress(),\n\t\tudpPacket.DestinationAddress(),\n\t\tudpPacket.IPProto(),\n\t\tudpPacket.SourcePort(),\n\t\tudpPacket.DestPort(),\n\t\tmarkconstants.DropConnmark,\n\t); err != nil {\n\t\tzap.L().Error(\"Failed to update conntrack table after rst packet\")\n\t}\n\treturn nil\n}\n\n// Update the udp fin cache and delete the connmark.\nfunc (d *Datapath) processUDPFinPacket(udpPacket *packet.Packet) (err error) { // nolint\n\n\t// add it to the udp fin cache. If we have already received the fin packet\n\t// for this flow. 
There is no need to change the connmark label again.\n\tif d.udpFinPacketTracker.AddOrUpdate(udpPacket.L4ReverseFlowHash(), true) {\n\t\treturn nil\n\t}\n\n\t// clear cache entries.\n\tif err := d.udpAppOrigConnectionTracker.Remove(udpPacket.L4ReverseFlowHash()); err != nil {\n\t\tzap.L().Debug(\"Failed to clean cache udpappOrigConnectionTracker\", zap.Error(err))\n\t}\n\tif err := d.udpSourcePortConnectionCache.Remove(udpPacket.SourcePortHash(packet.PacketTypeNetwork)); err != nil {\n\t\tzap.L().Debug(\"Failed to clean cache udpsourcePortConnectionCache\", zap.Error(err))\n\t}\n\tif err := d.setFlowState(udpPacket, false); err != nil {\n\t\tzap.L().Error(\"Failed to drop flow\", zap.Error(err))\n\t}\n\tif err = d.conntrack.UpdateNetworkFlowMark(\n\t\tudpPacket.SourceAddress(),\n\t\tudpPacket.DestinationAddress(),\n\t\tudpPacket.IPProto(),\n\t\tudpPacket.SourcePort(),\n\t\tudpPacket.DestPort(),\n\t\tmarkconstants.DeleteConnmark,\n\t); err != nil {\n\t\tzap.L().Error(\"Failed to update conntrack table for flow to terminate connection\",\n\t\t\tzap.String(\"app-conn\", udpPacket.L4FlowHash()),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\treturn nil\n}\n\n// note: for platforms that need it (Windows), please ensure that udpPacket.PlatformMetadata is set.\n// thus, for any Packets created outside of the driver packet callback, the originating metadata must be\n// propagated to the udpPacket argument before this call.\nfunc (d *Datapath) writeUDPSocket(buf []byte, udpPacket *packet.Packet) error {\n\treturn d.udpSocketWriter.WriteSocket(buf, udpPacket.IPversion(), udpPacket.PlatformMetadata)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/datapath_windows.go",
    "content": "// +build windows\n\npackage nfqdatapath\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"syscall\"\n\t\"unsafe\"\n\n\t\"github.com/ghedo/go.pkt/layers\"\n\tgpacket \"github.com/ghedo/go.pkt/packet\"\n\t\"github.com/ghedo/go.pkt/packet/ipv4\"\n\t\"github.com/ghedo/go.pkt/packet/tcp\"\n\t\"github.com/pkg/errors\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/afinetrawsocket\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n\t\"go.uber.org/zap\"\n\t\"golang.org/x/sys/windows\"\n)\n\nfunc adjustConntrack(mode constants.ModeType) {\n}\n\nfunc (d *Datapath) reverseFlow(pkt *packet.Packet) error {\n\twindata, ok := pkt.PlatformMetadata.(*afinetrawsocket.WindowPlatformMetadata)\n\tif !ok {\n\t\treturn errors.New(\"no WindowPlatformMetadata for reverseFlow\")\n\t}\n\n\taddress := windata.PacketInfo.RemoteAddr\n\twindata.PacketInfo.RemoteAddr = windata.PacketInfo.LocalAddr\n\twindata.PacketInfo.LocalAddr = address\n\n\tport := windata.PacketInfo.RemotePort\n\twindata.PacketInfo.RemotePort = windata.PacketInfo.LocalPort\n\twindata.PacketInfo.LocalPort = port\n\n\treturn nil\n}\n\nfunc (d *Datapath) drop(pkt *packet.Packet) error {\n\twindata, ok := pkt.PlatformMetadata.(*afinetrawsocket.WindowPlatformMetadata)\n\tif !ok {\n\t\treturn errors.New(\"no WindowPlatformMetadata for drop\")\n\t}\n\twindata.Drop = true\n\treturn nil\n}\n\nfunc (d *Datapath) setMark(pkt *packet.Packet, mark uint32) error {\n\twindata, ok := pkt.PlatformMetadata.(*afinetrawsocket.WindowPlatformMetadata)\n\tif !ok {\n\t\treturn errors.New(\"no WindowPlatformMetadata for setMark\")\n\t}\n\twindata.SetMark = mark\n\treturn nil\n}\n\n// ignoreFlow is for Windows, because we need a way to explicitly notify of an 'ignore flow' condition,\n// 
without going through flowtracking, to be called synchronously in datapath processing\nfunc (d *Datapath) ignoreFlow(pkt *packet.Packet) error {\n\twindata, ok := pkt.PlatformMetadata.(*afinetrawsocket.WindowPlatformMetadata)\n\tif !ok {\n\t\treturn errors.New(\"no WindowPlatformMetadata for ignoreFlow\")\n\t}\n\twindata.IgnoreFlow = true\n\treturn nil\n}\n\n// dropFlow will tell the windows driver to continue to drop packets for this flow.\nfunc (d *Datapath) dropFlow(pkt *packet.Packet) error {\n\twindata, ok := pkt.PlatformMetadata.(*afinetrawsocket.WindowPlatformMetadata)\n\tif !ok {\n\t\treturn errors.New(\"no WindowPlatformMetadata for dropFlow\")\n\t}\n\twindata.DropFlow = true\n\treturn nil\n}\n\n// setFlowState will not send the packet but will tell the Windows driver to either accept or drop the flow.\nfunc (d *Datapath) setFlowState(pkt *packet.Packet, accepted bool) error {\n\twindata, ok := pkt.PlatformMetadata.(*afinetrawsocket.WindowPlatformMetadata)\n\tif !ok {\n\t\treturn errors.New(\"no WindowPlatformMetadata for setFlowState\")\n\t}\n\n\tbuf := pkt.GetBuffer(0)\n\tpacketInfo := windata.PacketInfo\n\tpacketInfo.NewPacket = 1\n\tpacketInfo.Drop = 1\n\tpacketInfo.IgnoreFlow = 0\n\tpacketInfo.DropFlow = 0\n\tif accepted {\n\t\tpacketInfo.IgnoreFlow = 1\n\t} else {\n\t\tpacketInfo.DropFlow = 1\n\t}\n\tpacketInfo.PacketSize = uint32(len(buf))\n\tif err := frontman.Wrapper.PacketFilterForward(&packetInfo, buf); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc (d *Datapath) startInterceptors(ctx context.Context) {\n\terr := d.startFrontmanPacketFilter(ctx, d.nflogger)\n\tif err != nil {\n\t\tzap.L().Fatal(\"Unable to initialize windows packet proxy\", zap.Error(err))\n\t}\n}\n\ntype pingConn struct {\n\tSourceIP   net.IP\n\tDestIP     net.IP\n\tSourcePort uint16\n\tDestPort   uint16\n}\n\nfunc dialIP(srcIP, dstIP net.IP) (PingConn, error) {\n\treturn &pingConn{}, nil\n}\n\n// Close not implemented.\nfunc (p *pingConn) Close() error {\n\treturn 
nil\n}\n\n// Write sends the packet to network.\nfunc (p *pingConn) Write(data []byte) (int, error) {\n\n\tipv4 := uint8(0)\n\tif len(p.SourceIP) == net.IPv4len {\n\t\tipv4 = 1\n\t}\n\n\tpacketInfo := frontman.PacketInfo{\n\t\tIpv4:             ipv4,\n\t\tProtocol:         windows.IPPROTO_TCP,\n\t\tOutbound:         1,\n\t\tNewPacket:        1,\n\t\tNoPidMatchOnFlow: 1,\n\t\tLocalPort:        p.SourcePort,\n\t\tRemotePort:       p.DestPort,\n\t\tLocalAddr:        convertToDriverFormat(p.SourceIP),\n\t\tRemoteAddr:       convertToDriverFormat(p.DestIP),\n\t\tPacketSize:       uint32(len(data)),\n\t}\n\n\tdllRet, err := frontman.Driver.PacketFilterForward(uintptr(unsafe.Pointer(&packetInfo)), uintptr(unsafe.Pointer(&data[0])))\n\tif dllRet == 0 && err != nil {\n\t\treturn 0, err\n\t}\n\n\treturn len(data), nil\n}\n\n// ConstructWirePacket returns IP packet with given TCP and payload in wire format.\nfunc (p *pingConn) ConstructWirePacket(srcIP, dstIP net.IP, transport gpacket.Packet, payload gpacket.Packet) ([]byte, error) {\n\n\tipPacket := ipv4.Make()\n\tipPacket.SrcAddr = srcIP\n\tipPacket.DstAddr = dstIP\n\tipPacket.Protocol = ipv4.TCP\n\n\t// pack the layers together.\n\tbuf, err := layers.Pack(ipPacket, transport, payload)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to encode packet to wire format: %v\", err)\n\t}\n\n\ttcpPacket := transport.(*tcp.Packet)\n\n\tp.SourceIP = srcIP\n\tp.DestIP = dstIP\n\tp.SourcePort = tcpPacket.SrcPort\n\tp.DestPort = tcpPacket.DstPort\n\n\treturn buf, nil\n}\n\nfunc bindRandomPort(tcpConn *connection.TCPConnection) (uint16, error) {\n\n\tfd, err := windows.Socket(windows.AF_INET, windows.SOCK_STREAM, windows.IPPROTO_TCP)\n\tif err != nil {\n\t\treturn 0, fmt.Errorf(\"unable to open socket, fd: %d : %s\", fd, err)\n\t}\n\n\taddr := windows.SockaddrInet4{Port: 0}\n\tcopy(addr.Addr[:], net.ParseIP(\"127.0.0.1\").To4())\n\tif err = windows.Bind(fd, &addr); err != nil {\n\t\twindows.CloseHandle(fd) // nolint: 
errcheck\n\t\treturn 0, fmt.Errorf(\"unable to bind socket: %s\", err)\n\t}\n\n\tsockAddr, err := windows.Getsockname(fd)\n\tif err != nil {\n\t\twindows.CloseHandle(fd) // nolint: errcheck\n\t\treturn 0, fmt.Errorf(\"unable to get socket address: %s\", err)\n\t}\n\n\tip4Addr, ok := sockAddr.(*windows.SockaddrInet4)\n\tif !ok {\n\t\twindows.CloseHandle(fd) // nolint: errcheck\n\t\treturn 0, fmt.Errorf(\"invalid socket address: %T\", sockAddr)\n\t}\n\n\ttcpConn.PingConfig.SetSocketFd(uintptr(fd))\n\treturn uint16(ip4Addr.Port), nil\n}\n\nfunc closeRandomPort(tcpConn *connection.TCPConnection) error {\n\n\tfd := tcpConn.PingConfig.SocketFd()\n\ttcpConn.PingConfig.SetSocketClosed(true)\n\n\treturn windows.CloseHandle(windows.Handle(fd))\n}\n\nfunc convertToDriverFormat(ip net.IP) [4]uint32 {\n\tvar addr [4]uint32\n\tbyteAddr := (*[16]byte)(unsafe.Pointer(&addr))\n\tcopy(byteAddr[:], ip)\n\treturn addr\n}\n\nfunc isAddrInUseErrno(errNo syscall.Errno) bool {\n\treturn errNo == windows.WSAEADDRINUSE\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/diagnostics_tcp.go",
    "content": "// +build !windows\n\npackage nfqdatapath\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"math/rand\"\n\t\"net\"\n\t\"strconv\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"github.com/aporeto-inc/gopkt/layers\"\n\t\"github.com/aporeto-inc/gopkt/packet/ipv4\"\n\t\"github.com/aporeto-inc/gopkt/packet/raw\"\n\t\"github.com/aporeto-inc/gopkt/packet/tcp\"\n\t\"github.com/aporeto-inc/gopkt/routing\"\n\t\"github.com/phayes/freeport\"\n\t\"github.com/vmihailenco/msgpack\"\n\t\"go.aporeto.io/trireme-lib/collector\"\n\tenforcerconstants \"go.aporeto.io/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/connection\"\n\ttpacket \"go.aporeto.io/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/tokens\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.aporeto.io/trireme-lib/utils/crypto\"\n\t\"go.uber.org/zap\"\n)\n\nfunc (d *Datapath) initiateDiagnostics(_ context.Context, contextID string, pingConfig *policy.PingConfig) error {\n\n\tif pingConfig == nil {\n\t\treturn nil\n\t}\n\n\tzap.L().Debug(\"Initiating diagnostics (syn)\")\n\n\tsrcIP, err := getSrcIP(pingConfig.IP)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to get source ip: %v\", err)\n\t}\n\n\tconn, err := dialWithMark(srcIP, pingConfig.IP)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to dial on app syn: %v\", err)\n\t}\n\tdefer conn.Close() // nolint:errcheck\n\n\titem, err := d.puFromContextID.Get(contextID)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to find context with ID %s in cache: %v\", contextID, err)\n\t}\n\n\tcontext, ok := item.(*pucontext.PUContext)\n\tif !ok {\n\t\treturn fmt.Errorf(\"invalid pu context: %v\", contextID)\n\t}\n\n\tfor i := 1; i <= pingConfig.Requests; i++ {\n\t\tfor _, ports := range pingConfig.Ports {\n\t\t\tfor dstPort := ports.Min; dstPort <= ports.Max; dstPort++ 
{\n\t\t\t\tif err := d.sendSynPacket(context, pingConfig, conn, srcIP, dstPort, i); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// sendSynPacket sends tcp syn packet to the socket. It also dispatches a report.\nfunc (d *Datapath) sendSynPacket(context *pucontext.PUContext, pingConfig *policy.PingConfig, conn net.Conn, srcIP net.IP, dstPort uint16, request int) error {\n\n\ttcpConn := connection.NewTCPConnection(context, nil)\n\ttcpConn.Secrets = d.secrets()\n\n\tclaimsHeader := claimsheader.NewClaimsHeader(\n\t\tclaimsheader.OptionPingType(pingConfig.Type),\n\t)\n\n\ttcpData, err := d.tokenAccessor.CreateSynPacketToken(context, &tcpConn.Auth, claimsHeader, d.secrets())\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to create syn token: %v\", err)\n\t}\n\n\tsrcPort, err := freeport.GetFreePort()\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to get free source port: %v\", err)\n\t}\n\n\tp, err := constructTCPPacket(srcIP, pingConfig.IP, uint16(srcPort), dstPort, tcp.Syn, tcpData)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to construct syn packet: %v\", err)\n\t}\n\n\tsessionID, err := crypto.GenerateRandomString(20)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif err := write(conn, p); err != nil {\n\t\treturn fmt.Errorf(\"unable to send syn packet: %v\", err)\n\t}\n\n\ttcpConn.PingConfig = &connection.PingConfig{\n\t\tStartTime: time.Now(),\n\t\tType:      pingConfig.Type,\n\t\tSessionID: sessionID,\n\t\tRequest:   request,\n\t}\n\n\td.sendOriginPingReport(\n\t\tsessionID,\n\t\td.agentVersion.String(),\n\t\tflowTuple(\n\t\t\ttpacket.PacketTypeApplication,\n\t\t\tsrcIP.String(),\n\t\t\tpingConfig.IP.String(),\n\t\t\tuint16(srcPort),\n\t\t\tdstPort,\n\t\t),\n\t\tcontext,\n\t\tpingConfig.Type,\n\t\tlen(tcpData),\n\t\trequest,\n\t)\n\n\ttcpConn.SetState(connection.TCPSynSend)\n\td.sourcePortConnectionCache.AddOrUpdate(\n\t\tpacketTuple(tpacket.PacketTypeApplication, srcIP.String(), pingConfig.IP.String(), 
uint16(srcPort), dstPort),\n\t\ttcpConn,\n\t)\n\n\treturn nil\n}\n\n// processDiagnosticNetSynPacket should only be called when the packet is recognized as a diagnostic syn packet.\nfunc (d *Datapath) processDiagnosticNetSynPacket(\n\tcontext *pucontext.PUContext,\n\ttcpConn *connection.TCPConnection,\n\ttcpPacket *tpacket.Packet,\n\tclaims *tokens.ConnectionClaims,\n) error {\n\n\tch := claims.H.ToClaimsHeader()\n\ttcpConn.PingConfig.Type = ch.PingType()\n\ttcpConn.SetState(connection.TCPSynReceived)\n\n\tzap.L().Debug(\"Processing diagnostic network syn packet\",\n\t\tzap.String(\"pingType\", ch.PingType().String()),\n\t)\n\n\tif ch.PingType() == claimsheader.PingTypeDefaultIdentityPassthrough {\n\t\tzap.L().Debug(\"Processing diagnostic network syn packet: defaultpassthrough\")\n\n\t\ttcpConn.PingConfig.Passthrough = true\n\t\td.appReplyConnectionTracker.AddOrUpdate(tcpPacket.L4ReverseFlowHash(), tcpConn)\n\t\treturn nil\n\t}\n\n\tconn, err := dialWithMark(tcpPacket.DestinationAddress(), tcpPacket.SourceAddress())\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to dial on net syn: %v\", err)\n\t}\n\tdefer conn.Close() // nolint:errcheck\n\n\tvar tcpData []byte\n\t// If diagnostic type is custom, we add custom payload.\n\t// Else, we add default payload.\n\tif ch.PingType() == claimsheader.PingTypeCustomIdentity {\n\t\tci := &customIdentity{\n\t\t\tAgentVersion:         d.agentVersion.String(),\n\t\t\tTransmitterID:        context.ManagementID(),\n\t\t\tTransmitterNamespace: context.ManagementNamespace(),\n\t\t\tFlowTuple: flowTuple(\n\t\t\t\ttpacket.PacketTypeApplication,\n\t\t\t\ttcpPacket.SourceAddress().String(),\n\t\t\t\ttcpPacket.DestinationAddress().String(),\n\t\t\t\ttcpPacket.SourcePort(),\n\t\t\t\ttcpPacket.DestPort(),\n\t\t\t),\n\t\t}\n\t\ttcpData, err = ci.encode()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t} else {\n\t\ttcpData, err = d.tokenAccessor.CreateSynAckPacketToken(context, &tcpConn.Auth, ch, d.secrets())\n\t\tif err != nil 
{\n\t\t\treturn fmt.Errorf(\"unable to create default synack token: %v\", err)\n\t\t}\n\t}\n\n\tp, err := constructTCPPacket(\n\t\ttcpPacket.DestinationAddress(),\n\t\ttcpPacket.SourceAddress(),\n\t\ttcpPacket.DestPort(),\n\t\ttcpPacket.SourcePort(),\n\t\ttcp.Syn|tcp.Ack,\n\t\ttcpData,\n\t)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to construct synack packet: %v\", err)\n\t}\n\n\tif err := write(conn, p); err != nil {\n\t\treturn fmt.Errorf(\"unable to send synack packet: %v\", err)\n\t}\n\n\ttcpConn.SetState(connection.TCPSynAckSend)\n\treturn nil\n}\n\n// processDiagnosticNetSynAckPacket should only be called when the packet is recognized as a diagnostic synack packet.\nfunc (d *Datapath) processDiagnosticNetSynAckPacket(\n\tcontext *pucontext.PUContext,\n\ttcpConn *connection.TCPConnection,\n\ttcpPacket *tpacket.Packet,\n\tclaims *tokens.ConnectionClaims,\n\text bool,\n\tcustom bool,\n) error {\n\tzap.L().Debug(\"Processing diagnostic network synack packet\",\n\t\tzap.Bool(\"externalNetwork\", ext),\n\t\tzap.Bool(\"customPayload\", custom),\n\t\tzap.String(\"pingType\", tcpConn.PingConfig.Type.String()),\n\t)\n\n\tif tcpConn.GetState() == connection.TCPSynAckReceived {\n\t\tzap.L().Debug(\"Ignoring duplicate synack packets\")\n\t\treturn nil\n\t}\n\n\treceiveTime := time.Since(tcpConn.PingConfig.StartTime)\n\ttcpConn.SetState(connection.TCPSynAckReceived)\n\n\t// Synack from externalnetwork.\n\tif ext {\n\t\ttcpConn.PingConfig.Passthrough = true\n\t\td.sendReplyPingReport(&customIdentity{}, tcpConn, context, receiveTime.String(), len(tcpPacket.ReadTCPData()))\n\t\treturn nil\n\t}\n\n\t// Synack from an endpoint with custom identity enabled.\n\tif custom {\n\t\tci := &customIdentity{}\n\t\tif err := ci.decode(tcpPacket.ReadTCPData()); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\td.sendReplyPingReport(ci, tcpConn, context, receiveTime.String(), len(tcpPacket.ReadTCPData()))\n\t\treturn nil\n\t}\n\n\ttxtID, ok := 
claims.T.Get(enforcerconstants.TransmitterLabel)\n\tif !ok {\n\t\treturn fmt.Errorf(\"missing transmitter label\")\n\t}\n\n\tci := &customIdentity{\n\t\tTransmitterID: txtID,\n\t}\n\n\td.sendReplyPingReport(ci, tcpConn, context, receiveTime.String(), len(tcpPacket.ReadTCPData()))\n\n\tif tcpConn.PingConfig.Type == claimsheader.PingTypeDefaultIdentityPassthrough {\n\t\tzap.L().Debug(\"Processing diagnostic network synack packet: defaultpassthrough\")\n\t\ttcpConn.PingConfig.Passthrough = true\n\t\treturn nil\n\t}\n\n\treturn nil\n}\n\n// constructTCPPacket constructs a valid tcp packet that can be sent on wire.\nfunc constructTCPPacket(srcIP, dstIP net.IP, srcPort, dstPort uint16, flag tcp.Flags, tcpData []byte) ([]byte, error) {\n\n\t// pseudo header.\n\t// NOTE: Used only for computing checksum.\n\tipPacket := ipv4.Make()\n\tipPacket.SrcAddr = srcIP\n\tipPacket.DstAddr = dstIP\n\tipPacket.Protocol = ipv4.TCP\n\n\t// tcp.\n\ttcpPacket := tcp.Make()\n\ttcpPacket.SrcPort = srcPort\n\ttcpPacket.DstPort = dstPort\n\ttcpPacket.Flags = flag\n\ttcpPacket.Seq = rand.Uint32()\n\ttcpPacket.WindowSize = 0xAAAA\n\ttcpPacket.Options = []tcp.Option{\n\t\t{\n\t\t\tType: tcp.MSS,\n\t\t\tLen:  4,\n\t\t\tData: []byte{0x05, 0x8C},\n\t\t}, {\n\t\t\tType: 34, // tfo\n\t\t\tLen:  enforcerconstants.TCPAuthenticationOptionBaseLen,\n\t\t\tData: make([]byte, 2),\n\t\t},\n\t}\n\ttcpPacket.DataOff = uint8(7) // 5 (header size) + 2 * (4 byte options)\n\n\t// payload.\n\tpayload := raw.Make()\n\tpayload.Data = tcpData\n\n\ttcpPacket.SetPayload(payload)  // nolint:errcheck\n\tipPacket.SetPayload(tcpPacket) // nolint:errcheck\n\n\t// pack the layers together.\n\tbuf, err := layers.Pack(tcpPacket, payload)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to encode packet to wire format: %v\", err)\n\t}\n\n\treturn buf, nil\n}\n\n// getSrcIP returns the interface ip that can reach the destination.\nfunc getSrcIP(dstIP net.IP) (net.IP, error) {\n\n\troute, err := routing.RouteTo(dstIP)\n\tif 
err != nil || route == nil {\n\t\treturn nil, fmt.Errorf(\"no route found for destination %s: %v\", dstIP.String(), err)\n\t}\n\n\tip, err := route.GetIfaceIPv4Addr()\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to get interface ip address: %v\", err)\n\t}\n\n\treturn ip, nil\n}\n\n// flowTuple returns the tuple based on the stage in format <sip:dip:spt:dpt> or <dip:sip:dpt:spt>\nfunc flowTuple(stage uint64, srcIP, dstIP string, srcPort, dstPort uint16) string {\n\n\tif stage == tpacket.PacketTypeNetwork {\n\t\treturn fmt.Sprintf(\"%s:%s:%s:%s\", dstIP, srcIP, strconv.Itoa(int(dstPort)), strconv.Itoa(int(srcPort)))\n\t}\n\n\treturn fmt.Sprintf(\"%s:%s:%s:%s\", srcIP, dstIP, strconv.Itoa(int(srcPort)), strconv.Itoa(int(dstPort)))\n}\n\n// packetTuple returns the tuple based on the stage in format <sip:spt> or <dip:dpt>\nfunc packetTuple(stage uint64, srcIP, dstIP string, srcPort, dstPort uint16) string {\n\n\tif stage == tpacket.PacketTypeNetwork {\n\t\treturn dstIP + \":\" + strconv.Itoa(int(dstPort))\n\t}\n\n\treturn srcIP + \":\" + strconv.Itoa(int(srcPort))\n}\n\n// dialWithMark opens raw ipv4:tcp socket and connects to the remote network.\nfunc dialWithMark(srcIP, dstIP net.IP) (net.Conn, error) {\n\n\td := net.Dialer{\n\t\tTimeout:   5 * time.Second,\n\t\tKeepAlive: -1, // keepalive disabled.\n\t\tLocalAddr: &net.IPAddr{IP: srcIP},\n\t\tControl: func(_, _ string, c syscall.RawConn) error {\n\t\t\treturn c.Control(func(fd uintptr) {\n\t\t\t\tif err := syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, 0x24, 0x40); err != nil {\n\t\t\t\t\tzap.L().Error(\"unable to assign mark\", zap.Error(err))\n\t\t\t\t}\n\t\t\t})\n\t\t},\n\t}\n\n\treturn d.Dial(\"ip4:tcp\", dstIP.String())\n}\n\n// write writes the given data to the conn.\nfunc write(conn net.Conn, data []byte) error {\n\n\tn, err := conn.Write(data)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif n != len(data) {\n\t\treturn fmt.Errorf(\"partial data written, total: %v, written: %v\", len(data), 
n)\n\t}\n\n\treturn nil\n}\n\n// sendOriginPingReport sends a report on syn sent state.\nfunc (d *Datapath) sendOriginPingReport(\n\tsessionID,\n\tagentVersion,\n\tflowTuple string,\n\tcontext *pucontext.PUContext,\n\tpingType claimsheader.PingType,\n\tpayloadSize,\n\trequest int,\n) {\n\td.sendPingReport(\n\t\tsessionID,\n\t\tagentVersion,\n\t\tflowTuple,\n\t\t\"\",\n\t\tcontext.ManagementID(),\n\t\tcontext.ManagementNamespace(),\n\t\t\"\",\n\t\t\"\",\n\t\tpingType,\n\t\tcollector.Origin,\n\t\tpayloadSize,\n\t\trequest,\n\t)\n}\n\n// sendReplyPingReport sends a report on synack recv state.\nfunc (d *Datapath) sendReplyPingReport(\n\tci *customIdentity,\n\ttcpConn *connection.TCPConnection,\n\tcontext *pucontext.PUContext,\n\trtt string,\n\tpayloadSize int,\n) {\n\td.sendPingReport(\n\t\ttcpConn.PingConfig.SessionID,\n\t\tci.AgentVersion,\n\t\tci.FlowTuple,\n\t\trtt,\n\t\tcontext.ManagementID(),\n\t\tcontext.ManagementNamespace(),\n\t\tci.TransmitterID,\n\t\tci.TransmitterNamespace,\n\t\ttcpConn.PingConfig.Type,\n\t\tcollector.Reply,\n\t\tpayloadSize,\n\t\ttcpConn.PingConfig.Request,\n\t)\n}\n\nfunc (d *Datapath) sendPingReport(\n\tsessionID,\n\tagentVersion,\n\tflowTuple,\n\trtt,\n\tsrcID,\n\tsrcNS,\n\tdstID,\n\tdstNS string,\n\tPingType claimsheader.PingType,\n\tstage collector.Stage,\n\tpayloadSize,\n\trequest int,\n) {\n\n\treport := &collector.PingReport{\n\t\tAgentVersion:         agentVersion,\n\t\tFlowTuple:            flowTuple,\n\t\tLatency:              rtt,\n\t\tPayloadSize:          payloadSize,\n\t\tType:                 PingType,\n\t\tStage:                stage,\n\t\tSourceID:             srcID,\n\t\tSourceNamespace:      srcNS,\n\t\tDestinationNamespace: dstNS,\n\t\tDestinationID:        dstID,\n\t\tSessionID:            sessionID,\n\t\tProtocol:             tpacket.IPProtocolTCP,\n\t\tServiceType:          \"L3\",\n\t\tRequest:              request,\n\t}\n\n\td.collector.CollectPingEvent(report)\n}\n\n// customIdentity holds data that needs to be passed on wire.\ntype customIdentity struct {\n\tAgentVersion         string\n\tTransmitterID        string\n\tTransmitterNamespace string\n\tFlowTuple            string\n}\n\n// encode returns the encoded bytes of c, returns error on nil.\nfunc (c *customIdentity) encode() ([]byte, error) {\n\n\tif c == nil {\n\t\treturn nil, fmt.Errorf(\"cannot encode nil custom identity\")\n\t}\n\n\tb, err := msgpack.Marshal(c)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to encode custom identity: %v\", err)\n\t}\n\n\treturn b, nil\n}\n\n// decode decodes b into c, returns error on nil.\nfunc (c *customIdentity) decode(b []byte) error {\n\n\tif c == nil {\n\t\treturn fmt.Errorf(\"cannot decode nil custom identity\")\n\t}\n\n\tif err := msgpack.Unmarshal(b, c); err != nil {\n\t\treturn fmt.Errorf(\"unable to decode custom identity: %v\", err)\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/interfaces.go",
    "content": "package nfqdatapath\n\nimport (\n\t\"net\"\n\n\t\"github.com/ghedo/go.pkt/packet\"\n)\n\n// ContextProcessor is an interface to provide context checks\ntype ContextProcessor interface {\n\tDoesContextExist(contextID string) bool\n\tIsContextServer(contextID string, backendip string) bool\n}\n\n// RuleProcessor is an interface to access rules\ntype RuleProcessor interface {\n\tCheckRejectRecvRules(contextID string) (int, bool)\n\tCheckAcceptRecvRules(contextID string) (int, bool)\n\tCheckRejectTxRules(contextID string) (int, bool)\n\tCheckAcceptTxRules(contextID string) (int, bool)\n}\n\n// Accessor is an interface for datapth to access contexts/rules/tokens\ntype Accessor interface {\n\tContextProcessor\n\tRuleProcessor\n}\n\n// PingConn is an interface to send ping packets/data to network.\n// Also implements io.Writer interface.\ntype PingConn interface {\n\tConstructWirePacket(srcIP, dstIP net.IP, transport packet.Packet, payload packet.Packet) ([]byte, error)\n\tWrite(data []byte) (int, error)\n\tClose() error\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/nflog/nflog_common.go",
    "content": "package nflog\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n)\n\n// NFLogger provides an interface for NFLog\ntype NFLogger interface {\n\tRun(ctx context.Context)\n}\n\n// GetPUContextFunc provides PU information given the id\ntype GetPUContextFunc func(hash string) (*pucontext.PUContext, error)\n\nfunc recordCounters(protocol uint8, dstport uint16, srcport uint16, pu *pucontext.PUContext, puIsSource bool) {\n\tswitch protocol {\n\tcase packet.IPProtocolTCP:\n\t\tpu.Counters().IncrementCounter(counters.ErrDroppedTCPPackets)\n\tcase packet.IPProtocolUDP:\n\t\tpu.Counters().IncrementCounter(counters.ErrDroppedUDPPackets)\n\t\tif puIsSource {\n\t\t\tswitch dstport {\n\t\t\tcase 53:\n\t\t\t\tpu.Counters().IncrementCounter(counters.ErrDroppedDNSPackets)\n\t\t\tcase 67, 68:\n\t\t\t\tpu.Counters().IncrementCounter(counters.ErrDroppedDHCPPackets)\n\t\t\tcase 123:\n\t\t\t\tpu.Counters().IncrementCounter(counters.ErrDroppedNTPPackets)\n\t\t\t}\n\t\t} else {\n\t\t\tswitch srcport {\n\t\t\tcase 53:\n\t\t\t\tpu.Counters().IncrementCounter(counters.ErrDroppedDNSPackets)\n\t\t\tcase 67, 68:\n\t\t\t\tpu.Counters().IncrementCounter(counters.ErrDroppedDHCPPackets)\n\t\t\tcase 123:\n\t\t\t\tpu.Counters().IncrementCounter(counters.ErrDroppedNTPPackets)\n\t\t\t}\n\t\t}\n\n\tcase packet.IPProtocolICMP:\n\t\tpu.Counters().IncrementCounter(counters.ErrDroppedICMPPackets)\n\n\t}\n}\n\nfunc recordDroppedPacket(payload []byte, protocol uint8, srcIP, dstIP net.IP, srcPort, dstPort uint16, pu *pucontext.PUContext, puIsSource bool) (*collector.PacketReport, error) 
{\n\n\treport := &collector.PacketReport{}\n\n\treport.PUID = pu.ManagementID()\n\treport.Namespace = pu.ManagementNamespace()\n\tipPacket, err := packet.New(packet.PacketTypeNetwork, payload, \"\", false)\n\tif err == nil {\n\t\treport.Length = int(ipPacket.GetIPLength())\n\t\treport.PacketID, _ = strconv.Atoi(ipPacket.ID())\n\n\t} else {\n\t\tzap.L().Debug(\"payload not valid\", zap.Error(err))\n\t\treturn nil, err\n\t}\n\trecordCounters(protocol, dstPort, srcPort, pu, puIsSource)\n\tif protocol == packet.IPProtocolTCP || protocol == packet.IPProtocolUDP {\n\t\treport.SourcePort = int(srcPort)\n\t\treport.DestinationPort = int(dstPort)\n\t}\n\tif protocol == packet.IPProtocolTCP {\n\t\treport.TCPFlags = int(ipPacket.GetTCPFlags())\n\t}\n\treport.Protocol = int(protocol)\n\treport.DestinationIP = dstIP.String()\n\treport.SourceIP = srcIP.String()\n\treport.TriremePacket = false\n\treport.DropReason = collector.PacketDrop\n\n\tif payload == nil {\n\t\treport.Payload = []byte{}\n\t\treturn report, nil\n\t}\n\tif len(payload) <= 64 {\n\t\treport.Payload = make([]byte, len(payload))\n\t\tcopy(report.Payload, payload)\n\n\t} else {\n\t\treport.Payload = make([]byte, 64)\n\t\tcopy(report.Payload, payload[0:64])\n\t}\n\n\treturn report, nil\n}\n\nfunc recordFromNFLogData(payload []byte, prefix string, protocol uint8, srcIP, dstIP net.IP, srcPort, dstPort uint16, getPUContext GetPUContextFunc, puIsSource bool) (*collector.FlowRecord, *collector.PacketReport, error) {\n\n\tvar packetReport *collector.PacketReport\n\tvar err error\n\n\tvar hashID string\n\tvar policyID string\n\tvar extNetworkID string\n\tvar ruleName string\n\tvar encodedAction string\n\n\tparts := strings.Split(prefix, \":\")\n\tswitch len(parts) {\n\tcase 4:\n\t\t// hashID:policyID:extNetworkID:action\n\t\thashID, policyID, extNetworkID, encodedAction = parts[0], parts[1], parts[2], parts[3]\n\tcase 5:\n\t\t// hashID:policyID:extNetworkID:ruleName:action\n\t\thashID, policyID, extNetworkID, ruleName, 
encodedAction = parts[0], parts[1], parts[2], parts[3], parts[4]\n\tdefault:\n\t\treturn nil, nil, fmt.Errorf(\"nflog: prefix doesn't contain sufficient information: %s\", prefix)\n\t}\n\n\tpu, err := getPUContext(hashID)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\t// If we have a rule name, then look up the long version of logging prefix\n\tif len(ruleName) > 0 {\n\t\trealPrefix, ok := pu.LookupLogPrefix(policyID + \":\" + extNetworkID + \":\" + ruleName)\n\t\tif !ok {\n\t\t\treturn nil, nil, fmt.Errorf(\"nflog: prefix not found in pucontext mapping: %s\", prefix)\n\t\t}\n\t\tparts = strings.SplitN(realPrefix, \":\", 3)\n\t\tif len(parts) != 3 {\n\t\t\treturn nil, nil, fmt.Errorf(\"nflog: realPrefix doesn't contain sufficient information: %s\", realPrefix)\n\t\t}\n\t\tpolicyID, extNetworkID, ruleName = parts[0], parts[1], parts[2]\n\t}\n\n\tif encodedAction == \"10\" {\n\t\tpacketReport, err = recordDroppedPacket(payload, protocol, srcIP, dstIP, srcPort, dstPort, pu, puIsSource)\n\t\treturn nil, packetReport, err\n\t}\n\n\taction, observedActionType, err := policy.EncodedStringToAction(encodedAction)\n\tif err != nil {\n\t\treturn nil, packetReport, fmt.Errorf(\"nflog: unable to decode action for context id: %s (%s)\", pu.ID(), encodedAction)\n\t}\n\n\tdropReason := \"\"\n\tif action.Rejected() {\n\t\tdropReason = collector.PolicyDrop\n\t}\n\n\t// point fix for now.\n\tvar destination collector.EndPoint\n\tif protocol == packet.IPProtocolUDP || protocol == packet.IPProtocolTCP {\n\t\tdestination = collector.EndPoint{\n\t\t\tIP:   dstIP.String(),\n\t\t\tPort: dstPort,\n\t\t}\n\t} else {\n\t\tdestination = collector.EndPoint{\n\t\t\tIP: dstIP.String(),\n\t\t}\n\t}\n\n\trecord := &collector.FlowRecord{\n\t\tContextID: pu.ID(),\n\t\tSource: collector.EndPoint{\n\t\t\tIP: srcIP.String(),\n\t\t},\n\t\tDestination: destination,\n\t\tDropReason:  dropReason,\n\t\tPolicyID:    policyID,\n\t\tTags:        pu.Annotations().GetSlice(),\n\t\tAction:      action | 
policy.Log, // Add the logging flag back\n\t\tL4Protocol:  protocol,\n\t\tNamespace:   pu.ManagementNamespace(),\n\t\tCount:       1,\n\t\tRuleName:    ruleName,\n\t}\n\n\tif action.Observed() {\n\t\trecord.ObservedAction = action\n\t\trecord.ObservedPolicyID = policyID\n\t\trecord.ObservedActionType = observedActionType\n\t}\n\n\tif puIsSource {\n\t\trecord.Source.Type = collector.EndPointTypePU\n\t\trecord.Source.ID = pu.ManagementID()\n\t\trecord.Destination.Type = collector.EndPointTypeExternalIP\n\t\trecord.Destination.ID = extNetworkID\n\t} else {\n\t\trecord.Source.Type = collector.EndPointTypeExternalIP\n\t\trecord.Source.ID = extNetworkID\n\t\trecord.Destination.Type = collector.EndPointTypePU\n\t\trecord.Destination.ID = pu.ManagementID()\n\t}\n\n\treturn record, packetReport, nil\n}\n\nfunc handleFlowReport(flowReportCache cache.DataStore, eventCollector collector.EventCollector, record *collector.FlowRecord, puIsSource bool) {\n\n\tif record == nil {\n\t\treturn\n\t}\n\n\tuniqueKey := fmt.Sprintf(\"%d:%s:%d:%s:%d\",\n\t\trecord.L4Protocol, record.Source.IP, record.Source.Port, record.Destination.IP, record.Destination.Port)\n\n\t// If the flow record is ObserveContinue\n\tif record.ObservedActionType.ObserveContinue() {\n\n\t\t// If another observed continue policy is reported, then we ignore it.\n\t\tif _, err := flowReportCache.Get(uniqueKey); err == nil {\n\t\t\treturn\n\t\t}\n\n\t\t// Add the observed policy report to the cache\n\t\terr := flowReportCache.Add(uniqueKey, record)\n\t\tif err != nil {\n\t\t\teventCollector.CollectFlowEvent(record)\n\t\t\tzap.L().Error(\"handleFlowReport: unable to add flow record to cache\", zap.Error(err))\n\t\t}\n\t\treturn\n\t}\n\n\t// See if there was an ObserveContinue policy\n\tvalue, err := flowReportCache.Get(uniqueKey)\n\tif err == nil {\n\t\treport := value.(*collector.FlowRecord)\n\t\trecord.ObservedAction = report.ObservedAction\n\t\trecord.ObservedPolicyID = 
report.ObservedPolicyID\n\t\trecord.ObservedActionType = report.ObservedActionType\n\t\tif puIsSource {\n\t\t\trecord.Destination.ID = report.Destination.ID\n\t\t\trecord.Destination.Type = report.Destination.Type\n\t\t} else {\n\t\t\trecord.Source.ID = report.Source.ID\n\t\t\trecord.Source.Type = report.Source.Type\n\t\t}\n\t\terr = flowReportCache.Remove(uniqueKey)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"handleFlowReport: failed to remove flow from cache\", zap.Error(err))\n\t\t}\n\t}\n\teventCollector.CollectFlowEvent(record)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/nflog/nflog_darwin.go",
    "content": "// +build darwin\n\npackage nflog\n\nimport (\n\t\"context\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n)\n\n// nfLog TODO\ntype nfLog struct {\n\tgetPUContext GetPUContextFunc // not called\n}\n\n// NewNFLogger provides an NFLog instance\nfunc NewNFLogger(ipv4groupSource, ipv4groupDest uint16, getPUContext GetPUContextFunc, collector collector.EventCollector) NFLogger {\n\treturn &nfLog{getPUContext: getPUContext}\n}\n\nfunc (n *nfLog) Run(ctx context.Context) {}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/nflog/nflog_linux.go",
    "content": "// +build linux\n\npackage nflog\n\nimport (\n\t\"context\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.aporeto.io/netlink-go/nflog\"\n\t\"go.uber.org/zap\"\n)\n\ntype nfLog struct {\n\tgetPUContext    GetPUContextFunc\n\tipv4groupSource uint16\n\tipv4groupDest   uint16\n\tcollector       collector.EventCollector\n\tsrcNflogHandle  nflog.NFLog\n\tdstNflogHandle  nflog.NFLog\n\tflowReportCache cache.DataStore\n\tsync.Mutex\n}\n\n// NewNFLogger provides an NFLog instance\nfunc NewNFLogger(ipv4groupSource, ipv4groupDest uint16, getPUContext GetPUContextFunc, collector collector.EventCollector) NFLogger {\n\tnfLog := &nfLog{\n\t\tipv4groupSource: ipv4groupSource,\n\t\tipv4groupDest:   ipv4groupDest,\n\t\tcollector:       collector,\n\t\tgetPUContext:    getPUContext,\n\t}\n\tnfLog.flowReportCache = cache.NewCacheWithExpirationNotifier(\"flowReportCache\", time.Second*5, nfLog.logExpirationNotifier)\n\treturn nfLog\n}\n\n// Run runs the Nf Logger\nfunc (a *nfLog) Run(ctx context.Context) {\n\ta.Lock()\n\ta.srcNflogHandle, _ = nflog.BindAndListenForLogs([]uint16{a.ipv4groupSource}, 64, a.sourceNFLogsHanlder, a.nflogErrorHandler)\n\ta.dstNflogHandle, _ = nflog.BindAndListenForLogs([]uint16{a.ipv4groupDest}, 64, a.destNFLogsHandler, a.nflogErrorHandler)\n\ta.Unlock()\n\n\tgo func() {\n\t\t<-ctx.Done()\n\t\ta.Lock()\n\t\ta.srcNflogHandle.NFlogClose()\n\t\ta.dstNflogHandle.NFlogClose()\n\t\ta.Unlock()\n\n\t}()\n}\n\nfunc (a *nfLog) sourceNFLogsHanlder(buf *nflog.NfPacket, _ interface{}) {\n\n\trecord, packetEvent, err := a.recordFromNFLogBuffer(buf, false)\n\tif err != nil {\n\t\tzap.L().Error(\"sourceNFLogsHanlder: create flow record\", zap.Error(err))\n\t\treturn\n\t}\n\n\thandleFlowReport(a.flowReportCache, a.collector, record, false)\n\n\tif packetEvent != nil 
{\n\t\ta.collector.CollectPacketEvent(packetEvent)\n\t}\n}\n\nfunc (a *nfLog) destNFLogsHandler(buf *nflog.NfPacket, _ interface{}) {\n\n\trecord, packetEvent, err := a.recordFromNFLogBuffer(buf, true)\n\tif err != nil {\n\t\tzap.L().Error(\"destNFLogsHandler: create flow record\", zap.Error(err))\n\t\treturn\n\t}\n\n\thandleFlowReport(a.flowReportCache, a.collector, record, true)\n\n\tif packetEvent != nil {\n\t\ta.collector.CollectPacketEvent(packetEvent)\n\t}\n}\n\nfunc (a *nfLog) nflogErrorHandler(err error) {\n\tcounters.IncrementCounter(counters.ErrNfLogError)\n\tzap.L().Debug(\"Error while processing nflog packet\", zap.Error(err))\n}\n\nfunc (a *nfLog) recordFromNFLogBuffer(buf *nflog.NfPacket, puIsSource bool) (*collector.FlowRecord, *collector.PacketReport, error) {\n\treturn recordFromNFLogData(buf.Payload, buf.Prefix, buf.Protocol, buf.SrcIP, buf.DstIP, buf.SrcPort, buf.DstPort, a.getPUContext, puIsSource)\n}\n\nfunc (a *nfLog) logExpirationNotifier(_ interface{}, item interface{}) {\n\tif item != nil {\n\t\t// Basically we had an observed flow report that didn't get reported yet.\n\t\trecord := item.(*collector.FlowRecord)\n\t\ta.collector.CollectFlowEvent(record)\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/nflog/nflog_test.go",
    "content": "// +build linux\n\npackage nflog\n\nimport (\n\t\"errors\"\n\t\"strconv\"\n\t\"testing\"\n\t\"time\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/packetgen\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/netlink-go/nflog\"\n)\n\nfunc TestRecordDroppedPacket(t *testing.T) {\n\tConvey(\"I report a dropped packet\", t, func() {\n\t\tpuID := \"SomeProcessingUnitId\"\n\t\tpuInfo := policy.NewPUInfo(puID, \"/ns\", common.ContainerPU)\n\n\t\tpu, err := pucontext.NewPU(\"contextID\", puInfo, nil, 5*time.Second)\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"I report a packet with length less than 64 bytes\", func() {\n\t\t\t//\tpacketbuf := make([]byte, 40)\n\t\t\tPacketFlow := packetgen.NewTemplateFlow()\n\n\t\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tpkt := PacketFlow.GetNthPacket(0)\n\t\t\tpayloadBuf, _ := pkt.ToBytes()\n\t\t\tnfPacket := &nflog.NfPacket{\n\t\t\t\tPayload: payloadBuf,\n\t\t\t}\n\t\t\tipPacket, err := packet.New(packet.PacketTypeNetwork, nfPacket.Payload, \"\", false)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tnfPacket.Protocol = ipPacket.IPProto()\n\t\t\treport, err := recordDroppedPacket(nfPacket.Payload, nfPacket.Protocol, nfPacket.SrcIP, nfPacket.DstIP, nfPacket.SrcPort, nfPacket.DstPort, pu, true)\n\t\t\tSo(report.TriremePacket, ShouldBeFalse)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(report.Payload), ShouldEqual, len(nfPacket.Payload))\n\n\t\t})\n\t\tConvey(\"I report a packet with length greater than 64 bytes\", func() {\n\t\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t\t_, err := 
PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tpkt := PacketFlow.GetAckPackets().GetNthPacket(1)\n\t\t\terr = pkt.NewTCPPayload(\"abcdedghijklmnopqrstuvwxyz\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tpayloadBuf, err := pkt.ToBytes()\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tnfPacket := &nflog.NfPacket{\n\t\t\t\tPayload: payloadBuf,\n\t\t\t}\n\n\t\t\tipPacket, err := packet.New(packet.PacketTypeNetwork, nfPacket.Payload, \"\", false)\n\t\t\tnfPacket.Protocol = ipPacket.IPProto()\n\t\t\tnfPacket.SrcIP = ipPacket.SourceAddress()\n\t\t\tnfPacket.DstIP = ipPacket.DestinationAddress()\n\t\t\tSo(err, ShouldBeNil)\n\t\t\treport, err := recordDroppedPacket(nfPacket.Payload, nfPacket.Protocol, nfPacket.SrcIP, nfPacket.DstIP, nfPacket.SrcPort, nfPacket.DstPort, pu, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(report.TriremePacket, ShouldBeFalse)\n\t\t\tSo(report.Protocol, ShouldEqual, int(packet.IPProtocolTCP))\n\t\t\tSo(len(report.Payload), ShouldEqual, 64)\n\t\t\tid, _ := strconv.Atoi(ipPacket.ID())\n\t\t\tSo(report.PacketID, ShouldEqual, id)\n\t\t\tSo(report.SourceIP, ShouldEqual, ipPacket.SourceAddress().String())\n\t\t\tSo(report.DestinationIP, ShouldEqual, ipPacket.DestinationAddress().String())\n\n\t\t\tSo(report.Payload, ShouldResemble, payloadBuf[:64])\n\t\t})\n\n\t})\n}\n\nfunc dummyPUContext(string) (*pucontext.PUContext, error) {\n\treturn nil, errors.New(\"Unknown Context\")\n}\nfunc TestRecordFromNFLogBuffer(t *testing.T) {\n\t// puID := \"SomeProcessingUnitId\"\n\t// puInfo := policy.NewPUInfo(puID, \"/ns\", common.ContainerPU)\n\t// pu, err := pucontext.NewPU(\"contextID\", puInfo, 5*time.Second)\n\t// So(err, ShouldBeNil)\n\tnflogger := NewNFLogger(10, 11, nil, nil)\n\tConvey(\"I get a nfpacket from nflog library\", t, func() {\n\t\tConvey(\"If Packet does not contain valid format prefix\", func() {\n\t\t\tPacketFlow := packetgen.NewTemplateFlow()\n\n\t\t\t_, err := 
PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tpkt := PacketFlow.GetNthPacket(0)\n\t\t\tpayloadBuf, _ := pkt.ToBytes()\n\t\t\tnfPacket := &nflog.NfPacket{\n\t\t\t\tPayload: payloadBuf,\n\t\t\t}\n\t\t\tnfPacket.Prefix = \"p1:p2\"\n\t\t\tflowreport, packetreport, err := nflogger.(*nfLog).recordFromNFLogBuffer(nfPacket, false)\n\t\t\tSo(flowreport, ShouldBeNil)\n\t\t\tSo(packetreport, ShouldBeNil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t\tConvey(\"nfPacket with hashID that is not for a valid PU\", func() {\n\n\t\t\tnflogger.(*nfLog).getPUContext = dummyPUContext\n\t\t\tPacketFlow := packetgen.NewTemplateFlow()\n\n\t\t\t_, err := PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tpkt := PacketFlow.GetNthPacket(0)\n\t\t\tpayloadBuf, _ := pkt.ToBytes()\n\t\t\tnfPacket := &nflog.NfPacket{\n\t\t\t\tPayload: payloadBuf,\n\t\t\t}\n\t\t\tnfPacket.Prefix = \"p1:p2:p4:p5\"\n\t\t\tflowreport, packetreport, err := nflogger.(*nfLog).recordFromNFLogBuffer(nfPacket, false)\n\t\t\tSo(flowreport, ShouldBeNil)\n\t\t\tSo(packetreport, ShouldBeNil)\n\t\t\tSo(err, ShouldNotBeNil)\n\n\t\t})\n\n\t})\n}\n\nfunc Test_RecordCounters(t *testing.T) {\n\tConvey(\"I report a dropped packet\", t, func() {\n\t\tpuID := \"SomeProcessingUnitId\"\n\t\tpuInfo := policy.NewPUInfo(puID, \"/ns\", common.ContainerPU)\n\t\tpu, err := pucontext.NewPU(\"contextID\", puInfo, nil, 5*time.Second)\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"I call record counters\", func() {\n\t\t\trecordCounters(6, 80, 2333, pu, true)\n\t\t\tSo(pu.Counters().GetErrorCounters()[counters.ErrDroppedTCPPackets], ShouldEqual, 1)\n\n\t\t\trecordCounters(17, 80, 2333, pu, true)\n\t\t\tc := pu.Counters().GetErrorCounters()\n\t\t\tSo(c[counters.ErrDroppedUDPPackets], ShouldEqual, 1)\n\t\t\trecordCounters(17, 53, 2333, pu, true)\n\t\t\tc = pu.Counters().GetErrorCounters()\n\t\t\tSo(c[counters.ErrDroppedUDPPackets], ShouldEqual, 
1)\n\t\t\tSo(c[counters.ErrDroppedDNSPackets], ShouldEqual, 1)\n\t\t\trecordCounters(17, 67, 2333, pu, true)\n\t\t\tc = pu.Counters().GetErrorCounters()\n\t\t\tSo(c[counters.ErrDroppedUDPPackets], ShouldEqual, 1)\n\t\t\tSo(c[counters.ErrDroppedDHCPPackets], ShouldEqual, 1)\n\t\t\trecordCounters(17, 68, 2333, pu, true)\n\t\t\tc = pu.Counters().GetErrorCounters()\n\t\t\tSo(c[counters.ErrDroppedUDPPackets], ShouldEqual, 1)\n\t\t\tSo(c[counters.ErrDroppedDHCPPackets], ShouldEqual, 1)\n\t\t\trecordCounters(17, 123, 2333, pu, true)\n\t\t\tc = pu.Counters().GetErrorCounters()\n\t\t\tSo(c[counters.ErrDroppedUDPPackets], ShouldEqual, 1)\n\t\t\tSo(c[counters.ErrDroppedNTPPackets], ShouldEqual, 1)\n\n\t\t\trecordCounters(17, 2333, 53, pu, false)\n\t\t\tc = pu.Counters().GetErrorCounters()\n\t\t\tSo(c[counters.ErrDroppedUDPPackets], ShouldEqual, 1)\n\t\t\tSo(c[counters.ErrDroppedDNSPackets], ShouldEqual, 1)\n\t\t\trecordCounters(17, 2333, 67, pu, false)\n\t\t\trecordCounters(17, 2333, 67, pu, false)\n\t\t\tc = pu.Counters().GetErrorCounters()\n\t\t\tSo(c[counters.ErrDroppedUDPPackets], ShouldEqual, 2)\n\t\t\tSo(c[counters.ErrDroppedDHCPPackets], ShouldEqual, 2)\n\t\t\trecordCounters(17, 2333, 68, pu, false)\n\t\t\tc = pu.Counters().GetErrorCounters()\n\t\t\tSo(c[counters.ErrDroppedUDPPackets], ShouldEqual, 1)\n\t\t\tSo(c[counters.ErrDroppedDHCPPackets], ShouldEqual, 1)\n\t\t\trecordCounters(17, 2333, 123, pu, false)\n\t\t\tc = pu.Counters().GetErrorCounters()\n\t\t\tSo(c[counters.ErrDroppedUDPPackets], ShouldEqual, 1)\n\t\t\tSo(c[counters.ErrDroppedNTPPackets], ShouldEqual, 1)\n\n\t\t\trecordCounters(1, 80, 2333, pu, true)\n\t\t\tSo(pu.Counters().GetErrorCounters()[counters.ErrDroppedICMPPackets], ShouldEqual, 1)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/nflog/nflog_windows.go",
    "content": "// +build windows\n\npackage nflog\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n\t\"go.uber.org/zap\"\n)\n\n// NfLogWindows has nflog data for windows\ntype NfLogWindows struct { // nolint:golint // ignore type name stutters\n\tgetPUContext    GetPUContextFunc\n\tipv4groupSource uint16\n\tipv4groupDest   uint16\n\tcollector       collector.EventCollector\n\tflowReportCache cache.DataStore\n}\n\n// NewNFLogger provides an NFLog instance\nfunc NewNFLogger(ipv4groupSource, ipv4groupDest uint16, getPUContext GetPUContextFunc, collector collector.EventCollector) NFLogger {\n\tnfLog := &NfLogWindows{\n\t\tipv4groupSource: ipv4groupSource,\n\t\tipv4groupDest:   ipv4groupDest,\n\t\tcollector:       collector,\n\t\tgetPUContext:    getPUContext,\n\t}\n\tnfLog.flowReportCache = cache.NewCacheWithExpirationNotifier(\"flowReportCache\", time.Second*5, nfLog.logExpirationNotifier)\n\treturn nfLog\n}\n\n// Run does nothing for Windows\nfunc (n *NfLogWindows) Run(ctx context.Context) {\n}\n\n// NfLogHandler handles log info from our Windows driver\nfunc (n *NfLogWindows) NfLogHandler(logPacketInfo *frontman.LogPacketInfo, packetHeaderBytes []byte) error {\n\tvar puIsSource bool\n\tswitch uint16(logPacketInfo.GroupID) {\n\tcase n.ipv4groupSource:\n\t\tpuIsSource = false\n\tcase n.ipv4groupDest:\n\t\tpuIsSource = true\n\tdefault:\n\t\treturn fmt.Errorf(\"unrecognized log group id: %d\", logPacketInfo.GroupID)\n\t}\n\n\tipPacket, err := packet.New(packet.PacketTypeNetwork, packetHeaderBytes, \"\", false)\n\tif err != nil {\n\t\tcounters.IncrementCounter(counters.ErrNfLogError)\n\t\tzap.L().Debug(\"Error while processing nflog packet\", 
zap.Error(err))\n\t\treturn nil\n\t}\n\n\trecord, packetEvent, err := recordFromNFLogData(packetHeaderBytes, syscall.UTF16ToString(logPacketInfo.LogPrefix[:]),\n\t\tipPacket.IPProto(), ipPacket.SourceAddress(), ipPacket.DestinationAddress(), ipPacket.SourcePort(), ipPacket.DestPort(),\n\t\tn.getPUContext, puIsSource)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif record != nil {\n\t\thandleFlowReport(n.flowReportCache, n.collector, record, puIsSource)\n\t}\n\tif packetEvent != nil {\n\t\tn.collector.CollectPacketEvent(packetEvent)\n\t}\n\n\treturn nil\n}\n\nfunc (n *NfLogWindows) logExpirationNotifier(_ interface{}, item interface{}) {\n\tif item != nil {\n\t\t// Basically we had an observed flow report that didn't get reported yet.\n\t\trecord := item.(*collector.FlowRecord)\n\t\tn.collector.CollectFlowEvent(record)\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/nfq_darwin.go",
    "content": "// +build darwin\n\npackage nfqdatapath\n\nfunc (d *Datapath) cleanupPlatform() {}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/nfq_linux.go",
    "content": "// +build linux\n\npackage nfqdatapath\n\n// Go libraries\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/constants\"\n\tmarkconstants \"go.aporeto.io/enforcerd/trireme-lib/utils/constants\"\n\tnfqueue \"go.aporeto.io/netlink-go/nfqueue\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\t// nfq actions.\n\tallow  = 1\n\tdrop   = 0\n\trepeat = 4\n)\n\nconst (\n\t// max\n\tmaxTriesNfq = 5\n)\n\nfunc (d *Datapath) errorCallback(err error, _ interface{}) {\n\tzap.L().Error(\"Error while processing packets on queue\", zap.Error(err))\n}\n\nfunc (d *Datapath) callback(packet *nfqueue.NFPacket, _ interface{}) {\n\tpacket.Mark = packet.Mark - int(packet.QueueHandle.QueueNum)*constants.QueueBalanceFactor\n\n\tif packet.Mark == int(constants.DefaultInputMark) {\n\t\td.processNetworkPacketsFromNFQ(packet)\n\t\treturn\n\t}\n\n\tpacket.Mark = packet.Mark / d.filterQueue.GetNumQueues()\n\td.processApplicationPacketsFromNFQ(packet)\n}\n\nfunc (d *Datapath) startInterceptor(ctx context.Context) {\n\n\tvar err error\nLOOP:\n\tfor i := 0; i < d.filterQueue.GetNumQueues(); i++ {\n\t\t// Initialize all the queues\n\t\tfor tries := 0; tries < maxTriesNfq; tries++ {\n\t\t\tif _, err = nfqueue.CreateAndStartNfQueue(ctx, uint16(i), 4096, nfqueue.NfDefaultPacketSize, d.callback, d.errorCallback, nil); err == nil {\n\t\t\t\tcontinue LOOP\n\t\t\t}\n\n\t\t\ttime.Sleep(1 * time.Second)\n\t\t}\n\n\t\tzap.L().Fatal(\"Unable to initialize netfilter queue\", zap.Error(err))\n\t}\n}\n\n// processNetworkPacketsFromNFQ processes packets arriving from the network in an NF queue\nfunc (d *Datapath) processNetworkPacketsFromNFQ(p *nfqueue.NFPacket) {\n\tvar processError error\n\tvar 
tcpConn *connection.TCPConnection\n\tvar udpConn *connection.UDPConnection\n\tvar processAfterVerdict func()\n\n\tnetPacket := &packet.Packet{}\n\terr := netPacket.NewPacket(packet.PacketTypeNetwork, p.Buffer, strconv.Itoa(p.Mark), true)\n\tif err != nil {\n\t\tcounters.CounterError(counters.ErrCorruptPacket, err) //nolint\n\t\tzap.L().Debug(\"Dropping corrupted packet on network path\", zap.Error(err))\n\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), drop, 0, 0, uint32(p.ID), []byte{0})\n\t\treturn\n\t} else if netPacket.IPProto() == packet.IPProtocolTCP {\n\t\ttcpConn, processAfterVerdict, processError = d.processNetworkTCPPackets(netPacket)\n\t} else if netPacket.IPProto() == packet.IPProtocolUDP {\n\t\tudpConn, processError = d.ProcessNetworkUDPPacket(netPacket)\n\t} else if netPacket.IPProto() == packet.IPProtocolICMP {\n\t\ticmpType, icmpCode := netPacket.GetICMPTypeCode()\n\t\tcontext, err := d.contextFromIP(false, netPacket.Mark, 0, packet.IPProtocolICMP)\n\n\t\tif err == nil {\n\t\t\taction := d.processNetworkICMPPacket(context, netPacket, icmpType, icmpCode)\n\t\t\tif action == icmpAccept {\n\t\t\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), allow, 0, uint32(len(netPacket.GetBuffer(0))), uint32(p.ID), netPacket.GetBuffer(0))\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\n\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), drop, 0, uint32(len(netPacket.GetBuffer(0))), uint32(p.ID), netPacket.GetBuffer(0))\n\t\tzap.L().Debug(\"dropping Network ICMP Packet\",\n\t\t\tzap.Error(err),\n\t\t\tzap.String(\"SourceIP\", netPacket.SourceAddress().String()),\n\t\t\tzap.String(\"DestinationIP\", netPacket.DestinationAddress().String()),\n\t\t\tzap.Int8(\"icmp type\", icmpType),\n\t\t\tzap.Int8(\"icmp code\", icmpCode))\n\n\t\treturn\n\t} else {\n\t\tprocessError = counters.CounterError(counters.ErrInvalidProtocol, fmt.Errorf(\"Invalid Protocol %d\", int(netPacket.IPProto())))\n\t}\n\n\t// TODO: Use error types and handle it in switch case 
here\n\n\tif processError != nil {\n\t\tif processError != errDropPingNetSynAck && processError != errHandshakePacket && processError != errDropQueuedPacket {\n\t\t\tzap.L().Debug(\"Dropping packet on network path\",\n\t\t\t\tzap.Error(processError),\n\t\t\t\tzap.String(\"SourceIP\", netPacket.SourceAddress().String()),\n\t\t\t\tzap.String(\"DestinationIP\", netPacket.DestinationAddress().String()),\n\t\t\t\tzap.Int(\"SourcePort\", int(netPacket.SourcePort())),\n\t\t\t\tzap.Int(\"DestinationPort\", int(netPacket.DestPort())),\n\t\t\t\tzap.Int(\"Protocol\", int(netPacket.IPProto())),\n\t\t\t\tzap.String(\"Flags\", packet.TCPFlagsToStr(netPacket.GetTCPFlags())),\n\t\t\t)\n\t\t}\n\n\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), drop, uint32(p.Mark), 0, uint32(p.ID), []byte{0})\n\n\t\tif processError != errDropPingNetSynAck {\n\t\t\tif netPacket.IPProto() == packet.IPProtocolTCP {\n\t\t\t\td.collectTCPPacket(&debugpacketmessage{\n\t\t\t\t\tMark:    p.Mark,\n\t\t\t\t\tp:       netPacket,\n\t\t\t\t\ttcpConn: tcpConn,\n\t\t\t\t\tudpConn: nil,\n\t\t\t\t\terr:     processError,\n\t\t\t\t\tnetwork: true,\n\t\t\t\t})\n\t\t\t} else if netPacket.IPProto() == packet.IPProtocolUDP {\n\t\t\t\td.collectUDPPacket(&debugpacketmessage{\n\t\t\t\t\tMark:    p.Mark,\n\t\t\t\t\tp:       netPacket,\n\t\t\t\t\ttcpConn: nil,\n\t\t\t\t\tudpConn: udpConn,\n\t\t\t\t\terr:     processError,\n\t\t\t\t\tnetwork: true,\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\n\t\treturn\n\t}\n\n\tif netPacket.IPProto() == packet.IPProtocolTCP {\n\t\tif netPacket.SetConnmark {\n\t\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), repeat, markconstants.PacketMarkToSetConnmark, uint32(len(netPacket.GetBuffer(0))), uint32(p.ID), netPacket.GetBuffer(0))\n\t\t} else {\n\t\t\tif d.serviceMeshType == policy.Istio {\n\t\t\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), allow, markconstants.IstioPacketMark, uint32(len(netPacket.GetBuffer(0))), uint32(p.ID), netPacket.GetBuffer(0))\n\t\t\t} else 
{\n\t\t\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), allow, 0, uint32(len(netPacket.GetBuffer(0))), uint32(p.ID), netPacket.GetBuffer(0))\n\t\t\t}\n\t\t}\n\t} else {\n\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), allow, 0, uint32(len(netPacket.GetBuffer(0))), uint32(p.ID), netPacket.GetBuffer(0))\n\t}\n\n\tif processAfterVerdict != nil {\n\t\tprocessAfterVerdict()\n\t}\n\n\tif netPacket.IPProto() == packet.IPProtocolTCP {\n\t\td.collectTCPPacket(&debugpacketmessage{\n\t\t\tMark:    p.Mark,\n\t\t\tp:       netPacket,\n\t\t\ttcpConn: tcpConn,\n\t\t\tudpConn: nil,\n\t\t\terr:     nil,\n\t\t\tnetwork: true,\n\t\t})\n\t} else if netPacket.IPProto() == packet.IPProtocolUDP {\n\t\td.collectUDPPacket(&debugpacketmessage{\n\t\t\tMark:    p.Mark,\n\t\t\tp:       netPacket,\n\t\t\ttcpConn: nil,\n\t\t\tudpConn: udpConn,\n\t\t\terr:     nil,\n\t\t\tnetwork: true,\n\t\t})\n\t}\n\n}\n\n// processApplicationPacketsFromNFQ processes packets arriving from an application and destined to the network\nfunc (d *Datapath) processApplicationPacketsFromNFQ(p *nfqueue.NFPacket) {\n\n\tvar processError error\n\tvar tcpConn *connection.TCPConnection\n\tvar udpConn *connection.UDPConnection\n\n\tappPacket := &packet.Packet{}\n\terr := appPacket.NewPacket(packet.PacketTypeApplication, p.Buffer, strconv.Itoa(p.Mark), true)\n\n\tif err != nil {\n\t\tzap.L().Debug(\"Dropping corrupted packet on application path\", zap.Error(err))\n\t\tcounters.CounterError(counters.ErrCorruptPacket, err) //nolint\n\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), drop, 0, 0, uint32(p.ID), []byte{0})\n\t\treturn\n\t} else if appPacket.IPProto() == packet.IPProtocolTCP {\n\t\ttcpConn, processError = d.processApplicationTCPPackets(appPacket)\n\t} else if appPacket.IPProto() == packet.IPProtocolUDP {\n\t\tudpConn, processError = d.ProcessApplicationUDPPacket(appPacket)\n\t} else if appPacket.IPProto() == packet.IPProtocolICMP {\n\t\ticmpType, icmpCode := 
appPacket.GetICMPTypeCode()\n\t\tcontext, err := d.contextFromIP(true, appPacket.Mark, 0, packet.IPProtocolICMP)\n\n\t\tif err == nil {\n\t\t\taction := d.processApplicationICMPPacket(context, appPacket, icmpType, icmpCode)\n\t\t\tif action == icmpAccept {\n\t\t\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), allow, 0, uint32(len(appPacket.GetBuffer(0))), uint32(p.ID), appPacket.GetBuffer(0))\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\n\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), drop, 0, uint32(len(appPacket.GetBuffer(0))), uint32(p.ID), appPacket.GetBuffer(0))\n\t\tzap.L().Debug(\"dropping Application ICMP Packet\",\n\t\t\tzap.Error(err),\n\t\t\tzap.String(\"SourceIP\", appPacket.SourceAddress().String()),\n\t\t\tzap.String(\"DestinationIP\", appPacket.DestinationAddress().String()),\n\t\t\tzap.Int8(\"icmp type\", icmpType),\n\t\t\tzap.Int8(\"icmp code\", icmpCode))\n\n\t\treturn\n\t} else {\n\t\tprocessError = counters.CounterError(counters.ErrInvalidProtocol, fmt.Errorf(\"Invalid Protocol %d\", int(appPacket.IPProto())))\n\t}\n\n\tif processError != nil {\n\t\tif processError != errHandshakePacket && processError != errDropQueuedPacket {\n\n\t\t\tzap.L().Debug(\"Dropping packet on app path\",\n\t\t\t\tzap.Error(processError),\n\t\t\t\tzap.String(\"SourceIP\", appPacket.SourceAddress().String()),\n\t\t\t\tzap.String(\"DestinationIP\", appPacket.DestinationAddress().String()),\n\t\t\t\tzap.Int(\"SourcePort\", int(appPacket.SourcePort())),\n\t\t\t\tzap.Int(\"DestinationPort\", int(appPacket.DestPort())),\n\t\t\t\tzap.Int(\"Protocol\", int(appPacket.IPProto())),\n\t\t\t\tzap.String(\"Flags\", packet.TCPFlagsToStr(appPacket.GetTCPFlags())),\n\t\t\t)\n\t\t}\n\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), drop, uint32(p.Mark), 0, uint32(p.ID), []byte{0})\n\n\t\tif appPacket.IPProto() == packet.IPProtocolTCP {\n\t\t\td.collectTCPPacket(&debugpacketmessage{\n\t\t\t\tMark:    p.Mark,\n\t\t\t\tp:       appPacket,\n\t\t\t\ttcpConn: 
tcpConn,\n\t\t\t\tudpConn: nil,\n\t\t\t\terr:     processError,\n\t\t\t\tnetwork: false,\n\t\t\t})\n\n\t\t} else if appPacket.IPProto() == packet.IPProtocolUDP {\n\t\t\td.collectUDPPacket(&debugpacketmessage{\n\t\t\t\tMark:    p.Mark,\n\t\t\t\tp:       appPacket,\n\t\t\t\ttcpConn: nil,\n\t\t\t\tudpConn: udpConn,\n\t\t\t\terr:     processError,\n\t\t\t\tnetwork: false,\n\t\t\t})\n\t\t}\n\t\treturn\n\t}\n\n\tif appPacket.IPProto() == packet.IPProtocolTCP {\n\t\tif appPacket.SetConnmark {\n\t\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), repeat, markconstants.PacketMarkToSetConnmark, uint32(len(appPacket.GetBuffer(0))), uint32(p.ID), appPacket.GetBuffer(0))\n\t\t} else {\n\t\t\tif d.serviceMeshType == policy.Istio {\n\t\t\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), allow, markconstants.IstioPacketMark, uint32(len(appPacket.GetBuffer(0))), uint32(p.ID), appPacket.GetBuffer(0))\n\t\t\t} else {\n\t\t\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), allow, 0, uint32(len(appPacket.GetBuffer(0))), uint32(p.ID), appPacket.GetBuffer(0))\n\t\t\t}\n\t\t}\n\t} else {\n\t\tp.QueueHandle.SetVerdict2(uint32(p.QueueHandle.QueueNum), allow, 0, uint32(len(appPacket.GetBuffer(0))), uint32(p.ID), appPacket.GetBuffer(0))\n\t}\n\n\tif appPacket.IPProto() == packet.IPProtocolTCP {\n\t\tvar id string\n\t\tif tcpConn != nil {\n\t\t\tid = tcpConn.Context.ID()\n\t\t} else if d.puFromIP != nil {\n\t\t\tid = d.puFromIP.ID()\n\t\t}\n\n\t\tif _, err = d.packetTracingCache.Get(id); err == nil {\n\t\t\td.collectTCPPacket(&debugpacketmessage{\n\t\t\t\tMark:    p.Mark,\n\t\t\t\tp:       appPacket,\n\t\t\t\ttcpConn: tcpConn,\n\t\t\t\tudpConn: nil,\n\t\t\t\terr:     nil,\n\t\t\t\tnetwork: false,\n\t\t\t})\n\t\t}\n\n\t} else if appPacket.IPProto() == packet.IPProtocolUDP {\n\t\td.collectUDPPacket(&debugpacketmessage{\n\t\t\tMark:    p.Mark,\n\t\t\tp:       appPacket,\n\t\t\ttcpConn: nil,\n\t\t\tudpConn: udpConn,\n\t\t\terr:     nil,\n\t\t\tnetwork: 
false,\n\t\t})\n\t}\n}\n\nfunc (d *Datapath) cleanupPlatform() {}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/nfq_windows.go",
    "content": "// +build windows\n\npackage nfqdatapath\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"unsafe\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/afinetrawsocket\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/nflog\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n\t\"go.uber.org/zap\"\n)\n\nfunc (d *Datapath) startFrontmanPacketFilter(_ context.Context, nflogger nflog.NFLogger) error {\n\n\tnflogWin := nflogger.(*nflog.NfLogWindows)\n\n\tpacketCallback := func(packetInfoPtr, dataPtr uintptr) uintptr {\n\n\t\tpacketInfo := *(*frontman.PacketInfo)(unsafe.Pointer(packetInfoPtr))                                   // nolint:govet\n\t\tpacketBytes := (*[1 << 30]byte)(unsafe.Pointer(dataPtr))[:packetInfo.PacketSize:packetInfo.PacketSize] // nolint:govet\n\n\t\tvar packetType int\n\t\tif packetInfo.Outbound != 0 {\n\t\t\tpacketType = packet.PacketTypeApplication\n\t\t} else {\n\t\t\tpacketType = packet.PacketTypeNetwork\n\t\t}\n\n\t\t// Parse the packet\n\t\tmark := int(packetInfo.Mark)\n\t\tparsedPacket, err := packet.New(uint64(packetType), packetBytes, strconv.Itoa(mark), true)\n\n\t\tif parsedPacket.IPProto() == packet.IPProtocolUDP && parsedPacket.SourcePort() == 53 {\n\t\t\t// notify PUs of DNS results\n\t\t\terr := d.dnsProxy.HandleDNSResponsePacket(parsedPacket.GetUDPData(), parsedPacket.SourceAddress(), parsedPacket.SourcePort(), parsedPacket.DestinationAddress(), parsedPacket.DestPort(), func(id string) (*pucontext.PUContext, error) {\n\t\t\t\tpuCtx, err1 := d.puFromContextID.Get(id)\n\t\t\t\tif err1 != nil {\n\t\t\t\t\treturn nil, err1\n\t\t\t\t}\n\t\t\t\treturn puCtx.(*pucontext.PUContext), nil\n\t\t\t})\n\t\t\tif err != nil 
{\n\t\t\t\tzap.L().Debug(\"Failed to handle DNS response\", zap.Error(err))\n\t\t\t}\n\t\t\t// forward packet\n\t\t\terr = frontman.Wrapper.PacketFilterForward(&packetInfo, packetBytes)\n\t\t\tif err != nil {\n\t\t\t\tzap.L().Error(\"failed to forward packet\", zap.Error(err))\n\t\t\t}\n\t\t\treturn 0\n\t\t}\n\n\t\tparsedPacket.PlatformMetadata = &afinetrawsocket.WindowPlatformMetadata{\n\t\t\tPacketInfo: packetInfo,\n\t\t\tIgnoreFlow: false,\n\t\t\tDrop:       false,\n\t\t\tSetMark:    0,\n\t\t}\n\n\t\tvar processError error\n\t\tvar tcpConn *connection.TCPConnection\n\t\tvar udpConn *connection.UDPConnection\n\t\tvar f func()\n\n\t\tif err != nil {\n\t\t\tparsedPacket.Print(packet.PacketFailureCreate, d.packetLogs)\n\t\t} else if parsedPacket.IPProto() == packet.IPProtocolTCP {\n\t\t\tif packetType == packet.PacketTypeNetwork {\n\t\t\t\ttcpConn, f, processError = d.processNetworkTCPPackets(parsedPacket)\n\t\t\t\tif f != nil {\n\t\t\t\t\tf()\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\ttcpConn, processError = d.processApplicationTCPPackets(parsedPacket)\n\t\t\t}\n\t\t} else if parsedPacket.IPProto() == packet.IPProtocolUDP {\n\t\t\t// process udp packet\n\t\t\tif packetType == packet.PacketTypeNetwork {\n\t\t\t\tudpConn, processError = d.ProcessNetworkUDPPacket(parsedPacket)\n\t\t\t} else {\n\t\t\t\tudpConn, processError = d.ProcessApplicationUDPPacket(parsedPacket)\n\t\t\t}\n\t\t} else {\n\t\t\tprocessError = fmt.Errorf(\"invalid ip protocol: %d\", parsedPacket.IPProto())\n\t\t}\n\n\t\tif processError != nil {\n\t\t\tif parsedPacket.IPProto() == packet.IPProtocolTCP {\n\t\t\t\td.collectTCPPacket(&debugpacketmessage{\n\t\t\t\t\tMark:    mark,\n\t\t\t\t\tp:       parsedPacket,\n\t\t\t\t\ttcpConn: tcpConn,\n\t\t\t\t\tudpConn: nil,\n\t\t\t\t\terr:     processError,\n\t\t\t\t\tnetwork: packetType == packet.PacketTypeNetwork,\n\t\t\t\t})\n\t\t\t} else if parsedPacket.IPProto() == packet.IPProtocolUDP {\n\t\t\t\td.collectUDPPacket(&debugpacketmessage{\n\t\t\t\t\tMark:    
mark,\n\t\t\t\t\tp:       parsedPacket,\n\t\t\t\t\ttcpConn: nil,\n\t\t\t\t\tudpConn: udpConn,\n\t\t\t\t\terr:     processError,\n\t\t\t\t\tnetwork: packetType == packet.PacketTypeNetwork,\n\t\t\t\t})\n\t\t\t}\n\t\t\t// drop packet by not forwarding it\n\t\t\treturn 0\n\t\t}\n\n\t\t// accept the (modified) packet by forwarding it\n\t\tmodifiedPacketBytes := parsedPacket.GetBuffer(0)\n\t\tpacketInfo.PacketSize = uint32(parsedPacket.IPTotalLen())\n\n\t\tplatformMetadata := parsedPacket.PlatformMetadata.(*afinetrawsocket.WindowPlatformMetadata)\n\t\tif platformMetadata.IgnoreFlow {\n\t\t\tpacketInfo.IgnoreFlow = 1\n\t\t} else if platformMetadata.DropFlow {\n\t\t\tpacketInfo.DropFlow = 1\n\t\t}\n\t\tif platformMetadata.Drop {\n\t\t\tpacketInfo.Drop = 1\n\t\t}\n\t\tif platformMetadata.SetMark != 0 {\n\t\t\tpacketInfo.SetMark = 1\n\t\t\tpacketInfo.SetMarkValue = platformMetadata.SetMark\n\t\t}\n\n\t\tif err := frontman.Wrapper.PacketFilterForward(&packetInfo, modifiedPacketBytes); err != nil {\n\t\t\tzap.L().Error(\"failed to forward packet\", zap.Error(err))\n\t\t}\n\n\t\tif parsedPacket.IPProto() == packet.IPProtocolTCP {\n\t\t\td.collectTCPPacket(&debugpacketmessage{\n\t\t\t\tMark:    mark,\n\t\t\t\tp:       parsedPacket,\n\t\t\t\ttcpConn: tcpConn,\n\t\t\t\tudpConn: nil,\n\t\t\t\terr:     nil,\n\t\t\t\tnetwork: packetType == packet.PacketTypeNetwork,\n\t\t\t})\n\t\t} else if parsedPacket.IPProto() == packet.IPProtocolUDP {\n\t\t\td.collectUDPPacket(&debugpacketmessage{\n\t\t\t\tMark:    mark,\n\t\t\t\tp:       parsedPacket,\n\t\t\t\ttcpConn: nil,\n\t\t\t\tudpConn: udpConn,\n\t\t\t\terr:     nil,\n\t\t\t\tnetwork: packetType == packet.PacketTypeNetwork,\n\t\t\t})\n\t\t}\n\n\t\treturn 0\n\t}\n\n\tlogCallback := func(logPacketInfoPtr, dataPtr uintptr) uintptr {\n\n\t\tlogPacketInfo := *(*frontman.LogPacketInfo)(unsafe.Pointer(logPacketInfoPtr)) // nolint:govet\n\t\tpacketHeaderBytes := (*[1 << 
30]byte)(unsafe.Pointer(dataPtr))[:logPacketInfo.PacketSize:logPacketInfo.PacketSize]\n\n\t\terr := nflogWin.NfLogHandler(&logPacketInfo, packetHeaderBytes)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"error in log callback\", zap.Error(err))\n\t\t}\n\n\t\treturn 0\n\t}\n\n\tif err := frontman.Wrapper.PacketFilterStart(\"Aporeto Enforcer\", packetCallback, logCallback); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\n// cleanupPlatform for windows is needed to stop the frontman threads and permit the enforcerd app to shut down\nfunc (d *Datapath) cleanupPlatform() {\n\n\tif err := frontman.Wrapper.PacketFilterClose(); err != nil {\n\t\tzap.L().Error(\"Failed to close packet proxy\", zap.Error(err))\n\t}\n\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/nfq_windows_test.go",
    "content": "// +build windows\n\npackage nfqdatapath\n\nimport (\n\t\"context\"\n\t\"encoding/hex\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"sync\"\n\t\"syscall\"\n\t\"testing\"\n\t\"unsafe\"\n\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/packetgen\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n)\n\n// Declare function pointer so that it can be overridden by unit test.\n// This is not actually needed in Windows, but we need the declaration and the empty function for tests.\nvar procSetValuePtr func(procName string, value int) error = procSetValueMock\n\ntype forwardedPacket struct {\n\toutbound, drop, ignoreFlow bool\n\tmark                       int\n\tpacketBytes                []byte\n}\n\n// fakeWrapper is the mock for frontman.Wrapper.\n// We mock frontman.Wrapper and not frontman.Driver because we need to save the go funcs passed to PacketFilterStart.\ntype fakeWrapper struct {\n\treceiveCallback, loggingCallback func(uintptr, uintptr) uintptr\n\tforwardedPackets                 []*forwardedPacket\n\tsync.Mutex\n}\n\nfunc (w *fakeWrapper) queuePacket(p *forwardedPacket) {\n\tw.Lock()\n\tdefer w.Unlock()\n\tw.forwardedPackets = append(w.forwardedPackets, p)\n}\n\nfunc (w *fakeWrapper) GetForwardedPackets() []*forwardedPacket {\n\tw.Lock()\n\tdefer w.Unlock()\n\tresult := w.forwardedPackets\n\tw.forwardedPackets = nil\n\treturn result\n}\n\nfunc (w *fakeWrapper) PacketFilterStart(firewallName string, receiveCallback, loggingCallback func(uintptr, uintptr) uintptr) error {\n\tw.receiveCallback = receiveCallback\n\tw.loggingCallback = loggingCallback\n\treturn nil\n}\n\nfunc (w *fakeWrapper) 
PacketFilterForward(info *frontman.PacketInfo, packetBytes []byte) error {\n\tp := &forwardedPacket{\n\t\toutbound:    info.Outbound != 0,\n\t\tdrop:        info.Drop != 0,\n\t\tignoreFlow:  info.IgnoreFlow != 0,\n\t\tmark:        int(info.Mark),\n\t\tpacketBytes: make([]byte, info.PacketSize),\n\t}\n\tif n := copy(p.packetBytes, packetBytes); n != int(info.PacketSize) {\n\t\treturn fmt.Errorf(\"%d bytes copied for packet, but expected %d\", n, info.PacketSize)\n\t}\n\tw.queuePacket(p)\n\treturn nil\n}\n\nfunc Test_WindowsPacketCallbacks(t *testing.T) {\n\n\t// unused in Windows\n\t_ = testDstIP\n\t_ = debug\n\n\tConvey(\"Given I create a new enforcer instance for Windows and have a valid processing unit context\", t, func() {\n\n\t\twrapper := &fakeWrapper{}\n\t\tfrontman.Wrapper = wrapper\n\n\t\tConvey(\"Given I create two processing unit instances\", func() {\n\n\t\t\tctrl := gomock.NewController(t)\n\t\t\tdefer ctrl.Finish()\n\n\t\t\tenforcer, mockCollector := createEnforcerWithPolicy(ctrl, constants.LocalServer)\n\n\t\t\terr := enforcer.startFrontmanPacketFilter(context.Background(), enforcer.nflogger)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tConvey(\"When I pass a syn packet through the enforcer\", func() {\n\n\t\t\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t\t\t_, err = PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\ttcpPacketFromFlow, err := PacketFlow.GetFirstSynPacket().ToBytes()\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tmark := 12345\n\t\t\t\ttcpPacket, err := packet.New(0, tcpPacketFromFlow, strconv.Itoa(mark), true)\n\t\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t\t}\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(tcpPacket.Mark, ShouldEqual, strconv.Itoa(mark))\n\n\t\t\t\tpacketBytes := tcpPacket.GetTCPBytes()\n\t\t\t\tpacketInfo := &frontman.PacketInfo{\n\t\t\t\t\tIpv4:       1,\n\t\t\t\t\tProtocol:   
tcpPacket.IPProto(),\n\t\t\t\t\tPacketSize: uint32(len(packetBytes)),\n\t\t\t\t\tMark:       uint32(mark),\n\t\t\t\t}\n\t\t\t\tif tcpPacket.SourceAddress().String() == testSrcIP {\n\t\t\t\t\tpacketInfo.Outbound = 1\n\t\t\t\t}\n\t\t\t\tret := wrapper.receiveCallback(uintptr(unsafe.Pointer(packetInfo)), uintptr(unsafe.Pointer(&packetBytes[0])))\n\t\t\t\tSo(ret, ShouldBeZeroValue)\n\n\t\t\t\toldPacket := tcpPacket\n\t\t\t\tforwardedPackets := wrapper.GetForwardedPackets()\n\t\t\t\tSo(forwardedPackets, ShouldHaveLength, 1)\n\t\t\t\ttcpPacket, err = packet.New(0, forwardedPackets[0].packetBytes, strconv.Itoa(mark), true)\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\t// In our 3 way security handshake syn and syn-ack packet should grow in length\n\t\t\t\tSo(tcpPacket.GetTCPFlags()&packet.TCPSynMask, ShouldNotBeZeroValue)\n\t\t\t\tSo(tcpPacket.IPTotalLen(), ShouldBeGreaterThan, oldPacket.IPTotalLen())\n\n\t\t\t\t// reverse it and strip identity\n\t\t\t\tpacketInfo.Outbound ^= 1\n\t\t\t\tpacketBytes = tcpPacket.GetTCPBytes()\n\t\t\t\tpacketInfo.PacketSize = uint32(len(packetBytes))\n\t\t\t\tret = wrapper.receiveCallback(uintptr(unsafe.Pointer(packetInfo)), uintptr(unsafe.Pointer(&packetBytes[0])))\n\t\t\t\tSo(ret, ShouldBeZeroValue)\n\t\t\t\tforwardedPackets = wrapper.GetForwardedPackets()\n\t\t\t\tSo(forwardedPackets, ShouldHaveLength, 1)\n\t\t\t\ttcpPacket, err = packet.New(0, forwardedPackets[0].packetBytes, strconv.Itoa(mark), true)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(tcpPacket.IPTotalLen(), ShouldEqual, oldPacket.IPTotalLen())\n\t\t\t})\n\n\t\t\tConvey(\"When I pass a synack packet for non-PU traffic\", func() {\n\n\t\t\t\tPacketFlow := packetgen.NewTemplateFlow()\n\t\t\t\t_, err = PacketFlow.GenerateTCPFlow(packetgen.PacketFlowTypeGoodFlowTemplate)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\ttcpPacketFromFlow, err := PacketFlow.GetFirstSynAckPacket().ToBytes()\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tmark := 12345\n\t\t\t\ttcpPacket, err := packet.New(0, tcpPacketFromFlow, 
strconv.Itoa(mark), true)\n\t\t\t\tif err == nil && tcpPacket != nil {\n\t\t\t\t\ttcpPacket.UpdateIPv4Checksum()\n\t\t\t\t\ttcpPacket.UpdateTCPChecksum()\n\t\t\t\t}\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(tcpPacket.Mark, ShouldEqual, strconv.Itoa(mark))\n\n\t\t\t\tpacketBytes := tcpPacket.GetTCPBytes()\n\t\t\t\tpacketInfo := &frontman.PacketInfo{\n\t\t\t\t\tIpv4:       1,\n\t\t\t\t\tProtocol:   tcpPacket.IPProto(),\n\t\t\t\t\tPacketSize: uint32(len(packetBytes)),\n\t\t\t\t\tMark:       uint32(mark),\n\t\t\t\t}\n\t\t\t\tif tcpPacket.SourceAddress().String() == testSrcIP {\n\t\t\t\t\tpacketInfo.Outbound = 1\n\t\t\t\t}\n\t\t\t\tret := wrapper.receiveCallback(uintptr(unsafe.Pointer(packetInfo)), uintptr(unsafe.Pointer(&packetBytes[0])))\n\t\t\t\tSo(ret, ShouldBeZeroValue)\n\n\t\t\t\tforwardedPackets := wrapper.GetForwardedPackets()\n\t\t\t\tSo(forwardedPackets, ShouldHaveLength, 1)\n\t\t\t\ttcpPacket, err = packet.New(0, forwardedPackets[0].packetBytes, strconv.Itoa(mark), true)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(tcpPacket, ShouldNotBeNil)\n\t\t\t\t// IgnoreFlow flag should be set\n\t\t\t\tSo(forwardedPackets[0].ignoreFlow, ShouldNotBeZeroValue)\n\t\t\t})\n\n\t\t\tConvey(\"When I say to log that a packet is rejected\", func() {\n\n\t\t\t\tpuHash, err := policy.Fnv32Hash(\"SomeProcessingUnitId1\")\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\tdnsRequestPacket, err := hex.DecodeString(\"450000380542000080110000c0a8446dc0a84401ebe60035002409f5df510100000100000000000006676f6f676c6503636f6d0000010001\")\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tdnsPacket, err := packet.New(0, dnsRequestPacket, \"0\", true)\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\tpacketHeaderBytes := dnsPacket.GetBuffer(0)[:dnsPacket.IPHeaderLen()+packet.UDPDataPos]\n\t\t\t\tlogPacketInfo := &frontman.LogPacketInfo{\n\t\t\t\t\tIpv4:       1,\n\t\t\t\t\tProtocol:   dnsPacket.IPProto(),\n\t\t\t\t\tPacketSize: uint32(len(packetHeaderBytes)),\n\t\t\t\t\tGroupID:    
11,\n\t\t\t\t}\n\n\t\t\t\tcopy(logPacketInfo.LogPrefix[:], syscall.StringToUTF16(puHash+\":5d6044b9e99572000149d650:5d60448a884e46000145cf67:6\")) // nolint:staticcheck\n\n\t\t\t\tflowRecord := CreateFlowRecord(1, \"192.168.68.109\", \"192.168.68.1\", 0, 53, policy.Reject|policy.Log, collector.PolicyDrop)\n\t\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\t\tret := wrapper.loggingCallback(uintptr(unsafe.Pointer(logPacketInfo)), uintptr(unsafe.Pointer(&packetHeaderBytes[0])))\n\t\t\t\tSo(ret, ShouldBeZeroValue)\n\t\t\t})\n\t\t})\n\t})\n}\n\n// Empty interface implementations\n\nfunc (w *fakeWrapper) GetDestInfo(socket uintptr, destInfo *frontman.DestInfo) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) ApplyDestHandle(socket, destHandle uintptr) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) FreeDestHandle(destHandle uintptr) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) NewIpset(name, ipsetType string) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (w *fakeWrapper) GetIpset(name string) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (w *fakeWrapper) DestroyAllIpsets(prefix string) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) ListIpsets() ([]string, error) {\n\treturn nil, nil\n}\n\nfunc (w *fakeWrapper) ListIpsetsDetail(format int) (string, error) {\n\treturn \"\", nil\n}\n\nfunc (w *fakeWrapper) IpsetAdd(ipsetHandle uintptr, entry string, timeout int) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) IpsetAddOption(ipsetHandle uintptr, entry, option string, timeout int) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) IpsetDelete(ipsetHandle uintptr, entry string) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) IpsetDestroy(ipsetHandle uintptr, name string) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) IpsetFlush(ipsetHandle uintptr) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) IpsetTest(ipsetHandle uintptr, entry string) (bool, error) {\n\treturn true, nil\n}\n\nfunc (w *fakeWrapper) 
AppendFilter(outbound bool, filterName string, isGotoFilter bool) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) InsertFilter(outbound bool, priority int, filterName string, isGotoFilter bool) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) DestroyFilter(filterName string) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) EmptyFilter(filterName string) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) GetFilterList(outbound bool) ([]string, error) {\n\treturn nil, nil\n}\n\nfunc (w *fakeWrapper) AppendFilterCriteria(filterName, criteriaName string, ruleSpec *frontman.RuleSpec, ipsetRuleSpecs []frontman.IpsetRuleSpec) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) DeleteFilterCriteria(filterName, criteriaName string) error {\n\treturn nil\n}\n\nfunc (w *fakeWrapper) GetCriteriaList(format int) (string, error) {\n\treturn \"\", nil\n}\n\nfunc (w *fakeWrapper) PacketFilterClose() error {\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/ping_tcp.go",
    "content": "package nfqdatapath\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"math/rand\"\n\t\"net\"\n\t\"os\"\n\t\"strconv\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"github.com/ghedo/go.pkt/packet/raw\"\n\t\"github.com/ghedo/go.pkt/packet/tcp\"\n\t\"github.com/ghedo/go.pkt/routing\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\tenforcerconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\ttpacket \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pingconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/gaia\"\n\t\"go.uber.org/zap\"\n)\n\nvar (\n\t// For unit tests.\n\tsrcip          = getSrcIP\n\tdial           = dialIP\n\tbind           = bindRandomPort\n\tclose          = closeRandomPort\n\trandUint32     = rand.Uint32\n\tsince          = time.Since\n\tisAppListening = isAppListeningInPort\n\n\tremoveDelay = 10 * time.Second\n\tsynAckDelay = 3 * time.Second\n\n\t_ io.Writer = &pingConn{}\n)\n\nfunc (d *Datapath) initiatePingHandshake(_ context.Context, context *pucontext.PUContext, pingConfig *policy.PingConfig) error {\n\n\tzap.L().Debug(\"Initiating ping (syn)\")\n\n\tsrcIP, err := srcip(pingConfig.IP)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to get source ip: %v\", err)\n\t}\n\n\tconn, err := dial(srcIP, pingConfig.IP)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to dial on app syn: %v\", err)\n\t}\n\tdefer conn.Close() // nolint: errcheck\n\n\tfor i := 0; i < pingConfig.Iterations; i++ {\n\t\tif err := d.sendSynPacket(context, pingConfig, conn, srcIP, i); err != nil 
{\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// sendSynPacket sends tcp syn packet to the socket. It also dispatches a report.\nfunc (d *Datapath) sendSynPacket(context *pucontext.PUContext, pingConfig *policy.PingConfig, conn PingConn, srcIP net.IP, iterationID int) error {\n\n\ttcpConn := connection.NewTCPConnection(context, nil)\n\ttcpConn.PingConfig = pingconfig.New()\n\n\tsrcPort, err := bind(tcpConn)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to bind free source port: %v\", err)\n\t}\n\n\tclaimsHeader := claimsheader.NewClaimsHeader(\n\t\tclaimsheader.OptionPing(true),\n\t)\n\n\tpingPayload := &policy.PingPayload{\n\t\tPingID:        pingConfig.ID,\n\t\tIterationID:   iterationID,\n\t\tNamespaceHash: context.ManagementNamespaceHash(),\n\t}\n\n\tvar tcpData []byte\n\ttcpConn.Secrets, tcpConn.Auth.LocalDatapathPrivateKey, tcpData = context.GetSynToken(pingPayload, tcpConn.Auth.Nonce, claimsHeader)\n\n\tseqNum := randUint32()\n\tp, err := constructTCPPacket(conn, srcIP, pingConfig.IP, srcPort, pingConfig.Port, seqNum, 0, tcp.Syn, tcpData)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to construct syn packet: %v\", err)\n\t}\n\n\t// We always get a default policy.\n\t_, pkt, _ := context.ApplicationACLPolicyFromAddr(pingConfig.IP, pingConfig.Port, packet.IPProtocolTCP)\n\n\tpingErr := \"timeout\"\n\tif e := pingConfig.Error(); e != \"\" {\n\t\tpingErr = e\n\t}\n\n\t// RequestTimeout report cached in the connection. 
This will be sent on\n\t// expiration timeout for this connection.\n\ttcpConn.PingConfig.SetPingReport(&collector.PingReport{\n\t\tPingID:          pingConfig.ID,\n\t\tIterationID:     iterationID,\n\t\tAgentVersion:    d.agentVersion.String(),\n\t\tPayloadSize:     len(tcpData),\n\t\tPayloadSizeType: gaia.PingProbePayloadSizeTypeTransmitted,\n\t\tType:            gaia.PingProbeTypeRequest,\n\t\tError:           pingErr,\n\t\tPUID:            context.ManagementID(),\n\t\tNamespace:       context.ManagementNamespace(),\n\t\tProtocol:        tpacket.IPProtocolTCP,\n\t\tServiceType:     \"L3\",\n\t\tFourTuple: flowTuple(\n\t\t\ttpacket.PacketTypeApplication,\n\t\t\tsrcIP,\n\t\t\tpingConfig.IP,\n\t\t\tsrcPort,\n\t\t\tpingConfig.Port,\n\t\t),\n\t\tSeqNum:              seqNum,\n\t\tTargetTCPNetworks:   pingConfig.TargetTCPNetworks,\n\t\tExcludedNetworks:    pingConfig.ExcludedNetworks,\n\t\tRemoteNamespaceType: gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tClaims:              context.Identity().GetSlice(),\n\t\tClaimsType:          gaia.PingProbeClaimsTypeTransmitted,\n\t\tACLPolicyID:         pkt.PolicyID,\n\t\tACLPolicyAction:     pkt.Action,\n\t})\n\ttcpConn.TCPtuple = &connection.TCPTuple{\n\t\tSourceAddress:      srcIP,\n\t\tDestinationAddress: pingConfig.IP,\n\t\tSourcePort:         srcPort,\n\t\tDestinationPort:    pingConfig.Port,\n\t}\n\ttcpConn.PingConfig.StartTime = time.Now()\n\ttcpConn.PingConfig.SetPingID(pingConfig.ID)\n\ttcpConn.PingConfig.SetIterationID(iterationID)\n\ttcpConn.PingConfig.SetSeqNum(seqNum)\n\ttcpConn.SetState(connection.TCPSynSend)\n\tkey := flowTuple(tpacket.PacketTypeApplication, srcIP, pingConfig.IP, srcPort, pingConfig.Port)\n\n\td.cachePut(d.tcpClient, key, tcpConn)\n\n\tif _, err := conn.Write(p); err != nil {\n\t\treturn fmt.Errorf(\"unable to send syn packet: %v\", err)\n\t}\n\n\treturn nil\n}\n\n// processPingNetSynPacket should only be called when the packet is recognized as a ping syn packet.\nfunc (d *Datapath) 
processPingNetSynPacket(\n\tcontext *pucontext.PUContext,\n\ttcpConn *connection.TCPConnection,\n\ttcpPacket *tpacket.Packet,\n\tpayloadSize int,\n\tpkt *policy.FlowPolicy,\n\tclaims *tokens.ConnectionClaims,\n) error {\n\tzap.L().Debug(\"Processing ping network syn packet\", zap.String(\"conn\", tcpPacket.L4FlowHash()))\n\n\tif tcpConn.GetState() == connection.TCPSynReceived || tcpConn.GetState() == connection.TCPSynAckSend {\n\t\tzap.L().Debug(\"Dropping duplicate ping syn packets\")\n\t\treturn errDropPingNetSyn\n\t}\n\n\tdefer func() {\n\t\ttcpConn.SetState(connection.TCPSynReceived)\n\t\ttcpConn.PingConfig.SetSocketClosed(true)\n\t\ttcpConn.PingConfig.SetPingID(claims.P.PingID)\n\t\ttcpConn.PingConfig.SetIterationID(claims.P.IterationID)\n\n\t\ttxtID, ok := claims.T.Get(enforcerconstants.TransmitterLabel)\n\t\tif !ok {\n\t\t\tzap.L().Warn(\"missing transmitter label\")\n\t\t}\n\n\t\td.cachePut(d.tcpServer, tcpPacket.L4FlowHash(), tcpConn)\n\t\td.sendRequestRecvReport(txtID, claims.P, tcpPacket, context, pkt, payloadSize, tcpConn.SourceController)\n\t}()\n\n\tif tcpConn.PingConfig == nil {\n\t\ttcpConn.PingConfig = pingconfig.New()\n\t}\n\n\tlistening, err := isAppListening(tcpPacket.DestPort())\n\tif listening && !pkt.Action.Rejected() {\n\t\tzap.L().Debug(\"Application listening\", zap.String(\"conn\", tcpPacket.L4FlowHash()), zap.Error(err))\n\n\t\ttime.AfterFunc(synAckDelay, func() {\n\n\t\t\tif tcpConn.PingConfig.ApplicationListening() {\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tif err := d.sendSynAckPacket(context, tcpConn, tcpPacket, claims); err != nil {\n\t\t\t\tzap.L().Error(\"unable to send synack packet\", zap.Error(err))\n\t\t\t}\n\t\t})\n\n\t\treturn nil\n\t}\n\n\tif err := d.sendSynAckPacket(context, tcpConn, tcpPacket, claims); err != nil {\n\t\treturn err\n\t}\n\n\treturn errDropPingNetSyn\n}\n\nfunc (d *Datapath) sendSynAckPacket(\n\tcontext *pucontext.PUContext,\n\ttcpConn *connection.TCPConnection,\n\ttcpPacket *tpacket.Packet,\n\tclaims 
*tokens.ConnectionClaims,\n) error {\n\n\tclaimsHeader := claimsheader.NewClaimsHeader(\n\t\tclaimsheader.OptionPing(true),\n\t)\n\n\tpingPayload := &policy.PingPayload{\n\t\tPingID:        claims.P.PingID,\n\t\tIterationID:   claims.P.IterationID,\n\t\tNamespaceHash: context.ManagementNamespaceHash(),\n\t}\n\n\tclaimsNew := &tokens.ConnectionClaims{\n\t\tCT:       context.CompressedTags(),\n\t\tLCL:      tcpConn.Auth.Nonce[:],\n\t\tRMT:      tcpConn.Auth.RemoteNonce,\n\t\tDEKV1:    tcpConn.Auth.LocalDatapathPublicKeyV1,\n\t\tSDEKV1:   tcpConn.Auth.LocalDatapathPublicKeySignV1,\n\t\tDEKV2:    tcpConn.Auth.LocalDatapathPublicKeyV2,\n\t\tSDEKV2:   tcpConn.Auth.LocalDatapathPublicKeySignV2,\n\t\tID:       context.ManagementID(),\n\t\tRemoteID: tcpConn.Auth.RemoteContextID,\n\t\tP:        pingPayload,\n\t}\n\n\ttcpData, err := d.tokenAccessor.CreateSynAckPacketToken(tcpConn.Auth.Proto314, claimsNew, tcpConn.EncodedBuf[:], tcpConn.Auth.Nonce[:], claimsHeader, tcpConn.Secrets, tcpConn.Auth.SecretKey) //nolint\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to create ping synack token: %w\", err)\n\t}\n\n\tconn, err := dial(tcpPacket.DestinationAddress(), tcpPacket.SourceAddress())\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to dial: %w\", err)\n\t}\n\tdefer conn.Close() // nolint: errcheck\n\n\tp, err := constructTCPPacket(\n\t\tconn,\n\t\ttcpPacket.DestinationAddress(),\n\t\ttcpPacket.SourceAddress(),\n\t\ttcpPacket.DestPort(),\n\t\ttcpPacket.SourcePort(),\n\t\trandUint32(),\n\t\ttcpPacket.TCPSeqNum()+1,\n\t\ttcp.Syn|tcp.Ack,\n\t\ttcpData,\n\t)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to construct synack packet: %w\", err)\n\t}\n\n\tif _, err := conn.Write(p); err != nil {\n\t\treturn fmt.Errorf(\"unable to send synack packet: %w\", err)\n\t}\n\n\ttcpConn.SetState(connection.TCPSynAckSend)\n\n\treturn nil\n}\n\n// processPingNetSynAckPacket should only be called when the packet is recognized as a ping synack packet.\nfunc (d 
*Datapath) processPingNetSynAckPacket(\n\tcontext *pucontext.PUContext,\n\ttcpConn *connection.TCPConnection,\n\ttcpPacket *tpacket.Packet,\n\tpayloadSize int,\n\tpkt *policy.FlowPolicy,\n\tclaims *tokens.ConnectionClaims,\n\text bool,\n) error {\n\tzap.L().Debug(\"Processing ping network synack packet\",\n\t\tzap.Bool(\"externalNetwork\", ext),\n\t\tzap.String(\"conn\", tcpPacket.SourcePortHash(packet.PacketTypeNetwork)),\n\t)\n\n\tif tcpConn.PingConfig == nil {\n\t\treturn errDropPingNetSynAck\n\t}\n\n\treceiveTime := since(tcpConn.PingConfig.StartTime).String()\n\n\tdefer func() {\n\t\ttcpConn.SetState(connection.TCPSynAckReceived)\n\n\t\tif !tcpConn.PingConfig.SocketClosed() {\n\t\t\tdefer func() {\n\t\t\t\tif err := close(tcpConn); err != nil {\n\t\t\t\t\tzap.L().Warn(\"unable to close socket\", zap.Reflect(\"fd\", tcpConn.PingConfig.SocketFd()), zap.Error(err))\n\t\t\t\t}\n\t\t\t}()\n\t\t}\n\n\t\ttime.AfterFunc(removeDelay, func() {\n\t\t\td.cacheRemove(d.tcpClient, tcpPacket.SourcePortHash(packet.PacketTypeNetwork))\n\t\t})\n\n\t\tif err := respondWithRstPacket(tcpPacket, nil); err != nil {\n\t\t\tzap.L().Warn(\"unable to send rst packet\", zap.Error(err))\n\t\t}\n\t}()\n\n\t// Drop duplicate synack packets.\n\tif tcpConn.GetState() == connection.TCPSynAckReceived {\n\t\treturn errDropPingNetSynAck\n\t}\n\n\t// Synack from an external network.\n\tif ext {\n\t\td.sendExtResponseRecvReport(\n\t\t\tcontext,\n\t\t\treceiveTime,\n\t\t\tpkt,\n\t\t\tpayloadSize,\n\t\t\ttcpConn,\n\t\t)\n\t\treturn errDropPingNetSynAck\n\t}\n\n\ttxtID, ok := claims.T.Get(enforcerconstants.TransmitterLabel)\n\tif !ok {\n\t\treturn fmt.Errorf(\"missing transmitter label\")\n\t}\n\n\td.sendResponseRecvReport(\n\t\ttxtID,\n\t\tclaims.P,\n\t\tcontext,\n\t\treceiveTime,\n\t\tpkt,\n\t\tpayloadSize,\n\t\ttcpConn,\n\t\ttcpConn.DestinationController,\n\t)\n\n\treturn errDropPingNetSynAck\n}\n\n// respondWithRstPacket sends a rst packet in response to tcpPacket.\nfunc respondWithRstPacket(tcpPacket 
*tpacket.Packet, payload []byte) error {\n\n\tconn, err := dial(tcpPacket.DestinationAddress(), tcpPacket.SourceAddress())\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to dial: %w\", err)\n\t}\n\tdefer conn.Close() // nolint: errcheck\n\n\tp, err := constructTCPPacket(\n\t\tconn,\n\t\ttcpPacket.DestinationAddress(),\n\t\ttcpPacket.SourceAddress(),\n\t\ttcpPacket.DestPort(),\n\t\ttcpPacket.SourcePort(),\n\t\ttcpPacket.TCPAckNum(),\n\t\ttcpPacket.TCPSeqNum()+1,\n\t\ttcp.Rst,\n\t\tpayload,\n\t)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to construct rst packet: %w\", err)\n\t}\n\n\tif _, err := conn.Write(p); err != nil {\n\t\treturn fmt.Errorf(\"unable to send rst packet: %w\", err)\n\t}\n\n\treturn nil\n}\n\n// sendRequestRecvReport sends a report on syn recv state.\nfunc (d *Datapath) sendRequestRecvReport(\n\tsrcPUID string,\n\tpingPayload *policy.PingPayload,\n\ttcpPacket *tpacket.Packet,\n\tcontext *pucontext.PUContext,\n\tpkt *policy.FlowPolicy,\n\tpayloadSize int,\n\tcontroller string,\n) {\n\n\terr := \"\"\n\tif pkt.Action.Rejected() {\n\t\terr = collector.PolicyDrop\n\t}\n\n\td.sendPingReport(\n\t\tpingPayload.PingID,\n\t\tpingPayload.IterationID,\n\t\td.agentVersion.String(),\n\t\ttcpPacket.L4FlowHash(),\n\t\t\"\",\n\t\tsrcPUID,\n\t\tcontext.ManagementID(),\n\t\tcontext.ManagementNamespace(),\n\t\tpingPayload.NamespaceHash,\n\t\tgaia.PingProbeTypeRequest,\n\t\tpayloadSize,\n\t\tpkt.PolicyID,\n\t\tpkt.Action,\n\t\tfalse,\n\t\tcollector.EndPointTypePU,\n\t\ttcpPacket.TCPSeqNum(),\n\t\tcontroller,\n\t\ttrue,\n\t\tfalse,\n\t\tcontext.Identity().GetSlice(),\n\t\t\"\",\n\t\tpolicy.ActionType(0),\n\t\ttrue,\n\t\terr,\n\t)\n}\n\n// sendResponseRecvReport sends a report on synack recv state.\nfunc (d *Datapath) sendResponseRecvReport(\n\tsrcPUID string,\n\tpingPayload *policy.PingPayload,\n\tcontext *pucontext.PUContext,\n\trtt string,\n\tpkt *policy.FlowPolicy,\n\tpayloadSize int,\n\ttcpConn *connection.TCPConnection,\n\tcontroller string,\n) 
{\n\n\tpingErr := \"\"\n\tif !tcpConn.PingConfig.PingReport().TargetTCPNetworks {\n\t\tpingErr = policy.ErrTargetTCPNetworks\n\t}\n\n\tif tcpConn.PingConfig.PingReport().ExcludedNetworks {\n\t\tpingErr = policy.ErrExcludedNetworks\n\t}\n\n\tif pkt.Action.Rejected() {\n\t\tpingErr = collector.PolicyDrop\n\t}\n\n\td.sendPingReport(\n\t\tpingPayload.PingID,\n\t\tpingPayload.IterationID,\n\t\td.agentVersion.String(),\n\t\tflowTuple(\n\t\t\ttpacket.PacketTypeNetwork,\n\t\t\ttcpConn.TCPtuple.SourceAddress,\n\t\t\ttcpConn.TCPtuple.DestinationAddress,\n\t\t\ttcpConn.TCPtuple.SourcePort,\n\t\t\ttcpConn.TCPtuple.DestinationPort,\n\t\t),\n\t\trtt,\n\t\tsrcPUID,\n\t\tcontext.ManagementID(),\n\t\tcontext.ManagementNamespace(),\n\t\tpingPayload.NamespaceHash,\n\t\tgaia.PingProbeTypeResponse,\n\t\tpayloadSize,\n\t\tpkt.PolicyID,\n\t\tpkt.Action,\n\t\tpingPayload.ApplicationListening,\n\t\tcollector.EndPointTypePU,\n\t\ttcpConn.PingConfig.SeqNum(),\n\t\tcontroller,\n\t\ttcpConn.PingConfig.PingReport().TargetTCPNetworks,\n\t\ttcpConn.PingConfig.PingReport().ExcludedNetworks,\n\t\ttcpConn.PingConfig.PingReport().Claims,\n\t\ttcpConn.PingConfig.PingReport().ACLPolicyID,\n\t\ttcpConn.PingConfig.PingReport().ACLPolicyAction,\n\t\tfalse,\n\t\tpingErr,\n\t)\n}\n\n// sendExtResponseRecvReport sends a report on synack from ext net.\nfunc (d *Datapath) sendExtResponseRecvReport(\n\tcontext *pucontext.PUContext,\n\trtt string,\n\tpkt *policy.FlowPolicy,\n\tpayloadSize int,\n\ttcpConn *connection.TCPConnection,\n) 
{\n\td.sendPingReport(\n\t\ttcpConn.PingConfig.PingID(),\n\t\ttcpConn.PingConfig.IterationID(),\n\t\td.agentVersion.String(),\n\t\tflowTuple(\n\t\t\ttpacket.PacketTypeNetwork,\n\t\t\ttcpConn.TCPtuple.SourceAddress,\n\t\t\ttcpConn.TCPtuple.DestinationAddress,\n\t\t\ttcpConn.TCPtuple.SourcePort,\n\t\t\ttcpConn.TCPtuple.DestinationPort,\n\t\t),\n\t\trtt,\n\t\t\"\",\n\t\tcontext.ManagementID(),\n\t\tcontext.ManagementNamespace(),\n\t\t\"\",\n\t\tgaia.PingProbeTypeResponse,\n\t\tpayloadSize,\n\t\tpkt.PolicyID,\n\t\tpkt.Action,\n\t\ttrue,\n\t\tcollector.EndPointTypeExternalIP,\n\t\ttcpConn.PingConfig.SeqNum(),\n\t\t\"\",\n\t\ttcpConn.PingConfig.PingReport().TargetTCPNetworks,\n\t\ttcpConn.PingConfig.PingReport().ExcludedNetworks,\n\t\ttcpConn.PingConfig.PingReport().Claims,\n\t\ttcpConn.PingConfig.PingReport().ACLPolicyID,\n\t\ttcpConn.PingConfig.PingReport().ACLPolicyAction,\n\t\tfalse,\n\t\t\"\",\n\t)\n}\n\nfunc (d *Datapath) sendPingReport(\n\tpingID string,\n\titerationID int,\n\tagentVersion,\n\tfourTuple,\n\trtt,\n\tremoteID,\n\tpuid,\n\tns string,\n\tnsHash string,\n\tptype gaia.PingProbeTypeValue,\n\tpayloadSize int,\n\tpolicyID string,\n\tpolicyAction policy.ActionType,\n\tappListening bool,\n\ttxType collector.EndPointType,\n\tseqNum uint32,\n\tcontroller string,\n\ttn,\n\ten bool,\n\tclaims []string,\n\taclPolicyID string,\n\taclPolicyAction policy.ActionType,\n\tisServer bool,\n\terr string,\n) {\n\n\treport := &collector.PingReport{\n\t\tPingID:               pingID,\n\t\tIterationID:          iterationID,\n\t\tAgentVersion:         agentVersion,\n\t\tFourTuple:            fourTuple,\n\t\tRTT:                  rtt,\n\t\tPayloadSize:          payloadSize,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeReceived,\n\t\tType:                 ptype,\n\t\tPUID:                 puid,\n\t\tRemotePUID:           remoteID,\n\t\tNamespace:            ns,\n\t\tRemoteNamespace:      nsHash,\n\t\tRemoteNamespaceType:  
gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tProtocol:             tpacket.IPProtocolTCP,\n\t\tServiceType:          \"L3\",\n\t\tPolicyID:             policyID,\n\t\tPolicyAction:         policyAction,\n\t\tApplicationListening: appListening,\n\t\tRemoteEndpointType:   txType,\n\t\tSeqNum:               seqNum,\n\t\tRemoteController:     controller,\n\t\tTargetTCPNetworks:    tn,\n\t\tExcludedNetworks:     en,\n\t\tClaims:               claims,\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tACLPolicyID:          aclPolicyID,\n\t\tACLPolicyAction:      aclPolicyAction,\n\t\tIsServer:             isServer,\n\t\tError:                err,\n\t}\n\n\td.collector.CollectPingEvent(report)\n}\n\n// constructTCPPacket constructs a valid tcp packet that can be sent on wire.\nfunc constructTCPPacket(conn PingConn, srcIP, dstIP net.IP, srcPort, dstPort uint16, seqNum, ackNum uint32, flag tcp.Flags, tcpData []byte) ([]byte, error) {\n\n\t// tcp.\n\ttcpPacket := tcp.Make()\n\ttcpPacket.SrcPort = srcPort\n\ttcpPacket.DstPort = dstPort\n\ttcpPacket.Flags = flag\n\ttcpPacket.Seq = seqNum\n\ttcpPacket.Ack = ackNum\n\ttcpPacket.WindowSize = 0xAAAA\n\ttcpPacket.Options = []tcp.Option{\n\t\t{\n\t\t\tType: tcp.MSS,\n\t\t\tLen:  4,\n\t\t\tData: []byte{0x05, 0x8C},\n\t\t},\n\t}\n\ttcpPacket.DataOff = uint8(6) // 5 (header size) + 1 * (4 byte options)\n\n\tif len(tcpData) != 0 {\n\t\ttcpPacket.Options = append(\n\t\t\ttcpPacket.Options,\n\t\t\ttcp.Option{\n\t\t\t\tType: 34, // tfo\n\t\t\t\tLen:  enforcerconstants.TCPAuthenticationOptionBaseLen,\n\t\t\t\tData: make([]byte, 2),\n\t\t\t},\n\t\t)\n\t\ttcpPacket.DataOff += uint8(1) // 6 + 1 * (4 byte options)\n\t}\n\n\t// payload.\n\tpayload := raw.Make()\n\tpayload.Data = tcpData\n\n\t// construct the wire packet\n\tbuf, err := conn.ConstructWirePacket(srcIP, dstIP, tcpPacket, payload)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to encode packet to wire format: %v\", err)\n\t}\n\n\treturn buf, nil\n}\n\n// 
getSrcIP returns the interface ip that can reach the destination.\nfunc getSrcIP(dstIP net.IP) (net.IP, error) {\n\n\troute, err := routing.RouteTo(dstIP)\n\tif err != nil || route == nil {\n\t\treturn nil, fmt.Errorf(\"no route found for destination %s: %v\", dstIP.String(), err)\n\t}\n\n\tip, err := route.GetIfaceIPv4Addr()\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to get interface ip address: %v\", err)\n\t}\n\n\treturn ip, nil\n}\n\n// flowTuple returns the tuple based on the stage in format <sip:dip:spt:dpt> or <dip:sip:dpt:spt>\nfunc flowTuple(stage uint64, srcIP, dstIP net.IP, srcPort, dstPort uint16) string {\n\n\tif stage == tpacket.PacketTypeNetwork {\n\t\treturn fmt.Sprintf(\"%s:%s:%s:%s\", dstIP.String(), srcIP.String(), strconv.Itoa(int(dstPort)), strconv.Itoa(int(srcPort)))\n\t}\n\n\treturn fmt.Sprintf(\"%s:%s:%s:%s\", srcIP.String(), dstIP.String(), strconv.Itoa(int(srcPort)), strconv.Itoa(int(dstPort)))\n}\n\n// isAppListeningInPort returns true if the port is in use by an IPv4 TCP application.\n// It immediately closes the listener socket.\n// It also returns the actual error for further scrutiny.\nfunc isAppListeningInPort(port uint16) (bool, error) {\n\n\taddr := fmt.Sprintf(\":%d\", port)\n\tlistener, err := net.Listen(\"tcp4\", addr)\n\tif listener != nil {\n\t\tlistener.Close() // nolint:errcheck\n\t}\n\n\treturn isAddressInUse(err), err\n}\n\n// isAddressInUse returns true for unix.EADDRINUSE or windows.WSAEADDRINUSE errors.\nfunc isAddressInUse(err error) bool {\n\n\topErr, ok := err.(*net.OpError)\n\tif !ok {\n\t\treturn false\n\t}\n\n\tsyscallErr, ok := opErr.Unwrap().(*os.SyscallError)\n\tif !ok {\n\t\treturn false\n\t}\n\n\terrNo, ok := syscallErr.Unwrap().(syscall.Errno)\n\tif !ok {\n\t\treturn false\n\t}\n\n\treturn isAddrInUseErrno(errNo)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/ping_test.go",
    "content": "// +build linux\n\npackage nfqdatapath\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"sync/atomic\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/ghedo/go.pkt/layers\"\n\tgpacket \"github.com/ghedo/go.pkt/packet\"\n\t\"github.com/ghedo/go.pkt/packet/ipv4\"\n\t\"github.com/ghedo/go.pkt/packet/tcp\"\n\t\"github.com/golang/mock/gomock\"\n\t\"github.com/mitchellh/copystructure\"\n\t\"github.com/stretchr/testify/require\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector/mockcollector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\tenforcerconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/tokenaccessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/tokenaccessor/mocktokenaccessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/gaia\"\n\t\"go.aporeto.io/underwater/core/tagutils\"\n)\n\nvar (\n\ttestPU1CtxID  = \"pu1abc\"\n\ttestPU1NS     = \"/ns1\"\n\ttestPU1NSHash = \"\"\n\ttestPU2CtxID  = \"pu2abc\"\n\ttestPU2NS     = \"/ns2\"\n\ttestPU2NSHash = \"\"\n\tsrcCtrl       = \"srcctrl\"\n\tdstCtrl       = \"dstctrl\"\n\n\tsip                 = net.ParseIP(\"192.168.100.1\").To4()\n\tdip                 = net.ParseIP(\"172.17.0.2\").To4()\n\tspt          uint16 = 2020\n\tdpt          uint16 = 80\n\tseqnum       uint32 = 123456\n\tduration            = \"20ms\"\n\tc                   = &fakeConn{}\n\tappListening int32  = 0\n)\n\nfunc switchAppListening(enable bool) {\n\n\tvar state int32\n\tif enable {\n\t\tstate = 
1\n\t}\n\n\tatomic.StoreInt32(&appListening, state)\n}\n\nfunc init() {\n\tif hash, err := tagutils.Hash(testPU1NS); err == nil {\n\t\ttestPU1NSHash = hash\n\t}\n\n\tif hash, err := tagutils.Hash(testPU2NS); err == nil {\n\t\ttestPU2NSHash = hash\n\t}\n\n\tsrcip = func(_ net.IP) (net.IP, error) {\n\t\treturn sip, nil\n\t}\n\tdial = func(_, _ net.IP) (PingConn, error) {\n\t\treturn c, nil\n\t}\n\tbind = func(tcpConn *connection.TCPConnection) (uint16, error) {\n\t\ttcpConn.PingConfig.SetSocketFd(8)\n\t\treturn spt, nil\n\t}\n\tclose = func(tcpConn *connection.TCPConnection) error {\n\t\ttcpConn.PingConfig.SetSocketClosed(true)\n\t\treturn nil\n\t}\n\trandUint32 = func() uint32 {\n\t\treturn seqnum\n\t}\n\tsince = func(_ time.Time) time.Duration {\n\t\td, _ := time.ParseDuration(duration)\n\t\treturn d\n\t}\n\n\tisAppListening = func(port uint16) (bool, error) {\n\t\tif atomic.LoadInt32(&appListening) == 1 {\n\t\t\treturn true, nil\n\t\t}\n\t\treturn false, nil\n\t}\n}\n\nfunc setupDatapathAndPUs(ctrl *gomock.Controller, collector collector.EventCollector, tokenAccessor tokenaccessor.TokenAccessor) (*Datapath, *fakeConn, error) {\n\n\tdp := setupDatapath(ctrl, collector)\n\tdp.tokenAccessor = tokenAccessor\n\n\tpu1info := policy.NewPUInfo(testPU1CtxID, testPU1NS, common.ContainerPU)\n\tpu1info.Policy = policy.NewPUPolicy(\n\t\ttestPU1CtxID,\n\t\ttestPU1NS,\n\t\tpolicy.Police,\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\tpolicy.NewTagStoreFromSlice([]string{\"x=y\"}),\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\t0,\n\t\t0,\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\tpolicy.EnforcerMapping,\n\t\tpolicy.Reject|policy.Log,\n\t\tpolicy.Reject|policy.Log,\n\t)\n\n\tif err := dp.Enforce(context.Background(), testPU1CtxID, pu1info); err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tpu2info := policy.NewPUInfo(testPU2CtxID, testPU2NS, common.ContainerPU)\n\tpu2info.Policy = 
policy.NewPUPolicy(\n\t\ttestPU2CtxID,\n\t\ttestPU2NS,\n\t\tpolicy.Police,\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\tpolicy.NewTagStoreFromSlice([]string{\"a=b\"}),\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\t0,\n\t\t0,\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\tpolicy.EnforcerMapping,\n\t\tpolicy.Reject|policy.Log,\n\t\tpolicy.Reject|policy.Log,\n\t)\n\n\tif err := dp.Enforce(context.Background(), testPU2CtxID, pu2info); err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\treturn dp, c, nil\n}\n\nfunc wrapIP(d []byte, swap bool, changeSeqNum bool, flags tcp.Flags) ([]byte, error) {\n\n\tipPacket := ipv4.Make()\n\tipPacket.SrcAddr = sip\n\tipPacket.DstAddr = dip\n\tif swap {\n\t\tipPacket.SrcAddr = dip\n\t\tipPacket.DstAddr = sip\n\t}\n\tipPacket.Protocol = ipv4.TCP\n\n\tp, err := layers.UnpackAll(d, gpacket.TCP)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\ttcpPacket, ok := p.(*tcp.Packet)\n\tif !ok {\n\t\treturn nil, errors.New(\"not a tcp packet\")\n\t}\n\n\tif flags != 0 {\n\t\tif flags != tcpPacket.Flags {\n\t\t\treturn nil, fmt.Errorf(\"Expected: %s, Actual: %s\", flags.String(), tcpPacket.Flags.String())\n\t\t}\n\t}\n\n\tif swap {\n\t\ttcpPacket.SrcPort = dpt\n\t\ttcpPacket.DstPort = spt\n\t}\n\n\tif changeSeqNum {\n\t\ttcpPacket.Seq = 111111\n\t}\n\n\t// pack the layers together.\n\tbuf, err := layers.Pack(ipPacket, tcpPacket)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn buf, nil\n}\n\nfunc Test_ValidPing(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tmockCollector := mockcollector.NewMockEventCollector(ctrl)\n\tmockTokenAccessor := mocktokenaccessor.NewMockTokenAccessor(ctrl)\n\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), nil)\n\tdp, conn, err := setupDatapathAndPUs(ctrl, mockCollector, 
mockTokenAccessor)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, dp)\n\n\titem, err := dp.puFromContextID.Get(testPU1CtxID)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, item)\n\tpuctx1 := item.(*pucontext.PUContext)\n\n\titem, err = dp.puFromContextID.Get(testPU2CtxID)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, item)\n\tpuctx2 := item.(*pucontext.PUContext)\n\n\tpc := &policy.PingConfig{\n\t\tID:                \"5e9e38ad39483e772044095e\",\n\t\tIP:                dip,\n\t\tPort:              dpt,\n\t\tIterations:        1,\n\t\tTargetTCPNetworks: true,\n\t\tExcludedNetworks:  false,\n\t}\n\n\tpingPayload := &policy.PingPayload{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tApplicationListening: false,\n\t\tNamespaceHash:        testPU1NSHash,\n\t}\n\n\tpuctx1.UpdateApplicationACLs( // nolint: errcheck\n\t\tpolicy.IPRuleList{\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"0.0.0.0/0\"},\n\t\t\t\tPorts:     []string{\"1:65535\"},\n\t\t\t\tProtocols: []string{\"6\", \"17\"},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"extnetidabc\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t)\n\n\t//\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\terr = dp.Ping(context.Background(), testPU1CtxID, pc)\n\trequire.Nil(t, err)\n\n\t// NetSyn\n\tipPacket, err := wrapIP(conn.data(), false, false, tcp.Syn)\n\trequire.Nil(t, err)\n\n\tp, err := packet.New(packet.PacketTypeNetwork, ipPacket, \"0\", false)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, p)\n\tpayloadSize := len(p.ReadTCPData())\n\ttcpConn := connection.NewTCPConnection(puctx2, p)\n\ttcpConn.SourceController = srcCtrl\n\tpkt := &policy.FlowPolicy{Action: policy.Accept, PolicyID: \"abc\"}\n\n\tts := policy.NewTagStore()\n\tts.AppendKeyValue(enforcerconstants.TransmitterLabel, testPU1CtxID)\n\tclaims := &tokens.ConnectionClaims{\n\t\tP: pingPayload,\n\t\tT: 
ts,\n\t}\n\n\tpr := &collector.PingReport{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tType:                 gaia.PingProbeTypeRequest,\n\t\tPUID:                 testPU2CtxID,\n\t\tRemotePUID:           testPU1CtxID,\n\t\tNamespace:            testPU2NS,\n\t\tFourTuple:            \"192.168.100.1:172.17.0.2:2020:80\",\n\t\tRTT:                  \"\",\n\t\tProtocol:             6,\n\t\tServiceType:          \"L3\",\n\t\tPayloadSize:          payloadSize,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeReceived,\n\t\tPolicyID:             \"abc\",\n\t\tPolicyAction:         policy.Accept,\n\t\tAgentVersion:         \"0.0.0\",\n\t\tApplicationListening: false,\n\t\tRemoteEndpointType:   collector.EndPointTypePU,\n\t\tSeqNum:               seqnum,\n\t\tTargetTCPNetworks:    true,\n\t\tExcludedNetworks:     false,\n\t\tClaims:               []string{\"a=b\"},\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tRemoteNamespace:      testPU1NSHash,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tACLPolicyID:          \"\",\n\t\tACLPolicyAction:      policy.ActionType(0),\n\t\tRemoteController:     srcCtrl,\n\t\tIsServer:             true,\n\t}\n\n\tmockCollector.EXPECT().CollectPingEvent(pr).Times(1)\n\tcopiedData, err := copystructure.Copy(pingPayload)\n\tsynAckPayload := copiedData.(*policy.PingPayload)\n\tsynAckPayload.NamespaceHash = testPU2NSHash\n\trequire.Nil(t, err)\n\tmockTokenAccessor.EXPECT().CreateSynAckPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\toldsynAckDelay := synAckDelay\n\tsynAckDelay = 1 * time.Second\n\tdefer func() {\n\t\tsynAckDelay = oldsynAckDelay\n\t}()\n\n\terr = dp.processPingNetSynPacket(puctx2, tcpConn, p, payloadSize, pkt, claims)\n\trequire.Equal(t, errDropPingNetSyn, err)\n\n\ttime.Sleep(2 * time.Second)\n\n\t// NetSynAck\n\tipPacket, err = 
wrapIP(conn.data(), true, false, tcp.Syn|tcp.Ack)\n\trequire.Nil(t, err)\n\n\tp, err = packet.New(packet.PacketTypeNetwork, ipPacket, \"0\", false)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, p)\n\n\titem1, exists := dp.tcpClient.Get(p.L4ReverseFlowHash())\n\tif !exists {\n\t\tt.Fail()\n\t}\n\n\ttcpConn = item1\n\ttcpConn.DestinationController = dstCtrl\n\n\tpkt = &policy.FlowPolicy{Action: policy.Accept, PolicyID: \"abc\"}\n\n\tts = policy.NewTagStore()\n\tts.AppendKeyValue(enforcerconstants.TransmitterLabel, testPU2CtxID)\n\tclaims = &tokens.ConnectionClaims{\n\t\tP: pingPayload,\n\t\tT: ts,\n\t}\n\n\tpr = &collector.PingReport{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tType:                 gaia.PingProbeTypeResponse,\n\t\tPUID:                 testPU1CtxID,\n\t\tRemotePUID:           testPU2CtxID,\n\t\tNamespace:            testPU1NS,\n\t\tFourTuple:            \"172.17.0.2:192.168.100.1:80:2020\",\n\t\tRTT:                  duration,\n\t\tProtocol:             6,\n\t\tServiceType:          \"L3\",\n\t\tPayloadSize:          payloadSize,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeReceived,\n\t\tPolicyID:             \"abc\",\n\t\tPolicyAction:         policy.Accept,\n\t\tAgentVersion:         \"0.0.0\",\n\t\tApplicationListening: false,\n\t\tRemoteEndpointType:   collector.EndPointTypePU,\n\t\tSeqNum:               seqnum,\n\t\tTargetTCPNetworks:    true,\n\t\tExcludedNetworks:     false,\n\t\tClaims:               []string{\"x=y\"},\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tRemoteNamespace:      testPU2NSHash,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tACLPolicyID:          \"extnetidabc\",\n\t\tACLPolicyAction:      policy.Accept,\n\t\tRemoteController:     dstCtrl,\n\t}\n\n\tmockCollector.EXPECT().CollectPingEvent(pr).Times(1)\n\n\tclaims.P.NamespaceHash = testPU2NSHash\n\terr = dp.processPingNetSynAckPacket(puctx1, tcpConn, p, payloadSize, pkt, claims, 
false)\n\trequire.Equal(t, errDropPingNetSynAck, err)\n}\n\nfunc Test_ValidPingAppListening(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer func() {\n\t\tswitchAppListening(false)\n\t\tctrl.Finish()\n\t}()\n\n\tmockCollector := mockcollector.NewMockEventCollector(ctrl)\n\tmockTokenAccessor := mocktokenaccessor.NewMockTokenAccessor(ctrl)\n\n\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), nil)\n\n\tdp, conn, err := setupDatapathAndPUs(ctrl, mockCollector, mockTokenAccessor)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, dp)\n\n\titem, err := dp.puFromContextID.Get(testPU1CtxID)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, item)\n\tpuctx1 := item.(*pucontext.PUContext)\n\n\titem, err = dp.puFromContextID.Get(testPU2CtxID)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, item)\n\tpuctx2 := item.(*pucontext.PUContext)\n\n\tpc := &policy.PingConfig{\n\t\tID:                \"5e9e38ad39483e772044095e\",\n\t\tIP:                dip,\n\t\tPort:              dpt,\n\t\tIterations:        1,\n\t\tTargetTCPNetworks: true,\n\t\tExcludedNetworks:  false,\n\t}\n\n\tpingPayload := &policy.PingPayload{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tApplicationListening: false,\n\t\tNamespaceHash:        testPU1NSHash,\n\t}\n\n\tpuctx1.UpdateApplicationACLs( // nolint: errcheck\n\t\tpolicy.IPRuleList{\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"0.0.0.0/0\"},\n\t\t\t\tPorts:     []string{\"1:65535\"},\n\t\t\t\tProtocols: []string{\"6\", \"17\"},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"extnetidabc\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t)\n\n\terr = dp.Ping(context.Background(), testPU1CtxID, pc)\n\trequire.Nil(t, err)\n\n\t// NetSyn\n\tipPacket, err := wrapIP(conn.data(), false, 
false, tcp.Syn)\n\trequire.Nil(t, err)\n\n\tp, err := packet.New(packet.PacketTypeNetwork, ipPacket, \"0\", false)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, p)\n\tpayloadSize := len(p.ReadTCPData())\n\ttcpConn := connection.NewTCPConnection(puctx2, p)\n\ttcpConn.SourceController = srcCtrl\n\tpkt := &policy.FlowPolicy{Action: policy.Accept, PolicyID: \"abc\"}\n\n\tts := policy.NewTagStore()\n\tts.AppendKeyValue(enforcerconstants.TransmitterLabel, testPU1CtxID)\n\tclaims := &tokens.ConnectionClaims{\n\t\tP: pingPayload,\n\t\tT: ts,\n\t}\n\n\tpr := &collector.PingReport{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tType:                 gaia.PingProbeTypeRequest,\n\t\tPUID:                 testPU2CtxID,\n\t\tRemotePUID:           testPU1CtxID,\n\t\tNamespace:            testPU2NS,\n\t\tFourTuple:            \"192.168.100.1:172.17.0.2:2020:80\",\n\t\tRTT:                  \"\",\n\t\tProtocol:             6,\n\t\tServiceType:          \"L3\",\n\t\tPayloadSize:          payloadSize,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeReceived,\n\t\tPolicyID:             \"abc\",\n\t\tPolicyAction:         policy.Accept,\n\t\tAgentVersion:         \"0.0.0\",\n\t\tApplicationListening: false,\n\t\tRemoteEndpointType:   collector.EndPointTypePU,\n\t\tSeqNum:               seqnum,\n\t\tTargetTCPNetworks:    true,\n\t\tExcludedNetworks:     false,\n\t\tClaims:               []string{\"a=b\"},\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tRemoteNamespace:      testPU1NSHash,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tACLPolicyID:          \"\",\n\t\tACLPolicyAction:      policy.ActionType(0),\n\t\tRemoteController:     srcCtrl,\n\t\tIsServer:             true,\n\t}\n\n\tmockCollector.EXPECT().CollectPingEvent(pr).Times(1)\n\n\toldsynAckDelay := synAckDelay\n\tsynAckDelay = 1 * time.Second\n\tdefer func() {\n\t\tsynAckDelay = oldsynAckDelay\n\t}()\n\n\tswitchAppListening(true)\n\terr = 
dp.processPingNetSynPacket(puctx2, tcpConn, p, payloadSize, pkt, claims)\n\trequire.Nil(t, err)\n\n\ttcpConn.PingConfig.SetApplicationListening(true)\n\n\ttime.Sleep(2 * time.Second)\n\n\t// NetSynAck\n\tipPacket, err = wrapIP(conn.data(), true, false, 0)\n\trequire.Nil(t, err)\n\n\tp, err = packet.New(packet.PacketTypeNetwork, ipPacket, \"0\", false)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, p)\n\n\titem1, exists := dp.tcpClient.Get(p.L4ReverseFlowHash())\n\tif !exists {\n\t\tt.Fail()\n\t}\n\n\ttcpConn = item1\n\ttcpConn.DestinationController = dstCtrl\n\n\tpkt = &policy.FlowPolicy{Action: policy.Accept, PolicyID: \"abc\"}\n\n\tts = policy.NewTagStore()\n\tts.AppendKeyValue(enforcerconstants.TransmitterLabel, testPU2CtxID)\n\tclaims = &tokens.ConnectionClaims{\n\t\tP: pingPayload,\n\t\tT: ts,\n\t}\n\n\tpr = &collector.PingReport{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tType:                 gaia.PingProbeTypeResponse,\n\t\tPUID:                 testPU1CtxID,\n\t\tRemotePUID:           testPU2CtxID,\n\t\tNamespace:            testPU1NS,\n\t\tFourTuple:            \"172.17.0.2:192.168.100.1:80:2020\",\n\t\tRTT:                  duration,\n\t\tProtocol:             6,\n\t\tServiceType:          \"L3\",\n\t\tPayloadSize:          payloadSize,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeReceived,\n\t\tPolicyID:             \"abc\",\n\t\tPolicyAction:         policy.Accept,\n\t\tAgentVersion:         \"0.0.0\",\n\t\tApplicationListening: true,\n\t\tRemoteEndpointType:   collector.EndPointTypePU,\n\t\tSeqNum:               seqnum,\n\t\tTargetTCPNetworks:    true,\n\t\tExcludedNetworks:     false,\n\t\tClaims:               []string{\"x=y\"},\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tRemoteNamespace:      testPU2NSHash,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tACLPolicyID:          \"extnetidabc\",\n\t\tACLPolicyAction:      policy.Accept,\n\t\tRemoteController:     
dstCtrl,\n\t}\n\n\tmockCollector.EXPECT().CollectPingEvent(pr).Times(1)\n\n\tclaims.P.ApplicationListening = true\n\tclaims.P.NamespaceHash = testPU2NSHash\n\terr = dp.processPingNetSynAckPacket(puctx1, tcpConn, p, payloadSize, pkt, claims, false)\n\trequire.Equal(t, errDropPingNetSynAck, err)\n}\n\nfunc Test_ValidPingAppListeningNoReply(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer func() {\n\t\tswitchAppListening(false)\n\t\tctrl.Finish()\n\t}()\n\n\tmockCollector := mockcollector.NewMockEventCollector(ctrl)\n\tmockTokenAccessor := mocktokenaccessor.NewMockTokenAccessor(ctrl)\n\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(2).Return([]byte(\"token\"), nil)\n\n\tdp, conn, err := setupDatapathAndPUs(ctrl, mockCollector, mockTokenAccessor)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, dp)\n\n\titem, err := dp.puFromContextID.Get(testPU1CtxID)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, item)\n\tpuctx1 := item.(*pucontext.PUContext)\n\n\titem, err = dp.puFromContextID.Get(testPU2CtxID)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, item)\n\tpuctx2 := item.(*pucontext.PUContext)\n\n\tpc := &policy.PingConfig{\n\t\tID:                \"5e9e38ad39483e772044095e\",\n\t\tIP:                dip,\n\t\tPort:              dpt,\n\t\tIterations:        1,\n\t\tTargetTCPNetworks: true,\n\t\tExcludedNetworks:  false,\n\t}\n\n\tpingPayload := &policy.PingPayload{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tApplicationListening: false,\n\t\tNamespaceHash:        testPU1NSHash,\n\t}\n\n\tpuctx1.UpdateApplicationACLs( // nolint: errcheck\n\t\tpolicy.IPRuleList{\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"0.0.0.0/0\"},\n\t\t\t\tPorts:     []string{\"1:65535\"},\n\t\t\t\tProtocols: []string{\"6\", \"17\"},\n\t\t\t\tPolicy: 
&policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"extnetidabc\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t)\n\n\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\terr = dp.Ping(context.Background(), testPU1CtxID, pc)\n\trequire.Nil(t, err)\n\n\t// NetSyn\n\tipPacket, err := wrapIP(conn.data(), false, false, tcp.Syn)\n\trequire.Nil(t, err)\n\n\tp, err := packet.New(packet.PacketTypeNetwork, ipPacket, \"0\", false)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, p)\n\tpayloadSize := len(p.ReadTCPData())\n\ttcpConn := connection.NewTCPConnection(puctx2, p)\n\ttcpConn.SourceController = srcCtrl\n\tpkt := &policy.FlowPolicy{Action: policy.Accept, PolicyID: \"abc\"}\n\n\tts := policy.NewTagStore()\n\tts.AppendKeyValue(enforcerconstants.TransmitterLabel, testPU1CtxID)\n\tclaims := &tokens.ConnectionClaims{\n\t\tP: pingPayload,\n\t\tT: ts,\n\t}\n\n\tpr := &collector.PingReport{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tType:                 gaia.PingProbeTypeRequest,\n\t\tPUID:                 testPU2CtxID,\n\t\tRemotePUID:           testPU1CtxID,\n\t\tNamespace:            testPU2NS,\n\t\tFourTuple:            \"192.168.100.1:172.17.0.2:2020:80\",\n\t\tRTT:                  \"\",\n\t\tProtocol:             6,\n\t\tServiceType:          \"L3\",\n\t\tPayloadSize:          payloadSize,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeReceived,\n\t\tPolicyID:             \"abc\",\n\t\tPolicyAction:         policy.Accept,\n\t\tAgentVersion:         \"0.0.0\",\n\t\tApplicationListening: false,\n\t\tRemoteEndpointType:   collector.EndPointTypePU,\n\t\tSeqNum:               seqnum,\n\t\tTargetTCPNetworks:    true,\n\t\tExcludedNetworks:     false,\n\t\tClaims:               []string{\"a=b\"},\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tRemoteNamespace:      
testPU1NSHash,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tACLPolicyID:          \"\",\n\t\tACLPolicyAction:      policy.ActionType(0),\n\t\tRemoteController:     srcCtrl,\n\t\tIsServer:             true,\n\t}\n\n\tmockCollector.EXPECT().CollectPingEvent(pr).Times(1)\n\tcopiedData, err := copystructure.Copy(pingPayload)\n\tsynAckPayload := copiedData.(*policy.PingPayload)\n\tsynAckPayload.NamespaceHash = testPU2NSHash\n\trequire.Nil(t, err)\n\tmockTokenAccessor.EXPECT().CreateSynAckPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\toldsynAckDelay := synAckDelay\n\tsynAckDelay = 1 * time.Second\n\tdefer func() {\n\t\tsynAckDelay = oldsynAckDelay\n\t}()\n\n\tswitchAppListening(true)\n\terr = dp.processPingNetSynPacket(puctx2, tcpConn, p, payloadSize, pkt, claims)\n\trequire.Nil(t, err)\n\n\ttime.Sleep(2 * time.Second)\n\n\t// NetSynAck\n\tipPacket, err = wrapIP(conn.data(), true, false, tcp.Syn|tcp.Ack)\n\trequire.Nil(t, err)\n\n\tp, err = packet.New(packet.PacketTypeNetwork, ipPacket, \"0\", false)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, p)\n\n\titem1, exists := dp.tcpClient.Get(p.L4ReverseFlowHash())\n\tif !exists {\n\t\tt.Fail()\n\t}\n\n\ttcpConn = item1\n\ttcpConn.DestinationController = dstCtrl\n\n\tpkt = &policy.FlowPolicy{Action: policy.Accept, PolicyID: \"abc\"}\n\n\tts = policy.NewTagStore()\n\tts.AppendKeyValue(enforcerconstants.TransmitterLabel, testPU2CtxID)\n\tclaims = &tokens.ConnectionClaims{\n\t\tP: pingPayload,\n\t\tT: ts,\n\t}\n\n\tpr = &collector.PingReport{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tType:                 gaia.PingProbeTypeResponse,\n\t\tPUID:                 testPU1CtxID,\n\t\tRemotePUID:           testPU2CtxID,\n\t\tNamespace:            testPU1NS,\n\t\tFourTuple:            \"172.17.0.2:192.168.100.1:80:2020\",\n\t\tRTT:                  duration,\n\t\tProtocol:          
   6,\n\t\tServiceType:          \"L3\",\n\t\tPayloadSize:          payloadSize,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeReceived,\n\t\tPolicyID:             \"abc\",\n\t\tPolicyAction:         policy.Accept,\n\t\tAgentVersion:         \"0.0.0\",\n\t\tApplicationListening: false,\n\t\tRemoteEndpointType:   collector.EndPointTypePU,\n\t\tSeqNum:               seqnum,\n\t\tTargetTCPNetworks:    true,\n\t\tExcludedNetworks:     false,\n\t\tClaims:               []string{\"x=y\"},\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tRemoteNamespace:      testPU2NSHash,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tACLPolicyID:          \"extnetidabc\",\n\t\tACLPolicyAction:      policy.Accept,\n\t\tRemoteController:     dstCtrl,\n\t}\n\n\tmockCollector.EXPECT().CollectPingEvent(pr).Times(1)\n\n\tclaims.P.NamespaceHash = testPU2NSHash\n\terr = dp.processPingNetSynAckPacket(puctx1, tcpConn, p, payloadSize, pkt, claims, false)\n\trequire.Equal(t, errDropPingNetSynAck, err)\n}\n\nfunc Test_ValidPingReject(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tmockCollector := mockcollector.NewMockEventCollector(ctrl)\n\tmockTokenAccessor := mocktokenaccessor.NewMockTokenAccessor(ctrl)\n\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), nil)\n\n\tdp, conn, err := setupDatapathAndPUs(ctrl, mockCollector, mockTokenAccessor)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, dp)\n\n\titem, err := dp.puFromContextID.Get(testPU1CtxID)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, item)\n\tpuctx1 := item.(*pucontext.PUContext)\n\n\titem, err = dp.puFromContextID.Get(testPU2CtxID)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, item)\n\tpuctx2 := item.(*pucontext.PUContext)\n\n\tpc := 
&policy.PingConfig{\n\t\tID:                \"5e9e38ad39483e772044095e\",\n\t\tIP:                dip,\n\t\tPort:              dpt,\n\t\tIterations:        1,\n\t\tTargetTCPNetworks: true,\n\t\tExcludedNetworks:  false,\n\t}\n\n\tpingPayload := &policy.PingPayload{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tApplicationListening: false,\n\t\tNamespaceHash:        testPU1NSHash,\n\t}\n\n\tpuctx1.UpdateApplicationACLs( // nolint: errcheck\n\t\tpolicy.IPRuleList{\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"0.0.0.0/0\"},\n\t\t\t\tPorts:     []string{\"1:65535\"},\n\t\t\t\tProtocols: []string{\"6\", \"17\"},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t\tPolicyID: \"extnetidabc\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t)\n\n\terr = dp.Ping(context.Background(), testPU1CtxID, pc)\n\trequire.Nil(t, err)\n\n\t// NetSyn\n\tipPacket, err := wrapIP(conn.data(), false, false, tcp.Syn)\n\trequire.Nil(t, err)\n\n\tp, err := packet.New(packet.PacketTypeNetwork, ipPacket, \"0\", false)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, p)\n\tpayloadSize := len(p.ReadTCPData())\n\ttcpConn := connection.NewTCPConnection(puctx2, p)\n\ttcpConn.SourceController = srcCtrl\n\tpkt := &policy.FlowPolicy{Action: policy.Reject, PolicyID: \"xyz\"}\n\n\tts := policy.NewTagStore()\n\tts.AppendKeyValue(enforcerconstants.TransmitterLabel, testPU1CtxID)\n\tclaims := &tokens.ConnectionClaims{\n\t\tP: pingPayload,\n\t\tT: ts,\n\t}\n\n\tpr := &collector.PingReport{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tType:                 gaia.PingProbeTypeRequest,\n\t\tPUID:                 testPU2CtxID,\n\t\tRemotePUID:           testPU1CtxID,\n\t\tNamespace:            testPU2NS,\n\t\tFourTuple:            \"192.168.100.1:172.17.0.2:2020:80\",\n\t\tRTT:                  \"\",\n\t\tProtocol:             6,\n\t\tServiceType:          \"L3\",\n\t\tPayloadSize:          payloadSize,\n\t\tPayloadSizeType:      
gaia.PingProbePayloadSizeTypeReceived,\n\t\tPolicyID:             \"xyz\",\n\t\tPolicyAction:         policy.Reject,\n\t\tAgentVersion:         \"0.0.0\",\n\t\tApplicationListening: false,\n\t\tRemoteEndpointType:   collector.EndPointTypePU,\n\t\tSeqNum:               seqnum,\n\t\tTargetTCPNetworks:    true,\n\t\tExcludedNetworks:     false,\n\t\tClaims:               []string{\"a=b\"},\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tRemoteNamespace:      testPU1NSHash,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tACLPolicyID:          \"\",\n\t\tACLPolicyAction:      policy.ActionType(0),\n\t\tRemoteController:     srcCtrl,\n\t\tIsServer:             true,\n\t\tError:                collector.PolicyDrop,\n\t}\n\n\tmockCollector.EXPECT().CollectPingEvent(pr).Times(1)\n\tcopiedData, err := copystructure.Copy(pingPayload)\n\tsynAckPayload := copiedData.(*policy.PingPayload)\n\tsynAckPayload.NamespaceHash = testPU2NSHash\n\trequire.Nil(t, err)\n\tmockTokenAccessor.EXPECT().CreateSynAckPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\toldsynAckDelay := synAckDelay\n\tsynAckDelay = 1 * time.Second\n\tdefer func() {\n\t\tsynAckDelay = oldsynAckDelay\n\t}()\n\n\terr = dp.processPingNetSynPacket(puctx2, tcpConn, p, payloadSize, pkt, claims)\n\trequire.Equal(t, errDropPingNetSyn, err)\n\n\ttime.Sleep(2 * time.Second)\n\n\t// NetSynAck\n\tipPacket, err = wrapIP(conn.data(), true, false, tcp.Syn|tcp.Ack)\n\trequire.Nil(t, err)\n\n\tp, err = packet.New(packet.PacketTypeNetwork, ipPacket, \"0\", false)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, p)\n\n\titem1, exists := dp.tcpClient.Get(p.L4ReverseFlowHash())\n\tif !exists {\n\t\tt.Fail()\n\t}\n\ttcpConn = item1\n\ttcpConn.DestinationController = dstCtrl\n\n\tpkt = &policy.FlowPolicy{Action: policy.Accept, PolicyID: \"abc\"}\n\n\tts = 
policy.NewTagStore()\n\tts.AppendKeyValue(enforcerconstants.TransmitterLabel, testPU2CtxID)\n\tclaims = &tokens.ConnectionClaims{\n\t\tP: pingPayload,\n\t\tT: ts,\n\t}\n\n\tpr = &collector.PingReport{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tType:                 gaia.PingProbeTypeResponse,\n\t\tPUID:                 testPU1CtxID,\n\t\tRemotePUID:           testPU2CtxID,\n\t\tNamespace:            testPU1NS,\n\t\tFourTuple:            \"172.17.0.2:192.168.100.1:80:2020\",\n\t\tRTT:                  duration,\n\t\tProtocol:             6,\n\t\tServiceType:          \"L3\",\n\t\tPayloadSize:          payloadSize,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeReceived,\n\t\tPolicyID:             \"abc\",\n\t\tPolicyAction:         policy.Accept,\n\t\tAgentVersion:         \"0.0.0\",\n\t\tApplicationListening: false,\n\t\tRemoteEndpointType:   collector.EndPointTypePU,\n\t\tSeqNum:               seqnum,\n\t\tTargetTCPNetworks:    true,\n\t\tExcludedNetworks:     false,\n\t\tClaims:               []string{\"x=y\"},\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tRemoteNamespace:      testPU2NSHash,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tACLPolicyID:          \"extnetidabc\",\n\t\tACLPolicyAction:      policy.Accept,\n\t\tRemoteController:     dstCtrl,\n\t}\n\n\tmockCollector.EXPECT().CollectPingEvent(pr).Times(1)\n\n\tclaims.P.NamespaceHash = testPU2NSHash\n\terr = dp.processPingNetSynAckPacket(puctx1, tcpConn, p, payloadSize, pkt, claims, false)\n\trequire.Equal(t, errDropPingNetSynAck, err)\n}\n\nfunc Test_ValidPingUnequalSeqNum(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tmockCollector := mockcollector.NewMockEventCollector(ctrl)\n\tmockTokenAccessor := mocktokenaccessor.NewMockTokenAccessor(ctrl)\n\n\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), 
nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), nil)\n\n\tdp, conn, err := setupDatapathAndPUs(ctrl, mockCollector, mockTokenAccessor)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, dp)\n\n\titem, err := dp.puFromContextID.Get(testPU1CtxID)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, item)\n\tpuctx1 := item.(*pucontext.PUContext)\n\n\titem, err = dp.puFromContextID.Get(testPU2CtxID)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, item)\n\tpuctx2 := item.(*pucontext.PUContext)\n\n\tpc := &policy.PingConfig{\n\t\tID:                \"5e9e38ad39483e772044095e\",\n\t\tIP:                dip,\n\t\tPort:              dpt,\n\t\tIterations:        1,\n\t\tTargetTCPNetworks: true,\n\t\tExcludedNetworks:  false,\n\t}\n\n\tpingPayload := &policy.PingPayload{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tApplicationListening: false,\n\t\tNamespaceHash:        testPU1NSHash,\n\t}\n\n\terr = dp.Ping(context.Background(), testPU1CtxID, pc)\n\trequire.Nil(t, err)\n\n\t// NetSyn\n\tipPacket, err := wrapIP(conn.data(), false, true, tcp.Syn)\n\trequire.Nil(t, err)\n\n\tp, err := packet.New(packet.PacketTypeNetwork, ipPacket, \"0\", false)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, p)\n\tpayloadSize := len(p.ReadTCPData())\n\ttcpConn := connection.NewTCPConnection(puctx2, p)\n\tpkt := &policy.FlowPolicy{Action: policy.Accept, PolicyID: \"abc\"}\n\n\tts := policy.NewTagStore()\n\tts.AppendKeyValue(enforcerconstants.TransmitterLabel, testPU1CtxID)\n\tclaims := &tokens.ConnectionClaims{\n\t\tP: pingPayload,\n\t\tT: ts,\n\t}\n\n\tpr := &collector.PingReport{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tType:                 gaia.PingProbeTypeRequest,\n\t\tPUID:                 testPU2CtxID,\n\t\tRemotePUID:           testPU1CtxID,\n\t\tNamespace:            testPU2NS,\n\t\tFourTuple:            
\"192.168.100.1:172.17.0.2:2020:80\",\n\t\tRTT:                  \"\",\n\t\tProtocol:             6,\n\t\tServiceType:          \"L3\",\n\t\tPayloadSize:          payloadSize,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeReceived,\n\t\tPolicyID:             \"abc\",\n\t\tPolicyAction:         policy.Accept,\n\t\tAgentVersion:         \"0.0.0\",\n\t\tApplicationListening: false,\n\t\tRemoteEndpointType:   collector.EndPointTypePU,\n\t\tSeqNum:               111111,\n\t\tTargetTCPNetworks:    true,\n\t\tExcludedNetworks:     false,\n\t\tClaims:               []string{\"a=b\"},\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tRemoteNamespace:      testPU1NSHash,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tACLPolicyID:          \"\",\n\t\tACLPolicyAction:      policy.ActionType(0),\n\t\tIsServer:             true,\n\t}\n\n\tmockCollector.EXPECT().CollectPingEvent(pr).Times(1)\n\tcopiedData, err := copystructure.Copy(pingPayload)\n\tsynAckPayload := copiedData.(*policy.PingPayload)\n\tsynAckPayload.NamespaceHash = testPU2NSHash\n\trequire.Nil(t, err)\n\tmockTokenAccessor.EXPECT().CreateSynAckPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return([]byte(\"token\"), nil)\n\n\toldsynAckDelay := synAckDelay\n\tsynAckDelay = 1 * time.Second\n\tdefer func() {\n\t\tsynAckDelay = oldsynAckDelay\n\t}()\n\n\terr = dp.processPingNetSynPacket(puctx2, tcpConn, p, payloadSize, pkt, claims)\n\trequire.Equal(t, errDropPingNetSyn, err)\n\n\ttime.Sleep(2 * time.Second)\n\n\t// NetSynAck\n\tipPacket, err = wrapIP(conn.data(), true, false, tcp.Syn|tcp.Ack)\n\trequire.Nil(t, err)\n\n\tp, err = packet.New(packet.PacketTypeNetwork, ipPacket, \"0\", false)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, p)\n\tpayloadSize = len(p.ReadTCPData())\n\n\titem1, _ := dp.tcpClient.Get(p.L4ReverseFlowHash())\n\ttcpConn = item1\n\n\tpkt = &policy.FlowPolicy{Action: policy.Reject, 
PolicyID: \"cde\"}\n\n\tts = policy.NewTagStore()\n\tts.AppendKeyValue(enforcerconstants.TransmitterLabel, testPU2CtxID)\n\tclaims = &tokens.ConnectionClaims{\n\t\tP: pingPayload,\n\t\tT: ts,\n\t}\n\n\tpr = &collector.PingReport{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tType:                 gaia.PingProbeTypeResponse,\n\t\tPUID:                 testPU1CtxID,\n\t\tRemotePUID:           testPU2CtxID,\n\t\tNamespace:            testPU1NS,\n\t\tFourTuple:            \"172.17.0.2:192.168.100.1:80:2020\",\n\t\tRTT:                  duration,\n\t\tProtocol:             6,\n\t\tServiceType:          \"L3\",\n\t\tPayloadSize:          payloadSize,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeReceived,\n\t\tPolicyID:             \"cde\",\n\t\tPolicyAction:         policy.Reject,\n\t\tAgentVersion:         \"0.0.0\",\n\t\tApplicationListening: false,\n\t\tRemoteEndpointType:   collector.EndPointTypePU,\n\t\tSeqNum:               seqnum,\n\t\tTargetTCPNetworks:    true,\n\t\tExcludedNetworks:     false,\n\t\tRemoteNamespace:      testPU2NSHash,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tClaims:               []string{\"x=y\"},\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tACLPolicyID:          \"default\",\n\t\tACLPolicyAction:      policy.Reject | policy.Log,\n\t\tError:                collector.PolicyDrop,\n\t}\n\n\tmockCollector.EXPECT().CollectPingEvent(pr).Times(1)\n\tclaims.P.NamespaceHash = testPU2NSHash\n\terr = dp.processPingNetSynAckPacket(puctx1, tcpConn, p, payloadSize, pkt, claims, false)\n\trequire.Equal(t, errDropPingNetSynAck, err)\n}\n\nfunc Test_ValidPingExtNet(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tmockCollector := mockcollector.NewMockEventCollector(ctrl)\n\tmockTokenAccessor := mocktokenaccessor.NewMockTokenAccessor(ctrl)\n\n\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), 
nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), nil)\n\n\tdp, conn, err := setupDatapathAndPUs(ctrl, mockCollector, mockTokenAccessor)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, dp)\n\n\titem, err := dp.puFromContextID.Get(testPU1CtxID)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, item)\n\tpuctx1 := item.(*pucontext.PUContext)\n\n\tpc := &policy.PingConfig{\n\t\tID:                \"5e9e38ad39483e772044095e\",\n\t\tIP:                dip,\n\t\tPort:              dpt,\n\t\tIterations:        1,\n\t\tTargetTCPNetworks: true,\n\t\tExcludedNetworks:  false,\n\t}\n\n\terr = dp.Ping(context.Background(), testPU1CtxID, pc)\n\trequire.Nil(t, err)\n\n\t// NetSynAck\n\tipPacket, err := wrapIP(conn.data(), true, false, tcp.Syn)\n\trequire.Nil(t, err)\n\n\tp, err := packet.New(packet.PacketTypeNetwork, ipPacket, \"0\", false)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, p)\n\tpayloadSize := len(p.ReadTCPData())\n\n\titem1, _ := dp.tcpClient.Get(p.L4ReverseFlowHash())\n\ttcpConn := item1\n\n\tpkt := &policy.FlowPolicy{Action: policy.Accept, PolicyID: \"abc\"}\n\n\tpr := &collector.PingReport{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tType:                 gaia.PingProbeTypeResponse,\n\t\tPUID:                 testPU1CtxID,\n\t\tNamespace:            testPU1NS,\n\t\tFourTuple:            \"172.17.0.2:192.168.100.1:80:2020\",\n\t\tRTT:                  duration,\n\t\tProtocol:             6,\n\t\tServiceType:          \"L3\",\n\t\tPayloadSize:          payloadSize,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeReceived,\n\t\tPolicyID:             \"abc\",\n\t\tPolicyAction:         policy.Accept,\n\t\tAgentVersion:         \"0.0.0\",\n\t\tApplicationListening: true,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tRemoteEndpointType:   collector.EndPointTypeExternalIP,\n\t\tSeqNum:               
seqnum,\n\t\tTargetTCPNetworks:    true,\n\t\tExcludedNetworks:     false,\n\t\tClaims:               []string{\"x=y\"},\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tACLPolicyID:          \"default\",\n\t\tACLPolicyAction:      policy.Reject | policy.Log,\n\t}\n\n\tmockCollector.EXPECT().CollectPingEvent(pr).Times(1)\n\n\t// NOTE: Overriding the default conn timeout of 15s to 3s.\n\toldremoveDelay := removeDelay\n\tremoveDelay = 2 * time.Second\n\tdefer func() {\n\t\tremoveDelay = oldremoveDelay\n\t}()\n\n\ttcpConn.ChangeConnectionTimeout(3 * time.Second)\n\terr = dp.processPingNetSynAckPacket(puctx1, tcpConn, p, payloadSize, pkt, nil, true)\n\trequire.Equal(t, errDropPingNetSynAck, err)\n\n\trequire.Equal(t, connection.TCPSynAckReceived, tcpConn.GetState())\n\n\ttime.Sleep(4 * time.Second)\n\n\t_, exists := dp.tcpClient.Get(p.L4ReverseFlowHash())\n\tif exists {\n\t\tt.Fail()\n\t}\n}\n\nfunc Test_PingRequestTimeout(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tmockCollector := mockcollector.NewMockEventCollector(ctrl)\n\tmockTokenAccessor := mocktokenaccessor.NewMockTokenAccessor(ctrl)\n\n\tmockTokenAccessor.EXPECT().Sign(gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), nil).AnyTimes()\n\tmockTokenAccessor.EXPECT().CreateSynPacketToken(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(3).Return([]byte(\"token\"), nil)\n\n\tdp, _, err := setupDatapathAndPUs(ctrl, mockCollector, mockTokenAccessor)\n\trequire.Nil(t, err)\n\trequire.NotNil(t, dp)\n\n\tpc := &policy.PingConfig{\n\t\tID:                \"5e9e38ad39483e772044095e\",\n\t\tIP:                dip,\n\t\tPort:              dpt,\n\t\tIterations:        1,\n\t\tTargetTCPNetworks: true,\n\t\tExcludedNetworks:  true,\n\t}\n\n\tpr := &collector.PingReport{\n\t\tPingID:               pc.ID,\n\t\tIterationID:          0,\n\t\tType:                 gaia.PingProbeTypeRequest,\n\t\tPUID:                 
testPU1CtxID,\n\t\tNamespace:            testPU1NS,\n\t\tFourTuple:            \"192.168.100.1:172.17.0.2:2020:80\",\n\t\tRTT:                  \"\",\n\t\tProtocol:             6,\n\t\tServiceType:          \"L3\",\n\t\tPayloadSize:          5,\n\t\tPayloadSizeType:      gaia.PingProbePayloadSizeTypeTransmitted,\n\t\tPolicyID:             \"\",\n\t\tPolicyAction:         policy.ActionType(0), // Not a valid action, defaults to \"unknown\"\n\t\tAgentVersion:         \"0.0.0\",\n\t\tApplicationListening: false,\n\t\tSeqNum:               seqnum,\n\t\tRemoteNamespaceType:  gaia.PingProbeRemoteNamespaceTypeHash,\n\t\tTargetTCPNetworks:    true,\n\t\tExcludedNetworks:     true,\n\t\tClaims:               []string{\"x=y\"},\n\t\tClaimsType:           gaia.PingProbeClaimsTypeTransmitted,\n\t\tACLPolicyID:          \"default\",\n\t\tACLPolicyAction:      policy.Reject | policy.Log,\n\t\tError:                policy.ErrExcludedNetworks,\n\t}\n\n\tmockCollector.EXPECT().CollectPingEvent(pr).Times(1)\n\n\toldconnTimeout := connection.DefaultConnectionTimeout\n\tconnection.DefaultConnectionTimeout = 3 * time.Second\n\tdefer func() {\n\t\tconnection.DefaultConnectionTimeout = oldconnTimeout\n\t}()\n\n\terr = dp.Ping(context.Background(), testPU1CtxID, pc)\n\trequire.Nil(t, err)\n\ttime.Sleep(4 * time.Second)\n\n\tlist := dp.tcpClient.Len()\n\tif list != 0 {\n\t\tt.Fail()\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/test_utils.go",
    "content": "package nfqdatapath\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\n\t\"github.com/golang/mock/gomock\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/mocksecrets\"\n)\n\ntype endpointTypeMatcher struct {\n\tx           interface{}\n\tbaseMatcher *myMatcher\n}\n\nfunc (m *endpointTypeMatcher) Matches(x interface{}) bool {\n\tf1 := m.x.(*collector.FlowRecord)\n\tf2 := x.(*collector.FlowRecord)\n\n\tdefaultChecks := f1.Destination.Type == f2.Destination.Type &&\n\t\tf1.Destination.ID == f2.Destination.ID &&\n\t\tf1.Source.Type == f2.Source.Type &&\n\t\tf1.Source.ID == f2.Source.ID\n\n\tif m.baseMatcher != nil {\n\t\treturn defaultChecks && m.baseMatcher.Matches(x)\n\t}\n\n\treturn defaultChecks\n}\n\nfunc (m *endpointTypeMatcher) String() string {\n\n\tout, err := json.Marshal(m.x)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\treturn fmt.Sprintf(\"is equal to %v\", string(out))\n}\n\n// EndpointTypeMatcher extends MyMatcher to match endpoint Type and ID\nfunc EndpointTypeMatcher(x interface{}) gomock.Matcher {\n\treturn gomock.GotFormatterAdapter(&myGotFormatter{}, &endpointTypeMatcher{x: x, baseMatcher: &myMatcher{x: x}})\n}\n\ntype myMatcher struct {\n\tx interface{}\n}\n\nfunc (m *myMatcher) Matches(x interface{}) bool {\n\tf1 := m.x.(*collector.FlowRecord)\n\tf2 := x.(*collector.FlowRecord)\n\n\tdefaultChecks := f1.Destination.IP == f2.Destination.IP &&\n\t\tf1.Source.IP == f2.Source.IP &&\n\t\tf1.Destination.Port == f2.Destination.Port &&\n\t\tf1.Action == f2.Action &&\n\t\tf1.Count == f2.Count &&\n\t\tf1.DropReason == f2.DropReason\n\n\treturn defaultChecks\n}\n\nfunc (m *myMatcher) String() string {\n\n\tf := m.x.(*collector.FlowRecord)\n\treturn fmt.Sprintf(\"%d, %v, %v, %d, %d, %s\", f.Count, f.Source.IP, f.Destination.IP, f.Destination.Port, f.Action, f.DropReason)\n}\n\ntype myGotFormatter struct{}\n\nfunc (g *myGotFormatter) Got(got interface{}) string {\n\n\tf := 
got.(*collector.FlowRecord)\n\treturn fmt.Sprintf(\"%d, %v, %v, %d, %d, %s\", f.Count, f.Source.IP, f.Destination.IP, f.Destination.Port, f.Action, f.DropReason)\n}\n\n// MyMatcher returns a gomock matcher\nfunc MyMatcher(x interface{}) gomock.Matcher {\n\treturn gomock.GotFormatterAdapter(&myGotFormatter{}, &myMatcher{x: x})\n}\n\ntype packetEventMatcher struct {\n\tx interface{}\n}\n\nfunc (p *packetEventMatcher) Matches(x interface{}) bool {\n\tf1 := p.x.(*collector.PacketReport)\n\tf2 := x.(*collector.PacketReport)\n\treturn f1.DestinationIP == f2.DestinationIP\n}\n\nfunc (p *packetEventMatcher) String() string {\n\treturn fmt.Sprintf(\"is equal to %v\", p.x)\n}\n\n// PacketEventMatcher returns a gomock matcher\nfunc PacketEventMatcher(x interface{}) gomock.Matcher {\n\treturn &packetEventMatcher{x: x}\n}\n\ntype myCounterMatcher struct {\n\tx *collector.CounterReport\n}\n\nfunc (m *myCounterMatcher) Matches(x interface{}) bool {\n\n\tf := x.(*collector.CounterReport)\n\tif f.Namespace != \"/ns1\" {\n\t\treturn true\n\t}\n\treturn m.x.PUID == f.PUID && m.x.Namespace == f.Namespace\n}\n\nfunc (m *myCounterMatcher) String() string {\n\treturn fmt.Sprintf(\"is equal to %v\", m.x)\n}\n\n// MyCounterMatcher is a custom matcher for a counter record\nfunc MyCounterMatcher(x *collector.CounterReport) gomock.Matcher {\n\treturn &myCounterMatcher{x: x}\n}\n\ntype fakeSecrets struct {\n\tid string\n\t*mocksecrets.MockSecrets\n}\n\nfunc (f *fakeSecrets) setID(id string) {\n\tf.id = id\n}\n\nfunc (f *fakeSecrets) getID() string {\n\treturn f.id\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/test_utils_linux.go",
    "content": "// +build linux\n\npackage nfqdatapath\n\nimport (\n\t\"net\"\n\t\"sync\"\n\n\t\"github.com/ghedo/go.pkt/packet\"\n)\n\ntype fakeConn struct {\n\tb []byte\n\n\tsync.RWMutex\n}\n\nfunc (f *fakeConn) Close() error {\n\treturn nil\n}\n\nfunc (f *fakeConn) Write(b []byte) (int, error) {\n\tf.Lock()\n\tdefer f.Unlock()\n\n\tf.b = b\n\treturn len(b), nil\n}\n\nfunc (f *fakeConn) data() []byte {\n\tf.RLock()\n\tdefer f.RUnlock()\n\n\treturn f.b\n}\n\nfunc (f *fakeConn) ConstructWirePacket(srcIP, dstIP net.IP, transport packet.Packet, payload packet.Packet) ([]byte, error) {\n\treturn packLayers(srcIP, dstIP, transport, payload)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/tokenaccessor/interfaces.go",
    "content": "package tokenaccessor\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n)\n\n// TokenAccessor defines an interface to access LockedTokenEngine\ntype TokenAccessor interface {\n\tGetTokenValidity() time.Duration\n\tGetTokenServerID() string\n\n\t// Token creation methods.\n\tCreateAckPacketToken(proto314 bool, secretKey []byte, claims *tokens.ConnectionClaims, encodedBuf []byte) ([]byte, error)\n\tCreateSynPacketToken(claims *tokens.ConnectionClaims, encodedBuf []byte, nonce []byte, claimsHeader *claimsheader.ClaimsHeader, secrets secrets.Secrets) ([]byte, error)\n\n\tCreateSynAckPacketToken(proto314 bool, claims *tokens.ConnectionClaims, encodedBuf []byte, nonce []byte, claimsHeader *claimsheader.ClaimsHeader, secrets secrets.Secrets, secretKey []byte) ([]byte, error)\n\t// Token parsing methods.\n\tParsePacketToken(privateKey *ephemeralkeys.PrivateKey, data []byte, secrets secrets.Secrets, c *tokens.ConnectionClaims, b bool) ([]byte, *claimsheader.ClaimsHeader, *pkiverifier.PKIControllerInfo, []byte, string, bool, error)\n\tParseAckToken(proto314 bool, secretKey []byte, nonce []byte, data []byte, connClaims *tokens.ConnectionClaims) error\n\n\tRandomize([]byte, []byte) error\n\tSign([]byte, *ecdsa.PrivateKey) ([]byte, error)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/tokenaccessor/mocktokenaccessor/mocktokenaccessor.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/internal/enforcer/nfqdatapath/tokenaccessor/interfaces.go\n\n// Package mocktokenaccessor is a generated GoMock package.\npackage mocktokenaccessor\n\nimport (\n\tecdsa \"crypto/ecdsa\"\n\treflect \"reflect\"\n\ttime \"time\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tephemeralkeys \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\tclaimsheader \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\tpkiverifier \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\tsecrets \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\ttokens \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n)\n\n// MockTokenAccessor is a mock of TokenAccessor interface\ntype MockTokenAccessor struct {\n\tctrl     *gomock.Controller\n\trecorder *MockTokenAccessorMockRecorder\n}\n\n// MockTokenAccessorMockRecorder is the mock recorder for MockTokenAccessor\ntype MockTokenAccessorMockRecorder struct {\n\tmock *MockTokenAccessor\n}\n\n// NewMockTokenAccessor creates a new mock instance\nfunc NewMockTokenAccessor(ctrl *gomock.Controller) *MockTokenAccessor {\n\tmock := &MockTokenAccessor{ctrl: ctrl}\n\tmock.recorder = &MockTokenAccessorMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockTokenAccessor) EXPECT() *MockTokenAccessorMockRecorder {\n\treturn m.recorder\n}\n\n// GetTokenValidity mocks base method\nfunc (m *MockTokenAccessor) GetTokenValidity() time.Duration {\n\tret := m.ctrl.Call(m, \"GetTokenValidity\")\n\tret0, _ := ret[0].(time.Duration)\n\treturn ret0\n}\n\n// GetTokenValidity indicates an expected call of GetTokenValidity\nfunc (mr *MockTokenAccessorMockRecorder) GetTokenValidity() *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetTokenValidity\", 
reflect.TypeOf((*MockTokenAccessor)(nil).GetTokenValidity))\n}\n\n// GetTokenServerID mocks base method\nfunc (m *MockTokenAccessor) GetTokenServerID() string {\n\tret := m.ctrl.Call(m, \"GetTokenServerID\")\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// GetTokenServerID indicates an expected call of GetTokenServerID\nfunc (mr *MockTokenAccessorMockRecorder) GetTokenServerID() *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetTokenServerID\", reflect.TypeOf((*MockTokenAccessor)(nil).GetTokenServerID))\n}\n\n// CreateAckPacketToken mocks base method\nfunc (m *MockTokenAccessor) CreateAckPacketToken(proto314 bool, secretKey []byte, claims *tokens.ConnectionClaims, encodedBuf []byte) ([]byte, error) {\n\tret := m.ctrl.Call(m, \"CreateAckPacketToken\", proto314, secretKey, claims, encodedBuf)\n\tret0, _ := ret[0].([]byte)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// CreateAckPacketToken indicates an expected call of CreateAckPacketToken\nfunc (mr *MockTokenAccessorMockRecorder) CreateAckPacketToken(proto314, secretKey, claims, encodedBuf interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateAckPacketToken\", reflect.TypeOf((*MockTokenAccessor)(nil).CreateAckPacketToken), proto314, secretKey, claims, encodedBuf)\n}\n\n// CreateSynPacketToken mocks base method\nfunc (m *MockTokenAccessor) CreateSynPacketToken(claims *tokens.ConnectionClaims, encodedBuf, nonce []byte, claimsHeader *claimsheader.ClaimsHeader, secrets secrets.Secrets) ([]byte, error) {\n\tret := m.ctrl.Call(m, \"CreateSynPacketToken\", claims, encodedBuf, nonce, claimsHeader, secrets)\n\tret0, _ := ret[0].([]byte)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// CreateSynPacketToken indicates an expected call of CreateSynPacketToken\nfunc (mr *MockTokenAccessorMockRecorder) CreateSynPacketToken(claims, encodedBuf, nonce, claimsHeader, secrets interface{}) *gomock.Call {\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateSynPacketToken\", reflect.TypeOf((*MockTokenAccessor)(nil).CreateSynPacketToken), claims, encodedBuf, nonce, claimsHeader, secrets)\n}\n\n// CreateSynAckPacketToken mocks base method\nfunc (m *MockTokenAccessor) CreateSynAckPacketToken(proto314 bool, claims *tokens.ConnectionClaims, encodedBuf, nonce []byte, claimsHeader *claimsheader.ClaimsHeader, secrets secrets.Secrets, secretKey []byte) ([]byte, error) {\n\tret := m.ctrl.Call(m, \"CreateSynAckPacketToken\", proto314, claims, encodedBuf, nonce, claimsHeader, secrets, secretKey)\n\tret0, _ := ret[0].([]byte)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// CreateSynAckPacketToken indicates an expected call of CreateSynAckPacketToken\nfunc (mr *MockTokenAccessorMockRecorder) CreateSynAckPacketToken(proto314, claims, encodedBuf, nonce, claimsHeader, secrets, secretKey interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateSynAckPacketToken\", reflect.TypeOf((*MockTokenAccessor)(nil).CreateSynAckPacketToken), proto314, claims, encodedBuf, nonce, claimsHeader, secrets, secretKey)\n}\n\n// ParsePacketToken mocks base method\nfunc (m *MockTokenAccessor) ParsePacketToken(privateKey *ephemeralkeys.PrivateKey, data []byte, secrets secrets.Secrets, c *tokens.ConnectionClaims, b bool) ([]byte, *claimsheader.ClaimsHeader, *pkiverifier.PKIControllerInfo, []byte, string, bool, error) {\n\tret := m.ctrl.Call(m, \"ParsePacketToken\", privateKey, data, secrets, c, b)\n\tret0, _ := ret[0].([]byte)\n\tret1, _ := ret[1].(*claimsheader.ClaimsHeader)\n\tret2, _ := ret[2].(*pkiverifier.PKIControllerInfo)\n\tret3, _ := ret[3].([]byte)\n\tret4, _ := ret[4].(string)\n\tret5, _ := ret[5].(bool)\n\tret6, _ := ret[6].(error)\n\treturn ret0, ret1, ret2, ret3, ret4, ret5, ret6\n}\n\n// ParsePacketToken indicates an expected call of ParsePacketToken\nfunc (mr *MockTokenAccessorMockRecorder) ParsePacketToken(privateKey, data, secrets, c, b 
interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ParsePacketToken\", reflect.TypeOf((*MockTokenAccessor)(nil).ParsePacketToken), privateKey, data, secrets, c, b)\n}\n\n// ParseAckToken mocks base method\nfunc (m *MockTokenAccessor) ParseAckToken(proto314 bool, secretKey, nonce, data []byte, connClaims *tokens.ConnectionClaims) error {\n\tret := m.ctrl.Call(m, \"ParseAckToken\", proto314, secretKey, nonce, data, connClaims)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ParseAckToken indicates an expected call of ParseAckToken\nfunc (mr *MockTokenAccessorMockRecorder) ParseAckToken(proto314, secretKey, nonce, data, connClaims interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ParseAckToken\", reflect.TypeOf((*MockTokenAccessor)(nil).ParseAckToken), proto314, secretKey, nonce, data, connClaims)\n}\n\n// Randomize mocks base method\nfunc (m *MockTokenAccessor) Randomize(arg0, arg1 []byte) error {\n\tret := m.ctrl.Call(m, \"Randomize\", arg0, arg1)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Randomize indicates an expected call of Randomize\nfunc (mr *MockTokenAccessorMockRecorder) Randomize(arg0, arg1 interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Randomize\", reflect.TypeOf((*MockTokenAccessor)(nil).Randomize), arg0, arg1)\n}\n\n// Sign mocks base method\nfunc (m *MockTokenAccessor) Sign(arg0 []byte, arg1 *ecdsa.PrivateKey) ([]byte, error) {\n\tret := m.ctrl.Call(m, \"Sign\", arg0, arg1)\n\tret0, _ := ret[0].([]byte)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Sign indicates an expected call of Sign\nfunc (mr *MockTokenAccessorMockRecorder) Sign(arg0, arg1 interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Sign\", reflect.TypeOf((*MockTokenAccessor)(nil).Sign), arg0, arg1)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/tokenaccessor/tokenaccessor.go",
    "content": "package tokenaccessor\n\nimport (\n\t\"bytes\"\n\t\"crypto/ecdsa\"\n\t\"errors\"\n\t\"fmt\"\n\t\"time\"\n\n\tenforcerconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n)\n\n// tokenAccessor is a wrapper around tokenEngine to provide locks for accessing\ntype tokenAccessor struct {\n\ttokens   tokens.TokenEngine\n\tserverID string\n\tvalidity time.Duration\n}\n\n// New creates a new instance of the TokenAccessor interface\nfunc New(serverID string, validity time.Duration, secret secrets.Secrets) (TokenAccessor, error) {\n\n\tvar tokenEngine tokens.TokenEngine\n\tvar err error\n\n\ttokenEngine, err = tokens.NewBinaryJWT(validity, serverID)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &tokenAccessor{\n\t\ttokens:   tokenEngine,\n\t\tserverID: serverID,\n\t\tvalidity: validity,\n\t}, nil\n}\n\n// GetTokenValidity returns the duration the token is valid for\nfunc (t *tokenAccessor) GetTokenValidity() time.Duration {\n\treturn t.validity\n}\n\n// GetTokenServerID returns the server ID which is used to generate the token.\nfunc (t *tokenAccessor) GetTokenServerID() string {\n\treturn t.serverID\n}\n\n// CreateAckPacketToken creates the authentication token\nfunc (t *tokenAccessor) CreateAckPacketToken(proto314 bool, secretKey []byte, claims *tokens.ConnectionClaims, encodedBuf []byte) ([]byte, error) {\n\n\ttoken, err := t.tokens.CreateAckToken(proto314, secretKey, claims, encodedBuf, claimsheader.NewClaimsHeader())\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to create ack token: %v\", err)\n\t}\n\n\treturn token, nil\n}\n\nfunc (t 
*tokenAccessor) Randomize(token []byte, nonce []byte) error {\n\treturn t.tokens.Randomize(token, nonce)\n}\n\nfunc (t *tokenAccessor) Sign(buf []byte, key *ecdsa.PrivateKey) ([]byte, error) {\n\treturn t.tokens.Sign(buf, key)\n}\n\n// CreateSynPacketToken creates the authentication token\nfunc (t *tokenAccessor) CreateSynPacketToken(claims *tokens.ConnectionClaims, encodedBuf []byte, nonce []byte, claimsHeader *claimsheader.ClaimsHeader, secrets secrets.Secrets) ([]byte, error) {\n\ttoken, err := t.tokens.CreateSynToken(claims, encodedBuf, nonce, claimsHeader, secrets)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to create syn token: %v\", err)\n\t}\n\n\treturn token, nil\n}\n\n// CreateSynAckPacketToken creates the authentication token for SynAck packets.\n// We need to sign the received token. No caching possible here\nfunc (t *tokenAccessor) CreateSynAckPacketToken(proto314 bool, claims *tokens.ConnectionClaims, encodedBuf []byte, nonce []byte, claimsHeader *claimsheader.ClaimsHeader, secrets secrets.Secrets, secretKey []byte) ([]byte, error) {\n\ttoken, err := t.tokens.CreateSynAckToken(proto314, claims, encodedBuf, nonce, claimsHeader, secrets, secretKey)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to create synack token: %v\", err)\n\t}\n\n\treturn token, nil\n}\n\n// ParsePacketToken parses the packet token and populates the right state.\n// Returns an error if the token cannot be parsed or the signature fails\nfunc (t *tokenAccessor) ParsePacketToken(privateKey *ephemeralkeys.PrivateKey, data []byte, secrets secrets.Secrets, claims *tokens.ConnectionClaims, isSynAck bool) ([]byte, *claimsheader.ClaimsHeader, *pkiverifier.PKIControllerInfo, []byte, string, bool, error) {\n\n\t// Validate the certificate and parse the token\n\tsecretKey, header, nonce, controller, proto314, err := t.tokens.DecodeSyn(isSynAck, data, privateKey, secrets, claims)\n\tif err != nil {\n\t\treturn nil, nil, nil, nil, \"\", false, err\n\t}\n\n\t// We always 
need a valid remote context ID\n\tif claims.T == nil {\n\t\treturn nil, nil, nil, nil, \"\", false, errors.New(\"no claims found\")\n\t}\n\n\tremoteContextID, ok := claims.T.Get(enforcerconstants.TransmitterLabel)\n\tif !ok {\n\t\treturn nil, nil, nil, nil, \"\", false, errors.New(\"no transmitter label\")\n\t}\n\n\treturn secretKey, header, controller, nonce, remoteContextID, proto314, nil\n}\n\n// ParseAckToken parses the tokens in Ack packets. They don't carry all the state context,\n// which needs to be recovered\nfunc (t *tokenAccessor) ParseAckToken(proto314 bool, secretKey []byte, nonce []byte, data []byte, connClaims *tokens.ConnectionClaims) error {\n\n\t// Validate the certificate and parse the token\n\tif err := t.tokens.DecodeAck(proto314, secretKey, data, connClaims); err != nil {\n\t\treturn err\n\t}\n\n\tif !bytes.Equal(connClaims.RMT, nonce) {\n\t\treturn errors.New(\"failed to match context in ack packet\")\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/tokenaccessor/tokenaccessor_test.go",
    "content": "// +build !windows\n\npackage tokenaccessor\n\nimport (\n\t\"testing\"\n\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/mocksecrets\"\n)\n\nfunc Test_NewTokenAccessor(t *testing.T) {\n\tConvey(\"Given I create new token accessor\", t, func() {\n\t\tctrl := gomock.NewController(t)\n\t\tdefer ctrl.Finish()\n\n\t\ttok, err := New(\"serverID\", 2, &mocksecrets.MockSecrets{})\n\t\tSo(err, ShouldBeNil)\n\t\tSo(tok, ShouldNotBeNil)\n\n\t\ttok, err = New(\"serverID\", 2, &mocksecrets.MockSecrets{})\n\t\tSo(err, ShouldBeNil)\n\t\tSo(tok, ShouldNotBeNil)\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/utils.go",
    "content": "package nfqdatapath\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nfunc (d *Datapath) reportAcceptedFlow(p *packet.Packet, conn *connection.TCPConnection, sourceID string, destID string, context *pucontext.PUContext, report *policy.FlowPolicy, packet *policy.FlowPolicy, reverse bool) { // nolint:unparam\n\n\tif sourceID == destID {\n\t\treport = &policy.FlowPolicy{\n\t\t\tAction:   policy.Accept | policy.Log,\n\t\t\tPolicyID: \"local\",\n\t\t}\n\t\tpacket = report\n\t}\n\n\tsourceController, destinationController := getTCPConnectionInfo(conn)\n\n\tsrc, dst := d.generateEndpoints(p, sourceID, destID, reverse)\n\n\td.reportFlow(p, src, dst, context, \"\", report, packet, sourceController, destinationController)\n}\n\nfunc (d *Datapath) reportRejectedFlow(p *packet.Packet, conn *connection.TCPConnection, sourceID string, destID string, context *pucontext.PUContext, mode string, report *policy.FlowPolicy, packet *policy.FlowPolicy, reverse bool) { // nolint:unparam\n\n\tif report == nil {\n\t\treport = &policy.FlowPolicy{\n\t\t\tAction:   policy.Reject | policy.Log,\n\t\t\tPolicyID: \"default\",\n\t\t}\n\t}\n\tif packet == nil {\n\t\tpacket = report\n\t}\n\n\tsourceController, destinationController := getTCPConnectionInfo(conn)\n\n\tsrc, dst := d.generateEndpoints(p, sourceID, destID, reverse)\n\n\td.reportFlow(p, src, dst, context, mode, report, packet, sourceController, destinationController)\n}\n\nfunc (d *Datapath) reportUDPAcceptedFlow(p *packet.Packet, conn *connection.UDPConnection, sourceID string, destID string, context *pucontext.PUContext, report *policy.FlowPolicy, packet *policy.FlowPolicy, reverse bool) { // nolint:unparam\n\n\tsourceController, destinationController 
:= getUDPConnectionInfo(conn)\n\n\tsrc, dst := d.generateEndpoints(p, sourceID, destID, reverse)\n\n\td.reportFlow(p, src, dst, context, \"\", report, packet, sourceController, destinationController)\n}\n\nfunc (d *Datapath) reportUDPRejectedFlow(p *packet.Packet, conn *connection.UDPConnection, sourceID string, destID string, context *pucontext.PUContext, mode string, report *policy.FlowPolicy, packet *policy.FlowPolicy, reverse bool) { // nolint:unparam\n\tif report == nil {\n\t\treport = &policy.FlowPolicy{\n\t\t\tAction:   policy.Reject | policy.Log,\n\t\t\tPolicyID: \"default\",\n\t\t}\n\t}\n\tif packet == nil {\n\t\tpacket = report\n\t}\n\n\tsourceController, destinationController := getUDPConnectionInfo(conn)\n\n\tsrc, dst := d.generateEndpoints(p, sourceID, destID, reverse)\n\td.reportFlow(p, src, dst, context, mode, report, packet, sourceController, destinationController)\n}\n\nfunc (d *Datapath) reportExternalServiceFlowCommon(context *pucontext.PUContext, report *policy.FlowPolicy, actual *policy.FlowPolicy, app bool, p *packet.Packet, src, dst *collector.EndPoint) {\n\tif app {\n\t\t// If you have an observe policy, then its external network gets reported as the dest or src ID.\n\t\t// Really we should have oSrc and oDest IDs, but currently we don't.\n\t\tsrc.ID = context.ManagementID()\n\t\tsrc.Type = collector.EndPointTypePU\n\t\tdst.ID = report.ServiceID\n\t\tdst.Type = collector.EndPointTypeExternalIP\n\t} else {\n\t\tsrc.ID = report.ServiceID\n\t\tsrc.Type = collector.EndPointTypeExternalIP\n\t\tdst.ID = context.ManagementID()\n\t\tdst.Type = collector.EndPointTypePU\n\t}\n\n\tdropReason := \"\"\n\tif report.Action.Rejected() || actual.Action.Rejected() {\n\t\tdropReason = collector.PolicyDrop\n\t}\n\n\trecord := &collector.FlowRecord{\n\t\tContextID:   context.ID(),\n\t\tSource:      *src,\n\t\tDestination: *dst,\n\t\tDropReason:  dropReason,\n\t\tAction:      actual.Action,\n\t\tPolicyID:    actual.PolicyID,\n\t\tL4Protocol:  
p.IPProto(),\n\t\tNamespace:   context.ManagementNamespace(),\n\t\tCount:       1,\n\t\tRuleName:    actual.RuleName,\n\t}\n\n\tif context.Annotations() != nil {\n\t\trecord.Tags = context.Annotations().GetSlice()\n\t}\n\n\tif report.ObserveAction.Observed() {\n\t\trecord.ObservedAction = report.Action\n\t\trecord.ObservedPolicyID = report.PolicyID\n\t\trecord.ObservedActionType = report.ObserveAction\n\t}\n\n\td.collector.CollectFlowEvent(record)\n}\n\nfunc (d *Datapath) reportExternalServiceFlow(context *pucontext.PUContext, report *policy.FlowPolicy, packet *policy.FlowPolicy, app bool, p *packet.Packet) {\n\n\tsrc := &collector.EndPoint{\n\t\tIP:   p.SourceAddress().String(),\n\t\tPort: p.SourcePort(),\n\t}\n\n\tdst := &collector.EndPoint{\n\t\tIP:   p.DestinationAddress().String(),\n\t\tPort: p.DestPort(),\n\t}\n\n\td.reportExternalServiceFlowCommon(context, report, packet, app, p, src, dst)\n}\n\nfunc (d *Datapath) reportReverseExternalServiceFlow(context *pucontext.PUContext, report *policy.FlowPolicy, packet *policy.FlowPolicy, app bool, p *packet.Packet) {\n\n\tsrc := &collector.EndPoint{\n\t\tIP:   p.DestinationAddress().String(),\n\t\tPort: p.DestPort(),\n\t}\n\n\tdst := &collector.EndPoint{\n\t\tIP:   p.SourceAddress().String(),\n\t\tPort: p.SourcePort(),\n\t}\n\n\td.reportExternalServiceFlowCommon(context, report, packet, app, p, src, dst)\n}\n\nfunc (d *Datapath) generateEndpoints(p *packet.Packet, sourceID string, destID string, reverse bool) (*collector.EndPoint, *collector.EndPoint) {\n\n\tsrc := &collector.EndPoint{\n\t\tID:   sourceID,\n\t\tIP:   p.SourceAddress().String(),\n\t\tPort: p.SourcePort(),\n\t\tType: collector.EndPointTypePU,\n\t}\n\n\tdst := &collector.EndPoint{\n\t\tID:   destID,\n\t\tIP:   p.DestinationAddress().String(),\n\t\tPort: p.DestPort(),\n\t\tType: collector.EndPointTypePU,\n\t}\n\n\tif src.ID == collector.DefaultEndPoint {\n\t\tsrc.Type = collector.EndPointTypeExternalIP\n\t}\n\tif dst.ID == collector.DefaultEndPoint 
{\n\t\tdst.Type = collector.EndPointTypeExternalIP\n\t}\n\n\tif reverse {\n\t\treturn dst, src\n\t}\n\n\treturn src, dst\n}\n\nfunc getTCPConnectionInfo(conn *connection.TCPConnection) (string, string) {\n\tsourceController, destinationController := \"\", \"\"\n\tif conn != nil {\n\t\tsourceController, destinationController = conn.SourceController, conn.DestinationController\n\t}\n\treturn sourceController, destinationController\n}\n\nfunc getUDPConnectionInfo(conn *connection.UDPConnection) (string, string) {\n\tsourceController, destinationController := \"\", \"\"\n\tif conn != nil {\n\t\tsourceController, destinationController = conn.SourceController, conn.DestinationController\n\t}\n\treturn sourceController, destinationController\n}\n"
  },
  {
    "path": "controller/internal/enforcer/nfqdatapath/utils_test.go",
    "content": "// +build linux\n\npackage nfqdatapath\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"net\"\n\t\"testing\"\n\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector/mockcollector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/mocksecrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nvar (\n\tsrcAddress        = net.ParseIP(\"1.1.1.1\")\n\tsrcPort    uint16 = 2000\n\tsrcID             = \"src5454\"\n\tdstAddress        = net.ParseIP(\"2.2.2.2\")\n\tdstPort    uint16 = 80\n\tdstID             = \"dst4545\"\n)\n\nfunc setupDatapath(ctrl *gomock.Controller, collector collector.EventCollector) *Datapath {\n\n\tdefer MockGetUDPRawSocket()()\n\n\tsecrets := mocksecrets.NewMockSecrets(ctrl)\n\tsecrets.EXPECT().AckSize().Return(uint32(300)).AnyTimes()\n\tsecrets.EXPECT().EncodingKey().Return(&ecdsa.PrivateKey{}).AnyTimes()\n\tsecrets.EXPECT().TransmittedKey().Return([]byte(\"dummy\")).AnyTimes()\n\n\treturn newWithDefaults(ctrl, \"serverID\", collector, secrets, constants.RemoteContainer, []string{\"1.1.1.1/31\"}, false)\n}\n\nfunc generateCommonTestData(action policy.ActionType, oaction policy.ObserveActionType) (*packet.Packet, *connection.TCPConnection, *connection.UDPConnection, *pucontext.PUContext, *policy.FlowPolicy) { // nolint\n\n\tp := packet.TestGetTCPPacket(srcAddress, dstAddress, srcPort, dstPort)\n\n\ttcpConn := &connection.TCPConnection{}\n\tudpConn := &connection.UDPConnection{}\n\tpuContext := &pucontext.PUContext{}\n\tpolicy := &policy.FlowPolicy{Action: action}\n\n\treturn p, tcpConn, udpConn, puContext, policy\n}\n\nfunc 
generateTestEndpoints(reverse bool) (*collector.EndPoint, *collector.EndPoint) {\n\n\tsrc := &collector.EndPoint{\n\t\tIP:   srcAddress.String(),\n\t\tPort: srcPort,\n\t\tID:   srcID,\n\t}\n\tdst := &collector.EndPoint{\n\t\tIP:   dstAddress.String(),\n\t\tPort: dstPort,\n\t\tID:   dstID,\n\t}\n\n\tif reverse {\n\t\treturn dst, src\n\t}\n\n\treturn src, dst\n}\n\nfunc TestReportAcceptedFlow(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tmockCollector := mockcollector.NewMockEventCollector(ctrl)\n\n\tConvey(\"Given I setup datapath\", t, func() {\n\t\tdp := setupDatapath(ctrl, mockCollector)\n\n\t\tConvey(\"Then datapath should not be nil\", func() {\n\t\t\tSo(dp, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Then check reportAcceptedFlow\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(false)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Accept,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, conn, _, context, policy := generateCommonTestData(policy.Accept, policy.ObserveNone)\n\n\t\t\tdp.reportAcceptedFlow(p, conn, srcID, dstID, context, policy, policy, false)\n\t\t})\n\n\t\tConvey(\"Then check reportAcceptedFlow with same src dst ID\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(false)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Accept | policy.Log,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, conn, _, context, policy := generateCommonTestData(policy.Accept|policy.Log, policy.ObserveNone)\n\n\t\t\tdp.reportAcceptedFlow(p, conn, srcID, srcID, context, policy, policy, false)\n\t\t})\n\n\t\tConvey(\"Then check reportAcceptedFlow with reverse\", func() {\n\n\t\t\tsrc, dst := 
generateTestEndpoints(true)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Accept,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, conn, _, context, policy := generateCommonTestData(policy.Accept, policy.ObserveNone)\n\n\t\t\tdp.reportAcceptedFlow(p, conn, srcID, dstID, context, policy, policy, true)\n\t\t})\n\t})\n}\n\nfunc TestReportExternalServiceFlow(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tmockCollector := mockcollector.NewMockEventCollector(ctrl)\n\n\tConvey(\"Given I setup datapath\", t, func() {\n\t\tdp := setupDatapath(ctrl, mockCollector)\n\n\t\tConvey(\"Then datapath should not be nil\", func() {\n\t\t\tSo(dp, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Then check reportAcceptedFlow\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(false)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Accept,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, _, _, context, policy := generateCommonTestData(policy.Accept, policy.ObserveNone)\n\n\t\t\tdp.reportExternalServiceFlow(context, policy, policy, true, p)\n\t\t})\n\n\t\tConvey(\"Then check reportAcceptedFlow reverse\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(false)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *dst,\n\t\t\t\tDestination: *src,\n\t\t\t\tAction:      policy.Reject,\n\t\t\t\tDropReason:  collector.PolicyDrop,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, _, _, context, policy := generateCommonTestData(policy.Reject, policy.ObserveContinue)\n\n\t\t\tdp.reportReverseExternalServiceFlow(context, policy, policy, false, 
p)\n\t\t})\n\t})\n}\n\nfunc TestReportUDPAcceptedFlow(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tmockCollector := mockcollector.NewMockEventCollector(ctrl)\n\n\tConvey(\"Given I setup datapath\", t, func() {\n\t\tdp := setupDatapath(ctrl, mockCollector)\n\n\t\tConvey(\"Then datapath should not be nil\", func() {\n\t\t\tSo(dp, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Then check reportAcceptedFlow\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(false)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Accept,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, _, conn, context, policy := generateCommonTestData(policy.Accept, policy.ObserveNone)\n\n\t\t\tdp.reportUDPAcceptedFlow(p, conn, srcID, dstID, context, policy, policy, false)\n\t\t})\n\n\t\tConvey(\"Then check reportAcceptedFlow with reverse\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(true)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Accept,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, _, conn, context, policy := generateCommonTestData(policy.Accept, policy.ObserveNone)\n\n\t\t\tdp.reportUDPAcceptedFlow(p, conn, srcID, dstID, context, policy, policy, true)\n\t\t})\n\t})\n}\n\nfunc TestReportRejectedFlow(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tmockCollector := mockcollector.NewMockEventCollector(ctrl)\n\n\tConvey(\"Given I setup datapath\", t, func() {\n\t\tdp := setupDatapath(ctrl, mockCollector)\n\n\t\tConvey(\"Then datapath should not be nil\", func() {\n\t\t\tSo(dp, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Then check reportRejectedFlow\", func() {\n\n\t\t\tsrc, dst := 
generateTestEndpoints(false)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Reject,\n\t\t\t\tDropReason:  collector.PolicyDrop,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, conn, _, context, policy := generateCommonTestData(policy.Reject, policy.ObserveNone)\n\n\t\t\tdp.reportRejectedFlow(p, conn, srcID, dstID, context, collector.PolicyDrop, policy, policy, false)\n\t\t})\n\n\t\tConvey(\"Then check reportRejectedFlow with report and packet policy nil\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(false)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Reject | policy.Log,\n\t\t\t\tDropReason:  collector.PolicyDrop,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, conn, _, context, _ := generateCommonTestData(policy.Reject|policy.Log, policy.ObserveNone)\n\n\t\t\tdp.reportRejectedFlow(p, conn, srcID, dstID, context, collector.PolicyDrop, nil, nil, false)\n\t\t})\n\n\t\tConvey(\"Then check reportRejectedFlow with reverse\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(true)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Reject,\n\t\t\t\tDropReason:  collector.PolicyDrop,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, conn, _, context, policy := generateCommonTestData(policy.Reject, policy.ObserveNone)\n\n\t\t\tdp.reportRejectedFlow(p, conn, srcID, dstID, context, collector.PolicyDrop, policy, policy, true)\n\t\t})\n\t})\n}\n\nfunc TestReportUDPRejectedFlow(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tmockCollector := 
mockcollector.NewMockEventCollector(ctrl)\n\n\tConvey(\"Given I setup datapath\", t, func() {\n\t\tdp := setupDatapath(ctrl, mockCollector)\n\n\t\tConvey(\"Then datapath should not be nil\", func() {\n\t\t\tSo(dp, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Then check reportRejectedFlow\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(false)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Reject,\n\t\t\t\tDropReason:  collector.PolicyDrop,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, _, conn, context, policy := generateCommonTestData(policy.Reject, policy.ObserveNone)\n\n\t\t\tdp.reportUDPRejectedFlow(p, conn, srcID, dstID, context, collector.PolicyDrop, policy, policy, false)\n\t\t})\n\n\t\tConvey(\"Then check reportRejectedFlow with policy and packet policy nil\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(false)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Reject | policy.Log,\n\t\t\t\tDropReason:  collector.PolicyDrop,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, _, conn, context, _ := generateCommonTestData(policy.Reject|policy.Log, policy.ObserveNone)\n\n\t\t\tdp.reportUDPRejectedFlow(p, conn, srcID, dstID, context, collector.PolicyDrop, nil, nil, false)\n\t\t})\n\n\t\tConvey(\"Then check reportRejectedFlow with reverse\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(true)\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Reject,\n\t\t\t\tDropReason:  collector.PolicyDrop,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(MyMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, _, conn, context, policy := 
generateCommonTestData(policy.Reject, policy.ObserveNone)\n\n\t\t\tdp.reportUDPRejectedFlow(p, conn, srcID, dstID, context, collector.PolicyDrop, policy, policy, true)\n\t\t})\n\t})\n}\n\nfunc TestReportDefaultEndpoint(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tmockCollector := mockcollector.NewMockEventCollector(ctrl)\n\n\tConvey(\"Given I setup datapath\", t, func() {\n\t\tdp := setupDatapath(ctrl, mockCollector)\n\n\t\tConvey(\"Then datapath should not be nil\", func() {\n\t\t\tSo(dp, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Then check reportRejectedFlow with dest ID set to default\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(false)\n\t\t\tsrc.Type = collector.EndPointTypePU\n\t\t\tdst.ID = collector.DefaultEndPoint\n\t\t\tdst.Type = collector.EndPointTypeExternalIP\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Reject,\n\t\t\t\tDropReason:  collector.PolicyDrop,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(EndpointTypeMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, conn, _, context, policy := generateCommonTestData(policy.Reject, policy.ObserveNone)\n\n\t\t\tdp.reportRejectedFlow(p, conn, src.ID, dst.ID, context, collector.PolicyDrop, policy, policy, false)\n\t\t})\n\n\t\tConvey(\"Then check reportAcceptedFlow with src ID set to default\", func() {\n\n\t\t\tsrc, dst := generateTestEndpoints(false)\n\t\t\tdst.Type = collector.EndPointTypePU\n\t\t\tsrc.ID = collector.DefaultEndPoint\n\t\t\tsrc.Type = collector.EndPointTypeExternalIP\n\n\t\t\tflowRecord := collector.FlowRecord{\n\t\t\t\tCount:       1,\n\t\t\t\tSource:      *src,\n\t\t\t\tDestination: *dst,\n\t\t\t\tAction:      policy.Accept,\n\t\t\t}\n\n\t\t\tmockCollector.EXPECT().CollectFlowEvent(EndpointTypeMatcher(&flowRecord)).Times(1)\n\n\t\t\tp, conn, _, context, policy := generateCommonTestData(policy.Accept, 
policy.ObserveNone)\n\n\t\t\tdp.reportAcceptedFlow(p, conn, src.ID, dst.ID, context, policy, policy, false)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/proxy/enforcerproxy.go",
    "content": "// Package enforcerproxy implements the RPC client side of the enforcer.\n// It implements the Trireme Enforcer interface and forwards the requests\n// to the actual remote enforcer instead of handling them locally.\npackage enforcerproxy\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/processmon\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ebpf\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/env\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packettracing\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n\t\"go.uber.org/zap\"\n)\n\n// ProxyInfo is the struct used to hold state about active enforcers in the system\ntype ProxyInfo struct {\n\tmutualAuth             bool\n\tpacketLogs             bool\n\tSecrets                secrets.Secrets\n\tserverID               string\n\tvalidity               time.Duration\n\tprochdl                processmon.ProcessManager\n\trpchdl                 rpcwrapper.RPCClient\n\tfilterQueue            fqconfig.FilterQueue\n\tcommandArg             string\n\tstatsServerSecret      string\n\tprocMountPoint         string\n\tExternalIPCacheTimeout time.Duration\n\tcollector              collector.EventCollector\n\tcfg                    
*runtime.Configuration\n\ttokenIssuer            common.ServiceTokenIssuer\n\tbinaryTokens           bool\n\tisBPFEnabled           bool\n\tipv6Enabled            bool\n\tserviceMeshType        policy.ServiceMesh\n\trpcServer              rpcwrapper.RPCServer\n\tiptablesLockfile       string\n\tsync.RWMutex\n}\n\n// Enforce makes an RPC call to the remote enforcer's Enforce method.\nfunc (s *ProxyInfo) Enforce(ctx context.Context, contextID string, puInfo *policy.PUInfo) error {\n\n\tinitEnforcer, err := s.prochdl.LaunchRemoteEnforcer(\n\t\tcontextID,\n\t\tpuInfo.Runtime.Pid(),\n\t\tpuInfo.Runtime.NSPath(),\n\t\ts.commandArg,\n\t\ts.statsServerSecret,\n\t\ts.procMountPoint,\n\t\tpuInfo.Policy.EnforcerType(),\n\t)\n\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tzap.L().Debug(\"Called enforce and launched remote process\", zap.String(\"contextID\", contextID),\n\t\tzap.String(\"enforcer type\", puInfo.Policy.EnforcerType().String()),\n\t\tzap.String(\"serviceMeshType\", puInfo.Runtime.ServiceMeshType.String()),\n\t\tzap.String(\"name\", puInfo.Runtime.Name()))\n\n\ts.serviceMeshType = puInfo.Runtime.ServiceMeshType\n\tif initEnforcer {\n\t\tif err := s.initRemoteEnforcer(contextID); err != nil {\n\t\t\ts.prochdl.KillRemoteEnforcer(contextID, true) // nolint errcheck\n\t\t\treturn err\n\t\t}\n\t}\n\n\tenforcerPayload := &rpcwrapper.EnforcePayload{\n\t\tContextID: contextID,\n\t\tPolicy:    puInfo.Policy.ToPublicPolicy(),\n\t}\n\n\t// Only the secrets need to be under lock. 
They can change asynchronously to the Enforce call via UpdateSecrets.\n\ts.RLock()\n\tenforcerPayload.Secrets = s.Secrets.RPCSecrets()\n\ts.RUnlock()\n\trequest := &rpcwrapper.Request{\n\t\tPayload: enforcerPayload,\n\t}\n\n\tif err := s.rpchdl.RemoteCall(contextID, remoteenforcer.Enforce, request, &rpcwrapper.Response{}); err != nil {\n\t\ts.prochdl.KillRemoteEnforcer(contextID, true) // nolint errcheck\n\t\treturn fmt.Errorf(\"failed to send message to remote enforcer: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// Unenforce stops enforcing policy for the given contextID.\nfunc (s *ProxyInfo) Unenforce(ctx context.Context, contextID string) error {\n\n\trequest := &rpcwrapper.Request{\n\t\tPayload: &rpcwrapper.UnEnforcePayload{\n\t\t\tContextID: contextID,\n\t\t},\n\t}\n\n\tif err := s.rpchdl.RemoteCall(contextID, remoteenforcer.Unenforce, request, &rpcwrapper.Response{}); err != nil {\n\t\tzap.L().Error(\"failed to send message to remote enforcer\", zap.Error(err))\n\t}\n\n\treturn s.prochdl.KillRemoteEnforcer(contextID, true)\n}\n\n// UpdateSecrets updates the secrets used for signing communication between Trireme instances.\nfunc (s *ProxyInfo) UpdateSecrets(token secrets.Secrets) error {\n\ts.Lock()\n\ts.Secrets = token\n\ts.Unlock()\n\n\tvar allErrors string\n\n\tresp := &rpcwrapper.Response{}\n\trequest := &rpcwrapper.Request{\n\t\tPayload: &rpcwrapper.UpdateSecretsPayload{\n\t\t\tSecrets: s.Secrets.RPCSecrets(),\n\t\t},\n\t}\n\n\tfor _, contextID := range s.rpchdl.ContextList() {\n\t\tif err := s.rpchdl.RemoteCall(contextID, remoteenforcer.UpdateSecrets, request, resp); err != nil {\n\t\t\tallErrors = allErrors + \" contextID \" + contextID + \":\" + err.Error()\n\t\t}\n\t}\n\n\tif len(allErrors) > 0 {\n\t\treturn fmt.Errorf(\"unable to update secrets for some remotes: %s\", allErrors)\n\t}\n\n\treturn nil\n}\n\n// SetLogLevel sets the log level in all remote enforcers.\nfunc (s *ProxyInfo) SetLogLevel(level constants.LogLevel) error {\n\n\tresp := &rpcwrapper.Response{}\n\trequest := 
&rpcwrapper.Request{\n\t\tPayload: &rpcwrapper.SetLogLevelPayload{\n\t\t\tLevel: level,\n\t\t},\n\t}\n\n\tvar allErrors string\n\n\tfor _, contextID := range s.rpchdl.ContextList() {\n\t\tif err := s.rpchdl.RemoteCall(contextID, remoteenforcer.SetLogLevel, request, resp); err != nil {\n\t\t\tallErrors = allErrors + \" contextID \" + contextID + \":\" + err.Error()\n\t\t}\n\t}\n\n\tif len(allErrors) > 0 {\n\t\treturn fmt.Errorf(\"unable to set log level: %s\", allErrors)\n\t}\n\n\treturn nil\n}\n\n// CleanUp sends a cleanup command to all the remotes, forcing them to exit and clean their state.\nfunc (s *ProxyInfo) CleanUp() error {\n\tvar synch sync.Mutex\n\tvar wg sync.WaitGroup\n\n\tcontextList := s.rpchdl.ContextList()\n\tlenCids := len(contextList)\n\n\tif lenCids == 0 {\n\t\treturn nil\n\t}\n\n\tzap.L().Info(strconv.Itoa(lenCids) + \" remote enforcers waiting to be exited\")\n\n\tvar chs []chan string\n\n\twg.Add(lenCids)\n\tfor i := 0; i < 4; i++ {\n\t\tch := make(chan string)\n\t\tchs = append(chs, ch)\n\n\t\tgo func(ch chan string) {\n\t\t\tvar cid string\n\t\t\tfor {\n\t\t\t\tcid = <-ch\n\t\t\t\tif err := s.prochdl.KillRemoteEnforcer(cid, false); err != nil {\n\t\t\t\t\tzap.L().Error(\"enforcer with contextID \"+cid+\" failed to exit\", zap.Error(err))\n\t\t\t\t}\n\t\t\t\tsynch.Lock()\n\t\t\t\tlenCids = lenCids - 1\n\t\t\t\tm := 0\n\t\t\t\tswitch {\n\t\t\t\tcase lenCids >= 500:\n\t\t\t\t\tm = 250\n\t\t\t\tcase lenCids >= 100:\n\t\t\t\t\tm = 100\n\t\t\t\tcase lenCids >= 10:\n\t\t\t\t\tm = 10\n\t\t\t\tdefault:\n\t\t\t\t\tm = 1\n\t\t\t\t}\n\n\t\t\t\tif lenCids%m == 0 {\n\t\t\t\t\tzap.L().Info(strconv.Itoa(lenCids) + \" remote enforcers waiting to be exited\")\n\t\t\t\t}\n\t\t\t\tsynch.Unlock()\n\t\t\t\twg.Done()\n\t\t\t}\n\t\t}(ch)\n\t}\n\n\tfor i, contextID := range contextList {\n\t\tchs[i%4] <- contextID\n\t}\n\n\twg.Wait()\n\tzap.L().Info(\"All remote enforcers have exited...\")\n\n\treturn nil\n}\n\n// EnableDatapathPacketTracing enables nfq packet tracing 
in the remote container.\nfunc (s *ProxyInfo) EnableDatapathPacketTracing(ctx context.Context, contextID string, direction packettracing.TracingDirection, interval time.Duration) error {\n\n\tresp := &rpcwrapper.Response{}\n\n\trequest := &rpcwrapper.Request{\n\t\tPayload: &rpcwrapper.EnableDatapathPacketTracingPayLoad{\n\t\t\tDirection: direction,\n\t\t\tInterval:  interval,\n\t\t\tContextID: contextID,\n\t\t},\n\t}\n\n\tif err := s.rpchdl.RemoteCall(contextID, remoteenforcer.EnableDatapathPacketTracing, request, resp); err != nil {\n\t\treturn fmt.Errorf(\"unable to enable datapath packet tracing %s -- %s\", err, resp.Status)\n\t}\n\n\treturn nil\n}\n\n// EnableIPTablesPacketTracing enables iptables tracing.\nfunc (s *ProxyInfo) EnableIPTablesPacketTracing(ctx context.Context, contextID string, interval time.Duration) error {\n\n\trequest := &rpcwrapper.Request{\n\t\tPayload: &rpcwrapper.EnableIPTablesPacketTracingPayLoad{\n\t\t\tIPTablesPacketTracing: true,\n\t\t\tInterval:              interval,\n\t\t\tContextID:             contextID,\n\t\t},\n\t}\n\n\tif err := s.rpchdl.RemoteCall(contextID, remoteenforcer.EnableIPTablesPacketTracing, request, &rpcwrapper.Response{}); err != nil {\n\t\treturn fmt.Errorf(\"unable to enable iptables tracing for contextID %s: %s\", contextID, err)\n\t}\n\n\treturn nil\n}\n\n// SetTargetNetworks does the RPC call for SetTargetNetworks to the corresponding\n// remote enforcers\nfunc (s *ProxyInfo) SetTargetNetworks(cfg *runtime.Configuration) error {\n\tresp := &rpcwrapper.Response{}\n\trequest := &rpcwrapper.Request{\n\t\tPayload: &rpcwrapper.SetTargetNetworksPayload{\n\t\t\tConfiguration: cfg,\n\t\t},\n\t}\n\n\tvar allErrors string\n\n\tfor _, contextID := range s.rpchdl.ContextList() {\n\t\tif err := s.rpchdl.RemoteCall(contextID, remoteenforcer.SetTargetNetworks, request, resp); err != nil {\n\t\t\tallErrors = allErrors + \" contextID \" + contextID + \":\" + err.Error()\n\t\t}\n\t}\n\n\ts.Lock()\n\ts.cfg = cfg\n\ts.Unlock()\n\n\tif 
len(allErrors) > 0 {\n\t\treturn fmt.Errorf(\"remote enforcers failed: %s\", allErrors)\n\t}\n\n\treturn nil\n}\n\n// GetBPFObject returns the BPF object; it is always nil for the proxy.\nfunc (s *ProxyInfo) GetBPFObject() ebpf.BPFModule {\n\treturn nil\n}\n\n// GetServiceMeshType is unimplemented for the proxy and always returns policy.None.\nfunc (s *ProxyInfo) GetServiceMeshType() policy.ServiceMesh {\n\treturn policy.None\n}\n\n// GetFilterQueue returns the current FilterQueueConfig.\nfunc (s *ProxyInfo) GetFilterQueue() fqconfig.FilterQueue {\n\treturn s.filterQueue\n}\n\n// Run starts the remote enforcer proxy.\nfunc (s *ProxyInfo) Run(ctx context.Context) error {\n\n\thandler := &ProxyRPCServer{\n\t\trpchdl:      s.rpcServer,\n\t\tcollector:   s.collector,\n\t\tsecret:      s.statsServerSecret,\n\t\ttokenIssuer: s.tokenIssuer,\n\t\tctx:         ctx,\n\t}\n\n\t// Start the server for statistics collection.\n\tgo s.rpcServer.StartServer(ctx, \"unix\", constants.StatsChannel, handler) // nolint\n\n\treturn nil\n}\n\n// Ping runs ping from the given config.\nfunc (s *ProxyInfo) Ping(ctx context.Context, contextID string, pingConfig *policy.PingConfig) error {\n\n\tresp := &rpcwrapper.Response{}\n\n\trequest := &rpcwrapper.Request{\n\t\tPayload: &rpcwrapper.PingPayload{\n\t\t\tContextID:  contextID,\n\t\t\tPingConfig: pingConfig,\n\t\t},\n\t}\n\n\tif err := s.rpchdl.RemoteCall(contextID, remoteenforcer.Ping, request, resp); err != nil {\n\t\treturn fmt.Errorf(\"unable to run ping %s -- %s\", err, resp.Status)\n\t}\n\n\treturn nil\n}\n\n// DebugCollect tells the remote enforcer to start collecting debug info (pcap or misc commands).\n// It does not wait for pcap collection to complete: the pid of tcpdump is returned.\n// If another command is meant to be executed in the remote enforcer, it should be quick, and its output is returned.\nfunc (s *ProxyInfo) DebugCollect(ctx context.Context, contextID string, debugConfig *policy.DebugConfig) error {\n\tresp := &rpcwrapper.Response{}\n\n\trequest := 
&rpcwrapper.Request{\n\t\tPayload: &rpcwrapper.DebugCollectPayload{\n\t\t\tContextID:    contextID,\n\t\t\tPcapFilePath: debugConfig.FilePath,\n\t\t\tPcapFilter:   debugConfig.PcapFilter,\n\t\t\tCommandExec:  debugConfig.CommandExec,\n\t\t},\n\t}\n\n\tif err := s.rpchdl.RemoteCall(contextID, remoteenforcer.DebugCollect, request, resp); err != nil {\n\t\treturn fmt.Errorf(\"unable to run debug collect %s -- %s\", err, resp.Status)\n\t}\n\n\tresponsePayload := resp.Payload.(rpcwrapper.DebugCollectResponsePayload)\n\tdebugConfig.PID = responsePayload.PID\n\tdebugConfig.CommandOutput = responsePayload.CommandOutput\n\n\treturn nil\n}\n\n// initRemoteEnforcer makes an RPC call to initialize the remote enforcer.\nfunc (s *ProxyInfo) initRemoteEnforcer(contextID string) error {\n\n\tresp := &rpcwrapper.Response{}\n\n\trequest := &rpcwrapper.Request{\n\t\tPayload: &rpcwrapper.InitRequestPayload{\n\t\t\tMutualAuth:             s.mutualAuth,\n\t\t\tValidity:               s.validity,\n\t\t\tServerID:               s.serverID,\n\t\t\tExternalIPCacheTimeout: s.ExternalIPCacheTimeout,\n\t\t\tPacketLogs:             s.packetLogs,\n\t\t\tSecrets:                s.Secrets.RPCSecrets(),\n\t\t\tConfiguration:          s.cfg,\n\t\t\tBinaryTokens:           s.binaryTokens,\n\t\t\tIsBPFEnabled:           s.isBPFEnabled,\n\t\t\tServiceMeshType:        s.serviceMeshType,\n\t\t\tIPv6Enabled:            s.ipv6Enabled,\n\t\t\tIPTablesLockfile:       s.iptablesLockfile,\n\t\t},\n\t}\n\n\treturn s.rpchdl.RemoteCall(contextID, remoteenforcer.InitEnforcer, request, resp)\n}\n\n// NewProxyEnforcer creates a new proxy to remote enforcers.\nfunc NewProxyEnforcer(\n\tctx context.Context,\n\tmutualAuth bool,\n\tfilterQueue fqconfig.FilterQueue,\n\tcollector collector.EventCollector,\n\tsecrets secrets.Secrets,\n\tserverID string,\n\tvalidity time.Duration,\n\tcmdArg string,\n\tprocMountPoint string,\n\tExternalIPCacheTimeout time.Duration,\n\tpacketLogs bool,\n\tcfg *runtime.Configuration,\n\truntimeError 
chan *policy.RuntimeError,\n\tremoteParameters *env.RemoteParameters,\n\ttokenIssuer common.ServiceTokenIssuer,\n\tisBPFEnabled bool,\n\tipv6Enabled bool,\n\tiptablesLockfile string,\n\trpcServer rpcwrapper.RPCServer,\n) enforcer.Enforcer {\n\n\tstatsServersecret, err := crypto.GenerateRandomString(32)\n\tif err != nil {\n\t\t// There is a very small chance of this happening, so we just log an error here.\n\t\tzap.L().Error(\"Failed to generate random secret for stats reporting\", zap.Error(err))\n\t\t// Fall back to the current time as the secret.\n\t\tstatsServersecret = time.Now().String()\n\t}\n\n\trpcClient := rpcwrapper.NewRPCWrapper()\n\n\treturn &ProxyInfo{\n\t\tmutualAuth:             mutualAuth,\n\t\tSecrets:                secrets,\n\t\tserverID:               serverID,\n\t\tvalidity:               validity,\n\t\tprochdl:                processmon.New(ctx, remoteParameters, runtimeError, rpcClient, filterQueue.GetNumQueues()),\n\t\trpchdl:                 rpcClient,\n\t\tfilterQueue:            filterQueue,\n\t\tcommandArg:             cmdArg,\n\t\tstatsServerSecret:      statsServersecret,\n\t\tprocMountPoint:         procMountPoint,\n\t\tExternalIPCacheTimeout: ExternalIPCacheTimeout,\n\t\tpacketLogs:             packetLogs,\n\t\tcollector:              collector,\n\t\tcfg:                    cfg,\n\t\ttokenIssuer:            tokenIssuer,\n\t\tisBPFEnabled:           isBPFEnabled,\n\t\tipv6Enabled:            ipv6Enabled,\n\t\tiptablesLockfile:       iptablesLockfile,\n\t\trpcServer:              rpcServer,\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/proxy/enforcerproxy_test.go",
    "content": "package enforcerproxy\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"testing\"\n\t\"time\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper/mockrpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/processmon/mockprocessmon\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/env\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packettracing\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/testhelper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nconst procMountPoint = \"/proc\"\n\nvar (\n\tkeypem, caPool, certPEM string\n\ttoken                   []byte\n)\n\nfunc init() {\n\tkeypem = `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIPkiHqtH372JJdAG/IxJlE1gv03cdwa8Lhg2b3m/HmbyoAoGCCqGSM49\nAwEHoUQDQgAEAfAL+AfPj/DnxrU6tUkEyzEyCxnflOWxhouy1bdzhJ7vxMb1vQ31\n8ZbW/WvMN/ojIXqXYrEpISoojznj46w64w==\n-----END EC PRIVATE KEY-----`\n\n\tcaPool = `-----BEGIN 
CERTIFICATE-----\nMIIBhTCCASwCCQC8b53yGlcQazAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABJxneTUqhbtgEIwpKUUzwz3h92SqcOdIw3mfQkMjg3Vobvr6JKlpXYe9xhsN\nrygJmLhMAN9gjF9qM9ybdbe+m3owCgYIKoZIzj0EAwIDRwAwRAIgC1fVMqdBy/o3\njNUje/Hx0fZF9VDyUK4ld+K/wF3QdK4CID1ONj/Kqinrq2OpjYdkgIjEPuXoOoR1\ntCym8dnq4wtH\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIB3jCCAYOgAwIBAgIJALsW7pyC2ERQMAoGCCqGSM49BAMCMEsxCzAJBgNVBAYT\nAlVTMQswCQYDVQQIDAJDQTEMMAoGA1UEBwwDU0pDMRAwDgYDVQQKDAdUcmlyZW1l\nMQ8wDQYDVQQDDAZ1YnVudHUwHhcNMTYwOTI3MjI0OTAwWhcNMjYwOTI1MjI0OTAw\nWjBLMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4G\nA1UECgwHVHJpcmVtZTEPMA0GA1UEAwwGdWJ1bnR1MFkwEwYHKoZIzj0CAQYIKoZI\nzj0DAQcDQgAE4c2Fd7XeIB1Vfs51fWwREfLLDa55J+NBalV12CH7YEAnEXjl47aV\ncmNqcAtdMUpf2oz9nFVI81bgO+OSudr3CqNQME4wHQYDVR0OBBYEFOBftuI09mmu\nrXjqDyIta1gT8lqvMB8GA1UdIwQYMBaAFOBftuI09mmurXjqDyIta1gT8lqvMAwG\nA1UdEwQFMAMBAf8wCgYIKoZIzj0EAwIDSQAwRgIhAMylAHhbFA0KqhXIFiXNpEbH\nJKaELL6UXXdeQ5yup8q+AiEAh5laB9rbgTymjaANcZ2YzEZH4VFS3CKoSdVqgnwC\ndW4=\n-----END CERTIFICATE-----`\n\n\tcertPEM = `-----BEGIN CERTIFICATE-----\nMIIBhjCCASwCCQCPCdgp39gHJTAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABAHwC/gHz4/w58a1OrVJBMsxMgsZ35TlsYaLstW3c4Se78TG9b0N9fGW1v1r\nzDf6IyF6l2KxKSEqKI854+OsOuMwCgYIKoZIzj0EAwIDSAAwRQIgQwQn0jnK/XvD\nKxgQd/0pW5FOAaB41cMcw4/XVlphO1oCIQDlGie+WlOMjCzrV0Xz+XqIIi1pIgPT\nIG7Nv+YlTVp5qA==\n-----END CERTIFICATE-----`\n\n\ttoken = []byte{0x65, 0x79, 0x4A, 0x68, 0x62, 0x47, 0x63, 0x69, 
0x4F, 0x69, 0x4A, 0x46, 0x55, 0x7A, 0x49, 0x31, 0x4E, 0x69, 0x49, 0x73, 0x49, 0x6E, 0x52, 0x35, 0x63, 0x43, 0x49, 0x36, 0x49, 0x6B, 0x70, 0x58, 0x56, 0x43, 0x4A, 0x39, 0x2E, 0x65, 0x79, 0x4A, 0x59, 0x49, 0x6A, 0x6F, 0x78, 0x4F, 0x44, 0x51, 0x32, 0x4E, 0x54, 0x6B, 0x78, 0x4D, 0x7A, 0x63, 0x33, 0x4E, 0x44, 0x41, 0x35, 0x4D, 0x7A, 0x4D, 0x35, 0x4D, 0x7A, 0x51, 0x33, 0x4D, 0x54, 0x4D, 0x77, 0x4D, 0x6A, 0x4D, 0x35, 0x4E, 0x6A, 0x45, 0x79, 0x4E, 0x6A, 0x55, 0x79, 0x4D, 0x7A, 0x45, 0x77, 0x4E, 0x44, 0x51, 0x30, 0x4F, 0x44, 0x63, 0x34, 0x4D, 0x7A, 0x45, 0x78, 0x4E, 0x6A, 0x4D, 0x30, 0x4E, 0x7A, 0x6B, 0x32, 0x4D, 0x6A, 0x4D, 0x32, 0x4D, 0x7A, 0x67, 0x30, 0x4E, 0x54, 0x59, 0x30, 0x4E, 0x6A, 0x51, 0x78, 0x4E, 0x7A, 0x67, 0x78, 0x4E, 0x44, 0x41, 0x78, 0x4F, 0x44, 0x63, 0x35, 0x4F, 0x44, 0x4D, 0x30, 0x4D, 0x44, 0x51, 0x78, 0x4E, 0x53, 0x77, 0x69, 0x57, 0x53, 0x49, 0x36, 0x4F, 0x44, 0x59, 0x78, 0x4F, 0x44, 0x41, 0x7A, 0x4E, 0x6A, 0x45, 0x33, 0x4D, 0x6A, 0x67, 0x34, 0x4D, 0x54, 0x6B, 0x79, 0x4D, 0x44, 0x41, 0x30, 0x4D, 0x6A, 0x41, 0x33, 0x4D, 0x44, 0x63, 0x30, 0x4D, 0x44, 0x6B, 0x78, 0x4D, 0x54, 0x41, 0x33, 0x4D, 0x54, 0x49, 0x33, 0x4D, 0x7A, 0x49, 0x78, 0x4F, 0x54, 0x45, 0x34, 0x4D, 0x54, 0x45, 0x77, 0x4F, 0x44, 0x41, 0x77, 0x4E, 0x54, 0x41, 0x79, 0x4F, 0x54, 0x59, 0x79, 0x4D, 0x6A, 0x49, 0x78, 0x4D, 0x54, 0x41, 0x32, 0x4E, 0x44, 0x41, 0x30, 0x4D, 0x54, 0x6B, 0x32, 0x4F, 0x54, 0x49, 0x34, 0x4D, 0x54, 0x55, 0x78, 0x4D, 0x6A, 0x55, 0x31, 0x4E, 0x54, 0x55, 0x30, 0x4F, 0x54, 0x63, 0x73, 0x49, 0x6D, 0x56, 0x34, 0x63, 0x43, 0x49, 0x36, 0x4D, 0x54, 0x55, 0x7A, 0x4D, 0x7A, 0x49, 0x30, 0x4D, 0x54, 0x6B, 0x78, 0x4D, 0x6E, 0x30, 0x2E, 0x56, 0x43, 0x44, 0x30, 0x54, 0x61, 0x4C, 0x69, 0x66, 0x74, 0x35, 0x63, 0x6A, 0x6E, 0x66, 0x74, 0x73, 0x7A, 0x57, 0x63, 0x43, 0x74, 0x56, 0x64, 0x59, 0x49, 0x63, 0x5A, 0x44, 0x58, 0x63, 0x73, 0x67, 0x66, 0x47, 0x41, 0x69, 0x33, 0x42, 0x77, 0x6F, 0x73, 0x4A, 0x50, 0x68, 0x6F, 0x76, 0x6A, 0x57, 0x65, 0x56, 0x65, 0x74, 0x6E, 0x55, 0x44, 0x44, 0x46, 0x69, 0x45, 
0x37, 0x4E, 0x78, 0x76, 0x4E, 0x6A, 0x32, 0x52, 0x43, 0x53, 0x79, 0x4A, 0x76, 0x2D, 0x52, 0x6F, 0x71, 0x72, 0x6F, 0x78, 0x4E, 0x48, 0x4B, 0x4B, 0x37, 0x77}\n}\n\nfunc eventCollector() collector.EventCollector {\n\tnewEvent := &collector.DefaultCollector{}\n\treturn newEvent\n}\n\nfunc secretGen() secrets.Secrets {\n\n\t_, newSecret, _ := testhelper.NewTestCompactPKISecrets()\n\treturn newSecret\n}\n\nfunc createPUInfo() *policy.PUInfo {\n\n\trules := policy.IPRuleList{\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"192.30.253.0/24\"},\n\t\t\tPorts:     []string{\"80\"},\n\t\t\tProtocols: []string{\"TCP\"},\n\t\t\tPolicy:    &policy.FlowPolicy{Action: policy.Reject},\n\t\t},\n\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"192.30.253.0/24\"},\n\t\t\tPorts:     []string{\"443\"},\n\t\t\tProtocols: []string{\"TCP\"},\n\t\t\tPolicy:    &policy.FlowPolicy{Action: policy.Accept},\n\t\t},\n\t}\n\n\tips := policy.ExtendedMap{\n\t\tpolicy.DefaultNamespace: \"172.17.0.1\",\n\t}\n\n\truntime := policy.NewPURuntimeWithDefaults()\n\truntime.SetIPAddresses(ips)\n\tplc := policy.NewPUPolicy(\"testServerID\", \"/ns1\", policy.Police, rules, rules, nil, nil, nil, nil, nil, nil, ips, 0, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject|policy.Log, policy.Reject|policy.Log)\n\n\treturn policy.PUInfoFromPolicyAndRuntime(\"testServerID\", plc, runtime)\n\n}\n\nfunc setupProxyEnforcer() enforcer.Enforcer {\n\tmutualAuthorization := false\n\tdefaultExternalIPCacheTimeout := time.Second * 40\n\n\tfqConfig := fqconfig.NewFilterQueue(\n\t\t1,\n\t\t[]string{},\n\t)\n\n\tpolicyEnf := NewProxyEnforcer(\n\t\tcontext.Background(),\n\t\tmutualAuthorization,\n\t\tfqConfig,\n\t\teventCollector(),\n\t\tsecretGen(),\n\t\t\"testServerID\",\n\t\t10*time.Minute,\n\t\tconstants.DefaultRemoteArg,\n\t\tprocMountPoint,\n\t\tdefaultExternalIPCacheTimeout,\n\t\tfalse,\n\t\t&runtime.Configuration{TCPTargetNetworks: []string{\"0.0.0.0/0\"}},\n\t\tmake(chan 
*policy.RuntimeError),\n\t\t&env.RemoteParameters{},\n\t\tnil,\n\t\tfalse,\n\t\tfalse,\n\t\t\"\",\n\t\trpcwrapper.NewRPCServer(),\n\t)\n\treturn policyEnf\n}\n\nfunc TestNewDefaultProxyEnforcer(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to start a proxy enforcer with defaults\", t, func() {\n\t\tpolicyEnf := setupProxyEnforcer()\n\n\t\te, ok := policyEnf.(*ProxyInfo)\n\t\tSo(ok, ShouldBeTrue)\n\t\tConvey(\"Then policyEnf should be correct\", func() {\n\t\t\tSo(e, ShouldNotBeNil)\n\t\t\tSo(e.rpchdl, ShouldNotBeNil)\n\t\t\tSo(e.statsServerSecret, ShouldNotEqual, \"\")\n\t\t})\n\t})\n}\n\nfunc TestInitRemoteEnforcer(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to start a proxy enforcer with defaults\", t, func() {\n\t\trpchdl := mockrpcwrapper.NewMockRPCClient(ctrl)\n\t\tpolicyEnf := setupProxyEnforcer()\n\t\te := policyEnf.(*ProxyInfo)\n\t\te.rpchdl = rpchdl\n\n\t\tConvey(\"When I try to initiate a remote enforcer\", func() {\n\t\t\trpchdl.EXPECT().RemoteCall(\"testServerID\", remoteenforcer.InitEnforcer, gomock.Any(), gomock.Any()).Times(1).Return(nil)\n\t\t\terr := e.initRemoteEnforcer(\"testServerID\")\n\n\t\t\tConvey(\"Then I should not get any error\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestEnforce(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to start a proxy enforcer with defaults\", t, func() {\n\t\trpchdl := mockrpcwrapper.NewMockRPCClient(ctrl)\n\t\tprochdl := mockprocessmon.NewMockProcessManager(ctrl)\n\t\tpolicyEnf := setupProxyEnforcer()\n\t\te := policyEnf.(*ProxyInfo)\n\t\te.rpchdl = rpchdl\n\t\te.prochdl = prochdl\n\n\t\tpu := createPUInfo()\n\n\t\tConvey(\"When launching the remote fails, it should error\", func() {\n\t\t\tprochdl.EXPECT().LaunchRemoteEnforcer(\"pu\", gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), 
gomock.Any()).Return(false, fmt.Errorf(\"error\"))\n\t\t\terr := e.Enforce(context.Background(), \"pu\", pu)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When launching the remote succeeds, and init is false, but the rpc calls fails, it should work\", func() {\n\t\t\tprochdl.EXPECT().LaunchRemoteEnforcer(\"pu\", gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(false, nil)\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu\", remoteenforcer.Enforce, gomock.Any(), gomock.Any()).Return(fmt.Errorf(\"error\"))\n\t\t\tprochdl.EXPECT().KillRemoteEnforcer(\"pu\", true)\n\t\t\terr := e.Enforce(context.Background(), \"pu\", pu)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When launching the remote succeeds, and init is false, and rpc succeeds, it should work\", func() {\n\t\t\tprochdl.EXPECT().LaunchRemoteEnforcer(\"pu\", gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(false, nil)\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu\", remoteenforcer.Enforce, gomock.Any(), gomock.Any()).Return(nil)\n\t\t\terr := e.Enforce(context.Background(), \"pu\", pu)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When launching the remote succeeds, and init is true, and init of remote fails, it should error\", func() {\n\t\t\tprochdl.EXPECT().LaunchRemoteEnforcer(\"pu\", gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu\", remoteenforcer.InitEnforcer, gomock.Any(), gomock.Any()).Times(1).Return(fmt.Errorf(\"error\"))\n\t\t\tprochdl.EXPECT().KillRemoteEnforcer(\"pu\", true)\n\t\t\terr := e.Enforce(context.Background(), \"pu\", pu)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When launching succeeds with init true, it should not error\", func() {\n\t\t\tprochdl.EXPECT().LaunchRemoteEnforcer(\"pu\", gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(true, 
nil)\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu\", remoteenforcer.InitEnforcer, gomock.Any(), gomock.Any()).Times(1).Return(nil)\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu\", remoteenforcer.Enforce, gomock.Any(), gomock.Any()).Return(nil)\n\t\t\terr := e.Enforce(context.Background(), \"pu\", pu)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n}\n\nfunc TestUnenforce(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to start a proxy enforcer with defaults\", t, func() {\n\t\trpchdl := mockrpcwrapper.NewMockRPCClient(ctrl)\n\t\tprochdl := mockprocessmon.NewMockProcessManager(ctrl)\n\t\tpolicyEnf := setupProxyEnforcer()\n\t\te := policyEnf.(*ProxyInfo)\n\t\te.rpchdl = rpchdl\n\t\te.prochdl = prochdl\n\n\t\tConvey(\"When I try to call unenforce\", func() {\n\t\t\trpchdl.EXPECT().RemoteCall(\"testServerID\", remoteenforcer.Unenforce, gomock.Any(), gomock.Any()).Times(1).Return(nil)\n\t\t\tprochdl.EXPECT().KillRemoteEnforcer(\"testServerID\", true)\n\t\t\terr := e.Unenforce(context.Background(), \"testServerID\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to call unenforce and there is a failure\", func() {\n\t\t\trpchdl.EXPECT().RemoteCall(\"testServerID\", remoteenforcer.Unenforce, gomock.Any(), gomock.Any()).Times(1).Return(fmt.Errorf(\"error\"))\n\t\t\tprochdl.EXPECT().KillRemoteEnforcer(\"testServerID\", true)\n\t\t\terr := e.Unenforce(context.Background(), \"testServerID\")\n\n\t\t\tConvey(\"Then I should not get an error\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestUpdateSecrets(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I update the secrets\", t, func() {\n\t\trpchdl := mockrpcwrapper.NewMockRPCClient(ctrl)\n\t\tprochdl := mockprocessmon.NewMockProcessManager(ctrl)\n\t\tpolicyEnf := setupProxyEnforcer()\n\t\te := policyEnf.(*ProxyInfo)\n\t\te.rpchdl = rpchdl\n\t\te.prochdl = prochdl\n\n\t\tConvey(\"When there is no 
container, I should get no error\", func() {\n\t\t\trpchdl.EXPECT().ContextList().Return([]string{})\n\t\t\terr := e.UpdateSecrets(secretGen())\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I get a set of PUs, I should update all of them\", func() {\n\t\t\trpchdl.EXPECT().ContextList().Return([]string{\"pu1\", \"pu2\"})\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu1\", remoteenforcer.UpdateSecrets, gomock.Any(), gomock.Any())\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu2\", remoteenforcer.UpdateSecrets, gomock.Any(), gomock.Any())\n\n\t\t\terr := e.UpdateSecrets(secretGen())\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I get a set of PUs, and one of them fails, I should get an error\", func() {\n\t\t\trpchdl.EXPECT().ContextList().Return([]string{\"pu1\", \"pu2\"})\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu1\", remoteenforcer.UpdateSecrets, gomock.Any(), gomock.Any()).Return(fmt.Errorf(\"error\"))\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu2\", remoteenforcer.UpdateSecrets, gomock.Any(), gomock.Any())\n\n\t\t\terr := e.UpdateSecrets(secretGen())\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t})\n}\nfunc TestCleanup(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I request a cleanup\", t, func() {\n\t\trpchdl := mockrpcwrapper.NewMockRPCClient(ctrl)\n\t\tprochdl := mockprocessmon.NewMockProcessManager(ctrl)\n\t\tpolicyEnf := setupProxyEnforcer()\n\t\te := policyEnf.(*ProxyInfo)\n\t\te.rpchdl = rpchdl\n\t\te.prochdl = prochdl\n\n\t\tConvey(\"When there is no container, I should get no error\", func() {\n\t\t\trpchdl.EXPECT().ContextList().Return([]string{})\n\t\t\terr := e.CleanUp()\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I get a set of PUs, I should call kill for all of them\", func() {\n\t\t\trpchdl.EXPECT().ContextList().Return([]string{\"pu1\", \"pu2\"})\n\t\t\tprochdl.EXPECT().KillRemoteEnforcer(\"pu1\", false)\n\t\t\tprochdl.EXPECT().KillRemoteEnforcer(\"pu2\", false)\n\t\t\terr := 
e.CleanUp()\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n}\n\nfunc TestEnableDatapathPacketTracing(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to start a proxy enforcer with defaults\", t, func() {\n\t\trpchdl := mockrpcwrapper.NewMockRPCClient(ctrl)\n\t\tprochdl := mockprocessmon.NewMockProcessManager(ctrl)\n\t\tpolicyEnf := setupProxyEnforcer()\n\t\te := policyEnf.(*ProxyInfo)\n\t\te.rpchdl = rpchdl\n\t\te.prochdl = prochdl\n\n\t\tConvey(\"When I try to enable datapath packet tracing\", func() {\n\t\t\trpchdl.EXPECT().RemoteCall(\"testServerID\", remoteenforcer.EnableDatapathPacketTracing, gomock.Any(), gomock.Any()).Times(1).Return(nil)\n\t\t\terr := e.EnableDatapathPacketTracing(context.TODO(), \"testServerID\", packettracing.NetworkOnly, 10*time.Second)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to enable datapath packet tracing and there is a failure\", func() {\n\t\t\trpchdl.EXPECT().RemoteCall(\"testServerID\", remoteenforcer.EnableDatapathPacketTracing, gomock.Any(), gomock.Any()).Times(1).Return(fmt.Errorf(\"error\"))\n\t\t\terr := e.EnableDatapathPacketTracing(context.TODO(), \"testServerID\", packettracing.NetworkOnly, 10*time.Second)\n\n\t\t\tConvey(\"Then I should get an error\", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestEnableIPTablesPacketTracing(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to start a proxy enforcer with defaults\", t, func() {\n\t\trpchdl := mockrpcwrapper.NewMockRPCClient(ctrl)\n\t\tprochdl := mockprocessmon.NewMockProcessManager(ctrl)\n\t\tpolicyEnf := setupProxyEnforcer()\n\t\te := policyEnf.(*ProxyInfo)\n\t\te.rpchdl = rpchdl\n\t\te.prochdl = prochdl\n\n\t\tConvey(\"When I try to enable iptables packet tracing\", func() {\n\t\t\trpchdl.EXPECT().RemoteCall(\"testServerID\", remoteenforcer.EnableIPTablesPacketTracing, gomock.Any(), gomock.Any()).Times(1).Return(nil)\n\t\t\terr := 
e.EnableIPTablesPacketTracing(context.TODO(), \"testServerID\", 10*time.Second)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to enable iptables packet tracing and there is a failure\", func() {\n\t\t\trpchdl.EXPECT().RemoteCall(\"testServerID\", remoteenforcer.EnableIPTablesPacketTracing, gomock.Any(), gomock.Any()).Times(1).Return(fmt.Errorf(\"error\"))\n\t\t\terr := e.EnableIPTablesPacketTracing(context.TODO(), \"testServerID\", 10*time.Second)\n\n\t\t\tConvey(\"Then I should get an error\", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestSetTargetNetworks(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I update the target networks\", t, func() {\n\t\trpchdl := mockrpcwrapper.NewMockRPCClient(ctrl)\n\t\tprochdl := mockprocessmon.NewMockProcessManager(ctrl)\n\t\tpolicyEnf := setupProxyEnforcer()\n\t\te := policyEnf.(*ProxyInfo)\n\t\te.rpchdl = rpchdl\n\t\te.prochdl = prochdl\n\n\t\tConvey(\"When there is no container, I should get no error\", func() {\n\t\t\trpchdl.EXPECT().ContextList().Return([]string{})\n\t\t\terr := e.SetTargetNetworks(&runtime.Configuration{})\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I get a set of PUs, I should update all of them\", func() {\n\t\t\trpchdl.EXPECT().ContextList().Return([]string{\"pu1\", \"pu2\"})\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu1\", remoteenforcer.SetTargetNetworks, gomock.Any(), gomock.Any())\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu2\", remoteenforcer.SetTargetNetworks, gomock.Any(), gomock.Any())\n\t\t\terr := e.SetTargetNetworks(&runtime.Configuration{})\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I get a set of PUs, and one of them fails, I should get an error\", func() {\n\t\t\trpchdl.EXPECT().ContextList().Return([]string{\"pu1\", \"pu2\"})\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu1\", remoteenforcer.SetTargetNetworks, gomock.Any(), 
gomock.Any()).Return(fmt.Errorf(\"error\"))\n\t\t\trpchdl.EXPECT().RemoteCall(\"pu2\", remoteenforcer.SetTargetNetworks, gomock.Any(), gomock.Any())\n\t\t\terr := e.SetTargetNetworks(&runtime.Configuration{})\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t})\n}\n\nfunc TestPostReportEvent(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\trpchdl := mockrpcwrapper.NewMockRPCServer(ctrl)\n\tc := eventCollector()\n\n\trequest := rpcwrapper.Request{\n\t\tPayloadType: rpcwrapper.PacketReport,\n\t\tPayload: &collector.PacketReport{\n\t\t\tDestinationIP: \"12.12.12.12\",\n\t\t\tSourceIP:      \"1.1.1.1\",\n\t\t},\n\t}\n\tstatsserver := &ProxyRPCServer{\n\t\trpchdl:    rpchdl,\n\t\tcollector: c,\n\t\tsecret:    \"test\",\n\t\tctx:       context.Background(),\n\t}\n\tresponse := &rpcwrapper.Response{}\n\n\tConvey(\"Given I receive an invalid message from the remote enforcer\", t, func() {\n\t\trpchdl.EXPECT().ProcessMessage(gomock.Any(), gomock.Any()).Times(1).Return(false)\n\t\terr := statsserver.PostReportEvent(request, response)\n\t\tSo(err, ShouldNotBeNil)\n\t})\n\n\tConvey(\"Given I receive a valid message from the remote enforcer\", t, func() {\n\t\trpchdl.EXPECT().ProcessMessage(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\terr := statsserver.PostReportEvent(request, response)\n\t\tSo(err, ShouldBeNil)\n\t})\n}\n"
  },
  {
    "path": "controller/internal/enforcer/proxy/rpcserver.go",
    "content": "package enforcerproxy\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n)\n\n// ProxyRPCServer is the receiver for the stats server and maintains a handle to the RPC server.\ntype ProxyRPCServer struct {\n\tcollector   collector.EventCollector\n\trpchdl      rpcwrapper.RPCServer\n\tsecret      string\n\ttokenIssuer common.ServiceTokenIssuer\n\tctx         context.Context\n}\n\n// PostStats is the function called from the remoteenforcer when it has new flow events to publish.\nfunc (r *ProxyRPCServer) PostStats(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\tif !r.rpchdl.ProcessMessage(&req, r.secret) {\n\t\treturn errors.New(\"message sender cannot be verified\")\n\t}\n\n\tpayload := req.Payload.(rpcwrapper.StatsPayload)\n\n\tfor _, record := range payload.Flows {\n\t\tr.collector.CollectFlowEvent(record)\n\t}\n\tpayload.Flows = nil\n\n\tfor _, record := range payload.Users {\n\t\tr.collector.CollectUserEvent(record)\n\t}\n\tpayload.Users = nil\n\n\treturn nil\n}\n\n// RetrieveToken propagates the master request to the token retriever and returns a token.\nfunc (r *ProxyRPCServer) RetrieveToken(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\tif !r.rpchdl.ProcessMessage(&req, r.secret) {\n\t\treturn errors.New(\"message sender cannot be verified\")\n\t}\n\n\tpayload, ok := req.Payload.(rpcwrapper.TokenRequestPayload)\n\tif !ok {\n\t\treturn errors.New(\"invalid request payload for token request\")\n\t}\n\n\tsubctx, cancel := context.WithTimeout(r.ctx, time.Second*60)\n\tdefer cancel()\n\n\ttoken, err := r.tokenIssuer.Issue(subctx, payload.ContextID, payload.ServiceTokenType, payload.Audience, payload.Validity)\n\tif err != nil {\n\t\tresp.Status = \"error\"\n\t\treturn fmt.Errorf(\"control plane 
failed to issue token: %s\", err)\n\t}\n\n\tresp.Status = \"ok\"\n\tresp.Payload = &rpcwrapper.TokenResponsePayload{\n\t\tToken: token,\n\t}\n\n\treturn nil\n}\n\n// PostReportEvent posts report events to the listener.\nfunc (r *ProxyRPCServer) PostReportEvent(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\tif !r.rpchdl.ProcessMessage(&req, r.secret) {\n\t\treturn errors.New(\"message sender cannot be verified\")\n\t}\n\n\tswitch req.PayloadType {\n\tcase rpcwrapper.PingReport:\n\t\tpingReport := req.Payload.(*collector.PingReport)\n\t\tr.collector.CollectPingEvent(pingReport)\n\n\tcase rpcwrapper.PacketReport:\n\t\tdebugReport := req.Payload.(*collector.PacketReport)\n\t\tr.collector.CollectPacketEvent(debugReport)\n\n\tcase rpcwrapper.CounterReport:\n\t\tcounterReport := req.Payload.(*collector.CounterReport)\n\t\tr.collector.CollectCounterEvent(counterReport)\n\n\tcase rpcwrapper.DNSReport:\n\t\tdnsReport := req.Payload.(*collector.DNSRequestReport)\n\t\tr.collector.CollectDNSRequests(dnsReport)\n\n\tcase rpcwrapper.ConnectionExceptionReport:\n\t\tconnectionReport := req.Payload.(*collector.ConnectionExceptionReport)\n\t\tr.collector.CollectConnectionExceptionReport(connectionReport)\n\n\tdefault:\n\t\treturn fmt.Errorf(\"unsupported report type: %v\", req.PayloadType)\n\t}\n\n\treq.Payload = nil\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/secretsproxy/secretsproxy.go",
    "content": "// +build !windows\n\npackage secretsproxy\n\nimport (\n\t\"bufio\"\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"os\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"github.com/aporeto-inc/oxy/forward\"\n\t\"github.com/shirou/gopsutil/process\"\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/urisearch\"\n\t\"go.aporeto.io/trireme-lib/monitor/remoteapi/server\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.aporeto.io/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n)\n\n// SecretsProxy holds all state information for applying policy\n// in the secrets socket API.\ntype SecretsProxy struct {\n\tsocketPath      string\n\tapiCacheMapping cache.DataStore\n\tdrivers         cache.DataStore\n\tcgroupCache     cache.DataStore\n\tpolicyCache     cache.DataStore\n\n\tserver *http.Server\n\tsync.Mutex\n}\n\n// NewSecretsProxy creates a new secrets proxy.\nfunc NewSecretsProxy() *SecretsProxy {\n\n\treturn &SecretsProxy{\n\t\tsocketPath:      constants.DefaultSecretsPath,\n\t\tdrivers:         cache.NewCache(\"secrets driver cache\"),\n\t\tapiCacheMapping: cache.NewCache(\"secrets api cache\"),\n\t\tcgroupCache:     cache.NewCache(\"secrets pu cache\"),\n\t\tpolicyCache:     cache.NewCache(\"policy cache\"),\n\t}\n}\n\n// Run implements the run method of the CtrlInterface. 
It starts the proxy\n// server and initializes the data structures.\nfunc (s *SecretsProxy) Run(ctx context.Context) error {\n\ts.Lock()\n\tdefer s.Unlock()\n\n\tvar err error\n\n\t// Start a custom listener\n\taddr, _ := net.ResolveUnixAddr(\"unix\", s.socketPath)\n\tnl, err := net.ListenUnix(\"unix\", addr)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Unable to start API server: %s\", err)\n\t}\n\n\ts.server = &http.Server{\n\t\tHandler: http.HandlerFunc(s.apiProcessor),\n\t}\n\n\tgo func() {\n\t\t<-ctx.Done()\n\t\ts.server.Close() // nolint errcheck\n\t}()\n\n\tgo s.server.Serve(server.NewUIDListener(nl)) // nolint errcheck\n\n\treturn nil\n}\n\n// Enforce implements the corresponding interface of enforcers.\nfunc (s *SecretsProxy) Enforce(puInfo *policy.PUInfo) error {\n\treturn s.updateService(puInfo)\n}\n\n// Unenforce implements the corresponding interface of the enforcers.\nfunc (s *SecretsProxy) Unenforce(contextID string) error {\n\treturn s.deleteService(contextID)\n}\n\n// GetFilterQueue is a stub for TCP proxy\nfunc (s *SecretsProxy) GetFilterQueue() *fqconfig.FilterQueue {\n\treturn nil\n}\n\n// UpdateSecrets updates the secrets of running enforcers managed by trireme. Remote enforcers will\n// get the secret updates with the next policy push.\nfunc (s *SecretsProxy) UpdateSecrets(secret secrets.Secrets) error {\n\treturn nil\n}\n\n// apiProcessor is called for every request. 
It processes the request\n// and forwards to the originator of the secrets service after\n// authenticating that the client can access the service.\nfunc (s *SecretsProxy) apiProcessor(w http.ResponseWriter, r *http.Request) {\n\tzap.L().Info(\"Processing secrets call\",\n\t\tzap.String(\"URI\", r.RequestURI),\n\t\tzap.String(\"Host\", r.Host),\n\t\tzap.String(\"Remote address\", r.RemoteAddr),\n\t)\n\n\t// The remote address will contain the uid, gid and pid of the calling process.\n\t// This is because of the specific socket listener we are using.\n\tparts := strings.Split(r.RemoteAddr, \":\")\n\tif len(parts) != 3 {\n\t\thttpError(w, fmt.Errorf(\"Bad Remote Address\"), \"Unauthorized request\", http.StatusUnauthorized)\n\t\treturn\n\t}\n\n\t// We only care about the originating PID.\n\tpid := parts[2]\n\tcgroup, err := findParentCgroup(pid)\n\tif err != nil {\n\t\thttpError(w, err, \"Unauthorized client - not the first process\", http.StatusUnauthorized)\n\t\treturn\n\t}\n\n\tdata, err := s.apiCacheMapping.Get(cgroup)\n\tif err != nil {\n\t\thttpError(w, err, \"Unauthorized client\", http.StatusUnauthorized)\n\t\treturn\n\t}\n\n\t// Find the corresponding API cache with the access permissions for\n\t// this particular client.\n\tapiCache, ok := data.(*urisearch.APICache)\n\tif !ok {\n\t\thttpError(w, fmt.Errorf(\"Invalid data types\"), \"Internal server error - invalid type\", http.StatusInternalServerError)\n\t\treturn\n\t}\n\n\t// Find the identity of the PU\n\tpolicyData, err := s.policyCache.Get(cgroup)\n\tif err != nil {\n\t\thttpError(w, err, \"Unauthorized client\", http.StatusUnauthorized)\n\t\treturn\n\t}\n\n\tscopes, ok := policyData.([]string)\n\tif !ok {\n\t\thttpError(w, fmt.Errorf(\"Invalid data types\"), \"Internal server error - invalid type\", http.StatusInternalServerError)\n\t\treturn\n\t}\n\n\t// Search the API cache for matching rules.\n\tfound, _ := apiCache.FindAndMatchScope(r.Method, r.RequestURI, scopes)\n\tif !found {\n\t\thttpError(w, 
fmt.Errorf(\"Unauthorized service\"), \"Unauthorized access\", http.StatusUnauthorized)\n\t\treturn\n\t}\n\n\t// Retrieve the secrets driver data and information.\n\tdriverData, err := s.drivers.Get(cgroup)\n\tif err != nil {\n\t\thttpError(w, err, \"No secrets driver for this client\", http.StatusBadRequest)\n\t\treturn\n\t}\n\tdriver, ok := driverData.(SecretsDriver)\n\tif !ok {\n\t\thttpError(w, fmt.Errorf(\"driver not found\"), \"Bad driver\", http.StatusInternalServerError)\n\t\treturn\n\t}\n\n\t// Transform the request based on the driver.\n\tif err := driver.Transform(r); err != nil {\n\t\thttpError(w, err, \"Secrets driver error\", http.StatusInternalServerError)\n\t\treturn\n\t}\n\n\t// Forward the request. TODO .. we need to massage the return here.\n\tforwarder, err := forward.New(forward.RoundTripper(driver.Transport()))\n\tif err != nil {\n\t\thttpError(w, err, \"Failed to configure forwarder\", http.StatusInternalServerError)\n\t\treturn\n\t}\n\n\tforwarder.ServeHTTP(w, r)\n}\n\nfunc (s *SecretsProxy) updateService(puInfo *policy.PUInfo) error {\n\tvar cgroup string\n\n\t// Only supporting secrets for container PUs at this time.\n\tif puInfo.Runtime.PUType() != common.ContainerPU {\n\t\treturn nil\n\t}\n\t// First we need to determine the corresponding cgroup. We will look at the\n\t// cache since this might be an update event. If that fails, we will look\n\t// at the runtime. If it is not found we abort. If we find the cgroup\n\t// we add it to the cache for future reference.\n\tcgroupData, err := s.cgroupCache.Get(puInfo.ContextID)\n\tif err == nil {\n\t\tcgroup = cgroupData.(string)\n\t} else {\n\t\tif puInfo.Runtime == nil {\n\t\t\treturn fmt.Errorf(\"Unable to find cgroup\")\n\t\t}\n\t\tvar found bool\n\t\tcgroup, found = puInfo.Policy.Annotations().Get(\"@sys:cgroupparent\")\n\t\tif !found {\n\t\t\t// This is not Kubernetes. 
We will associate the cgroup with the enforcer.\n\t\t\tcgroup = puInfo.ContextID\n\t\t}\n\t\ts.cgroupCache.AddOrUpdate(puInfo.ContextID, cgroup)\n\t}\n\n\tscopes := append(puInfo.Policy.Identity().Copy().Tags, puInfo.Policy.Scopes()...)\n\ts.policyCache.AddOrUpdate(cgroup, scopes)\n\n\t// Scan through the dependent services for secrets distribution services.\n\tfor _, service := range puInfo.Policy.DependentServices() {\n\t\t// Ignore all other services\n\t\tif service.Type != policy.ServiceSecretsProxy {\n\t\t\tcontinue\n\t\t}\n\n\t\turiCache := urisearch.NewAPICache(service.HTTPRules, service.ID, true)\n\t\ts.apiCacheMapping.AddOrUpdate(cgroup, uriCache)\n\t\t// Parse the service definition and instantiate the transform driver.\n\n\t\td, err := NewGenericSecretsDriver(service.CACert, service.AuthToken, service.NetworkInfo)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"Failed to create secrets driver: %s\", err)\n\t\t}\n\t\ts.drivers.AddOrUpdate(cgroup, d)\n\t}\n\treturn nil\n}\n\nfunc (s *SecretsProxy) deleteService(contextID string) error {\n\n\tcgroupData, err := s.cgroupCache.Get(contextID)\n\tif err != nil {\n\t\t// Returning nil here. Nothing anyone can do about it.\n\t\tzap.L().Debug(\"PU not found in secrets controller - unable to clean state\")\n\t\treturn nil\n\t}\n\ts.cgroupCache.Remove(contextID)               // nolint errcheck\n\ts.apiCacheMapping.Remove(cgroupData.(string)) // nolint errcheck\n\ts.drivers.Remove(cgroupData.(string))         // nolint errcheck\n\ts.policyCache.Remove(cgroupData.(string))     // nolint errcheck\n\n\treturn nil\n}\n\nfunc httpError(w http.ResponseWriter, err error, msg string, number int) {\n\tzap.L().Error(msg, zap.Error(err))\n\thttp.Error(w, msg, number)\n}\n\n// ValidateOriginProcess implements a strict validation of the origin process. 
We might add later.\nfunc ValidateOriginProcess(pid string) (string, error) {\n\n\tpidNumber, err := strconv.Atoi(pid)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"Invalid PID %s\", pid)\n\t}\n\tprocess, err := process.NewProcess(int32(pidNumber))\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"Process not found: %s\", err)\n\t}\n\tppid, err := process.Ppid()\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"Parent process not found: %s\", err)\n\t}\n\tparentPidCgroup, err := processCgroups(strconv.Itoa(int(ppid)), \"net_cls,net_prio\")\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"Parent cgroup not found: %s\", err)\n\t}\n\tif parentPidCgroup != \"/\" {\n\t\treturn \"\", fmt.Errorf(\"Parent is not root cgroup - authorization fail\")\n\t}\n\treturn findParentCgroup(pid)\n}\n\nfunc processCgroups(pid string, cgroupType string) (string, error) {\n\tpath := fmt.Sprintf(\"/aporetoproc/%s/cgroup\", pid)\n\n\tf, err := os.Open(path)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tdefer f.Close() // nolint\n\n\tscanner := bufio.NewScanner(f)\n\tfor scanner.Scan() {\n\t\tif err := scanner.Err(); err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\ttext := scanner.Text()\n\t\tif !strings.Contains(text, cgroupType) {\n\t\t\tcontinue\n\t\t}\n\t\tparts := strings.SplitN(text, \":\", 3)\n\t\tif len(parts) < 3 {\n\t\t\tcontinue\n\t\t}\n\t\treturn parts[2], nil\n\t}\n\treturn \"\", fmt.Errorf(\"cgroup not found\")\n}\n\n// findParentCgroup returns the parent cgroup of the process caller\nfunc findParentCgroup(pid string) (string, error) {\n\n\tcgroup, err := processCgroups(pid, \"net_cls,net_prio\")\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"Invalid cgroup: %s\", err)\n\t}\n\tfor i := len(cgroup) - 1; i > 0; i-- {\n\t\tif cgroup[i:i+1] == \"/\" {\n\t\t\treturn cgroup[:i], nil\n\t\t}\n\t}\n\tif strings.HasPrefix(cgroup, \"/docker/\") && len(cgroup) > 8 {\n\t\treturn cgroup[8:20], nil\n\t}\n\treturn \"\", fmt.Errorf(\"Cannot find parent cgroup: %s\", pid)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/secretsproxy/secretsproxy_windows.go",
    "content": "// +build windows\n\npackage secretsproxy\n\nimport (\n\t\"context\"\n\n\t\"go.aporeto.io/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n)\n\n// SecretsProxy holds all state information for applying policy\n// in the secrets socket API.\ntype SecretsProxy struct {\n}\n\n// NewSecretsProxy creates a new secrets proxy.\nfunc NewSecretsProxy() *SecretsProxy {\n\treturn &SecretsProxy{}\n}\n\n// Run implements the run method of the CtrlInterface. It starts the proxy\n// server and initializes the data structures.\nfunc (s *SecretsProxy) Run(ctx context.Context) error {\n\treturn nil\n}\n\n// Enforce implements the corresponding interface of enforcers.\nfunc (s *SecretsProxy) Enforce(puInfo *policy.PUInfo) error {\n\treturn nil\n}\n\n// Unenforce implements the corresponding interface of the enforcers.\nfunc (s *SecretsProxy) Unenforce(contextID string) error {\n\treturn nil\n}\n\n// GetFilterQueue is a stub for TCP proxy\nfunc (s *SecretsProxy) GetFilterQueue() *fqconfig.FilterQueue {\n\treturn nil\n}\n\n// UpdateSecrets updates the secrets of running enforcers managed by trireme. Remote enforcers will\n// get the secret updates with the next policy push.\nfunc (s *SecretsProxy) UpdateSecrets(secret secrets.Secrets) error {\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/secretsproxy/transformer.go",
    "content": "package secretsproxy\n\nimport (\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strconv\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n)\n\n// SecretsDriver is a generic interface that the secrets driver must implement.\ntype SecretsDriver interface {\n\tTransport() http.RoundTripper\n\tTransform(r *http.Request) error\n}\n\n// GenericSecretsDriver holds the configuration information for the driver and implements\n// the SecretsDriver interface.\ntype GenericSecretsDriver struct {\n\ttransport *http.Transport\n\ttoken     string\n\ttargetURL *url.URL\n}\n\n// NewGenericSecretsDriver creates a new generic secrets driver. It derives the\n// target URL and TLS configuration from the provided CA, token, and network\n// service definition.\nfunc NewGenericSecretsDriver(ca []byte, token string, network *common.Service) (SecretsDriver, error) {\n\n\tcaPool := x509.NewCertPool()\n\tif !caPool.AppendCertsFromPEM(ca) {\n\t\treturn nil, fmt.Errorf(\"No valid CA provided\")\n\t}\n\n\ttargetAddress := \"\"\n\tif len(network.FQDNs) > 0 {\n\t\ttargetAddress = network.FQDNs[0]\n\t} else if len(network.Addresses) > 0 {\n\t\ttargetAddress = network.Addresses[0].IP.String()\n\t} else {\n\t\treturn nil, fmt.Errorf(\"No valid target\")\n\t}\n\n\tif network.Ports.Min == 0 {\n\t\treturn nil, fmt.Errorf(\"Invalid port specification\")\n\t}\n\n\ttargetURL, err := url.Parse(\"https://\" + targetAddress + \":\" + strconv.Itoa(int(network.Ports.Min)))\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"Invalid URL for secrets service\")\n\t}\n\n\treturn &GenericSecretsDriver{\n\t\ttransport: &http.Transport{\n\t\t\tTLSClientConfig: &tls.Config{\n\t\t\t\tRootCAs: caPool,\n\t\t\t},\n\t\t},\n\t\ttoken:     token,\n\t\ttargetURL: targetURL,\n\t}, nil\n}\n\n// Transport implements the transport interface of the SecretsDriver.\nfunc (k *GenericSecretsDriver) Transport() http.RoundTripper {\n\treturn k.transport\n}\n\n// Transform transforms the request of the 
SecretsDriver\nfunc (k *GenericSecretsDriver) Transform(r *http.Request) error {\n\n\tr.Host = k.targetURL.Host\n\tr.URL = k.targetURL\n\tr.Header.Add(\"Authorization\", \"Bearer \"+k.token)\n\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/ephemeralkeys/ephemeralkeys.go",
    "content": "package ephemeralkeys\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"crypto/elliptic\"\n\t\"crypto/rand\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n)\n\n//PrivateKey struct holds the ecdsa private key and its encoded string\ntype PrivateKey struct {\n\t*ecdsa.PrivateKey\n\tPrivateKeyString string\n}\n\ntype ephemeralKey struct {\n\tprivateKey  *PrivateKey\n\tpublicKeyV1 []byte\n\tpublicKeyV2 []byte\n\tsync.RWMutex\n}\n\nconst keyInterval = 5 * time.Minute\n\n// New creates a new key accessor\nfunc New() (KeyAccessor, error) {\n\tprivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tpublicKeyBytesV1 := crypto.EncodePublicKeyV1(&privateKey.PublicKey)\n\tpublicKeyBytesV2 := crypto.EncodePublicKeyV2(&privateKey.PublicKey)\n\tpvtKeyBytes := crypto.EncodePrivateKey(privateKey)\n\n\tkeys := &ephemeralKey{\n\t\tprivateKey:  &PrivateKey{privateKey, string(pvtKeyBytes)},\n\t\tpublicKeyV1: publicKeyBytesV1,\n\t\tpublicKeyV2: publicKeyBytesV2,\n\t}\n\n\treturn keys, nil\n}\n\n// NewWithRenewal creates a new key accessor and renews it every keyInterval\nfunc NewWithRenewal() (KeyAccessor, error) {\n\tprivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tpublicKeyBytesV1 := crypto.EncodePublicKeyV1(&privateKey.PublicKey)\n\tpublicKeyBytesV2 := crypto.EncodePublicKeyV2(&privateKey.PublicKey)\n\tpvtKeyBytes := crypto.EncodePrivateKey(privateKey)\n\n\tkeys := &ephemeralKey{\n\t\tprivateKey:  &PrivateKey{privateKey, string(pvtKeyBytes)},\n\t\tpublicKeyV1: publicKeyBytesV1,\n\t\tpublicKeyV2: publicKeyBytesV2,\n\t}\n\n\tgo func() {\n\t\tfor {\n\t\t\t<-time.After(keyInterval)\n\t\t\tfor i := 0; i < 5; i++ {\n\t\t\t\tprivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\t\t\t\tif err != nil 
{\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tpublicKeyBytesV1 := crypto.EncodePublicKeyV1(&privateKey.PublicKey)\n\t\t\t\tpublicKeyBytesV2 := crypto.EncodePublicKeyV2(&privateKey.PublicKey)\n\t\t\t\tpvtKeyBytes := crypto.EncodePrivateKey(privateKey)\n\n\t\t\t\tkeys.Lock()\n\t\t\t\tkeys.privateKey = &PrivateKey{privateKey, string(pvtKeyBytes)}\n\t\t\t\tkeys.publicKeyV1 = publicKeyBytesV1\n\t\t\t\tkeys.publicKeyV2 = publicKeyBytesV2\n\t\t\t\tkeys.Unlock()\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}()\n\n\treturn keys, nil\n}\n\n// PrivateKey return the private key of the keypair\nfunc (k *ephemeralKey) PrivateKey() *PrivateKey {\n\tk.RLock()\n\tdefer k.RUnlock()\n\treturn k.privateKey\n}\n\nfunc (k *ephemeralKey) DecodingKeyV1() []byte {\n\tk.RLock()\n\tdefer k.RUnlock()\n\treturn k.publicKeyV1\n}\n\nfunc (k *ephemeralKey) DecodingKeyV2() []byte {\n\tk.RLock()\n\tdefer k.RUnlock()\n\treturn k.publicKeyV2\n}\n\nvar secret secrets.Secrets\nvar lock sync.RWMutex\n\n//GetDatapathSecret returns the secrets\nfunc GetDatapathSecret() secrets.Secrets {\n\tlock.RLock()\n\tdefer lock.RUnlock()\n\treturn secret\n}\n\n//UpdateDatapathSecrets updates the secrets\nfunc UpdateDatapathSecrets(s secrets.Secrets) {\n\tlock.Lock()\n\tsecret = s\n\tlock.Unlock()\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/ephemeralkeys/interfaces.go",
    "content": "package ephemeralkeys\n\n// KeyAccessor holds the ephemeral key functions\ntype KeyAccessor interface {\n\tPrivateKey() *PrivateKey\n\tDecodingKeyV1() []byte\n\tDecodingKeyV2() []byte\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/ephemeralkeys/mockephemeralkeys/mockephemeralkeys.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/internal/enforcer/utils/ephemeralkeys/interfaces.go\n\n// Package mockephemeralkeys is a generated GoMock package.\npackage mockephemeralkeys\n\nimport (\n\tecdsa \"crypto/ecdsa\"\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n)\n\n// MockKeyAccessor is a mock of KeyAccessor interface\n// nolint\ntype MockKeyAccessor struct {\n\tctrl     *gomock.Controller\n\trecorder *MockKeyAccessorMockRecorder\n}\n\n// MockKeyAccessorMockRecorder is the mock recorder for MockKeyAccessor\n// nolint\ntype MockKeyAccessorMockRecorder struct {\n\tmock *MockKeyAccessor\n}\n\n// NewMockKeyAccessor creates a new mock instance\n// nolint\nfunc NewMockKeyAccessor(ctrl *gomock.Controller) *MockKeyAccessor {\n\tmock := &MockKeyAccessor{ctrl: ctrl}\n\tmock.recorder = &MockKeyAccessorMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockKeyAccessor) EXPECT() *MockKeyAccessorMockRecorder {\n\treturn m.recorder\n}\n\n// PrivateKey mocks base method\n// nolint\nfunc (m *MockKeyAccessor) PrivateKey() *ecdsa.PrivateKey {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PrivateKey\")\n\tret0, _ := ret[0].(*ecdsa.PrivateKey)\n\treturn ret0\n}\n\n// PrivateKey indicates an expected call of PrivateKey\n// nolint\nfunc (mr *MockKeyAccessorMockRecorder) PrivateKey() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PrivateKey\", reflect.TypeOf((*MockKeyAccessor)(nil).PrivateKey))\n}\n\n// DecodingKey mocks base method\n// nolint\nfunc (m *MockKeyAccessor) DecodingKey() []byte {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DecodingKey\")\n\tret0, _ := ret[0].([]byte)\n\treturn ret0\n}\n\n// DecodingKey indicates an expected call of DecodingKey\n// nolint\nfunc (mr *MockKeyAccessorMockRecorder) DecodingKey() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DecodingKey\", reflect.TypeOf((*MockKeyAccessor)(nil).DecodingKey))\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/nsenter/nsenter.c",
    "content": "// +build linux !darwin\n// +build !windows\n\n#define _GNU_SOURCE\n#include <errno.h>\n#include <fcntl.h>\n#include <sched.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <unistd.h>\n\n#define STRBUF_SIZE     128\nvoid nsexec(void) {\n\n  int fd = 0;\n  char path[STRBUF_SIZE*2]={0};\n  char msg[STRBUF_SIZE*4];\n  char mountpoint[STRBUF_SIZE] = {0};\n  char *container_pid_env = getenv(\"TRIREME_ENV_CONTAINER_PID\");\n  char *netns_path_env = getenv(\"TRIREME_ENV_NS_PATH\");\n  char *proc_mountpoint = getenv(\"TRIREME_ENV_PROC_MOUNTPOINT\");\n  if(container_pid_env == NULL){\n    // We are not running as remote enforcer\n    setenv(\"TRIREME_ENV_NSENTER_LOGS\", \"no container pid\", 1);\n    return;\n  }\n  if(netns_path_env == NULL){\n    // This means the PID Needs to be used to determine the NetNsPath.\n    if(proc_mountpoint == NULL){\n      strncpy(mountpoint, \"/proc\", strlen(\"/proc\")+1);\n    }else{\n      strncpy(mountpoint, proc_mountpoint, STRBUF_SIZE-1);\n    }\n    // Setup proc symlink\n    snprintf(path, sizeof(path), \"%s/%s/ns/net\", mountpoint, container_pid_env);\n  } else {\n    // We use the env variable as the Path.\n    strncpy(path, netns_path_env, STRBUF_SIZE);\n  }\n\n  // Setup FD to symlink\n  fd = open(path, O_RDONLY);\n  if(fd < 0) {\n    snprintf(msg, sizeof(msg), \"path:%s fd:%d\", path, fd);\n    setenv(\"TRIREME_ENV_NSENTER_ERROR_STATE\",strerror(-ENOENT), 1);\n    setenv(\"TRIREME_ENV_NSENTER_LOGS\", path, 1);\n    return;\n  }\n\n  // Set namespace\n  int retval =syscall(308,fd,CLONE_NEWNET);\n  snprintf(msg, sizeof(msg), \"path:%s fd:%d retval:%d\", path, fd, retval);\n  setenv(\"TRIREME_ENV_NSENTER_LOGS\",msg,1);\n  if(retval < 0){\n    setenv(\"TRIREME_ENV_NSENTER_ERROR_STATE\",strerror(errno),1); \t\t\n  }\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/nsenter/nsenter.go",
    "content": "// +build !linux\n\n//Package nsenter for switching namespaces\npackage nsenter\n\n//This package should only run on linux\n//Don't test functionality of this package on non linux platforms as\n//the setns call is not available there\n"
  },
  {
    "path": "controller/internal/enforcer/utils/nsenter/nsenter_linux.go",
    "content": "// +build linux\n\npackage nsenter\n\n/*\n#cgo CFLAGS: -Wall\nextern void nsexec();\nvoid __attribute__((constructor)) init(void) {\n\tnsexec();\n}\n*/\nimport \"C\"\n"
  },
  {
    "path": "controller/internal/enforcer/utils/packetgen/interfaces.go",
    "content": "package packetgen\n\nimport (\n\t\"github.com/google/gopacket\"\n\t\"github.com/google/gopacket/layers\"\n)\n\n//PacketFlowType type for different types of flows\ntype PacketFlowType uint8\n\nconst (\n\t//PacketFlowTypeGenerateGoodFlow is used to generate a good flow\n\tPacketFlowTypeGenerateGoodFlow PacketFlowType = iota\n\t//PacketFlowTypeGoodFlowTemplate will have a good flow from a hardcoded template\n\tPacketFlowTypeGoodFlowTemplate\n\t//PacketFlowTypeMultipleGoodFlow will have two flows\n\tPacketFlowTypeMultipleGoodFlow\n\t//PacketFlowTypeMultipleIntervenedFlow will have two flows interleaved with each other\n\tPacketFlowTypeMultipleIntervenedFlow\n)\n\n//EthernetPacketManipulator interface is used to create/manipulate Ethernet packet\ntype EthernetPacketManipulator interface {\n\t//Used to create an Ethernet layer\n\tAddEthernetLayer(srcMACstr string, dstMACstr string) error\n\t//Used to return Ethernet packet created\n\tGetEthernetPacket() layers.Ethernet\n}\n\n//IPPacketManipulator interface is used to create/manipulate IP packet\ntype IPPacketManipulator interface {\n\t//Used to create an IP layer\n\tAddIPLayer(srcIPstr string, dstIPstr string) error\n\t//Used to return IP packet created\n\tGetIPPacket() layers.IPv4\n\t//Used to return IP checksum\n\tGetIPChecksum() uint16\n}\n\n//TCPPacketManipulator interface is used to create/manipulate TCP packet\ntype TCPPacketManipulator interface {\n\t//Used to create a TCP layer\n\tAddTCPLayer(srcPort layers.TCPPort, dstPort layers.TCPPort) error\n\t//Used to return TCP packet\n\tGetTCPPacket() layers.TCP\n\t//Used to return TCP Sequence number\n\tGetTCPSequenceNumber() uint32\n\t//Used to return TCP Acknowledgement number\n\tGetTCPAcknowledgementNumber() uint32\n\t//Used to return TCP window\n\tGetTCPWindow() uint16\n\t//Used to return TCP Syn flag\n\tGetTCPSyn() bool\n\t//Used to return TCP Ack flag\n\tGetTCPAck() bool\n\t//Used to return TCP Fin flag\n\tGetTCPFin() bool\n\t//Used to return 
TCP Checksum\n\tGetTCPChecksum() uint16\n\t//Used to set TCP Sequence number\n\tSetTCPSequenceNumber(seqNum uint32)\n\t//Used to set TCP Acknowledgement number\n\tSetTCPAcknowledgementNumber(ackNum uint32)\n\t//Used to set TCP Window\n\tSetTCPWindow(window uint16)\n\t//Used to set TCP Syn flag to true\n\tSetTCPSyn()\n\t//Used to set TCP Syn and Ack flag to true\n\tSetTCPSynAck()\n\t//Used to set TCP Ack flag to true\n\tSetTCPAck()\n\t//Used to set TCP Cwr flag to true\n\tSetTCPCwr()\n\t//Used to set TCP Ece flag to true\n\tSetTCPEce()\n\t//Used to set TCP Urg flag to true\n\tSetTCPUrg()\n\t//Used to set TCP Psh flag to true\n\tSetTCPPsh()\n\t//Used to set TCP Rst flag to true\n\tSetTCPRst()\n\t//Used to set TCP Fin flag to true\n\tSetTCPFin()\n\t//Used to add TCP Payload\n\tNewTCPPayload(newPayload string) error\n}\n\n//PacketHelper interface is a helper for packets and packet flows\n//Optional: not needed for actual usage\ntype PacketHelper interface {\n\tToBytes() ([]byte, error)\n\tAddPacket(packet gopacket.Packet)\n\tDecodePacket() PacketManipulator\n}\n\n//PacketManipulator is an interface for packet manipulations\n//Composition of Ethernet, IP and TCP Manipulator interface\ntype PacketManipulator interface {\n\tEthernetPacketManipulator\n\tIPPacketManipulator\n\tTCPPacketManipulator\n\tPacketHelper\n}\n\n//PacketFlowManipulator is an interface for packet flow manipulations\n//Used to create/manipulate packet flows\ntype PacketFlowManipulator interface {\n\t//Used to create a flow of TCP packets\n\tGenerateTCPFlow(pt PacketFlowType) (PacketFlowManipulator, error)\n\t//Used to return first TCP Syn packet\n\tGetFirstSynPacket() PacketManipulator\n\t//Used to return first TCP SynAck packet\n\tGetFirstSynAckPacket() PacketManipulator\n\t//Used to return first TCP Ack packet\n\tGetFirstAckPacket() PacketManipulator\n\t//Used to return all the TCP Syn packets from the flow\n\tGetSynPackets() PacketFlowManipulator\n\t//Used to return all the TCP SynAck packets from 
the flow\n\tGetSynAckPackets() PacketFlowManipulator\n\t//Used to return all the TCP Ack packets from the flow\n\tGetAckPackets() PacketFlowManipulator\n\t//Used to return all the packets up to the first TCP SynAck packet from the flow\n\tGetUptoFirstSynAckPacket() PacketFlowManipulator\n\t//Used to return all the packets up to the first TCP Ack packet from the flow\n\tGetUptoFirstAckPacket() PacketFlowManipulator\n\t//Used to return Nth packet from the flow\n\tGetNthPacket(index int) PacketManipulator\n\t//Used to return length of the flow\n\tGetNumPackets() int\n\t//Used to add a new packet to the flow\n\tAppendPacket(p PacketManipulator) int\n}\n\n//Packet is a custom type which holds the packets and implements PacketManipulator\ntype Packet struct {\n\tethernetLayer *layers.Ethernet\n\tipLayer       *layers.IPv4\n\ttcpLayer      *layers.TCP\n\tpacket        gopacket.Packet\n}\n\n//PacketFlow is a custom type which holds the packet attributes and the flow\n//Implements PacketFlowManipulator interface\ntype PacketFlow struct {\n\tsMAC, dMAC   string\n\tsIP, dIP     string\n\tsPort, dPort layers.TCPPort\n\tflow         []PacketManipulator\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/packetgen/packet_gen.go",
    "content": "//Package packetgen \"PacketGen\" is a Packet Generator library\n//Current version: V1.0; updates are coming soon\npackage packetgen\n\n//Go libraries\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\n\t\"github.com/google/gopacket\"\n\t\"github.com/google/gopacket/layers\"\n)\n\n//NewPacket returns a packet struct which implements PacketManipulator\nfunc NewPacket() PacketManipulator {\n\n\treturn &Packet{}\n}\n\n//AddEthernetLayer creates an Ethernet layer\nfunc (p *Packet) AddEthernetLayer(srcMACstr string, dstMACstr string) error {\n\n\tif p.ethernetLayer != nil {\n\t\treturn errors.New(\"ethernet layer already exists\")\n\t}\n\n\tvar srcMAC, dstMAC net.HardwareAddr\n\n\t//MAC address of the source\n\tsrcMAC, _ = net.ParseMAC(srcMACstr)\n\n\tif srcMAC == nil {\n\t\treturn errors.New(\"no source mac given\")\n\t}\n\n\t//MAC address of the destination\n\tdstMAC, _ = net.ParseMAC(dstMACstr)\n\n\tif dstMAC == nil {\n\t\treturn errors.New(\"no destination mac given\")\n\t}\n\n\t//Ethernet packet header\n\tp.ethernetLayer = &layers.Ethernet{\n\t\tSrcMAC:       srcMAC,\n\t\tDstMAC:       dstMAC,\n\t\tEthernetType: layers.EthernetTypeIPv4,\n\t}\n\n\treturn nil\n}\n\n//GetEthernetPacket returns the ethernet layer created\nfunc (p *Packet) GetEthernetPacket() layers.Ethernet {\n\n\treturn *p.ethernetLayer\n}\n\n//AddIPLayer creates an IP layer\nfunc (p *Packet) AddIPLayer(srcIPstr string, dstIPstr string) error {\n\n\tif p.ipLayer != nil {\n\t\treturn errors.New(\"ip layer already exists\")\n\t}\n\n\tvar srcIP, dstIP net.IP\n\n\t//IP address of the source\n\tsrcIP = net.ParseIP(srcIPstr)\n\n\tif srcIP == nil {\n\t\treturn errors.New(\"no source ip given\")\n\t}\n\n\t//IP address of the destination\n\tdstIP = net.ParseIP(dstIPstr)\n\n\tif dstIP == nil {\n\t\treturn errors.New(\"no destination ip given\")\n\t}\n\n\t//IP packet header\n\tp.ipLayer = &layers.IPv4{\n\t\tSrcIP:    srcIP,\n\t\tDstIP:    dstIP,\n\t\tVersion:  4,\n\t\tTTL:      64,\n\t\tProtocol: 
layers.IPProtocolTCP,\n\t}\n\n\treturn nil\n}\n\n//GetIPChecksum returns the IP checksum\nfunc (p *Packet) GetIPChecksum() uint16 {\n\n\treturn p.ipLayer.Checksum\n}\n\n//GetIPPacket returns the IP layer created\nfunc (p *Packet) GetIPPacket() layers.IPv4 {\n\n\treturn *p.ipLayer\n}\n\n//AddTCPLayer creates a TCP layer\nfunc (p *Packet) AddTCPLayer(srcPort layers.TCPPort, dstPort layers.TCPPort) error {\n\n\tif p.tcpLayer != nil {\n\t\treturn errors.New(\"tcp layer already exists\")\n\t}\n\n\tif srcPort == 0 {\n\t\treturn errors.New(\"no source tcp port given\")\n\t}\n\n\tif dstPort == 0 {\n\t\treturn errors.New(\"no destination tcp port given\")\n\t}\n\n\t//TCP packet header\n\tp.tcpLayer = &layers.TCP{\n\t\tSrcPort: srcPort,\n\t\tDstPort: dstPort,\n\t\tWindow:  0,\n\t\tUrgent:  0,\n\t\tSeq:     0,\n\t\tAck:     0,\n\t\tACK:     false,\n\t\tSYN:     false,\n\t\tFIN:     false,\n\t\tRST:     false,\n\t\tURG:     false,\n\t\tECE:     false,\n\t\tCWR:     false,\n\t\tNS:      false,\n\t\tPSH:     false,\n\t}\n\n\treturn p.tcpLayer.SetNetworkLayerForChecksum(p.ipLayer)\n}\n\n//GetTCPPacket returns the created TCP packet\nfunc (p *Packet) GetTCPPacket() layers.TCP {\n\n\treturn *p.tcpLayer\n}\n\n//GetTCPSequenceNumber returns TCP Sequence number\nfunc (p *Packet) GetTCPSequenceNumber() uint32 {\n\n\treturn p.tcpLayer.Seq\n}\n\n//GetTCPAcknowledgementNumber returns TCP Acknowledgement number\nfunc (p *Packet) GetTCPAcknowledgementNumber() uint32 {\n\n\treturn p.tcpLayer.Ack\n}\n\n//GetTCPWindow returns TCP Window\nfunc (p *Packet) GetTCPWindow() uint16 {\n\n\treturn p.tcpLayer.Window\n}\n\n//GetTCPSyn returns TCP SYN flag\nfunc (p *Packet) GetTCPSyn() bool {\n\n\treturn p.tcpLayer.SYN\n}\n\n//GetTCPAck returns TCP ACK flag\nfunc (p *Packet) GetTCPAck() bool {\n\n\treturn p.tcpLayer.ACK\n}\n\n//GetTCPFin returns TCP FIN flag\nfunc (p *Packet) GetTCPFin() bool {\n\n\treturn p.tcpLayer.FIN\n}\n\n//GetTCPChecksum returns TCP checksum\nfunc (p *Packet) GetTCPChecksum() uint16 {\n\n\treturn 
p.tcpLayer.Checksum\n}\n\n//SetTCPSequenceNumber changes TCP sequence number\nfunc (p *Packet) SetTCPSequenceNumber(seqNum uint32) {\n\n\tp.tcpLayer.Seq = seqNum\n\n}\n\n//SetTCPAcknowledgementNumber changes TCP Acknowledgement number\nfunc (p *Packet) SetTCPAcknowledgementNumber(ackNum uint32) {\n\n\tp.tcpLayer.Ack = ackNum\n\n}\n\n//SetTCPWindow changes the TCP window\nfunc (p *Packet) SetTCPWindow(window uint16) {\n\n\tp.tcpLayer.Window = window\n\n}\n\n//SetTCPSyn changes the TCP SYN flag to true\nfunc (p *Packet) SetTCPSyn() {\n\n\tp.tcpLayer.SYN = true\n\tp.tcpLayer.ACK = false\n\tp.tcpLayer.FIN = false\n}\n\n//SetTCPSynAck changes the TCP SYN and ACK flag to true\nfunc (p *Packet) SetTCPSynAck() {\n\n\tp.tcpLayer.SYN = true\n\tp.tcpLayer.ACK = true\n\tp.tcpLayer.FIN = false\n}\n\n//SetTCPAck changes the TCP ACK flag to true\nfunc (p *Packet) SetTCPAck() {\n\n\tp.tcpLayer.SYN = false\n\tp.tcpLayer.ACK = true\n\tp.tcpLayer.FIN = false\n}\n\n//SetTCPCwr changes the TCP CWR flag to true\nfunc (p *Packet) SetTCPCwr() {\n\tp.tcpLayer.CWR = true\n}\n\n//SetTCPEce changes the TCP ECE flag to true\nfunc (p *Packet) SetTCPEce() {\n\tp.tcpLayer.ECE = true\n}\n\n//SetTCPUrg changes the TCP URG flag to true\nfunc (p *Packet) SetTCPUrg() {\n\tp.tcpLayer.URG = true\n}\n\n//SetTCPPsh changes the TCP PSH flag to true\nfunc (p *Packet) SetTCPPsh() {\n\tp.tcpLayer.PSH = true\n}\n\n//SetTCPRst changes the TCP RST flag to true\nfunc (p *Packet) SetTCPRst() {\n\tp.tcpLayer.RST = true\n}\n\n//SetTCPFin changes the TCP FIN flag to true\nfunc (p *Packet) SetTCPFin() {\n\tp.tcpLayer.FIN = true\n}\n\n//NewTCPPayload adds new payload to TCP layer\nfunc (p *Packet) NewTCPPayload(newPayload string) error {\n\n\tif p.tcpLayer.Payload != nil {\n\t\treturn errors.New(\"payload already exists\")\n\t}\n\n\tp.tcpLayer.Payload = []byte(newPayload)\n\n\treturn nil\n}\n\n//ToBytes creates a packet buffer and converts it into a complete packet with ethernet, IP and TCP (with options)\nfunc (p 
*Packet) ToBytes() ([]byte, error) {\n\n\topts := gopacket.SerializeOptions{\n\t\tFixLengths:       true, //fix lengths based on the payload (data)\n\t\tComputeChecksums: true, //compute checksum based on the payload during serialization\n\t}\n\n\tif err := p.tcpLayer.SetNetworkLayerForChecksum(p.ipLayer); err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to compute checksum: %s\", err)\n\t}\n\n\t//Creating a packet buffer by serializing the ethernet, IP and TCP layers/packets\n\tpacketBuf := gopacket.NewSerializeBuffer()\n\ttcpPayload := gopacket.Payload(p.tcpLayer.Payload)\n\tif err := gopacket.SerializeLayers(packetBuf, opts, p.ethernetLayer, p.ipLayer, p.tcpLayer, tcpPayload); err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to serialize layers: %s\", err)\n\t}\n\t//Converting into bytes and removing the ethernet header from the layers\n\tbytes := packetBuf.Bytes()\n\tbytesWithoutEthernet := bytes[14:]\n\n\tvar finalBytes []byte\n\tfinalBytes = append(finalBytes, bytesWithoutEthernet...)\n\n\treturn finalBytes, nil\n}\n\n//NewTemplateFlow returns an empty flow of packets which implements PacketFlowManipulator\nfunc NewTemplateFlow() PacketFlowManipulator {\n\n\treturn &PacketFlow{}\n}\n\n//AddPacket is a helper method that stores the packet from the template in the internal struct field\nfunc (p *Packet) AddPacket(packet gopacket.Packet) {\n\n\tp.packet = packet\n\n}\n\n//DecodePacket returns the decoded packet, which implements PacketManipulator\nfunc (p *Packet) DecodePacket() PacketManipulator {\n\n\tpacketData := &Packet{\n\t\tethernetLayer: &layers.Ethernet{},\n\t\tipLayer:       &layers.IPv4{},\n\t\ttcpLayer:      &layers.TCP{},\n\t}\n\n\tnewEthernetPacket := packetData.GetEthernetPacket()\n\tnewIPPacket := packetData.GetIPPacket()\n\tnewTCPPacket := packetData.GetTCPPacket()\n\n\tif ethernetLayer := p.packet.Layer(layers.LayerTypeEthernet); ethernetLayer != nil {\n\t\tethernet, _ := ethernetLayer.(*layers.Ethernet)\n\t\tnewEthernetPacket.SrcMAC = 
ethernet.SrcMAC\n\t\tnewEthernetPacket.DstMAC = ethernet.DstMAC\n\t\tnewEthernetPacket.EthernetType = ethernet.EthernetType\n\t}\n\n\tif ipLayer := p.packet.Layer(layers.LayerTypeIPv4); ipLayer != nil {\n\t\tip, _ := ipLayer.(*layers.IPv4)\n\n\t\tnewIPPacket.SrcIP = ip.SrcIP\n\t\tnewIPPacket.DstIP = ip.DstIP\n\t\tnewIPPacket.Version = ip.Version\n\t\tnewIPPacket.Length = ip.Length\n\t\tnewIPPacket.Protocol = ip.Protocol\n\t\tnewIPPacket.TTL = ip.TTL\n\t}\n\n\tif tcpLayer := p.packet.Layer(layers.LayerTypeTCP); tcpLayer != nil {\n\t\ttcp, _ := tcpLayer.(*layers.TCP)\n\n\t\tnewTCPPacket.SrcPort = tcp.SrcPort\n\t\tnewTCPPacket.DstPort = tcp.DstPort\n\t\tnewTCPPacket.Seq = tcp.Seq\n\t\tnewTCPPacket.Ack = tcp.Ack\n\t\tnewTCPPacket.SYN = tcp.SYN\n\t\tnewTCPPacket.FIN = tcp.FIN\n\t\tnewTCPPacket.RST = tcp.RST\n\t\tnewTCPPacket.PSH = tcp.PSH\n\t\tnewTCPPacket.ACK = tcp.ACK\n\t\tnewTCPPacket.URG = tcp.URG\n\t\tnewTCPPacket.ECE = tcp.ECE\n\t\tnewTCPPacket.CWR = tcp.CWR\n\t\tnewTCPPacket.NS = tcp.NS\n\t\tnewTCPPacket.Checksum = tcp.Checksum\n\t\tnewTCPPacket.Window = tcp.Window\n\t}\n\n\tpacketData = &Packet{\n\t\tethernetLayer: &newEthernetPacket,\n\t\tipLayer:       &newIPPacket,\n\t\ttcpLayer:      &newTCPPacket,\n\t}\n\n\treturn packetData\n}\n\n//NewPacketFlow returns a PacketFlow struct which implements PacketFlowManipulator\nfunc NewPacketFlow(smac string, dmac string, sip string, dip string, sport layers.TCPPort, dport layers.TCPPort) PacketFlowManipulator {\n\n\tinitialTuples := &PacketFlow{\n\t\tsMAC:  smac,\n\t\tdMAC:  dmac,\n\t\tsIP:   sip,\n\t\tdIP:   dip,\n\t\tsPort: sport,\n\t\tdPort: dport,\n\t\tflow:  make([]PacketManipulator, 0),\n\t}\n\n\treturn initialTuples\n}\n\n//GenerateTCPFlow generates a flow of TCP packets for the given PacketFlowType\nfunc (p *PacketFlow) GenerateTCPFlow(pt PacketFlowType) (PacketFlowManipulator, error) {\n\n\tif pt == 0 {\n\t\t//Create a SYN packet to initialize the flow\n\t\tfirstPacket := NewPacket()\n\t\tif err := 
firstPacket.AddEthernetLayer(p.sMAC, p.dMAC); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to add ethernet layer: %s\", err)\n\t\t}\n\t\tif err := firstPacket.AddIPLayer(p.sIP, p.dIP); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to add ip layer: %s\", err)\n\t\t}\n\t\tif err := firstPacket.AddTCPLayer(p.sPort, p.dPort); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to add tcp layer: %s\", err)\n\t\t}\n\t\tfirstPacket.SetTCPSyn()\n\t\tfirstPacket.SetTCPSequenceNumber(firstPacket.GetTCPSequenceNumber())\n\t\tfirstPacket.SetTCPAcknowledgementNumber(firstPacket.GetTCPAcknowledgementNumber())\n\t\tsynPacket, _ := firstPacket.(*Packet)\n\n\t\tp.flow = append(p.flow, synPacket)\n\n\t\t//Create a SynAck packet\n\t\tsecondPacket := NewPacket()\n\t\tif err := secondPacket.AddEthernetLayer(p.sMAC, p.dMAC); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to add ethernet layer: %s\", err)\n\t\t}\n\t\tif err := secondPacket.AddIPLayer(p.dIP, p.sIP); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to add ip layer: %s\", err)\n\t\t}\n\t\tif err := secondPacket.AddTCPLayer(p.dPort, p.sPort); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to add tcp layer: %s\", err)\n\t\t}\n\t\tsecondPacket.SetTCPSynAck()\n\t\tsecondPacket.SetTCPSequenceNumber(0)\n\t\tsecondPacket.SetTCPAcknowledgementNumber(firstPacket.GetTCPSequenceNumber() + 1)\n\t\tsynackPacket, _ := secondPacket.(*Packet)\n\n\t\tp.flow = append(p.flow, synackPacket)\n\n\t\t//Create an Ack Packet\n\t\tthirdPacket := NewPacket()\n\t\tif err := thirdPacket.AddEthernetLayer(p.sMAC, p.dMAC); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to add ethernet layer: %s\", err)\n\t\t}\n\t\tif err := thirdPacket.AddIPLayer(p.sIP, p.dIP); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to add ip layer: %s\", err)\n\t\t}\n\t\tif err := thirdPacket.AddTCPLayer(p.sPort, p.dPort); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to add tcp layer: %s\", 
err)\n\t\t}\n\t\tthirdPacket.SetTCPAck()\n\t\tthirdPacket.SetTCPSequenceNumber(secondPacket.GetTCPAcknowledgementNumber())\n\t\tthirdPacket.SetTCPAcknowledgementNumber(secondPacket.GetTCPSequenceNumber() + 1)\n\t\tackPacket, _ := thirdPacket.(*Packet)\n\n\t\tp.flow = append(p.flow, ackPacket)\n\n\t\treturn p, nil\n\n\t} else if pt == 1 {\n\n\t\tfor i := 0; i < len(PacketFlowTemplate1); i++ {\n\t\t\t//Create a Packet type variable to store decoded packet\n\t\t\tnewPacket := NewPacket()\n\t\t\tpacket := gopacket.NewPacket(PacketFlowTemplate1[i], layers.LayerTypeEthernet, gopacket.Default)\n\t\t\tnewPacket.AddPacket(packet)\n\n\t\t\tp.flow = append(p.flow, newPacket.DecodePacket())\n\n\t\t}\n\n\t\treturn p, nil\n\t} else if pt == 2 {\n\n\t\tfor i := 0; i < len(PacketFlowTemplate2); i++ {\n\n\t\t\t//Create a Packet type variable to store decoded packet\n\t\t\tnewPacket := NewPacket()\n\t\t\tpacket := gopacket.NewPacket(PacketFlowTemplate2[i], layers.LayerTypeEthernet, gopacket.Default)\n\t\t\tnewPacket.AddPacket(packet)\n\n\t\t\tp.flow = append(p.flow, newPacket.DecodePacket())\n\n\t\t}\n\n\t\treturn p, nil\n\t} else if pt == 3 {\n\n\t\tfor i := 0; i < len(PacketFlowTemplate3); i++ {\n\n\t\t\t//Create a Packet type variable to store decoded packet\n\t\t\tnewPacket := NewPacket()\n\t\t\tpacket := gopacket.NewPacket(PacketFlowTemplate3[i], layers.LayerTypeEthernet, gopacket.Default)\n\t\t\tnewPacket.AddPacket(packet)\n\n\t\t\tp.flow = append(p.flow, newPacket.DecodePacket())\n\n\t\t}\n\n\t\treturn p, nil\n\t}\n\n\treturn nil, nil\n}\n\n//GenerateTCPFlowPayload is not implemented yet\nfunc (p *PacketFlow) GenerateTCPFlowPayload(newPayload string) PacketFlowManipulator {\n\n\treturn nil\n}\n\n//AppendPacket adds the packet to the flow and returns the new packet count\nfunc (p *PacketFlow) AppendPacket(pm PacketManipulator) int {\n\n\tp.flow = append(p.flow, pm)\n\n\treturn p.GetNumPackets()\n}\n\n//getMatchPackets returns the packets in the flow that match the given SYN, ACK and FIN flags\n
func (p *PacketFlow) getMatchPackets(syn, ack, fin bool) PacketFlowManipulator {\n\n\tpacketsInFlow := NewPacketFlow(p.sMAC, p.dMAC, p.sIP, p.dIP, p.sPort, p.dPort)\n\n\tfor j := 0; j < len(p.flow); j++ {\n\t\tif p.flow[j].GetTCPSyn() == syn && p.flow[j].GetTCPAck() == ack && p.flow[j].GetTCPFin() == fin {\n\t\t\tpacketsInFlow.AppendPacket(p.flow[j])\n\t\t}\n\t}\n\n\treturn packetsInFlow\n}\n\n//GetFirstSynPacket returns the first Syn packet from the flow\nfunc (p *PacketFlow) GetFirstSynPacket() PacketManipulator {\n\n\treturn p.GetSynPackets().GetNthPacket(0)\n}\n\n//GetFirstSynAckPacket returns the first SynAck packet from the flow\nfunc (p *PacketFlow) GetFirstSynAckPacket() PacketManipulator {\n\n\treturn p.GetSynAckPackets().GetNthPacket(0)\n}\n\n//GetFirstAckPacket returns the first Ack packet from the flow\nfunc (p *PacketFlow) GetFirstAckPacket() PacketManipulator {\n\n\treturn p.GetAckPackets().GetNthPacket(0)\n}\n\n//GetSynPackets returns the SYN packets\nfunc (p *PacketFlow) GetSynPackets() PacketFlowManipulator {\n\n\treturn p.getMatchPackets(true, false, false)\n}\n\n//GetSynAckPackets returns the SynAck packets\nfunc (p *PacketFlow) GetSynAckPackets() PacketFlowManipulator {\n\n\treturn p.getMatchPackets(true, true, false)\n}\n\n//GetAckPackets returns the Ack Packets\nfunc (p *PacketFlow) GetAckPackets() PacketFlowManipulator {\n\n\treturn p.getMatchPackets(false, true, false)\n}\n\n//GetUptoFirstSynAckPacket returns all packets up to and including the first SynAck packet\nfunc (p *PacketFlow) GetUptoFirstSynAckPacket() PacketFlowManipulator {\n\n\tpacketsInFlow := NewPacketFlow(p.sMAC, p.dMAC, p.sIP, p.dIP, p.sPort, p.dPort)\n\tflag := false\n\n\tfor j := 0; j < len(p.flow); j++ {\n\t\tif !flag {\n\t\t\tpacketsInFlow.AppendPacket(p.flow[j])\n\t\t\tif p.flow[j].GetTCPSyn() && p.flow[j].GetTCPAck() && !p.flow[j].GetTCPFin() {\n\t\t\t\tflag = true\n\t\t\t}\n\t\t}\n\t}\n\n\treturn packetsInFlow\n}\n\n//GetUptoFirstAckPacket returns all packets up to and including the first Ack packet\nfunc (p 
*PacketFlow) GetUptoFirstAckPacket() PacketFlowManipulator {\n\n\tpacketsInFlow := NewPacketFlow(p.sMAC, p.dMAC, p.sIP, p.dIP, p.sPort, p.dPort)\n\tflag := false\n\n\tfor j := 0; j < len(p.flow); j++ {\n\t\tif !flag {\n\t\t\tpacketsInFlow.AppendPacket(p.flow[j])\n\t\t\tif !p.flow[j].GetTCPSyn() && p.flow[j].GetTCPAck() && !p.flow[j].GetTCPFin() {\n\t\t\t\tflag = true\n\t\t\t}\n\t\t}\n\t}\n\n\treturn packetsInFlow\n}\n\n//GetNthPacket returns the packet at the given index and panics if the index is out of range\nfunc (p *PacketFlow) GetNthPacket(index int) PacketManipulator {\n\n\tfor i := 0; i < len(p.flow); i++ {\n\t\tif index == i {\n\t\t\treturn p.flow[i]\n\t\t}\n\t}\n\n\tpanic(\"Index out of range\")\n}\n\n//GetNumPackets returns the number of packets in the flow\nfunc (p *PacketFlow) GetNumPackets() int {\n\n\treturn len(p.flow)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/packetgen/packet_gen_test.go",
    "content": "//+build !test\n\n//PacketGen tester\n//Still in beta; currently used for debugging\n//Updates are coming soon with more test cases\npackage packetgen\n\nimport \"testing\"\n\n//TestTypeInterface checks that the concrete types implement their interfaces\nfunc TestTypeInterface(t *testing.T) {\n\tt.Parallel()\n\n\tvar PktInterface PacketManipulator = (*Packet)(nil)\n\n\tif PktInterface != (*Packet)(nil) {\n\n\t\tt.Error(\"Packet struct does not implement PacketManipulator Interface\")\n\t}\n\n\tvar PktFlowInterface PacketFlowManipulator = (*PacketFlow)(nil)\n\tif PktFlowInterface != (*PacketFlow)(nil) {\n\n\t\tt.Error(\"PacketFlow struct does not implement PacketFlowManipulator Interface\")\n\t}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/packetgen/packet_templates.go",
    "content": "package packetgen\n\n//PacketFlowTemplate1 is a good hardcoded template\nvar PacketFlowTemplate1 [][]byte\n\n//PacketFlowTemplate2 has a two complete TCP flow\nvar PacketFlowTemplate2 [][]byte\n\n//PacketFlowTemplate3 has a two intervened TCP flows\nvar PacketFlowTemplate3 [][]byte\n\nfunc init() {\n\tPacketFlowTemplate1 = [][]byte{\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x40, 0xf4, 0x1f, 0x40, 0x00, 0x40, 0x06, 0xa9, 0x6f, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xac, 0x48, 0x00, 0x00, 0x00, 0x00, 0xb0, 0x02, 0xff, 0xff, 0x6b, 0x6c, 0x00, 0x00, 0x02, 0x04, 0x05, 0xb4, 0x01, 0x03, 0x03, 0x05, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0x38, 0x00, 0x00, 0x00, 0x00, 0x04, 0x02, 0x00, 0x00},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x00, 0x3c, 0xf4, 0x1f, 0x40, 0x00, 0x70, 0x06, 0x79, 0x53, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xaa, 0xf0, 0x0c, 0x4d, 0xa6, 0xac, 0x49, 0xa0, 0x12, 0x38, 0x90, 0x3b, 0xba, 0x00, 0x00, 0x02, 0x04, 0x05, 0x64, 0x01, 0x03, 0x03, 0x00, 0x04, 0x02, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x11, 0x1b, 0x4f, 0x37, 0x38},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0x7a, 0x12, 0x40, 0x00, 0x40, 0x06, 0x23, 0x89, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xac, 0x49, 0xfe, 0xaa, 0xf0, 0x0d, 0x80, 0x10, 0x10, 0x08, 0x92, 0x82, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0x6d, 0xb3, 0xa1, 0x66, 0x11},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x02, 0x21, 0xeb, 0x50, 0x40, 0x00, 0x40, 0x06, 0xb0, 0x5d, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xac, 0x49, 0xfe, 0xaa, 0xf0, 0x0d, 0x80, 0x18, 0x10, 0x08, 0xe1, 0xdc, 
0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0x6d, 0xb3, 0xa1, 0x66, 0x11, 0x47, 0x45, 0x54, 0x20, 0x2f, 0x20, 0x48, 0x54, 0x54, 0x50, 0x2f, 0x31, 0x2e, 0x31, 0x0d, 0x0a, 0x48, 0x6f, 0x73, 0x74, 0x3a, 0x20, 0x31, 0x36, 0x34, 0x2e, 0x36, 0x37, 0x2e, 0x32, 0x32, 0x38, 0x2e, 0x31, 0x35, 0x32, 0x0d, 0x0a, 0x43, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x6b, 0x65, 0x65, 0x70, 0x2d, 0x61, 0x6c, 0x69, 0x76, 0x65, 0x0d, 0x0a, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x2d, 0x49, 0x6e, 0x73, 0x65, 0x63, 0x75, 0x72, 0x65, 0x2d, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x3a, 0x20, 0x31, 0x0d, 0x0a, 0x55, 0x73, 0x65, 0x72, 0x2d, 0x41, 0x67, 0x65, 0x6e, 0x74, 0x3a, 0x20, 0x4d, 0x6f, 0x7a, 0x69, 0x6c, 0x6c, 0x61, 0x2f, 0x35, 0x2e, 0x30, 0x20, 0x28, 0x4d, 0x61, 0x63, 0x69, 0x6e, 0x74, 0x6f, 0x73, 0x68, 0x3b, 0x20, 0x49, 0x6e, 0x74, 0x65, 0x6c, 0x20, 0x4d, 0x61, 0x63, 0x20, 0x4f, 0x53, 0x20, 0x58, 0x20, 0x31, 0x30, 0x5f, 0x31, 0x31, 0x5f, 0x36, 0x29, 0x20, 0x41, 0x70, 0x70, 0x6c, 0x65, 0x57, 0x65, 0x62, 0x4b, 0x69, 0x74, 0x2f, 0x35, 0x33, 0x37, 0x2e, 0x33, 0x36, 0x20, 0x28, 0x4b, 0x48, 0x54, 0x4d, 0x4c, 0x2c, 0x20, 0x6c, 0x69, 0x6b, 0x65, 0x20, 0x47, 0x65, 0x63, 0x6b, 0x6f, 0x29, 0x20, 0x43, 0x68, 0x72, 0x6f, 0x6d, 0x65, 0x2f, 0x35, 0x33, 0x2e, 0x30, 0x2e, 0x32, 0x37, 0x38, 0x35, 0x2e, 0x31, 0x34, 0x33, 0x20, 0x53, 0x61, 0x66, 0x61, 0x72, 0x69, 0x2f, 0x35, 0x33, 0x37, 0x2e, 0x33, 0x36, 0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x3a, 0x20, 0x74, 0x65, 0x78, 0x74, 0x2f, 0x68, 0x74, 0x6d, 0x6c, 0x2c, 0x61, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x78, 0x68, 0x74, 0x6d, 0x6c, 0x2b, 0x78, 0x6d, 0x6c, 0x2c, 0x61, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x78, 0x6d, 0x6c, 0x3b, 0x71, 0x3d, 0x30, 0x2e, 0x39, 0x2c, 0x69, 0x6d, 0x61, 0x67, 0x65, 0x2f, 0x77, 0x65, 0x62, 0x70, 0x2c, 0x2a, 0x2f, 0x2a, 0x3b, 0x71, 0x3d, 0x30, 0x2e, 0x38, 0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x45, 
0x6e, 0x63, 0x6f, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x67, 0x7a, 0x69, 0x70, 0x2c, 0x20, 0x64, 0x65, 0x66, 0x6c, 0x61, 0x74, 0x65, 0x2c, 0x20, 0x73, 0x64, 0x63, 0x68, 0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x4c, 0x61, 0x6e, 0x67, 0x75, 0x61, 0x67, 0x65, 0x3a, 0x20, 0x65, 0x6e, 0x2d, 0x55, 0x53, 0x2c, 0x65, 0x6e, 0x3b, 0x71, 0x3d, 0x30, 0x2e, 0x38, 0x0d, 0x0a, 0x43, 0x6f, 0x6f, 0x6b, 0x69, 0x65, 0x3a, 0x20, 0x50, 0x48, 0x50, 0x53, 0x45, 0x53, 0x53, 0x49, 0x44, 0x3d, 0x64, 0x38, 0x69, 0x61, 0x63, 0x62, 0x69, 0x65, 0x34, 0x38, 0x63, 0x74, 0x73, 0x32, 0x76, 0x70, 0x30, 0x36, 0x39, 0x64, 0x61, 0x62, 0x63, 0x74, 0x6d, 0x34, 0x3b, 0x20, 0x5f, 0x67, 0x61, 0x74, 0x3d, 0x31, 0x3b, 0x20, 0x5f, 0x67, 0x61, 0x74, 0x5f, 0x67, 0x6c, 0x6f, 0x62, 0x61, 0x6c, 0x54, 0x72, 0x61, 0x63, 0x6b, 0x65, 0x72, 0x3d, 0x31, 0x3b, 0x20, 0x5f, 0x67, 0x61, 0x3d, 0x47, 0x41, 0x31, 0x2e, 0x31, 0x2e, 0x31, 0x35, 0x35, 0x38, 0x30, 0x37, 0x37, 0x34, 0x31, 0x36, 0x2e, 0x31, 0x34, 0x37, 0x36, 0x33, 0x38, 0x34, 0x34, 0x30, 0x35, 0x0d, 0x0a, 0x0d, 0x0a},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x00, 0x34, 0x3c, 0x55, 0x00, 0x00, 0xef, 0x06, 0xf2, 0x25, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xaa, 0xf0, 0x0d, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x10, 0x1f, 0x68, 0x81, 0x26, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x20, 0x1b, 0x4f, 0x37, 0x6d},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x05, 0x8c, 0x3c, 0x56, 0x00, 0x00, 0xef, 0x06, 0xec, 0xcc, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xaa, 0xf0, 0x0d, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0x34, 0xda, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x25, 0x1b, 0x4f, 0x37, 0x6d, 0x48, 0x54, 0x54, 0x50, 0x2f, 0x31, 0x2e, 0x31, 0x20, 0x32, 0x30, 0x30, 0x20, 0x4f, 0x4b, 0x0d, 0x0a, 0x43, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x2d, 
0x54, 0x79, 0x70, 0x65, 0x3a, 0x20, 0x74, 0x65, 0x78, 0x74, 0x2f, 0x68, 0x74, 0x6d, 0x6c, 0x3b, 0x20, 0x63, 0x68, 0x61, 0x72, 0x73, 0x65, 0x74, 0x3d, 0x55, 0x54, 0x46, 0x2d, 0x38, 0x0d, 0x0a, 0x56, 0x61, 0x72, 0x79, 0x3a, 0x20, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x45, 0x6e, 0x63, 0x6f, 0x64, 0x69, 0x6e, 0x67, 0x0d, 0x0a, 0x45, 0x78, 0x70, 0x69, 0x72, 0x65, 0x73, 0x3a, 0x20, 0x54, 0x68, 0x75, 0x2c, 0x20, 0x31, 0x39, 0x20, 0x4e, 0x6f, 0x76, 0x20, 0x31, 0x39, 0x38, 0x31, 0x20, 0x30, 0x38, 0x3a, 0x35, 0x32, 0x3a, 0x30, 0x30, 0x20, 0x47, 0x4d, 0x54, 0x0d, 0x0a, 0x43, 0x61, 0x63, 0x68, 0x65, 0x2d, 0x43, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x3a, 0x20, 0x6e, 0x6f, 0x2d, 0x73, 0x74, 0x6f, 0x72, 0x65, 0x2c, 0x20, 0x6e, 0x6f, 0x2d, 0x63, 0x61, 0x63, 0x68, 0x65, 0x2c, 0x20, 0x6d, 0x75, 0x73, 0x74, 0x2d, 0x72, 0x65, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x2c, 0x20, 0x70, 0x6f, 0x73, 0x74, 0x2d, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x3d, 0x30, 0x2c, 0x20, 0x70, 0x72, 0x65, 0x2d, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x3d, 0x30, 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x2d, 0x61, 0x67, 0x65, 0x3d, 0x38, 0x36, 0x34, 0x30, 0x30, 0x2c, 0x20, 0x70, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x2c, 0x20, 0x6d, 0x75, 0x73, 0x74, 0x2d, 0x72, 0x65, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x2c, 0x20, 0x70, 0x72, 0x6f, 0x78, 0x79, 0x2d, 0x72, 0x65, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x0d, 0x0a, 0x50, 0x72, 0x61, 0x67, 0x6d, 0x61, 0x3a, 0x20, 0x6e, 0x6f, 0x2d, 0x63, 0x61, 0x63, 0x68, 0x65, 0x0d, 0x0a, 0x43, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x2d, 0x45, 0x6e, 0x63, 0x6f, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x67, 0x7a, 0x69, 0x70, 0x0d, 0x0a, 0x43, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x2d, 0x4c, 0x65, 0x6e, 0x67, 0x74, 0x68, 0x3a, 0x20, 0x36, 0x37, 0x39, 0x38, 0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x73, 0x3a, 0x20, 0x62, 0x79, 0x74, 0x65, 0x73, 0x0d, 0x0a, 0x44, 0x61, 0x74, 0x65, 0x3a, 0x20, 0x54, 0x68, 0x75, 0x2c, 0x20, 0x31, 0x33, 0x20, 0x4f, 
0x63, 0x74, 0x20, 0x32, 0x30, 0x31, 0x36, 0x20, 0x31, 0x38, 0x3a, 0x34, 0x38, 0x3a, 0x31, 0x30, 0x20, 0x47, 0x4d, 0x54, 0x0d, 0x0a, 0x41, 0x67, 0x65, 0x3a, 0x20, 0x30, 0x0d, 0x0a, 0x43, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x6b, 0x65, 0x65, 0x70, 0x2d, 0x61, 0x6c, 0x69, 0x76, 0x65, 0x0d, 0x0a, 0x58, 0x2d, 0x43, 0x61, 0x63, 0x68, 0x65, 0x3a, 0x20, 0x4d, 0x49, 0x53, 0x53, 0x0d, 0x0a, 0x0d, 0x0a, 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0xd5, 0x3d, 0xdb, 0x72, 0x1b, 0xbb, 0x91, 0xcf, 0x56, 0x55, 0xfe, 0x01, 0x61, 0xb2, 0x76, 0xb2, 0xd2, 0x90, 0xba, 0xda, 0x92, 0x2d, 0xf1, 0x14, 0x45, 0xd1, 0x96, 0x4e, 0x74, 0x3b, 0x22, 0x65, 0xc7, 0x49, 0xa5, 0x5c, 0xe0, 0x0c, 0xc8, 0x81, 0x35, 0xb7, 0x33, 0x98, 0x11, 0x45, 0x3f, 0xed, 0x6f, 0xec, 0xef, 0xed, 0x97, 0x6c, 0x37, 0x6e, 0x73, 0x21, 0x29, 0x4a, 0x96, 0x62, 0xef, 0xba, 0x2a, 0x47, 0x43, 0x0c, 0xd0, 0x68, 0xf4, 0xbd, 0x81, 0xc6, 0x64, 0xff, 0x8f, 0x5e, 0xec, 0x66, 0xd3, 0x84, 0x11, 0x3f, 0x0b, 0x83, 0xf6, 0xca, 0xfe, 0x1f, 0x1d, 0xe7, 0x9f, 0x7c, 0x44, 0x4e, 0x7a, 0xe4, 0xcd, 0xbf, 0xda, 0x64, 0x1f, 0x5b, 0x89, 0x1b, 0x50, 0x21, 0x0e, 0x1a, 0x41, 0xe6, 0x70, 0xb6, 0x47, 0xe4, 0x9f, 0xdd, 0x06, 0x09, 0x68, 0x34, 0x3e, 0x68, 0xb0, 0xa8, 0x01, 0xdd, 0xfe, 0xf8, 0x4f, 0x16, 0x79, 0x7c, 0xf4, 0x2f, 0xc7, 0xa9, 0x80, 0xd8, 0x9d, 0x0f, 0x62, 0xf9, 0xd8, 0x71, 0xa6, 0x87, 0x63, 0x43, 0x0d, 0x46, 0x14, 0x3b, 0x5f, 0x05, 0xf1, 0x98, 0xb8, 0xc9, 0xe2, 0xa4, 0x0a, 0xc9, 0x71, 0x2a, 0xd0, 0x56, 0xf6, 0x7d, 0x46, 0xbd, 0x52, 0x97, 0x95, 0x17, 0xfb, 0x21, 0xcb, 0x28, 0x71, 0x7d, 0x9a, 0x0a, 0x96, 0x1d, 0x34, 0xf2, 0x6c, 0xe4, 0xec, 0x16, 0xed, 0x7e, 0x96, 0x25, 0x0e, 0xfb, 0x3d, 0xe7, 0xb7, 0x07, 0x8d, 0xbf, 0x3b, 0xd7, 0x1d, 0xa7, 0x1b, 0x87, 0x09, 0xcd, 0xf8, 0x30, 0x60, 0x0d, 0xe2, 0xc6, 0x51, 0xc6, 0x22, 0x18, 0x74, 0xd2, 0x3b, 0x60, 0xde, 0x98, 0xad, 0xb9, 0x7e, 0x1a, 0x87, 0xec, 0x60, 0xa3, 0x18, 0x1f, 0x51, 0xf8, 0xdd, 0xb8, 0xe5, 0x6c, 0x92, 0xc4, 0x69, 0x56, 0x1a, 0x32, 0xe1, 0x5e, 
0xe6, 0x1f, 0x78, 0xec, 0x96, 0xbb, 0xcc, 0x91, 0x3f, 0xd6, 0x48, 0x2e, 0x58, 0xea, 0x08, 0x97, 0x06, 0x14, 0xc0, 0x1f, 0x4c, 0x99, 0x58, 0x23, 0x3c, 0xe2, 0x19, 0xa7, 0x81, 0x6c, 0x05, 0xc0, 0xcd, 0xf5, 0x35, 0x12, 0x42, 0x5b, 0x98, 0x87, 0x95, 0x26, 0x7a, 0x57, 0x6a, 0xda, 0x6c, 0xae, 0x03, 0x02, 0x80, 0x41, 0xc6, 0xb3, 0x80, 0xb5, 0xaf, 0xbb, 0xa7, 0x9d, 0xfd, 0x96, 0x7a, 0xae, 0xa2, 0x05, 0x14, 0x73, 0x53, 0x9e, 0x64, 0x3c, 0x8e, 0x4a, 0x98, 0x61, 0x7f, 0x42, 0xbd, 0x5b, 0x1a, 0xb9, 0x4c, 0x90, 0x9b, 0x28, 0x9e, 0x04, 0x72, 0x71, 0xd0, 0xe6, 0xa5, 0x4c, 0x08, 0x68, 0x4c, 0xf0, 0x2f, 0x8f, 0xc6, 0x44, 0xc4, 0x2e, 0x07, 0x78, 0x01, 0x89, 0x18, 0xf3, 0x04, 0xa1, 0x91, 0x47, 0xdc, 0x94, 0xd1, 0x0c, 0xfa, 0x50, 0x92, 0x47, 0xfc, 0x96, 0xa5, 0x82, 0x67, 0x53, 0xc2, 0xa2, 0x94, 0xbb, 0x3e, 0xf3, 0xc8, 0x70, 0x4a, 0x3c, 0xd9, 0xca, 0x48, 0x02, 0xff, 0x4d, 0x98, 0x9b, 0xc1, 0x4f, 0x41, 0x26, 0x3e, 0x4b, 0x19, 0xa1, 0x41, 0x00, 0x2b, 0x86, 0x0e, 0xdc, 0xcb, 0x69, 0x20, 0x88, 0x4b, 0x23, 0x32, 0x0a, 0xe2, 0x3c, 0xe5, 0xc2, 0x6f, 0xd6, 0x89, 0x7a, 0xc3, 0xa6, 0x93, 0x38, 0xf5, 0x44, 0x0d, 0xf5, 0x35, 0x72, 0x5d, 0xcc, 0x1b, 0x8f, 0x48, 0x97, 0x06, 0x7c, 0x14, 0xa7, 0x11, 0xa7, 0xe4, 0x34, 0x16, 0xa4, 0x13, 0x8d, 0x59, 0x80, 0x94, 0x3d, 0x4c, 0x73, 0x1e, 0xc1, 0x5f, 0x64, 0x6e, 0x36, 0x5d, 0xd3, 0x68, 0xc9, 0x47, 0x1e, 0xb9, 0x41, 0x2e, 0x80, 0x2a, 0xb0, 0xe6, 0x34, 0x13, 0xf2, 0xbf, 0x6b, 0x30, 0x4b, 0x10, 0x30, 0xa4, 0x03, 0x0c, 0x03, 0x6a, 0xe6, 0x99, 0xee, 0xe0, 0xfa, 0x3c, 0x83, 0x65, 0xe4, 0x29, 0xbc, 0x62, 0xd1, 0x98, 0x03, 0x29, 0x52, 0xa0, 0xcd, 0x1a, 0xf1, 0x41, 0x89, 0xd2, 0x20, 0x8e, 0x93, 0x35, 0x82, 0x62, 0x11, 0xd0, 0x35, 0x32, 0x4c, 0x29, 0x87, 0x41, 0x89, 0xcf, 0x41, 0x04, 0x33, 0x90, 0x96, 0x04, 0xa6, 0xeb, 0x0f, 0x7a, 0x67, 0x6b, 0x48, 0x3b, 0x98, 0x1f, 0x41, 0x06, 0x74, 0x82, 0x2c, 0x1d, 0xd3, 0x6f, 0x00, 0x0b, 0x5e, 0x04, 0x79, 0x18, 0x71, 0x9c, 0x3f, 0x0c, 0x81, 0xa4, 0x88, 0x60, 0x0c, 0x3c, 0x0b, 0xb9, 0xc8, 0x8a, 0x27, 0x61, 0x1f, 0xc3, 0x46, 0xfb, 
0x45, 0x95, 0x50, 0xe3, 0x38, 0x1e, 0x07, 0xcc, 0x81, 0xb5, 0x31, 0x07, 0xd6, 0xc8, 0x47, 0xdc, 0xa5, 0x35, 0x9e, 0xff, 0xd6, 0xdf, 0x19, 0xd3, 0xe9, 0xe5, 0x78, 0x6b, 0xf7, 0x5b, 0x77, 0xfb, 0xf5, 0x67, 0xd1, 0xb9, 0xdb, 0xa6, 0x7b, 0xee, 0xd6, 0xed, 0xf6, 0xef, 0x83, 0xc3, 0x9b, 0xc8, 0xb9, 0x7a, 0xbd, 0x37, 0xde, 0x5b, 0x4f, 0x7e, 0x1b, 0x27, 0x87, 0xeb, 0x0d, 0xd2, 0xb2, 0x8c, 0x48, 0x60, 0x01, 0x2c, 0xcd, 0xa6, 0x07, 0x8d, 0x6c, 0xc2, 0xb3, 0x8c, 0xa5, 0x6f, 0xa9, 0xeb, 0xc6, 0x79, 0x94, 0x7d, 0xe1, 0x5e, 0x09, 0xfa, 0xc6, 0xee, 0xde, 0xc6, 0xee, 0xf6, 0xee, 0x96, 0x1a, 0x4a, 0xe0, 0x1f, 0x4a, 0x67, 0xc0, 0xa3, 0x1b, 0x92, 0xb2, 0xe0, 0xa0, 0x21, 0x7c, 0xd0, 0x0d, 0x37, 0xcf, 0x08, 0x77, 0x11, 0x2b, 0x3f, 0x65, 0xa3, 0x83, 0x46, 0x8b, 0x87, 0xe3, 0xd6, 0x88, 0xde, 0x62, 0x5b, 0x13, 0xfe, 0x23, 0xd9, 0x5f, 0x8c, 0xa1, 0x49, 0x02, 0x6b, 0xca, 0xe2, 0xdc, 0xf5, 0x9d, 0x99, 0x61, 0xf5, 0x97, 0xcd, 0x24, 0x1a, 0xe3, 0xf8, 0x95, 0x17, 0x2f, 0xd0, 0x24, 0x90, 0xf2, 0xdc, 0xd9, 0x14, 0xc4, 0xc1, 0x67, 0x2c, 0xb3, 0x10, 0x5c, 0x21, 0x5a, 0x2c, 0x64, 0xe9, 0x98, 0x45, 0xee, 0xd4, 0xc9, 0x58, 0x98, 0x04, 0x20, 0xd1, 0x4d, 0x68, 0x06, 0x93, 0x82, 0x56, 0xe4, 0x85, 0x04, 0xb4, 0x14, 0x46, 0x08, 0xbc, 0x56, 0xa3, 0x56, 0x70, 0xcd, 0xcb, 0x07, 0x80, 0x56, 0x25, 0x71, 0x24, 0x40, 0x10, 0x1f, 0x35, 0x2c, 0x01, 0x71, 0xcb, 0xe4, 0x08, 0x12, 0x32, 0x8f, 0xd3, 0x83, 0x86, 0x6c, 0x51, 0x36, 0x00, 0x96, 0x7b, 0xda, 0x1b, 0x90, 0xc1, 0x71, 0xef, 0xaa, 0x47, 0x0e, 0x7b, 0x64, 0xd0, 0xf9, 0x5b, 0x8f, 0x5c, 0x7c, 0xec, 0x5d, 0x91, 0x6e, 0xbf, 0x8f, 0x8b, 0x21, 0xfa, 0xdf, 0xf2, 0x79, 0x5c, 0x86, 0xfc, 0x8c, 0xd0, 0x1c, 0x65, 0xf4, 0x86, 0xc5, 0x20, 0x4b, 0x1a, 0x4f, 0x14, 0x38, 0x65, 0x4a, 0x88, 0x48, 0x5d, 0xe8, 0xfe, 0x15, 0x56, 0x1f, 0x83, 0x3c, 0x47, 0xfc, 0x5b, 0xda, 0xfc, 0x0a, 0x5d, 0xf6, 0x5b, 0xea, 0xbd, 0x5e, 0x53, 0xb9, 0x33, 0x9a, 0x58, 0xf1, 0xb6, 0xd5, 0xa2, 0x5f, 0xe9, 0x5d, 0x53, 0xc9, 0x2a, 0x4d, 0xb8, 0x68, 0x82, 0xbc, 0xcb, 0xb6, 0x56, 0xc0, 0x87, 0xa2, 0xf5, 0xf5, 
0xf7, 0x9c, 0xa5, 0xd3, 0xd6, 0x46, 0xf3, 0x4d, 0x73, 0x43, 0xff},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x05, 0x8c, 0x3c, 0x57, 0x00, 0x00, 0xef, 0x06, 0xec, 0xcb, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xaa, 0xf5, 0x65, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0x15, 0xc0, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x25, 0x1b, 0x4f, 0x37, 0x6d, 0x68, 0x82, 0x25, 0x5c, 0x06, 0x1c, 0x31, 0xa1, 0x11, 0x0d, 0xa6, 0x19, 0x77, 0x45, 0xad, 0xf3, 0x0b, 0xe3, 0x5b, 0x82, 0x8c, 0xa1, 0x73, 0x79, 0x0d, 0xce, 0xa5, 0x3e, 0x96, 0xb3, 0xd7, 0xf8, 0x3f, 0x27, 0x4f, 0xc6, 0x29, 0xf5, 0xd8, 0xec, 0x94, 0x15, 0xf7, 0x22, 0x45, 0x5b, 0xe2, 0x00, 0x90, 0x4f, 0xce, 0x3a, 0x1f, 0x7a, 0xa4, 0xdb, 0xe9, 0x1e, 0xf7, 0x2a, 0x74, 0x6e, 0xa1, 0x0b, 0x02, 0xde, 0xc0, 0xbf, 0xfd, 0x61, 0xec, 0x4d, 0x8d, 0x17, 0xf3, 0xc1, 0x46, 0x24, 0x74, 0xcc, 0x90, 0x6f, 0x38, 0x7e, 0xa5, 0x9f, 0x81, 0xdd, 0x41, 0x13, 0x76, 0x14, 0xe7, 0xe0, 0x10, 0xba, 0x01, 0x77, 0x6f, 0xc8, 0x7b, 0x30, 0x28, 0x5e, 0xc0, 0xc7, 0x7e, 0x46, 0x06, 0x74, 0xfc, 0x96, 0x5c, 0x02, 0xb5, 0xc0, 0x98, 0x7a, 0x31, 0x89, 0xe2, 0x0c, 0xd8, 0x17, 0x02, 0x5b, 0x56, 0x3a, 0x68, 0x55, 0xd1, 0xfe, 0xa1, 0x01, 0x40, 0x08, 0x99, 0xcf, 0x05, 0xc9, 0xb0, 0x3f, 0x9a, 0xc8, 0x26, 0xf3, 0xf2, 0x95, 0xeb, 0xab, 0x53, 0xf5, 0x86, 0x91, 0x09, 0x1b, 0xe2, 0xc4, 0xda, 0x08, 0x63, 0x0b, 0x74, 0x25, 0x30, 0x82, 0xdd, 0xa1, 0x81, 0x06, 0xb3, 0x9d, 0xc5, 0x64, 0x08, 0x16, 0x3b, 0xa0, 0x2e, 0xf3, 0xde, 0x4a, 0xa7, 0x08, 0x0c, 0x9b, 0x4c, 0x26, 0xcd, 0x1c, 0x90, 0x47, 0x70, 0xad, 0x95, 0x81, 0x9e, 0x82, 0x84, 0xb9, 0xc8, 0x8a, 0xde, 0xf0, 0x94, 0x4d, 0x18, 0x8b, 0x24, 0x58, 0xb9, 0xde, 0xb6, 0x74, 0x13, 0xfb, 0x2d, 0xf5, 0x0c, 0x23, 0xd0, 0xc0, 0x82, 0xad, 0x0f, 0x62, 0x58, 0x07, 0x3c, 0x24, 0x31, 0xf8, 0x16, 0x58, 0x2f, 0x4e, 0x8a, 0x83, 0xc0, 0xc4, 0x44, 0xe8, 0x6b, 0xa0, 0x67, 0x73, 0xa5, 0x8b, 0xde, 0x05, 0x6c, 0x17, 0x39, 0x02, 0x95, 
0x7c, 0x4b, 0xd6, 0x77, 0x5a, 0x1b, 0x5b, 0xad, 0xcd, 0xf5, 0x8d, 0xed, 0x15, 0x19, 0x2c, 0x68, 0xf6, 0x61, 0xf8, 0x02, 0x36, 0x89, 0xdd, 0x65, 0xad, 0xaf, 0xf4, 0x96, 0xaa, 0x56, 0xa0, 0xeb, 0x2d, 0x4d, 0x09, 0xbd, 0x63, 0x01, 0x39, 0x20, 0x67, 0x34, 0xf3, 0x9b, 0x29, 0x60, 0x12, 0x87, 0x7f, 0xf9, 0x2b, 0x59, 0x25, 0x8d, 0xc6, 0x3b, 0xf5, 0x1a, 0xde, 0xc9, 0x2e, 0xff, 0x49, 0x36, 0xd6, 0xcb, 0xff, 0xde, 0xad, 0x40, 0x5c, 0x94, 0x87, 0xa0, 0x03, 0xcd, 0x49, 0x0a, 0xa6, 0xf4, 0x2f, 0xaf, 0xf6, 0xf9, 0x28, 0x45, 0xfa, 0x5a, 0x29, 0x06, 0x9a, 0x6c, 0x6f, 0xed, 0x6d, 0xbd, 0x7e, 0xbd, 0xd3, 0x1c, 0x05, 0xa2, 0xe9, 0x49, 0xb6, 0xb9, 0xc8, 0xb6, 0x66, 0xc4, 0xb2, 0x16, 0xd5, 0x5c, 0xe1, 0xef, 0x70, 0x84, 0xee, 0xf9, 0x4e, 0xe2, 0x7a, 0x99, 0xf2, 0x90, 0xae, 0xbf, 0x03, 0xbb, 0x7c, 0x80, 0x0c, 0x62, 0xeb, 0xef, 0xc0, 0xb3, 0x1d, 0xbc, 0x02, 0xbc, 0x28, 0xfc, 0xef, 0xd5, 0x2f, 0x0d, 0xa2, 0xe2, 0x85, 0xc6, 0x06, 0x68, 0x24, 0x43, 0xee, 0xcb, 0x47, 0x39, 0xff, 0x10, 0xba, 0xb2, 0xf4, 0xa0, 0x01, 0x96, 0x59, 0x2a, 0x2e, 0xf8, 0x75, 0x2e, 0x80, 0xf8, 0xd3, 0xb7, 0x51, 0x1c, 0x31, 0x14, 0x55, 0x85, 0x67, 0xfb, 0xd5, 0x5f, 0xdf, 0xad, 0x14, 0xd2, 0xbf, 0x1f, 0xc5, 0xf6, 0xf1, 0xdf, 0xbd, 0x90, 0x8d, 0xa7, 0x2c, 0x80, 0xc8, 0xd0, 0xe5, 0xa0, 0x51, 0x92, 0xfc, 0x8e, 0xd1, 0xe8, 0xd2, 0xea, 0x60, 0x69, 0xa5, 0x15, 0xa1, 0x95, 0xef, 0x81, 0x98, 0x7d, 0x87, 0xf6, 0x10, 0x15, 0x28, 0x82, 0x1f, 0xa9, 0xd9, 0x27, 0x4f, 0x34, 0xa7, 0xd4, 0x8f, 0x63, 0x69, 0x99, 0x12, 0x0e, 0x42, 0xf2, 0x0b, 0xf7, 0x0e, 0x36, 0xb7, 0x77, 0x77, 0xde, 0xac, 0xef, 0xbd, 0xcc, 0x0e, 0x36, 0x17, 0x2c, 0x92, 0x06, 0xf0, 0xf7, 0x33, 0x8e, 0x24, 0x83, 0x94, 0xba, 0x37, 0x28, 0xcc, 0x97, 0x38, 0x5c, 0xfa, 0xc3, 0xfb, 0x65, 0xb6, 0xc2, 0x13, 0xda, 0xa4, 0x5e, 0x06, 0x13, 0x2b, 0x04, 0xee, 0x5a, 0x38, 0xfd, 0xc6, 0xfa, 0xf6, 0xee, 0xee, 0x4e, 0xd9, 0xac, 0xe9, 0x7f, 0xfb, 0x10, 0xd0, 0x10, 0xe8, 0xd0, 0x00, 0xf3, 0xf5, 0x89, 0xa6, 0xa8, 0x42, 0xd8, 0x0b, 0x5a, 0xb1, 0x8b, 0x24, 0xd0, 0x25, 0x2a, 0xfd, 0xa7, 0x94, 
0x26, 0x72, 0xc9, 0x76, 0x00, 0xda, 0x02, 0x67, 0x02, 0xcd, 0x68, 0x89, 0x94, 0x95, 0x24, 0xfd, 0x1b, 0x9e, 0x90, 0x73, 0x7a, 0xab, 0xfc, 0xdf, 0x7e, 0x6e, 0x63, 0x6f, 0x9f, 0x7b, 0xc0, 0x22, 0x1c, 0x26, 0xa0, 0x8b, 0x13, 0xd1, 0xdb, 0x86, 0xf6, 0xf1, 0xe0, 0x50, 0xda, 0xfb, 0x54, 0x3b, 0x90, 0x3f, 0xd1, 0xdc, 0xe3, 0xe0, 0x4f, 0x99, 0xea, 0x21, 0xc1, 0x81, 0x7e, 0x03, 0x44, 0x3e, 0x96, 0x0a, 0xbd, 0xdf, 0xa2, 0x80, 0x1d, 0x0c, 0x99, 0x37, 0x58, 0x30, 0x0c, 0xb3, 0x86, 0xf1, 0x5d, 0x31, 0xb2, 0x2f, 0x9b, 0xee, 0x1d, 0x85, 0xfe, 0xd7, 0xd1, 0x41, 0x48, 0x31, 0xf0, 0x0c, 0x5a, 0x49, 0x57, 0xb5, 0x96, 0x86, 0xef, 0xb7, 0xf2, 0x40, 0xbb, 0x04, 0x29, 0x38, 0x95, 0x05, 0xeb, 0xf6, 0x01, 0x64, 0x17, 0x87, 0x60, 0x23, 0x14, 0x0d, 0x0c, 0xb9, 0x20, 0xe7, 0x70, 0x86, 0x34, 0xc5, 0x65, 0xcf, 0x34, 0x16, 0xd3, 0x03, 0x0c, 0x1d, 0x7c, 0x7c, 0x02, 0x0b, 0x06, 0x86, 0x56, 0x47, 0x12, 0x2f, 0x4c, 0x33, 0x4e, 0x5a, 0x7e, 0x65, 0xfb, 0x77, 0x34, 0xe5, 0x0a, 0xf2, 0xcf, 0x99, 0x07, 0xa9, 0x0a, 0x51, 0x26, 0xea, 0x49, 0x64, 0x69, 0x0a, 0xfc, 0x06, 0x46, 0x61, 0xb7, 0x2a, 0xf5, 0xd5, 0xb4, 0x15, 0x62, 0xb5, 0x44, 0x96, 0x7b, 0x80, 0x28, 0x46, 0x13, 0xb1, 0x09, 0xcb, 0x1d, 0xd3, 0xd8, 0x20, 0xed, 0xcb, 0xa2, 0x99, 0xf4, 0x75, 0xb3, 0x25, 0xdf, 0x02, 0x50, 0x6e, 0x9e, 0xa6, 0xf0, 0x50, 0x06, 0xd3, 0x55, 0x4d, 0xcb, 0x40, 0x8c, 0xa8, 0x9b, 0x07, 0xd9, 0x14, 0x06, 0xbc, 0x57, 0x4f, 0x0b, 0xa7, 0xa2, 0xa3, 0x11, 0xf4, 0xea, 0xe3, 0xdf, 0x05, 0x7d, 0x54, 0xfc, 0x0c, 0x9d, 0x3a, 0xf2, 0x61, 0x41, 0xaf, 0x84, 0x22, 0x5e, 0xc2, 0x01, 0x8f, 0xe0, 0x8c, 0x68, 0xc8, 0x03, 0xce, 0xe4, 0xb2, 0x55, 0x33, 0x79, 0x49, 0xc3, 0xe4, 0x1d, 0x79, 0xaf, 0x5f, 0x94, 0xe4, 0x06, 0xfe, 0xa1, 0x4f, 0x07, 0xe1, 0xd1, 0xba, 0x25, 0x5b, 0xca, 0x4c, 0x9d, 0xe1, 0x1f, 0xca, 0x89, 0xee, 0x6b, 0xff, 0x9a, 0xce, 0x65, 0x11, 0xdb, 0x17, 0x09, 0xe4, 0x3f, 0x52, 0x23, 0x31, 0x9e, 0x73, 0x82, 0x78, 0x0c, 0x51, 0x70, 0x61, 0x98, 0x64, 0xa8, 0x8b, 0x8d, 0x8e, 0x8a, 0x00, 0xc7, 0x7c, 0xa4, 0x8d, 0x8d, 0x4c, 0xe1, 0x74, 0xf7, 
0x16, 0x42, 0xd1, 0x21, 0x05, 0x39, 0x86, 0xd0, 0x43, 0xcb, 0x97, 0x15, 0x21, 0x5f, 0xb6, 0x19, 0xe9, 0x19, 0xd2, 0x28, 0x62, 0x28, 0xcc, 0x36, 0x8e, 0x31, 0x83, 0xb4, 0xce, 0x10, 0x13, 0xcc, 0xd4, 0x00, 0x94, 0x45, 0xdd, 0xc6, 0x3a, 0x38, 0x5c, 0x22, 0x53, 0x89, 0x80, 0xfc, 0x0d, 0x39, 0x4e, 0x23, 0x68, 0x59, 0xd0, 0xd0, 0xa9, 0x2a, 0xd2, 0xd6, 0xdf, 0x28, 0x81, 0xa9, 0x80, 0xbb, 0xb2, 0x41, 0x32, 0x39, 0xcc, 0xb3, 0x0c, 0x1e, 0xab, 0xc0, 0x0d, 0x56, 0x61, 0x3c, 0xe4, 0x90, 0x04, 0x0c, 0x55, 0x9f, 0x79, 0xca, 0x21, 0x15, 0xc9, 0xcc, 0xae, 0x4d, 0x6c, 0x58, 0x44, 0x34, 0xbf, 0xe4, 0x59, 0xf8, 0x45, 0x40, 0xf2, 0xe9, 0xb2, 0x03, 0xd3, 0xf8, 0x1f, 0x9b, 0x87, 0x45, 0x90, 0x0e, 0x3f, 0x4c, 0xcc, 0xf6, 0x12, 0xfb, 0x62, 0xf0, 0x9d, 0x87, 0x07, 0x9d, 0x24, 0x11, 0xf0, 0x4a, 0x4d, 0x2c, 0x5f, 0xb8, 0x20, 0x3c, 0x94, 0x8f, 0x23, 0xe9, 0x12, 0xe1, 0x95, 0x42, 0x4d, 0xc2, 0x1a, 0xb1, 0x34, 0xa5, 0x01, 0x3c, 0x8e, 0x20, 0x45, 0x84, 0x3f, 0x80, 0x1b, 0x9b, 0xd0, 0x69, 0xa3, 0xb4, 0x84, 0x46, 0x1b, 0x21, 0x4a, 0x61, 0x30, 0xdc, 0x44, 0x1a, 0xad, 0x48, 0xc9, 0x2b, 0x11, 0xc7, 0xda, 0x3d, 0x35, 0x18, 0x56, 0x2a, 0x59, 0x02, 0xeb, 0x6e, 0xb4, 0xcf, 0x58, 0x94, 0xcf, 0x42, 0x28, 0x0f, 0x2f, 0x64, 0xf2, 0xc5, 0x1c, 0xc2, 0xd7, 0x58, 0x6d, 0xad, 0x88, 0xf2, 0x0d, 0xd2, 0x14, 0x93, 0xc3, 0xf8, 0x6e, 0xc6, 0x3c, 0x95, 0x0c, 0x37, 0x4a, 0x3d, 0xe4, 0xe1, 0x61, 0xa9, 0x1d, 0x7f, 0x36, 0x74, 0x92, 0x5a, 0x6e, 0x51, 0xac, 0x52, 0x2d, 0x98, 0xd3, 0x64, 0x7e, 0x0c, 0x63, 0xc6, 0x98, 0x8e, 0x60, 0xe8, 0x11, 0x47, 0xb8, 0x48, 0x09, 0x90, 0x47, 0x49, 0xae, 0x5d, 0xe7, 0x2b, 0xf0, 0x44, 0x60, 0x52, 0x5e, 0x29, 0x78, 0xaf, 0x30, 0xdb, 0x7d, 0x45, 0x6e, 0x41, 0xf9, 0xe1, 0x07, 0xd2, 0xfd, 0x55, 0x6b, 0xc9, 0x08, 0x88, 0x6f, 0x40, 0x7c, 0x2b, 0x63, 0xbe, 0x74, 0x55, 0xce, 0xad, 0xf2, 0xe5, 0xa5, 0x10, 0xc0, 0x74, 0xde, 0x4d, 0x8b, 0xe4, 0xe9, 0x29, 0xa0, 0xe2, 0x3c, 0x83, 0x57, 0x16, 0xc2, 0x5d, 0x18, 0x7c, 0x89, 0xe2, 0x2f, 0x5e, 0xe6, 0x55, 0x46, 0x56, 0x48, 0xec, 0xc8, 0xb6, 0x86, 0x71, 0xcc, 
0x26, 0xeb, 0x57, 0xcc, 0x29, 0x7a, 0x0d, 0x21, 0x02, 0x19, 0xa7, 0x90, 0xa4, 0x7b, 0x36, 0xb2, 0xd2, 0x0c, 0x34, 0xa9, 0x81, 0x61, 0xc9, 0xef, 0x8d, 0x52, 0x50, 0x02, 0x91, 0x08, 0xff, 0x06, 0xcf, 0x9b, 0x10, 0xa5, 0xe1, 0x7e, 0x53, 0xc0, 0xa2, 0x31, 0x86, 0x3a, 0x9b, 0x3b, 0x3b, 0x0d, 0x8d, 0x65, 0x43, 0xf1, 0x44, 0x09, 0x7e, 0x0d, 0xb7, 0x61, 0x16, 0x19, 0x68, 0x22, 0x1f, 0x86, 0x3c, 0xd3, 0x96, 0x4a, 0x85, 0x65, 0x2a, 0xef, 0x88, 0x35, 0x96, 0x68, 0xb4, 0x14, 0x10, 0x69, 0x21, 0x51, 0x26, 0x0a, 0x53, 0x59, 0x95, 0x4d, 0x29, 0x7f, 0xd2, 0xa3, 0x1b, 0xe3, 0x3a, 0x57, 0x6c, 0x65, 0x24, 0x80, 0x0a, 0x81, 0x51, 0x4d, 0x52, 0x18, 0xbb, 0x79, 0xae, 0xd2, 0xf4, 0x6d, 0xd4, 0xdd},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x05, 0x8c, 0x3c, 0x58, 0x00, 0x00, 0xef, 0x06, 0xec, 0xca, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xaa, 0xfa, 0xbd, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0x64, 0x1e, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x25, 0x1b, 0x4f, 0x37, 0x6d, 0xc8, 0x30, 0x2e, 0x51, 0xd7, 0x4b, 0xe3, 0xc4, 0x8b, 0x27, 0x91, 0x83, 0x59, 0x33, 0x68, 0xe8, 0xe1, 0xc5, 0xf5, 0x40, 0x5a, 0xae, 0x22, 0x32, 0xb2, 0x5d, 0x6a, 0x93, 0xa6, 0x6c, 0x2c, 0x37, 0x5f, 0x68, 0xca, 0x29, 0x0c, 0xbf, 0x85, 0xa6, 0x24, 0x0e, 0x40, 0x5c, 0x8d, 0x63, 0x36, 0xbe, 0xb9, 0x00, 0xa5, 0x74, 0x19, 0xdd, 0xd7, 0x5c, 0xac, 0xc0, 0x72, 0x5e, 0x40, 0x32, 0x8e, 0xbb, 0x8c, 0x8b, 0x7c, 0xa0, 0xec, 0x16, 0x72, 0x81, 0xbb, 0x5a, 0xd2, 0xc7, 0x49, 0x96, 0x41, 0xfc, 0x7c, 0xa6, 0xda, 0xb4, 0x83, 0xfb, 0x28, 0x5b, 0xef, 0x05, 0x02, 0xae, 0xd9, 0xb8, 0x49, 0x3e, 0xce, 0x53, 0x84, 0xf1, 0x1e, 0x9b, 0x8c, 0x8b, 0x54, 0x8d, 0x05, 0x08, 0xe9, 0x1a, 0x1f, 0xbe, 0x14, 0x3a, 0xa1, 0xa9, 0xa7, 0xe0, 0xfb, 0x71, 0x14, 0xa7, 0x00, 0xbe, 0x23, 0x9b, 0x34, 0xfc, 0x63, 0xd9, 0x78, 0x2f, 0x86, 0x90, 0x88, 0x66, 0x71, 0x3a, 0x6d, 0xb4, 0x8f, 0xd5, 0xc3, 0xbd, 0x9d, 0x39, 0x98, 0x66, 0x37, 0x93, 0xf3, 0xe1, 
0xde, 0x55, 0x98, 0x04, 0x5c, 0xf8, 0xa1, 0x8c, 0x56, 0xda, 0x27, 0xf2, 0x9d, 0x9e, 0xb8, 0x53, 0x7d, 0xfb, 0xdd, 0x0b, 0xc4, 0xfd, 0x28, 0xee, 0xca, 0x09, 0x79, 0x34, 0x02, 0x72, 0x47, 0x72, 0x83, 0x57, 0x47, 0x29, 0xed, 0x13, 0xf9, 0x5a, 0xcf, 0x79, 0x52, 0x74, 0x20, 0xf7, 0x46, 0x2f, 0x0a, 0x34, 0xa4, 0xb3, 0x98, 0x5c, 0x00, 0xe6, 0x57, 0xfa, 0xe9, 0xde, 0xee, 0x72, 0x3f, 0x27, 0x15, 0x1a, 0x15, 0xb5, 0xa7, 0x89, 0xec, 0xec, 0xaa, 0x76, 0x8b, 0x83, 0x79, 0xf3, 0xdd, 0x4b, 0x0e, 0xa4, 0xd7, 0x10, 0x3e, 0x87, 0x8c, 0xe2, 0xd4, 0x3e, 0xdf, 0x8f, 0x1b, 0xc4, 0x5d, 0x0c, 0x99, 0xdf, 0x55, 0x0f, 0xb5, 0xb9, 0x4d, 0x6f, 0xab, 0x8f, 0x72, 0xf7, 0x00, 0x7a, 0xcb, 0x4d, 0x84, 0xbf, 0xfc, 0xfd, 0xaf, 0xd8, 0x9f, 0x14, 0xea, 0x64, 0x63, 0x35, 0x74, 0x9a, 0x3a, 0xd4, 0x2f, 0xbd, 0xfa, 0x43, 0x0d, 0x05, 0x17, 0x70, 0x0c, 0x31, 0xbd, 0x5c, 0xa4, 0xee, 0xdd, 0xce, 0x51, 0xef, 0xec, 0xa4, 0xdb, 0xff, 0x29, 0x2a, 0x6f, 0xb0, 0x7b, 0x80, 0xda, 0xdb, 0xae, 0x39, 0xee, 0x2e, 0xe3, 0x16, 0x56, 0x0e, 0x01, 0x86, 0x03, 0x46, 0x5e, 0x6f, 0x01, 0xb7, 0xaf, 0xcb, 0x2f, 0x48, 0xcf, 0xbc, 0x58, 0x0a, 0xd0, 0xc2, 0x42, 0xe9, 0x01, 0xc7, 0x37, 0x62, 0xd2, 0x8e, 0x80, 0x24, 0x97, 0x80, 0x7f, 0x30, 0x70, 0x95, 0x28, 0x5d, 0x96, 0xba, 0xcd, 0x9b, 0xea, 0xe1, 0x62, 0x65, 0xd1, 0xc0, 0x08, 0x87, 0x47, 0x39, 0xc8, 0x7a, 0x79, 0xde, 0xae, 0x6d, 0x7d, 0xc4, 0x8a, 0xcc, 0x93, 0x13, 0x27, 0x78, 0x56, 0x83, 0x5b, 0xed, 0x98, 0x03, 0xb4, 0x3b, 0xba, 0x9d, 0x5c, 0x94, 0xdb, 0x97, 0x82, 0x33, 0x29, 0xcc, 0x6c, 0x06, 0xf3, 0x1d, 0xcb, 0xf4, 0x20, 0xca, 0x4c, 0xb3, 0xd0, 0x66, 0x28, 0x40, 0x70, 0x60, 0x40, 0x08, 0xd8, 0x1d, 0x15, 0x6f, 0x0a, 0x2a, 0xcb, 0x77, 0x4b, 0x31, 0x0c, 0xf8, 0x30, 0x05, 0x89, 0xc4, 0x35, 0x9e, 0x9a, 0xc7, 0x1f, 0xab, 0x67, 0x9e, 0xf6, 0x48, 0x0b, 0xf5, 0xec, 0xe8, 0xec, 0xa4, 0xdf, 0x3f, 0xb9, 0x38, 0xff, 0x29, 0x7a, 0x66, 0xb0, 0x03, 0x3d, 0xeb, 0x98, 0x67, 0xb2, 0x4c, 0xe3, 0xec, 0xa0, 0xaa, 0xc6, 0x15, 0x4b, 0xad, 0x69, 0x9c, 0x85, 0xbc, 0x14, 0xe0, 0x3c, 0x58, 0x1f, 0xee, 0x01, 
0xf3, 0x70, 0x31, 0xb3, 0x53, 0x40, 0x9c, 0xc7, 0x22, 0x19, 0x22, 0xb0, 0x08, 0x28, 0x1a, 0x84, 0x32, 0x9f, 0xeb, 0x99, 0x56, 0xc8, 0x50, 0x4d, 0xeb, 0x52, 0x6c, 0x39, 0xfa, 0x90, 0x48, 0x6a, 0x1e, 0x7a, 0xb7, 0x02, 0xe5, 0x93, 0xf2, 0x8b, 0x47, 0x2c, 0x3f, 0x03, 0xef, 0x26, 0x20, 0x5d, 0x2a, 0xc3, 0x1a, 0xe8, 0xb6, 0x67, 0x5a, 0x3e, 0x1d, 0x41, 0xac, 0xe9, 0x51, 0x48, 0xba, 0x38, 0x2a, 0x6e, 0xa7, 0xfc, 0xd3, 0x86, 0x37, 0x11, 0x8d, 0x5c, 0xe9, 0x8f, 0xb9, 0xf7, 0x43, 0x95, 0x05, 0x53, 0x50, 0x70, 0xd3, 0x4e, 0xe6, 0x33, 0xc7, 0x1e, 0x05, 0x2e, 0x52, 0x1c, 0x99, 0x73, 0x9f, 0x9c, 0xe3, 0xb1, 0x0f, 0xe9, 0x5e, 0x9c, 0x9d, 0x5d, 0x9f, 0x9f, 0x0c, 0x3e, 0xff, 0x0c, 0x25, 0x9a, 0x8b, 0xf5, 0x72, 0xc7, 0x35, 0x7f, 0x98, 0x7d, 0xb2, 0xf6, 0x4f, 0x19, 0x43, 0x30, 0x80, 0x91, 0x8a, 0x29, 0x30, 0x60, 0x30, 0x9d, 0xac, 0x21, 0x34, 0x76, 0xb1, 0xd4, 0xed, 0x51, 0xd3, 0xde, 0x38, 0x1b, 0x9b, 0x85, 0x77, 0xa9, 0xce, 0x0d, 0xa1, 0xae, 0xdc, 0x21, 0x80, 0x89, 0xff, 0x06, 0xdd, 0x0a, 0x77, 0x53, 0x9f, 0xfd, 0xca, 0x74, 0xfc, 0x0e, 0x09, 0x9d, 0x8f, 0x96, 0xf5, 0x57, 0x85, 0x33, 0xb0, 0xae, 0x6a, 0x89, 0x0f, 0x98, 0x0f, 0x10, 0x92, 0xfb, 0x20, 0xf3, 0x1d, 0x0c, 0xbd, 0x20, 0x76, 0x96, 0x3f, 0x08, 0x86, 0x5f, 0x8f, 0x02, 0x82, 0x07, 0xe6, 0x92, 0x32, 0xe8, 0xf4, 0x72, 0x84, 0xd4, 0x49, 0xad, 0x6b, 0xea, 0xaa, 0xb6, 0x1f, 0xaa, 0x37, 0x88, 0x1e, 0x22, 0xb5, 0x48, 0x55, 0x50, 0x45, 0x3a, 0x57, 0x83, 0x9f, 0x12, 0xca, 0x19, 0xdc, 0x5a, 0x9a, 0x4a, 0x4b, 0xb4, 0xc2, 0x76, 0x17, 0xae, 0x1f, 0xc7, 0x81, 0x13, 0x8f, 0x1c, 0xd3, 0xa4, 0x12, 0x98, 0x52, 0x2d, 0x42, 0xa3, 0xdd, 0x97, 0x9d, 0xcc, 0x19, 0x9f, 0x84, 0x8f, 0x07, 0x6f, 0x9d, 0x52, 0xa7, 0x87, 0xcf, 0x13, 0xe6, 0x82, 0xbb, 0x28, 0x15, 0xe9, 0x10, 0x32, 0x11, 0x3c, 0xf4, 0x27, 0x05, 0xfc, 0x33, 0x7c, 0xf9, 0x1d, 0x72, 0x3d, 0x7f, 0x3d, 0xe0, 0xce, 0x52, 0xc8, 0x2d, 0x83, 0x50, 0xae, 0x29, 0x63, 0x01, 0xbb, 0xe5, 0xca, 0xdc, 0x17, 0x33, 0x0e, 0x54, 0xb7, 0x35, 0xb0, 0xc7, 0x41, 0x28, 0x97, 0x35, 0xb0, 0xfd, 0x96, 0x2d, 0x0a, 0x90, 
0xc7, 0x2d, 0x05, 0x0c, 0x19, 0xa5, 0x5c, 0xb4, 0x2f, 0x6d, 0x83, 0xa4, 0xd2, 0xb2, 0xf1, 0x40, 0x0a, 0x96, 0xa3, 0xb6, 0x9d, 0xa9, 0x87, 0xa7, 0x2c, 0x5c, 0xb2, 0x6e, 0xc6, 0xa8, 0x2b, 0x61, 0x38, 0x89, 0x70, 0x99, 0xc4, 0xda, 0xb3, 0x1f, 0xaa, 0x35, 0xb8, 0x65, 0x99, 0x0b, 0x10, 0xf0, 0x11, 0x5b, 0xa4, 0x38, 0xdd, 0xce, 0xd9, 0xe5, 0x75, 0x9f, 0x9c, 0x9e, 0xbc, 0xef, 0xfd, 0x0c, 0xdd, 0x29, 0x61, 0xb8, 0xdc, 0x9f, 0x94, 0x3b, 0x9b, 0x9d, 0x8f, 0x7b, 0xf7, 0x3a, 0xca, 0x03, 0xfc, 0x38, 0x17, 0x52, 0x5a, 0x40, 0x1c, 0x3d, 0xae, 0x0e, 0xd2, 0x8e, 0x55, 0x9b, 0xb6, 0x6b, 0x47, 0xb2, 0xf5, 0x3b, 0x04, 0xa1, 0x3c, 0x4d, 0x40, 0xe5, 0x5f, 0xb9, 0x7f, 0x08, 0x11, 0x79, 0x87, 0x9c, 0x9a, 0x5f, 0x0f, 0xc1, 0x51, 0xd9, 0x6e, 0x44, 0x51, 0xd0, 0x11, 0x43, 0x21, 0xd2, 0x16, 0x5c, 0x61, 0xd8, 0x97, 0x8d, 0x0f, 0x00, 0xe4, 0x06, 0xf9, 0x50, 0x38, 0x71, 0x3a, 0xa6, 0x11, 0xff, 0x26, 0x3d, 0x99, 0x71, 0x74, 0xae, 0x3e, 0x7a, 0x47, 0x11, 0x83, 0x3e, 0x6b, 0xe4, 0xa2, 0xdc, 0xc9, 0xba, 0x39, 0xd3, 0xed, 0x89, 0xd4, 0xd0, 0x67, 0x46, 0x8e, 0xc8, 0x65, 0xd6, 0x05, 0xca, 0xaf, 0x1a, 0x48, 0x5f, 0x35, 0xfc, 0x50, 0x6d, 0xd0, 0xd4, 0x15, 0x53, 0x91, 0xb1, 0x70, 0x91, 0x3e, 0x1c, 0xf7, 0x3a, 0xa7, 0x83, 0x63, 0xd2, 0xff, 0x8c, 0xf5, 0x5c, 0x3f, 0x43, 0x23, 0x2a, 0x58, 0x2e, 0xd7, 0x89, 0x6a, 0x77, 0xb0, 0x80, 0x0e, 0x16, 0x36, 0xaa, 0x9d, 0xaf, 0xf7, 0x71, 0x0a, 0xf1, 0x92, 0xfa, 0xf5, 0xe0, 0xe1, 0xaa, 0x05, 0xc3, 0x87, 0xca, 0x86, 0x80, 0x06, 0x77, 0x6c, 0xdf, 0x56, 0xf6, 0x01, 0xbe, 0xc7, 0x7c, 0x56, 0xa7, 0x2e, 0xe5, 0xfd, 0xcb, 0x92, 0xfd, 0xea, 0xc0, 0x27, 0xef, 0x1d, 0x54, 0xc1, 0x8d, 0xf9, 0x2d, 0x82, 0xc2, 0xfd, 0x75, 0x27, 0x8b, 0x1d, 0x0c, 0x8f, 0x20, 0x41, 0x93, 0x8d, 0xe4, 0x10, 0x1a, 0x71, 0x83, 0xbb, 0x38, 0xe0, 0xfa, 0x41, 0x92, 0x0b, 0xf1, 0xa9, 0x3e, 0x46, 0x99, 0x2f, 0xb4, 0x57, 0xbd, 0x7e, 0xaf, 0x73, 0xd5, 0x3d, 0xfe, 0x19, 0xf2, 0x6a, 0x70, 0x5b, 0x2e, 0xaa, 0xb6, 0x67, 0x65, 0x6b, 0xc6, 0x51, 0x32, 0x5b, 0x0e, 0xfe, 0x2b, 0x5b, 0x34, 0x44, 0x49, 0xf1, 0xd2, 0xa8, 
0xdf, 0x42, 0xaf, 0xa6, 0xec, 0x96, 0x74, 0xb5, 0x8c, 0xfd, 0x4a, 0xb7, 0x7f, 0x87, 0xe0, 0xda, 0x99, 0xe6, 0x4c, 0xf2, 0x61, 0x31, 0xfc, 0xf9, 0x30, 0xcc, 0x83, 0xb3, 0x68, 0x1f, 0xd7, 0x00, 0x22, 0xcb, 0x37, 0x74, 0xe7, 0xcf, 0xf0, 0x80, 0x0d, 0xa2, 0x47, 0xad, 0x1a, 0x33, 0xed, 0x88, 0x0b, 0xc8, 0xe4, 0x65, 0x22, 0x15, 0x6b, 0xa6, 0xab, 0xed, 0x95, 0xe2, 0xc5, 0x9c, 0x30, 0xf8, 0x47, 0x6c, 0x46, 0x65, 0x7e, 0xc0, 0x54, 0x4d, 0x51, 0x67, 0x70, 0x7c, 0xda, 0x1b, 0x98, 0xed, 0x5d, 0xd5, 0x57, 0x62, 0x60, 0xce, 0x93, 0xac, 0x7a, 0x88, 0x84, 0xba, 0x78, 0x5a, 0xae, 0x0e, 0xa7, 0xf4, 0x11, 0xd5, 0xcc, 0x01, 0x55, 0x97, 0x06, 0x41, 0x9c, 0x67, 0xf3, 0xcf, 0xa8, 0xf4, 0xb1, 0x93, 0xab, 0xfa, 0xe8, 0xa3, 0xd5, 0x99, 0x03, 0xd8, 0x0a, 0xaa, 0x18, 0xe7, 0x66, 0x0d, 0xd2, 0xfe, 0x88, 0x7f, 0x8b, 0x5a, 0x84, 0xfb, 0x07, 0x61, 0x4d, 0x2c, 0x16, 0x54, 0x74, 0xf0, 0xef, 0xbd, 0x83, 0xea, 0xf8, 0x38, 0x63, 0xac, 0x05, 0x89, 0xe2, 0x49, 0xe9, 0xb0, 0x5e, 0x1f, 0x97, 0xe3, 0x9b, 0x2c, 0xae, 0x9d, 0x99, 0xdb, 0xe3, 0xee, 0xae, 0x2d, 0x18, 0x5d, 0xed, 0xea, 0xb6, 0xd5, 0x53, 0x9a, 0x47, 0xae, 0xff, 0xb2, 0x74, 0xb6, 0xfe, 0x41, 0x1d, 0x79, 0xaf, 0x0e, 0x74, 0x4d, 0x69, 0xf9, 0x2c, 0x7d, 0xc2, 0x86, 0x78, 0x90, 0xab, 0x4e, 0xd1, 0x75, 0x39, 0x31, 0x98, 0x54, 0xb6, 0x7a, 0x1e, 0x4f, 0x56, 0xd5, 0xf1},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x05, 0x8c, 0x3c, 0x59, 0x00, 0x00, 0xef, 0x06, 0xec, 0xc9, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x00, 0x15, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0xf6, 0x67, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x25, 0x1b, 0x4f, 0x37, 0x6d, 0xbf, 0xb4, 0xb1, 0x6c, 0x7f, 0x98, 0xb6, 0xda, 0xd0, 0x38, 0x7f, 0x61, 0x8a, 0xf9, 0xc5, 0xaf, 0x0a, 0x9b, 0x56, 0x6a, 0x6d, 0xb6, 0x16, 0x63, 0x4e, 0xdd, 0xc3, 0x8a, 0xee, 0x57, 0xef, 0x23, 0x0b, 0x2a, 0xa8, 0xaa, 0x6a, 0x60, 0x19, 0x86, 0xf3, 0xe0, 0x04, 0x87, 0xcc, 0x19, 0x05, 0x53, 0x79, 
0x6a, 0x58, 0x21, 0x5a, 0xa5, 0x6e, 0xd2, 0x16, 0x70, 0xd7, 0x68, 0x67, 0xdb, 0x9d, 0x11, 0x70, 0xc2, 0xd9, 0x5c, 0xdf, 0x78, 0xfd, 0x72, 0x4e, 0x41, 0x42, 0x99, 0x58, 0x74, 0x04, 0x59, 0x13, 0xa7, 0x35, 0x72, 0x29, 0x14, 0x9c, 0x89, 0x4f, 0xe1, 0x3f, 0x1c, 0x40, 0xc1, 0x4f, 0xc7, 0xf5, 0x69, 0xa4, 0xab, 0x16, 0xe4, 0xb1, 0xee, 0xf6, 0xc6, 0xf6, 0xdd, 0xde, 0xeb, 0x99, 0xe2, 0x12, 0x0d, 0xc3, 0xe1, 0x21, 0x1d, 0x33, 0x51, 0xe0, 0xea, 0x60, 0xdd, 0x83, 0x33, 0xcc, 0xc7, 0xcd, 0xaf, 0xc9, 0x58, 0x9f, 0xe4, 0x7e, 0x82, 0x09, 0x08, 0x4e, 0x40, 0x60, 0x02, 0xa2, 0x26, 0xf8, 0x85, 0x9c, 0x01, 0x53, 0x09, 0xc5, 0x22, 0x50, 0xb0, 0x40, 0x64, 0x12, 0xa7, 0x81, 0xd7, 0x54, 0x15, 0x21, 0x17, 0x06, 0x58, 0x93, 0x9c, 0x82, 0x79, 0x88, 0xc8, 0x59, 0x9c, 0xca, 0x6a, 0x45, 0xaa, 0xeb, 0x61, 0xb0, 0x52, 0x59, 0x16, 0x2a, 0x0f, 0x2e, 0x2e, 0x4b, 0x25, 0x57, 0x5a, 0xf7, 0x4d, 0x05, 0x32, 0xb8, 0xda, 0x44, 0x9d, 0x3b, 0x97, 0x87, 0x1c, 0x5e, 0x0c, 0x06, 0x17, 0x67, 0xaa, 0x1a, 0x61, 0xee, 0xa8, 0x61, 0x0c, 0xb2, 0x13, 0x36, 0x6c, 0x29, 0x1c, 0x32, 0x99, 0xf2, 0xa8, 0x5e, 0x1b, 0xe3, 0x9a, 0x66, 0x5b, 0x4c, 0x4d, 0xfa, 0x01, 0xf7, 0x98, 0xf0, 0xe3, 0x49, 0xad, 0x0c, 0x4c, 0x9e, 0x1b, 0x0b, 0xf3, 0xb2, 0x51, 0x9f, 0x79, 0x14, 0xb0, 0x3b, 0xf9, 0x56, 0x15, 0x8a, 0x19, 0xd5, 0xd7, 0x6f, 0xd5, 0x38, 0xfb, 0xe6, 0x85, 0x35, 0x62, 0xd2, 0x9b, 0x9a, 0x32, 0xb1, 0x9a, 0xf2, 0x49, 0x19, 0xe0, 0x51, 0x6b, 0xd3, 0xfd, 0x47, 0x18, 0xfc, 0xe3, 0x78, 0x19, 0xf7, 0x44, 0x12, 0x67, 0xb2, 0xf4, 0xb1, 0xb5, 0xe1, 0x7c, 0xa4, 0xb7, 0x10, 0xd0, 0xdf, 0x94, 0xd8, 0x77, 0x3a, 0x8d, 0x22, 0xa2, 0x9b, 0x89, 0x43, 0x2e, 0xc1, 0x0e, 0xab, 0x7b, 0x0c, 0x44, 0x3a, 0x78, 0x0e, 0xf6, 0x80, 0x08, 0x17, 0x03, 0x42, 0xe0, 0x99, 0x61, 0x93, 0x42, 0x2b, 0x29, 0xaf, 0xd1, 0x71, 0x69, 0xa2, 0x0f, 0xd4, 0x97, 0xa2, 0x0b, 0x06, 0x3f, 0x8e, 0xc6, 0xed, 0xf2, 0xd4, 0xfb, 0x2d, 0xdd, 0x28, 0x15, 0x5a, 0x55, 0x40, 0xcd, 0x81, 0xee, 0x88, 0x10, 0xb4, 0x02, 0x52, 0xf7, 0x7b, 0xd1, 0x2c, 0xd5, 0xba, 0xec, 0xb7, 0x12, 0x53, 0xf6, 
0x56, 0x76, 0x0b, 0x0f, 0xa3, 0xaf, 0x97, 0x0e, 0x47, 0xbf, 0xfd, 0xfa, 0x70, 0xfa, 0x6e, 0x3a, 0xa7, 0xb9, 0xb8, 0xe1, 0x51, 0xb7, 0x5b, 0x22, 0xf0, 0xe1, 0xf5, 0xc9, 0xe9, 0x80, 0xbc, 0xbf, 0x00, 0xe1, 0xbc, 0xea, 0x75, 0xfe, 0x36, 0x38, 0xbe, 0xba, 0xb8, 0xfe, 0x70, 0xdc, 0x07, 0x5a, 0x63, 0x96, 0x2f, 0x55, 0xe2, 0x8c, 0x4d, 0x41, 0x00, 0x4d, 0x22, 0x05, 0x0b, 0x23, 0x0a, 0x0e, 0x4a, 0xe7, 0x08, 0x2c, 0x0a, 0x16, 0x91, 0x29, 0x5f, 0xfe, 0x34, 0x06, 0xd8, 0xf5, 0x68, 0x5a, 0x1f, 0xe6, 0x3c, 0xc8, 0x08, 0x44, 0x53, 0xe4, 0x10, 0x92, 0xb7, 0x1b, 0xbc, 0x97, 0x92, 0x8f, 0x7d, 0xf1, 0x48, 0x5e, 0x7c, 0xc7, 0x32, 0x9e, 0x8d, 0x41, 0x7c, 0xfd, 0xea, 0x1f, 0xe7, 0x0f, 0x67, 0xd0, 0x96, 0x73, 0x98, 0x07, 0x41, 0xd9, 0x78, 0x1d, 0x5f, 0x90, 0xcf, 0x17, 0xd7, 0xe4, 0xc3, 0xc5, 0xf9, 0x79, 0x87, 0x5c, 0x9e, 0x76, 0x3e, 0xff, 0x42, 0x74, 0xe5, 0x9a, 0x4a, 0x85, 0x45, 0x46, 0xba, 0x7e, 0x8a, 0xf5, 0x04, 0x89, 0x4f, 0x70, 0xf0, 0x2b, 0x41, 0xe4, 0x9e, 0x18, 0x49, 0xd1, 0xd0, 0x09, 0x82, 0xf5, 0x5b, 0x8a, 0x3a, 0x52, 0x06, 0x21, 0xd6, 0xff, 0xe0, 0xc7, 0x22, 0x1b, 0xe6, 0x22, 0x93, 0xe7, 0xd8, 0x4f, 0xe2, 0x97, 0x59, 0x9e, 0xe6, 0xc7, 0x27, 0x3f, 0x96, 0x46, 0x76, 0x1c, 0x47, 0x11, 0xc5, 0x3a, 0xfb, 0xe9, 0x2f, 0x8f, 0xe4, 0xd5, 0x7d, 0x2b, 0x7b, 0xf9, 0xa7, 0xad, 0xbd, 0x77, 0x8f, 0x5a, 0xdd, 0xf3, 0xe9, 0x99, 0x9b, 0xff, 0xb6, 0xd4, 0x0b, 0x15, 0x6c, 0xdc, 0x76, 0x2e, 0xf3, 0x61, 0xc0, 0xdd, 0xce, 0x68, 0x44, 0x79, 0x2a, 0x4a, 0xfc, 0x3c, 0xeb, 0x0c, 0xba, 0xc7, 0x27, 0xe7, 0x1f, 0xc8, 0x59, 0xef, 0x7c, 0x70, 0x71, 0xd5, 0x27, 0x9d, 0xf3, 0x23, 0xd0, 0xbb, 0xeb, 0x93, 0x73, 0x54, 0x38, 0x0c, 0x18, 0xcc, 0xed, 0x02, 0x49, 0x09, 0x2d, 0x9f, 0x7a, 0x7f, 0xb1, 0xcf, 0x22, 0x0e, 0xba, 0xf0, 0x9e, 0x41, 0x04, 0x34, 0x11, 0x66, 0x2f, 0xfd, 0xa9, 0x2a, 0xa7, 0x97, 0xa6, 0xd9, 0x74, 0x46, 0x33, 0xd7, 0x47, 0xb3, 0x85, 0x87, 0x69, 0x71, 0xaa, 0xb6, 0x66, 0xd5, 0x95, 0xb4, 0x47, 0xb2, 0xf2, 0xbb, 0x16, 0xb3, 0x94, 0x63, 0x45, 0xd4, 0x54, 0x2a, 0x0c, 0x2c, 0xdc, 0xa2, 0x46, 0x41, 0x16, 
0xf6, 0x2d, 0xac, 0x65, 0xad, 0x7a, 0x4a, 0xfd, 0x42, 0x16, 0x6b, 0xcd, 0xf1, 0x9c, 0xa6, 0x7a, 0x54, 0x27, 0x9a, 0xd8, 0xd6, 0x30, 0x37, 0xad, 0xe8, 0xff, 0x9b, 0xf8, 0x49, 0x16, 0x3f, 0xa2, 0xf4, 0xd6, 0x23, 0xc1, 0xc6, 0xcf, 0x8d, 0xaa, 0x24, 0x17, 0xa9, 0x2a, 0x3d, 0x46, 0x2e, 0xbc, 0x67, 0x14, 0xf7, 0xff, 0x81, 0x47, 0x20, 0x7c, 0x98, 0x2a, 0x97, 0x42, 0x25, 0x44, 0x7d, 0xa4, 0xdf, 0x3b, 0x42, 0xbd, 0x07, 0x2e, 0xfb, 0x5b, 0xed, 0xfa, 0xa8, 0xfd, 0x16, 0x34, 0x96, 0x83, 0x1c, 0x7b, 0xcf, 0xb5, 0x9c, 0x18, 0x16, 0x69, 0x1e, 0xe6, 0x16, 0x26, 0xe0, 0x51, 0x15, 0x5a, 0x8b, 0xf5, 0xe5, 0xe3, 0x71, 0x7c, 0xf1, 0xfb, 0x8c, 0x29, 0xd0, 0x78, 0x89, 0xd6, 0x27, 0x26, 0x02, 0x36, 0x95, 0x69, 0xc5, 0x10, 0x44, 0x7b, 0x6d, 0xf5, 0xef, 0x39, 0x4b, 0x40, 0x9f, 0x56, 0x4f, 0x41, 0x73, 0xd6, 0x56, 0x3b, 0x91, 0x97, 0xb2, 0xc9, 0xea, 0xaf, 0x74, 0xca, 0x5d, 0x7f, 0x35, 0xf1, 0xe3, 0x2c, 0xfe, 0x12, 0x72, 0xaf, 0x44, 0x58, 0x49, 0xad, 0xc4, 0x9f, 0x82, 0x41, 0x43, 0x72, 0xd9, 0x9c, 0x52, 0xf6, 0x2d, 0xeb, 0x7a, 0x25, 0x7c, 0xc4, 0x42, 0x48, 0x6b, 0x03, 0xfc, 0xed, 0xe5, 0xf8, 0x77, 0x20, 0xca, 0x04, 0x33, 0xda, 0x05, 0x55, 0xd4, 0x85, 0xcd, 0xdb, 0x85, 0x0d, 0x69, 0xd7, 0x90, 0x00, 0xfe, 0x22, 0xa7, 0x29, 0x8c, 0x11, 0x24, 0x02, 0xf6, 0x05, 0x53, 0xd4, 0x6c, 0x3a, 0x14, 0x90, 0x57, 0x67, 0x8c, 0x7c, 0x63, 0x69, 0xdc, 0x84, 0x94, 0x23, 0x65, 0xaf, 0xf0, 0xc6, 0xec, 0xb4, 0x49, 0x50, 0x79, 0x89, 0xd1, 0x5e, 0x5b, 0x7e, 0x5e, 0x28, 0xf2, 0x63, 0x28, 0x7e, 0x7a, 0xf4, 0x29, 0xd9, 0x5a, 0x4c, 0xf1, 0x8f, 0x34, 0x80, 0x40, 0x8b, 0xad, 0xf6, 0x43, 0x9e, 0xf9, 0x35, 0x62, 0xea, 0x77, 0x44, 0xbe, 0x7b, 0x76, 0x52, 0x6a, 0xc4, 0xce, 0xd9, 0x04, 0x77, 0x74, 0x72, 0xa0, 0xcd, 0x45, 0x44, 0x8e, 0xec, 0x1d, 0x5f, 0x95, 0x51, 0xce, 0xa1, 0x6e, 0x05, 0xab, 0x35, 0x69, 0x20, 0xf1, 0xe2, 0x32, 0x97, 0xdb, 0xc8, 0xf1, 0x88, 0xf4, 0x27, 0x34, 0xcd, 0xfc, 0x10, 0x14, 0x04, 0xf8, 0xa3, 0xef, 0xf6, 0xba, 0x1c, 0xef, 0x2e, 0xe3, 0x1d, 0x8c, 0x29, 0xd0, 0x5f, 0x90, 0x8c, 0x51, 0x65, 0xa7, 0x41, 0x0d, 
0xd5, 0xe6, 0xdd, 0xf3, 0x11, 0xfc, 0x03, 0xdf, 0xf8, 0x8d, 0x2d, 0x26, 0x78, 0x9f, 0x05, 0x23, 0xa0, 0xf7, 0x04, 0x90, 0x5f, 0x3d, 0x03, 0x41, 0x0e, 0x57, 0x0f, 0x21, 0x93, 0xe6, 0x37, 0x6b, 0xab, 0xc3, 0x69, 0xa5, 0xe1, 0xcb, 0x96, 0xb7, 0xf7, 0xfa, 0xcd, 0xde, 0xd6, 0xa6, 0xf3, 0x7a, 0x77, 0xe7, 0x8d, 0xb3, 0xbd, 0xb9, 0x3e, 0x72, 0xf6, 0x36, 0xd7, 0xb7, 0x9d, 0x8d, 0x37, 0x5b, 0x6f, 0x46, 0x2e, 0xdb, 0xa0, 0x8c, 0x6d, 0x3b, 0x49, 0x7a, 0x5b, 0x76, 0x90, 0x38, 0x9e, 0xa8, 0xf1, 0xd2, 0xf7, 0xe0, 0x11, 0x5c, 0x00, 0x2b, 0x7e, 0x76, 0xee, 0xe9, 0x55, 0x9a, 0x0b, 0x1f, 0xe4, 0x03, 0xcb, 0x60, 0x5e, 0xdc, 0x12, 0x05, 0xaa, 0xbe, 0xc7, 0xb8, 0xa2, 0xaf, 0xa7, 0x2e, 0xb6, 0xe8, 0x2a, 0x6c, 0x9c, 0x41, 0x15, 0x19, 0xd9, 0xc0, 0x68, 0xd3, 0x82, 0x81, 0x1f, 0x48, 0x6b, 0xbc, 0x22, 0x0d, 0x6e, 0x6d, 0x4a, 0xf4, 0xa1, 0x81, 0x32, 0x89, 0xe6, 0xf6, 0xc9, 0x72, 0x65, 0x29, 0x6e, 0x71, 0x94, 0xd7, 0xea, 0x52, 0x08, 0x87, 0x05, 0x0b, 0x74, 0x51, 0x70, 0xd9, 0x1f, 0xeb, 0xcb, 0x05, 0x28, 0x56, 0xb7, 0xc4, 0xe3, 0x02, 0xef, 0xed, 0x7b, 0x26, 0xdb, 0xff, 0x13, 0xa4, 0x27, 0xd0, 0xce, 0x61, 0xf0, 0x8c, 0xbd, 0x35, 0xfe, 0x77, 0x1e, 0xb0, 0x48, 0x56, 0x5b, 0x5b, 0x18, 0xe7, 0xf0, 0x73, 0xe1, 0xf8, 0xf2, 0xb6, 0x93, 0xa9, 0xc6, 0x57, 0x6b, 0x47, 0x99, 0x2d, 0xef, 0x5c, 0x94, 0x8b, 0xf2, 0x6d, 0x8f, 0xc2, 0xeb, 0xda, 0x1d, 0x35, 0x90, 0x7a, 0x67, 0xc4, 0xef, 0x1a, 0xd5, 0x3a, 0x7e, 0xe9, 0x99, 0x22, 0x36, 0x11, 0x9a, 0xe9, 0xe8, 0x0d, 0x40, 0x1f, 0x45, 0x1a, 0xc7, 0xa1, 0xf4, 0x02, 0xaa, 0x15, 0xe8, 0xf7, 0x87, 0xfd, 0x72, 0xd8, 0x47, 0x4a, 0xf7, 0x12, 0xb4, 0x64, 0x44, 0x7a, 0x58, 0xe1, 0xc2, 0x53, 0x26, 0x6f, 0xe1, 0x09, 0x7c, 0xe5, 0x78, 0x11, 0x75, 0x42, 0x9a, 0xa0, 0x2d, 0x87, 0xd4, 0x3e, 0x0e, 0x1c, 0x9f, 0x05, 0x89, 0x70, 0x6c, 0x4e, 0x27, 0x1c, 0xe5, 0xf5, 0x1c, 0xb9, 0x59, 0x2b, 0x32, 0x55, 0x28, 0x3c, 0x71, 0xc6, 0x90, 0x68, 0x08, 0x07, 0xcf, 0x1e, 0x52, 0x36, 0xce, 0xf1, 0xc2, 0xb6, 0xd7, 0x68},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 
0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0x38, 0x2d, 0x40, 0x00, 0x40, 0x06, 0x65, 0x6e, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xaa, 0xf5, 0x65, 0x80, 0x10, 0x0f, 0xdd, 0x8b, 0x18, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xa9, 0xb3, 0xa1, 0x66, 0x25},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0x18, 0x50, 0x40, 0x00, 0x40, 0x06, 0x85, 0x4b, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xaa, 0xfa, 0xbd, 0x80, 0x10, 0x0f, 0xd5, 0x85, 0xc8, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xa9, 0xb3, 0xa1, 0x66, 0x25},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0xf1, 0x50, 0x40, 0x00, 0x40, 0x06, 0xac, 0x4a, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x00, 0x15, 0x80, 0x10, 0x0f, 0xaa, 0x80, 0x9b, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xa9, 0xb3, 0xa1, 0x66, 0x25},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0x9f, 0x83, 0x40, 0x00, 0x40, 0x06, 0xfe, 0x17, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x05, 0x6d, 0x80, 0x10, 0x0f, 0x7f, 0x7b, 0x6e, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xa9, 0xb3, 0xa1, 0x66, 0x25},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0x9e, 0x4e, 0x40, 0x00, 0x40, 0x06, 0xff, 0x4c, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x05, 0x6d, 0x80, 0x10, 0x10, 0x00, 0x7a, 0xec, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xaa, 0xb3, 0xa1, 0x66, 0x25},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 
0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x05, 0x8c, 0x3c, 0x5a, 0x00, 0x00, 0xef, 0x06, 0xec, 0xc8, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x05, 0x6d, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0xde, 0x6c, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x2b, 0x1b, 0x4f, 0x37, 0xa9, 0x1f, 0x41, 0x30, 0xaf, 0x81, 0x11, 0x04, 0x46, 0x24, 0xb0, 0x22, 0x41, 0x14, 0xc6, 0x85, 0x16, 0xc0, 0x08, 0xc6, 0x2b, 0x12, 0x18, 0xc1, 0xa3, 0x0a, 0x0b, 0x8c, 0x94, 0xa2, 0x3f, 0xb9, 0x9d, 0x68, 0x76, 0x10, 0x1f, 0xb7, 0x56, 0xed, 0x4f, 0x55, 0xf1, 0x44, 0x88, 0x89, 0x92, 0x3e, 0x5d, 0x83, 0x34, 0xc9, 0x09, 0x64, 0xe4, 0x86, 0x4c, 0xd1, 0x69, 0x92, 0xde, 0xda, 0x75, 0xf0, 0x0a, 0xac, 0xd0, 0x51, 0xbb, 0xca, 0xae, 0x10, 0xd3, 0xfb, 0x73, 0x2b, 0x79, 0x6f, 0x56, 0x3c, 0x0f, 0xd6, 0x96, 0x43, 0x29, 0x1e, 0xd9, 0x78, 0xa6, 0x80, 0x1c, 0xad, 0xab, 0x23, 0xf2, 0x31, 0x88, 0x8c, 0xdc, 0x22, 0xdc, 0x71, 0x42, 0xf0, 0x5f, 0x18, 0x18, 0x8e, 0xf9, 0x28, 0x33, 0x87, 0x20, 0x4e, 0xe9, 0xa3, 0x08, 0x8d, 0xf6, 0x95, 0x84, 0x20, 0xf1, 0xb7, 0xf6, 0x19, 0x21, 0x10, 0x84, 0x40, 0xfe, 0xbc, 0x43, 0x34, 0x08, 0x82, 0x20, 0xcc, 0x91, 0x09, 0x84, 0x93, 0x16, 0xc4, 0x33, 0xaf, 0x48, 0xf0, 0xd4, 0xf9, 0xea, 0x8c, 0x52, 0x2a, 0xbf, 0xbd, 0x91, 0xc5, 0x9e, 0x07, 0x4e, 0x01, 0x02, 0xbd, 0x48, 0xc8, 0x98, 0xd1, 0x89, 0x62, 0x88, 0x2f, 0xf0, 0x82, 0xd5, 0x37, 0x86, 0x07, 0xf5, 0xae, 0xcf, 0x30, 0xbc, 0x42, 0xc3, 0xae, 0x4f, 0xb1, 0x20, 0xb8, 0xc6, 0x8f, 0x05, 0xf0, 0x0c, 0xd4, 0xfc, 0xd7, 0x26, 0x58, 0x34, 0x84, 0x84, 0x7a, 0x2a, 0x21, 0x11, 0x84, 0x44, 0xce, 0x11, 0x08, 0xc4, 0xde, 0x00, 0x84, 0x00, 0xaf, 0x2c, 0x90, 0x67, 0x5e, 0x0b, 0x04, 0x82, 0x58, 0xf5, 0xed, 0x60, 0xbc, 0x8a, 0xd1, 0x2a, 0xfc, 0x74, 0x84, 0x13, 0x63, 0x0c, 0x0a, 0x5a, 0x13, 0xb1, 0x11, 0x07, 0xa3, 0xfd, 0x49, 0x75, 0x22, 0xb8, 0x21, 0x8d, 0xb1, 0x25, 0xfc, 0xfc, 0x9f, 0xff, 0xfa, 0x6f, 0x41, 0x64, 0x37, 0xa2, 0xbb, 0x3d, 0x33, 0x62, 0x34, 0x72, 0xa8, 0xe7, 0xe3, 0xe9, 0x39, 0x1d, 
0x47, 0xb1, 0xe0, 0xc2, 0x49, 0x72, 0xd0, 0xe4, 0x31, 0x4f, 0x03, 0x81, 0x98, 0x86, 0xf8, 0xa1, 0x06, 0x1f, 0x72, 0x3a, 0x40, 0x14, 0x52, 0xd1, 0x1b, 0x79, 0xb2, 0xa3, 0xf0, 0xc6, 0xfc, 0x88, 0x06, 0xfa, 0x6c, 0x11, 0xcf, 0x15, 0xc1, 0xc4, 0xca, 0x52, 0xa7, 0xa3, 0xe3, 0x23, 0x62, 0xe1, 0x11, 0x84, 0x47, 0x24, 0x3c, 0x5c, 0x94, 0x02, 0x45, 0xd4, 0x58, 0xa2, 0xc6, 0x12, 0x04, 0xfc, 0xcc, 0xeb, 0x92, 0xe2, 0x1d, 0xe4, 0x68, 0xaa, 0xb0, 0x12, 0x31, 0xc1, 0x8f, 0x7a, 0x98, 0x72, 0x2c, 0x47, 0xd7, 0x99, 0x08, 0x07, 0x5a, 0xbd, 0x90, 0x02, 0x2f, 0xf0, 0x42, 0x39, 0x9e, 0x17, 0xe6, 0xe9, 0x98, 0xa1, 0x04, 0xa9, 0x6b, 0x7a, 0x39, 0x1a, 0x27, 0x3b, 0x9c, 0xe8, 0xe1, 0xc4, 0x0c, 0x27, 0x66, 0x38, 0x51, 0xc3, 0x89, 0x1e, 0x3e, 0x77, 0x29, 0xf6, 0x90, 0x63, 0xf9, 0x22, 0x1a, 0x85, 0xdf, 0x81, 0xd8, 0x07, 0xcb, 0x3c, 0xa5, 0xd5, 0x28, 0xac, 0xba, 0x02, 0x5f, 0x78, 0xca, 0xaa, 0x33, 0x60, 0xb7, 0xea, 0xa4, 0xd8, 0xba, 0x83, 0x63, 0x3c, 0x28, 0x8c, 0xd4, 0xa5, 0x83, 0x8a, 0x43, 0x58, 0xd1, 0x18, 0x2e, 0xde, 0x00, 0xf0, 0xed, 0xd0, 0x02, 0x3d, 0x48, 0xda, 0x5a, 0x72, 0x8e, 0xd6, 0xe6, 0xfa, 0xce, 0xeb, 0xf5, 0x37, 0x33, 0x71, 0x52, 0x31, 0xa8, 0xb5, 0xb9, 0xf5, 0xe6, 0x0d, 0x78, 0x08, 0xac, 0x4d, 0x65, 0x69, 0x29, 0xc8, 0xe9, 0x79, 0x78, 0x73, 0x84, 0x9c, 0x35, 0xc9, 0x11, 0x23, 0x57, 0xa0, 0x83, 0xe0, 0xf4, 0xc4, 0x1a, 0x39, 0x3b, 0xb2, 0x17, 0xb5, 0x37, 0x37, 0x5e, 0xe3, 0x25, 0xec, 0x72, 0x66, 0xff, 0x5d, 0xb8, 0x0d, 0x2e, 0x8e, 0x3a, 0x9f, 0xdf, 0x92, 0x85, 0x13, 0x16, 0x13, 0x68, 0x4e, 0x3d, 0x9d, 0x28, 0xbb, 0x1b, 0x3b, 0x7b, 0xf7, 0x13, 0x65, 0x77, 0x2e, 0x51, 0xba, 0xc0, 0x6e, 0x40, 0x0c, 0x23, 0x27, 0xc9, 0xf7, 0xe7, 0x23, 0x85, 0xc2, 0x68, 0x63, 0xbd, 0xb5, 0xb1, 0x43, 0x48, 0x6d, 0x9a, 0x7f, 0x07, 0x01, 0xf6, 0xd6, 0x37, 0x96, 0x10, 0x60, 0x77, 0x1e, 0x01, 0x2e, 0x4d, 0x0e, 0x80, 0x55, 0xbc, 0x47, 0x6c, 0x08, 0xde, 0xfd, 0x39, 0x89, 0x20, 0xb1, 0x42, 0x22, 0xec, 0x11, 0x32, 0x67, 0xaa, 0x19, 0x42, 0x48, 0x9d, 0xfd, 0xc3, 0x43, 0xa7, 0x5a, 0xa0, 0xb5, 0x65, 0xe5, 0xbb, 
0x5f, 0x6f, 0xf1, 0x8b, 0x4d, 0x60, 0x4f, 0xe5, 0x87, 0x62, 0x2c, 0x30, 0x16, 0x79, 0x25, 0x4d, 0x86, 0x50, 0x22, 0x62, 0x2e, 0xba, 0xae, 0xcc, 0xd7, 0x49, 0x4f, 0x59, 0xa1, 0x75, 0x04, 0x5d, 0x3e, 0x5c, 0x2c, 0x6d, 0xcc, 0x8c, 0xa8, 0xcb, 0x86, 0x71, 0x7c, 0x23, 0xbf, 0x51, 0x80, 0x68, 0x0f, 0xe5, 0xa6, 0x56, 0x2d, 0x70, 0x36, 0xbd, 0xe4, 0xed, 0x08, 0xf9, 0x24, 0xbf, 0xfd, 0xa3, 0xc3, 0x5a, 0xd3, 0x56, 0xbd, 0x66, 0x3d, 0x77, 0x4a, 0xfd, 0xdd, 0x21, 0x3b, 0x5b, 0x6d, 0x1e, 0xfd, 0x1a, 0x14, 0x54, 0x3d, 0x94, 0x67, 0xd1, 0x4d, 0x0f, 0x98, 0x04, 0x37, 0x80, 0xf2, 0x21, 0xb3, 0x5c, 0xa8, 0x4d, 0xa2, 0x5f, 0x37, 0xda, 0x9f, 0xd5, 0x43, 0x79, 0x12, 0x68, 0x1a, 0x40, 0xd3, 0x03, 0x26, 0x41, 0xe2, 0xe1, 0xe9, 0x3d, 0x45, 0xeb, 0xbf, 0x68, 0x3d, 0xb6, 0x03, 0xd6, 0xd1, 0xeb, 0xc7, 0xf2, 0x74, 0xb6, 0x71, 0xd9, 0x84, 0x42, 0xcf, 0x28, 0x22, 0x9a, 0xb8, 0x3e, 0xcd, 0xd4, 0xd7, 0x76, 0x3c, 0x4f, 0x4e, 0x1a, 0x8f, 0x46, 0x1c, 0xa5, 0xa4, 0x36, 0xb9, 0xe9, 0x0b, 0xc9, 0x9c, 0x7e, 0x2a, 0x4f, 0x6d, 0xda, 0xaa, 0x33, 0x5b, 0x8f, 0x54, 0x24, 0x29, 0x95, 0x6c, 0x45, 0xee, 0x13, 0x2e, 0xc8, 0x58, 0x66, 0x36, 0x14, 0x8b, 0xbd, 0xc3, 0x9f, 0xbe, 0x2b, 0xa8, 0xae, 0xc9, 0x3b, 0xb8, 0xf7, 0x2e, 0x4b, 0x03, 0x87, 0x31, 0xc4, 0xbf, 0x58, 0xc5, 0x99, 0xf2, 0xb0, 0xd8, 0x16, 0x84, 0x2c, 0xfc, 0x0e, 0xbc, 0xb6, 0xb6, 0x52, 0x72, 0xe7, 0xb4, 0xf8, 0x86, 0x92, 0x41, 0xca, 0xd1, 0xb0, 0xf6, 0xf0, 0x6a, 0xed, 0x03, 0x76, 0x09, 0x47, 0x14, 0xc2, 0xd2, 0x62, 0x4c, 0xe9, 0xda, 0xff, 0xa5, 0xc2, 0x86, 0x48, 0x6c, 0x64, 0x9a, 0x0c, 0xd8, 0xac, 0x91, 0x89, 0xcf, 0x03, 0xcc, 0x64, 0x70, 0x4f, 0x15, 0x5f, 0x4f, 0x18, 0x2c, 0x5d, 0x5d, 0x01, 0x66, 0x33, 0xfb, 0x86, 0xc4, 0x99, 0x73, 0x1c, 0xfb, 0x62, 0xf6, 0xe0, 0xbb, 0x7a, 0x60, 0x5a, 0xd9, 0x3b, 0x8c, 0x63, 0xcc, 0x59, 0x92, 0x5c, 0xf8, 0xc5, 0x07, 0x47, 0x34, 0x33, 0xc9, 0x9c, 0x53, 0x5a, 0xd2, 0x3b, 0x3f, 0x92, 0x9f, 0xf4, 0x9b, 0xd3, 0x03, 0x8f, 0x7e, 0xd5, 0x6b, 0xf5, 0x4d, 0x93, 0x1a, 0x12, 0xd5, 0x0f, 0x98, 0xa8, 0x76, 0x65, 0xe0, 0xd4, 0x97, 
0xb0, 0xaa, 0x87, 0xb9, 0x15, 0xd3, 0xa7, 0xcf, 0xfa, 0x1f, 0x61, 0xc1, 0xcc, 0x6d, 0x69, 0x75, 0x76, 0x12, 0x11, 0x63, 0xa5, 0x1a, 0xb3, 0x46, 0xad, 0xde, 0xc5, 0xde, 0xb6, 0x5f, 0x6a, 0xbe, 0xea, 0x93, 0x68, 0x23, 0xd5, 0x98, 0x31, 0x68, 0xb5, 0x0e, 0x0b, 0x67, 0x98, 0xb1, 0x5d, 0xf5, 0x19, 0xb4, 0x85, 0x6a, 0xcc, 0x58, 0xb3, 0x5a, 0x87, 0x85, 0x33, 0x2c, 0x30, 0x5c, 0xf5, 0x79, 0xac, 0x69, 0x6a, 0xcc, 0x31, 0x66, 0x33, 0x9d, 0xe6, 0xcf, 0xf6, 0x30, 0xab, 0x55, 0x9f, 0xd9, 0x58, 0xa6, 0xc6, 0xac, 0x21, 0xab, 0x77, 0x91, 0xd3, 0x1a, 0x99, 0x55, 0x3b, 0xe5, 0x52, 0x9a, 0xab, 0x82, 0xa4, 0x24, 0xbc, 0xe0, 0xbb, 0xfa, 0x59, 0xfb, 0x2a, 0x86, 0x56, 0x83, 0xe2, 0xab, 0x18, 0xa5, 0x3b, 0xe9, 0x95, 0x81, 0xb6, 0xc6, 0xce, 0x5e, 0x6e, 0xaf, 0x7e, 0xcc, 0xc2, 0x6c, 0xa3, 0xcb, 0x74, 0x08, 0xab, 0xf6, 0x44, 0xa5, 0x40, 0xa8, 0xdc, 0xa7, 0x80, 0x54, 0xa9, 0x7e, 0x29, 0xf5, 0x48, 0xe3, 0x89, 0x33, 0x0a, 0x72, 0xee, 0xcd, 0x14, 0x19, 0x95, 0x7b, 0xa1, 0x41, 0x77, 0x36, 0xe7, 0xd5, 0x21, 0x51, 0xd3, 0x45, 0xa5, 0x3e, 0xf8, 0x05, 0x8f, 0xe2, 0xf3, 0x1d, 0x33, 0xdd, 0x61, 0x00, 0x9f, 0x19, 0xe0, 0x4c, 0x7c, 0x59, 0x30, 0xb8, 0xdf, 0x9a, 0x57, 0x7e, 0x54, 0xff, 0x2e, 0x85, 0x2d, 0xcd, 0xa9, 0x7f, 0x95, 0x62, 0x16, 0x5f, 0xa2, 0xe9, 0xb9, 0xa0, 0x84, 0x8a, 0x85, 0x76, 0x97, 0x4b, 0x96, 0xa9, 0x39, 0xfa, 0xc3, 0x27, 0xe0, 0x34, 0x31, 0xd1, 0xd2, 0x85, 0x9e, 0x2c, 0x9c, 0x33, 0xb4, 0x38, 0xcc, 0x28, 0xa6, 0x98, 0xc3, 0x0a, 0xdb, 0x7f, 0x5e, 0x61, 0x95, 0x26, 0xdf, 0xdc, 0x5b, 0xce, 0xe5, 0x6d, 0x55, 0x5d, 0x98, 0x39, 0x77, 0x3c, 0x7e, 0xdc, 0x0e, 0x7d, 0xad, 0xbd, 0x14, 0xad, 0xbe, 0xed, 0x32, 0xb7, 0xe3, 0x2c, 0x21, 0x55, 0xf3, 0x7c, 0xdc, 0x96, 0xa3, 0x5c, 0xd2, 0x76, 0x8f, 0xa7, 0x10, 0x1f, 0xc6, 0xe9, 0x74, 0x4e, 0x5c, 0xfa, 0xd0, 0x25, 0x1c, 0x19, 0x18, 0xdf, 0xbd, 0x08, 0x95, 0xae, 0x3f, 0x80, 0xd6, 0xb2, 0x88, 0xc7, 0xcd, 0x1e, 0x4f, 0x65, 0x35, 0xee, 0xdf, 0x8e, 0x60, 0x89, 0xb2, 0x58, 0x5f, 0x8b, 0x65, 0x90, 0x69, 0x41, 0xd9, 0x2e, 0x0d, 0x20, 0x3c, 0xa7, 0xa9, 0x68, 0x99, 
0xb2, 0xc8, 0x47, 0x2f, 0xc4, 0x5e, 0x0b, 0x33, 0xb0, 0x7e, 0xa8, 0xe0, 0xb4, 0x42, 0x9a, 0x08, 0x47, 0xc9, 0x8c, 0xbc, 0x4e, 0x90, 0x50, 0xb9, 0x03, 0xf5, 0x78, 0x99, 0x39, 0x03, 0x40, 0x6b, 0xe4, 0xc8, 0x42, 0xfa, 0xe1, 0xf2, 0x8f, 0xfe, 0x47, 0x7e, 0xd5, 0x53, 0xed, 0xd7, 0x40, 0xd0, 0x2a, 0xad, 0x46, 0xc1, 0x2c, 0x7d, 0x83, 0x41, 0xaf, 0xf0, 0xd1, 0x0b, 0xbc, 0x54, 0xe3, 0x74, 0xed, 0xcc, 0xa0, 0x32, 0xc9, 0x13, 0xc5, 0xf0, 0x51, 0xd2, 0x68, 0xfc, 0xac, 0xfd, 0x0a, 0xea, 0x13, 0x14, 0xbd, 0x67, 0x60, 0x3c, 0x1b, 0xb7, 0xaa, 0xd5, 0x9c, 0xa6, 0x6d},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x01, 0xb7, 0x3c, 0x5b, 0x00, 0x00, 0xef, 0x06, 0xf0, 0x9c, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x0a, 0xc5, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0x13, 0x1f, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x2b, 0x1b, 0x4f, 0x37, 0xa9, 0xce, 0xd7, 0x61, 0xe6, 0xf9, 0x88, 0x31, 0x83, 0x90, 0x3a, 0xba, 0x8d, 0x83, 0x5b, 0x36, 0xeb, 0x03, 0xef, 0xf3, 0x12, 0x5d, 0xc9, 0x5a, 0xfb, 0x85, 0x94, 0x25, 0x7e, 0xa2, 0x3c, 0xcd, 0x93, 0x3c, 0x85, 0x61, 0x87, 0xfa, 0x4c, 0xf1, 0x53, 0xf8, 0xa0, 0xbf, 0x73, 0x5c, 0x1c, 0x9c, 0x9a, 0x72, 0x70, 0xfd, 0xb1, 0xe3, 0x9f, 0xe5, 0x4d, 0x44, 0x2e, 0x30, 0x93, 0x78, 0xc2, 0xd2, 0xfa, 0x0a, 0x82, 0xbe, 0xa9, 0xfd, 0xb3, 0xd6, 0x91, 0xa5, 0x39, 0x93, 0xa9, 0xc2, 0x13, 0x56, 0x32, 0x00, 0x18, 0xaa, 0x0c, 0x88, 0xf4, 0x41, 0xed, 0x99, 0xba, 0x6a, 0xff, 0xe3, 0xd7, 0x03, 0x82, 0x9b, 0x43, 0xdc, 0xca, 0xd2, 0x27, 0xac, 0xe5, 0xa3, 0x81, 0xc1, 0x45, 0xf8, 0x7f, 0x43, 0xf7, 0x41, 0x4a, 0xa6, 0xf8, 0xd1, 0x1d, 0x88, 0xf6, 0x1e, 0xa9, 0xfc, 0x3d, 0xdc, 0xdd, 0x23, 0x1f, 0x59, 0xf4, 0x10, 0xdd, 0xaf, 0x4c, 0xf3, 0x64, 0xe5, 0x07, 0x6e, 0xa8, 0x73, 0xc1, 0xe2, 0x58, 0x50, 0x9d, 0x0a, 0x96, 0x52, 0xba, 0x47, 0x72, 0x66, 0x49, 0x39, 0xe6, 0x8f, 0xf5, 0xa7, 0xd2, 0xcf, 0xc8, 0xc3, 0x84, 0xa7, 0x18, 0x37, 0xc9, 0x9f, 0xcb, 0x80, 0x46, 0xea, 
0x22, 0xe6, 0x8f, 0x89, 0xd8, 0x10, 0x5f, 0x8d, 0x3a, 0x26, 0xa2, 0x78, 0x67, 0xd9, 0xdc, 0x13, 0x95, 0xd5, 0x13, 0x90, 0xc8, 0x26, 0xfe, 0xf4, 0xf1, 0xab, 0x91, 0x97, 0x9a, 0x2f, 0xd3, 0xd8, 0xcb, 0xdd, 0x27, 0x79, 0xfe, 0xd9, 0xd6, 0x85, 0xca, 0x73, 0x6f, 0x53, 0xad, 0xa1, 0xae, 0x6f, 0x65, 0x4d, 0x0b, 0xd8, 0x18, 0xf7, 0x0d, 0x1f, 0x97, 0x9f, 0x96, 0x37, 0x19, 0xdd, 0x38, 0x99, 0xa6, 0x58, 0xdd, 0xd9, 0x68, 0xbf, 0xc4, 0xe7, 0x77, 0xb8, 0x47, 0x47, 0xae, 0xd8, 0x58, 0x96, 0x70, 0xe8, 0xcb, 0xec, 0x73, 0xf7, 0xfe, 0xec, 0x57, 0xff, 0xe3, 0x91, 0x6b, 0xbf, 0xf9, 0xaf, 0xb6, 0x6b, 0x17, 0xfd, 0x3f, 0x02, 0xa8, 0xbd, 0xca, 0x59, 0xea, 0x16, 0x69, 0x2e, 0x68, 0x45, 0x28, 0x6c, 0x8a, 0x2b, 0x7f, 0xe1, 0x35, 0xf5, 0x1c, 0xef, 0x1d, 0x0d, 0xf0, 0x17, 0x02, 0xbc, 0x16, 0xac, 0xc6, 0x87, 0x7b, 0x29, 0xb6, 0x32, 0xd3, 0xc7, 0xec, 0x6b, 0xd5, 0xb6, 0xb7, 0x4a, 0x7b, 0x0f, 0xaa, 0xb1, 0x2f, 0x3f, 0xf1, 0x2b, 0x48, 0xf9, 0x5b, 0xd7, 0xf6, 0x53, 0xe5, 0xea, 0xc3, 0x94, 0x5e, 0xed, 0x23, 0xe7, 0x33, 0xdd, 0x92, 0x20, 0x1f, 0xf3, 0xa8, 0xfe, 0x2d, 0xf4, 0x99, 0x6e, 0xe3, 0x20, 0x1e, 0xd2, 0x60, 0x59, 0xaf, 0xd2, 0x17, 0xeb, 0x97, 0xf4, 0x54, 0x57, 0xc0, 0x6a, 0xbd, 0x56, 0xf4, 0x57, 0xc0, 0xe1, 0xaf, 0xfa, 0x3f, 0x1b, 0xf9, 0x5f, 0xa8, 0xe7, 0x2a, 0x0d, 0x7d, 0x64, 0x00, 0x00},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0x35, 0xdd, 0x40, 0x00, 0x40, 0x06, 0x67, 0xbe, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x0a, 0xc5, 0x80, 0x10, 0x0f, 0xd5, 0x75, 0xa7, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xbc, 0xb3, 0xa1, 0x66, 0x2b},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0xea, 0x8c, 0x40, 0x00, 0x40, 0x06, 0xb3, 0x0e, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x0c, 0x48, 0x80, 0x10, 0x0f, 0xf3, 0x74, 
0x06, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xbc, 0xb3, 0xa1, 0x66, 0x2b},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x00, 0x34, 0x38, 0x2d, 0x40, 0x00, 0xef, 0x06, 0xb6, 0x4d, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x0c, 0x48, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0x64, 0x9b, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x2b, 0x1b, 0x4f, 0x37, 0xaa},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x00, 0x34, 0x18, 0x50, 0x40, 0x00, 0xef, 0x06, 0xd6, 0x2a, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x0c, 0x48, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0x64, 0x9b, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x2b, 0x1b, 0x4f, 0x37, 0xaa},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x00, 0x28, 0x1b, 0xf7, 0x40, 0x00, 0x30, 0x06, 0x91, 0x90, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x0c, 0x48, 0x4d, 0xa6, 0xae, 0x36, 0x50, 0x11, 0x00, 0x7a, 0x29, 0x6e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0xfd, 0x9b, 0x40, 0x00, 0x40, 0x06, 0x9f, 0xff, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x0c, 0x49, 0x80, 0x10, 0x10, 0x00, 0x60, 0x9c, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x4b, 0x18, 0xb3, 0xa1, 0x66, 0x2b},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x28, 0xab, 0x91, 0x00, 0x00, 0x40, 0x06, 0x32, 0x16, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x35, 0xfe, 0xab, 0x0c, 0x49, 0x50, 0x10, 0x10, 0x00, 0x19, 0xe9, 0x00, 
0x00},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0xaf, 0x96, 0x40, 0x00, 0x40, 0x06, 0xee, 0x04, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x0c, 0x49, 0x80, 0x11, 0x10, 0x00, 0x9a, 0xcb, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x51, 0x10, 0xe6, 0xb3, 0xa1, 0x66, 0x2b},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x00, 0x28, 0x00, 0x00, 0x40, 0x00, 0x30, 0x06, 0xad, 0x87, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x0c, 0x49, 0x13, 0x1c, 0x26, 0x5c, 0x50, 0x04, 0x00, 0x00, 0xec, 0x58, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t}\n\tPacketFlowTemplate2 = [][]byte{\n\t\t{0x00, 0x26, 0x62, 0x2f, 0x47, 0x87, 0x00, 0x1d, 0x60, 0xb3, 0x01, 0x84, 0x08, 0x00, 0x45, 0x00,\n\t\t\t0x00, 0x3c, 0x47, 0x65, 0x40, 0x00, 0x40, 0x06, 0xad, 0x64, 0xc0, 0xa8, 0x01, 0x02, 0xae, 0x8f,\n\t\t\t0xd5, 0xb8, 0xd6, 0x39, 0x00, 0x50, 0xf6, 0x1c, 0x6c, 0xbe, 0x00, 0x00, 0x00, 0x00, 0xa0, 0x02,\n\t\t\t0x16, 0xd0, 0x85, 0xf0, 0x00, 0x00, 0x02, 0x04, 0x05, 0xb4, 0x04, 0x02, 0x08, 0x0a, 0x00, 0x0d,\n\t\t\t0x2b, 0xdb, 0x00, 0x00, 0x00, 0x00, 0x01, 0x03, 0x03, 0x07},\n\t\t{0x00, 0x1d, 0x60, 0xb3, 0x01, 0x84, 0x00, 0x26, 0x62, 0x2f, 0x47, 0x87, 0x08, 0x00, 0x45, 0x00,\n\t\t\t0x00, 0x3c, 0x00, 0x00, 0x40, 0x00, 0x34, 0x06, 0x00, 0xca, 0xae, 0x8f, 0xd5, 0xb8, 0xc0, 0xa8,\n\t\t\t0x01, 0x02, 0x00, 0x50, 0xd6, 0x39, 0xfa, 0x58, 0x9c, 0x88, 0xf6, 0x1c, 0x6c, 0xbf, 0xa0, 0x12,\n\t\t\t0x16, 0xa0, 0x4f, 0xf1, 0x00, 0x00, 0x02, 0x04, 0x05, 0xb4, 0x04, 0x02, 0x08, 0x0a, 0x12, 0xcc,\n\t\t\t0x8c, 0x71, 0x00, 0x0d, 0x2b, 0xdb, 0x01, 0x03, 0x03, 0x06},\n\t\t{0x00, 0x26, 0x62, 0x2f, 0x47, 0x87, 0x00, 0x1d, 0x60, 0xb3, 0x01, 0x84, 0x08, 0x00, 0x45, 0x00,\n\t\t\t0x00, 0x34, 0x47, 0x66, 0x40, 0x00, 0x40, 0x06, 0xad, 0x6b, 0xc0, 0xa8, 0x01, 0x02, 0xae, 0x8f,\n\t\t\t0xd5, 0xb8, 
0xd6, 0x39, 0x00, 0x50, 0xf6, 0x1c, 0x6c, 0xbf, 0xfa, 0x58, 0x9c, 0x89, 0x80, 0x10,\n\t\t\t0x00, 0x2e, 0x95, 0x29, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x00, 0x0d, 0x2b, 0xe0, 0x12, 0xcc,\n\t\t\t0x8c, 0x71},\n\t\t{0x00, 0x1d, 0x60, 0xb3, 0x01, 0x84, 0x00, 0x26, 0x62, 0x2f, 0x47, 0x87, 0x08, 0x00, 0x45, 0x00,\n\t\t\t0x00, 0x34, 0x10, 0xf4, 0x40, 0x00, 0x34, 0x06, 0xef, 0xdd, 0xae, 0x8f, 0xd5, 0xb8, 0xc0, 0xa8,\n\t\t\t0x01, 0x02, 0x00, 0x50, 0xd6, 0x39, 0xfa, 0x58, 0x9c, 0x89, 0xf6, 0x1c, 0x6f, 0x94, 0x80, 0x10,\n\t\t\t0x00, 0x72, 0x92, 0x04, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x12, 0xcc, 0x8c, 0x7d, 0x00, 0x0d,\n\t\t\t0x2b, 0xe0},\n\t\t{0x00, 0x26, 0x62, 0x2f, 0x47, 0x87, 0x00, 0x1d, 0x60, 0xb3, 0x01, 0x84, 0x08, 0x00, 0x45, 0x00,\n\t\t\t0x00, 0x34, 0x47, 0x68, 0x40, 0x00, 0x40, 0x06, 0xad, 0x69, 0xc0, 0xa8, 0x01, 0x02, 0xae, 0x8f,\n\t\t\t0xd5, 0xb8, 0xd6, 0x39, 0x00, 0x50, 0xf6, 0x1c, 0x6f, 0x94, 0xfa, 0x58, 0xa2, 0x31, 0x80, 0x10,\n\t\t\t0x00, 0x45, 0x8c, 0x84, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x00, 0x0d, 0x2b, 0xe5, 0x12, 0xcc,\n\t\t\t0x8c, 0x7d},\n\t\t{0x00, 0x26, 0x62, 0x2f, 0x47, 0x87, 0x00, 0x1d, 0x60, 0xb3, 0x01, 0x84, 0x08, 0x00, 0x45, 0x00,\n\t\t\t0x00, 0x34, 0x47, 0x78, 0x40, 0x00, 0x40, 0x06, 0xad, 0x59, 0xc0, 0xa8, 0x01, 0x02, 0xae, 0x8f,\n\t\t\t0xd5, 0xb8, 0xd6, 0x39, 0x00, 0x50, 0xf6, 0x1c, 0x6f, 0x94, 0xfa, 0x58, 0xf6, 0x2f, 0x80, 0x11,\n\t\t\t0x01, 0x98, 0x34, 0xfd, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x00, 0x0d, 0x2e, 0x00, 0x12, 0xcc,\n\t\t\t0x8c, 0x97},\n\t}\n\tPacketFlowTemplate3 = [][]byte{{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x40, 0xf4, 0x1f, 0x40, 0x00, 0x40, 0x06, 0xa9, 0x6f, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xac, 0x48, 0x00, 0x00, 0x00, 0x00, 0xb0, 0x02, 0xff, 0xff, 0x6b, 0x6c, 0x00, 0x00, 0x02, 0x04, 0x05, 0xb4, 0x01, 0x03, 0x03, 0x05, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0x38, 0x00, 0x00, 0x00, 0x00, 0x04, 0x02, 
0x00, 0x00},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x00, 0x3c, 0xf4, 0x1f, 0x40, 0x00, 0x70, 0x06, 0x79, 0x53, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xaa, 0xf0, 0x0c, 0x4d, 0xa6, 0xac, 0x49, 0xa0, 0x12, 0x38, 0x90, 0x3b, 0xba, 0x00, 0x00, 0x02, 0x04, 0x05, 0x64, 0x01, 0x03, 0x03, 0x00, 0x04, 0x02, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x11, 0x1b, 0x4f, 0x37, 0x38},\n\t\t{0x00, 0x26, 0x62, 0x2f, 0x47, 0x87, 0x00, 0x1d, 0x60, 0xb3, 0x01, 0x84, 0x08, 0x00, 0x45, 0x00,\n\t\t\t0x00, 0x3c, 0x47, 0x65, 0x40, 0x00, 0x40, 0x06, 0xad, 0x64, 0xc0, 0xa8, 0x01, 0x02, 0xae, 0x8f,\n\t\t\t0xd5, 0xb8, 0xd6, 0x39, 0x00, 0x50, 0xf6, 0x1c, 0x6c, 0xbe, 0x00, 0x00, 0x00, 0x00, 0xa0, 0x02,\n\t\t\t0x16, 0xd0, 0x85, 0xf0, 0x00, 0x00, 0x02, 0x04, 0x05, 0xb4, 0x04, 0x02, 0x08, 0x0a, 0x00, 0x0d,\n\t\t\t0x2b, 0xdb, 0x00, 0x00, 0x00, 0x00, 0x01, 0x03, 0x03, 0x07},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0x7a, 0x12, 0x40, 0x00, 0x40, 0x06, 0x23, 0x89, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xac, 0x49, 0xfe, 0xaa, 0xf0, 0x0d, 0x80, 0x10, 0x10, 0x08, 0x92, 0x82, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0x6d, 0xb3, 0xa1, 0x66, 0x11},\n\t\t{0x00, 0x1d, 0x60, 0xb3, 0x01, 0x84, 0x00, 0x26, 0x62, 0x2f, 0x47, 0x87, 0x08, 0x00, 0x45, 0x00,\n\t\t\t0x00, 0x3c, 0x00, 0x00, 0x40, 0x00, 0x34, 0x06, 0x00, 0xca, 0xae, 0x8f, 0xd5, 0xb8, 0xc0, 0xa8,\n\t\t\t0x01, 0x02, 0x00, 0x50, 0xd6, 0x39, 0xfa, 0x58, 0x9c, 0x88, 0xf6, 0x1c, 0x6c, 0xbf, 0xa0, 0x12,\n\t\t\t0x16, 0xa0, 0x4f, 0xf1, 0x00, 0x00, 0x02, 0x04, 0x05, 0xb4, 0x04, 0x02, 0x08, 0x0a, 0x12, 0xcc,\n\t\t\t0x8c, 0x71, 0x00, 0x0d, 0x2b, 0xdb, 0x01, 0x03, 0x03, 0x06},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x02, 0x21, 0xeb, 0x50, 0x40, 0x00, 0x40, 0x06, 0xb0, 0x5d, 0x0a, 0x01, 
0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xac, 0x49, 0xfe, 0xaa, 0xf0, 0x0d, 0x80, 0x18, 0x10, 0x08, 0xe1, 0xdc, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0x6d, 0xb3, 0xa1, 0x66, 0x11, 0x47, 0x45, 0x54, 0x20, 0x2f, 0x20, 0x48, 0x54, 0x54, 0x50, 0x2f, 0x31, 0x2e, 0x31, 0x0d, 0x0a, 0x48, 0x6f, 0x73, 0x74, 0x3a, 0x20, 0x31, 0x36, 0x34, 0x2e, 0x36, 0x37, 0x2e, 0x32, 0x32, 0x38, 0x2e, 0x31, 0x35, 0x32, 0x0d, 0x0a, 0x43, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x6b, 0x65, 0x65, 0x70, 0x2d, 0x61, 0x6c, 0x69, 0x76, 0x65, 0x0d, 0x0a, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x2d, 0x49, 0x6e, 0x73, 0x65, 0x63, 0x75, 0x72, 0x65, 0x2d, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x3a, 0x20, 0x31, 0x0d, 0x0a, 0x55, 0x73, 0x65, 0x72, 0x2d, 0x41, 0x67, 0x65, 0x6e, 0x74, 0x3a, 0x20, 0x4d, 0x6f, 0x7a, 0x69, 0x6c, 0x6c, 0x61, 0x2f, 0x35, 0x2e, 0x30, 0x20, 0x28, 0x4d, 0x61, 0x63, 0x69, 0x6e, 0x74, 0x6f, 0x73, 0x68, 0x3b, 0x20, 0x49, 0x6e, 0x74, 0x65, 0x6c, 0x20, 0x4d, 0x61, 0x63, 0x20, 0x4f, 0x53, 0x20, 0x58, 0x20, 0x31, 0x30, 0x5f, 0x31, 0x31, 0x5f, 0x36, 0x29, 0x20, 0x41, 0x70, 0x70, 0x6c, 0x65, 0x57, 0x65, 0x62, 0x4b, 0x69, 0x74, 0x2f, 0x35, 0x33, 0x37, 0x2e, 0x33, 0x36, 0x20, 0x28, 0x4b, 0x48, 0x54, 0x4d, 0x4c, 0x2c, 0x20, 0x6c, 0x69, 0x6b, 0x65, 0x20, 0x47, 0x65, 0x63, 0x6b, 0x6f, 0x29, 0x20, 0x43, 0x68, 0x72, 0x6f, 0x6d, 0x65, 0x2f, 0x35, 0x33, 0x2e, 0x30, 0x2e, 0x32, 0x37, 0x38, 0x35, 0x2e, 0x31, 0x34, 0x33, 0x20, 0x53, 0x61, 0x66, 0x61, 0x72, 0x69, 0x2f, 0x35, 0x33, 0x37, 0x2e, 0x33, 0x36, 0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x3a, 0x20, 0x74, 0x65, 0x78, 0x74, 0x2f, 0x68, 0x74, 0x6d, 0x6c, 0x2c, 0x61, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x78, 0x68, 0x74, 0x6d, 0x6c, 0x2b, 0x78, 0x6d, 0x6c, 0x2c, 0x61, 0x70, 0x70, 0x6c, 0x69, 0x63, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x78, 0x6d, 0x6c, 0x3b, 0x71, 0x3d, 0x30, 0x2e, 0x39, 0x2c, 0x69, 0x6d, 0x61, 0x67, 0x65, 0x2f, 
0x77, 0x65, 0x62, 0x70, 0x2c, 0x2a, 0x2f, 0x2a, 0x3b, 0x71, 0x3d, 0x30, 0x2e, 0x38, 0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x45, 0x6e, 0x63, 0x6f, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x67, 0x7a, 0x69, 0x70, 0x2c, 0x20, 0x64, 0x65, 0x66, 0x6c, 0x61, 0x74, 0x65, 0x2c, 0x20, 0x73, 0x64, 0x63, 0x68, 0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x4c, 0x61, 0x6e, 0x67, 0x75, 0x61, 0x67, 0x65, 0x3a, 0x20, 0x65, 0x6e, 0x2d, 0x55, 0x53, 0x2c, 0x65, 0x6e, 0x3b, 0x71, 0x3d, 0x30, 0x2e, 0x38, 0x0d, 0x0a, 0x43, 0x6f, 0x6f, 0x6b, 0x69, 0x65, 0x3a, 0x20, 0x50, 0x48, 0x50, 0x53, 0x45, 0x53, 0x53, 0x49, 0x44, 0x3d, 0x64, 0x38, 0x69, 0x61, 0x63, 0x62, 0x69, 0x65, 0x34, 0x38, 0x63, 0x74, 0x73, 0x32, 0x76, 0x70, 0x30, 0x36, 0x39, 0x64, 0x61, 0x62, 0x63, 0x74, 0x6d, 0x34, 0x3b, 0x20, 0x5f, 0x67, 0x61, 0x74, 0x3d, 0x31, 0x3b, 0x20, 0x5f, 0x67, 0x61, 0x74, 0x5f, 0x67, 0x6c, 0x6f, 0x62, 0x61, 0x6c, 0x54, 0x72, 0x61, 0x63, 0x6b, 0x65, 0x72, 0x3d, 0x31, 0x3b, 0x20, 0x5f, 0x67, 0x61, 0x3d, 0x47, 0x41, 0x31, 0x2e, 0x31, 0x2e, 0x31, 0x35, 0x35, 0x38, 0x30, 0x37, 0x37, 0x34, 0x31, 0x36, 0x2e, 0x31, 0x34, 0x37, 0x36, 0x33, 0x38, 0x34, 0x34, 0x30, 0x35, 0x0d, 0x0a, 0x0d, 0x0a},\n\t\t{0x00, 0x26, 0x62, 0x2f, 0x47, 0x87, 0x00, 0x1d, 0x60, 0xb3, 0x01, 0x84, 0x08, 0x00, 0x45, 0x00,\n\t\t\t0x00, 0x34, 0x47, 0x66, 0x40, 0x00, 0x40, 0x06, 0xad, 0x6b, 0xc0, 0xa8, 0x01, 0x02, 0xae, 0x8f,\n\t\t\t0xd5, 0xb8, 0xd6, 0x39, 0x00, 0x50, 0xf6, 0x1c, 0x6c, 0xbf, 0xfa, 0x58, 0x9c, 0x89, 0x80, 0x10,\n\t\t\t0x00, 0x2e, 0x95, 0x29, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x00, 0x0d, 0x2b, 0xe0, 0x12, 0xcc,\n\t\t\t0x8c, 0x71},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x00, 0x34, 0x3c, 0x55, 0x00, 0x00, 0xef, 0x06, 0xf2, 0x25, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xaa, 0xf0, 0x0d, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x10, 0x1f, 0x68, 0x81, 0x26, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x20, 
0x1b, 0x4f, 0x37, 0x6d},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x05, 0x8c, 0x3c, 0x56, 0x00, 0x00, 0xef, 0x06, 0xec, 0xcc, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xaa, 0xf0, 0x0d, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0x34, 0xda, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x25, 0x1b, 0x4f, 0x37, 0x6d, 0x48, 0x54, 0x54, 0x50, 0x2f, 0x31, 0x2e, 0x31, 0x20, 0x32, 0x30, 0x30, 0x20, 0x4f, 0x4b, 0x0d, 0x0a, 0x43, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x2d, 0x54, 0x79, 0x70, 0x65, 0x3a, 0x20, 0x74, 0x65, 0x78, 0x74, 0x2f, 0x68, 0x74, 0x6d, 0x6c, 0x3b, 0x20, 0x63, 0x68, 0x61, 0x72, 0x73, 0x65, 0x74, 0x3d, 0x55, 0x54, 0x46, 0x2d, 0x38, 0x0d, 0x0a, 0x56, 0x61, 0x72, 0x79, 0x3a, 0x20, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x45, 0x6e, 0x63, 0x6f, 0x64, 0x69, 0x6e, 0x67, 0x0d, 0x0a, 0x45, 0x78, 0x70, 0x69, 0x72, 0x65, 0x73, 0x3a, 0x20, 0x54, 0x68, 0x75, 0x2c, 0x20, 0x31, 0x39, 0x20, 0x4e, 0x6f, 0x76, 0x20, 0x31, 0x39, 0x38, 0x31, 0x20, 0x30, 0x38, 0x3a, 0x35, 0x32, 0x3a, 0x30, 0x30, 0x20, 0x47, 0x4d, 0x54, 0x0d, 0x0a, 0x43, 0x61, 0x63, 0x68, 0x65, 0x2d, 0x43, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x3a, 0x20, 0x6e, 0x6f, 0x2d, 0x73, 0x74, 0x6f, 0x72, 0x65, 0x2c, 0x20, 0x6e, 0x6f, 0x2d, 0x63, 0x61, 0x63, 0x68, 0x65, 0x2c, 0x20, 0x6d, 0x75, 0x73, 0x74, 0x2d, 0x72, 0x65, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x2c, 0x20, 0x70, 0x6f, 0x73, 0x74, 0x2d, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x3d, 0x30, 0x2c, 0x20, 0x70, 0x72, 0x65, 0x2d, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x3d, 0x30, 0x2c, 0x20, 0x6d, 0x61, 0x78, 0x2d, 0x61, 0x67, 0x65, 0x3d, 0x38, 0x36, 0x34, 0x30, 0x30, 0x2c, 0x20, 0x70, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x2c, 0x20, 0x6d, 0x75, 0x73, 0x74, 0x2d, 0x72, 0x65, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x2c, 0x20, 0x70, 0x72, 0x6f, 0x78, 0x79, 0x2d, 0x72, 0x65, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x0d, 0x0a, 0x50, 0x72, 0x61, 0x67, 0x6d, 
0x61, 0x3a, 0x20, 0x6e, 0x6f, 0x2d, 0x63, 0x61, 0x63, 0x68, 0x65, 0x0d, 0x0a, 0x43, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x2d, 0x45, 0x6e, 0x63, 0x6f, 0x64, 0x69, 0x6e, 0x67, 0x3a, 0x20, 0x67, 0x7a, 0x69, 0x70, 0x0d, 0x0a, 0x43, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x2d, 0x4c, 0x65, 0x6e, 0x67, 0x74, 0x68, 0x3a, 0x20, 0x36, 0x37, 0x39, 0x38, 0x0d, 0x0a, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x2d, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x73, 0x3a, 0x20, 0x62, 0x79, 0x74, 0x65, 0x73, 0x0d, 0x0a, 0x44, 0x61, 0x74, 0x65, 0x3a, 0x20, 0x54, 0x68, 0x75, 0x2c, 0x20, 0x31, 0x33, 0x20, 0x4f, 0x63, 0x74, 0x20, 0x32, 0x30, 0x31, 0x36, 0x20, 0x31, 0x38, 0x3a, 0x34, 0x38, 0x3a, 0x31, 0x30, 0x20, 0x47, 0x4d, 0x54, 0x0d, 0x0a, 0x41, 0x67, 0x65, 0x3a, 0x20, 0x30, 0x0d, 0x0a, 0x43, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x3a, 0x20, 0x6b, 0x65, 0x65, 0x70, 0x2d, 0x61, 0x6c, 0x69, 0x76, 0x65, 0x0d, 0x0a, 0x58, 0x2d, 0x43, 0x61, 0x63, 0x68, 0x65, 0x3a, 0x20, 0x4d, 0x49, 0x53, 0x53, 0x0d, 0x0a, 0x0d, 0x0a, 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0xd5, 0x3d, 0xdb, 0x72, 0x1b, 0xbb, 0x91, 0xcf, 0x56, 0x55, 0xfe, 0x01, 0x61, 0xb2, 0x76, 0xb2, 0xd2, 0x90, 0xba, 0xda, 0x92, 0x2d, 0xf1, 0x14, 0x45, 0xd1, 0x96, 0x4e, 0x74, 0x3b, 0x22, 0x65, 0xc7, 0x49, 0xa5, 0x5c, 0xe0, 0x0c, 0xc8, 0x81, 0x35, 0xb7, 0x33, 0x98, 0x11, 0x45, 0x3f, 0xed, 0x6f, 0xec, 0xef, 0xed, 0x97, 0x6c, 0x37, 0x6e, 0x73, 0x21, 0x29, 0x4a, 0x96, 0x62, 0xef, 0xba, 0x2a, 0x47, 0x43, 0x0c, 0xd0, 0x68, 0xf4, 0xbd, 0x81, 0xc6, 0x64, 0xff, 0x8f, 0x5e, 0xec, 0x66, 0xd3, 0x84, 0x11, 0x3f, 0x0b, 0x83, 0xf6, 0xca, 0xfe, 0x1f, 0x1d, 0xe7, 0x9f, 0x7c, 0x44, 0x4e, 0x7a, 0xe4, 0xcd, 0xbf, 0xda, 0x64, 0x1f, 0x5b, 0x89, 0x1b, 0x50, 0x21, 0x0e, 0x1a, 0x41, 0xe6, 0x70, 0xb6, 0x47, 0xe4, 0x9f, 0xdd, 0x06, 0x09, 0x68, 0x34, 0x3e, 0x68, 0xb0, 0xa8, 0x01, 0xdd, 0xfe, 0xf8, 0x4f, 0x16, 0x79, 0x7c, 0xf4, 0x2f, 0xc7, 0xa9, 0x80, 0xd8, 0x9d, 0x0f, 0x62, 0xf9, 0xd8, 0x71, 0xa6, 0x87, 0x63, 0x43, 0x0d, 0x46, 0x14, 0x3b, 0x5f, 0x05, 
0xf1, 0x98, 0xb8, 0xc9, 0xe2, 0xa4, 0x0a, 0xc9, 0x71, 0x2a, 0xd0, 0x56, 0xf6, 0x7d, 0x46, 0xbd, 0x52, 0x97, 0x95, 0x17, 0xfb, 0x21, 0xcb, 0x28, 0x71, 0x7d, 0x9a, 0x0a, 0x96, 0x1d, 0x34, 0xf2, 0x6c, 0xe4, 0xec, 0x16, 0xed, 0x7e, 0x96, 0x25, 0x0e, 0xfb, 0x3d, 0xe7, 0xb7, 0x07, 0x8d, 0xbf, 0x3b, 0xd7, 0x1d, 0xa7, 0x1b, 0x87, 0x09, 0xcd, 0xf8, 0x30, 0x60, 0x0d, 0xe2, 0xc6, 0x51, 0xc6, 0x22, 0x18, 0x74, 0xd2, 0x3b, 0x60, 0xde, 0x98, 0xad, 0xb9, 0x7e, 0x1a, 0x87, 0xec, 0x60, 0xa3, 0x18, 0x1f, 0x51, 0xf8, 0xdd, 0xb8, 0xe5, 0x6c, 0x92, 0xc4, 0x69, 0x56, 0x1a, 0x32, 0xe1, 0x5e, 0xe6, 0x1f, 0x78, 0xec, 0x96, 0xbb, 0xcc, 0x91, 0x3f, 0xd6, 0x48, 0x2e, 0x58, 0xea, 0x08, 0x97, 0x06, 0x14, 0xc0, 0x1f, 0x4c, 0x99, 0x58, 0x23, 0x3c, 0xe2, 0x19, 0xa7, 0x81, 0x6c, 0x05, 0xc0, 0xcd, 0xf5, 0x35, 0x12, 0x42, 0x5b, 0x98, 0x87, 0x95, 0x26, 0x7a, 0x57, 0x6a, 0xda, 0x6c, 0xae, 0x03, 0x02, 0x80, 0x41, 0xc6, 0xb3, 0x80, 0xb5, 0xaf, 0xbb, 0xa7, 0x9d, 0xfd, 0x96, 0x7a, 0xae, 0xa2, 0x05, 0x14, 0x73, 0x53, 0x9e, 0x64, 0x3c, 0x8e, 0x4a, 0x98, 0x61, 0x7f, 0x42, 0xbd, 0x5b, 0x1a, 0xb9, 0x4c, 0x90, 0x9b, 0x28, 0x9e, 0x04, 0x72, 0x71, 0xd0, 0xe6, 0xa5, 0x4c, 0x08, 0x68, 0x4c, 0xf0, 0x2f, 0x8f, 0xc6, 0x44, 0xc4, 0x2e, 0x07, 0x78, 0x01, 0x89, 0x18, 0xf3, 0x04, 0xa1, 0x91, 0x47, 0xdc, 0x94, 0xd1, 0x0c, 0xfa, 0x50, 0x92, 0x47, 0xfc, 0x96, 0xa5, 0x82, 0x67, 0x53, 0xc2, 0xa2, 0x94, 0xbb, 0x3e, 0xf3, 0xc8, 0x70, 0x4a, 0x3c, 0xd9, 0xca, 0x48, 0x02, 0xff, 0x4d, 0x98, 0x9b, 0xc1, 0x4f, 0x41, 0x26, 0x3e, 0x4b, 0x19, 0xa1, 0x41, 0x00, 0x2b, 0x86, 0x0e, 0xdc, 0xcb, 0x69, 0x20, 0x88, 0x4b, 0x23, 0x32, 0x0a, 0xe2, 0x3c, 0xe5, 0xc2, 0x6f, 0xd6, 0x89, 0x7a, 0xc3, 0xa6, 0x93, 0x38, 0xf5, 0x44, 0x0d, 0xf5, 0x35, 0x72, 0x5d, 0xcc, 0x1b, 0x8f, 0x48, 0x97, 0x06, 0x7c, 0x14, 0xa7, 0x11, 0xa7, 0xe4, 0x34, 0x16, 0xa4, 0x13, 0x8d, 0x59, 0x80, 0x94, 0x3d, 0x4c, 0x73, 0x1e, 0xc1, 0x5f, 0x64, 0x6e, 0x36, 0x5d, 0xd3, 0x68, 0xc9, 0x47, 0x1e, 0xb9, 0x41, 0x2e, 0x80, 0x2a, 0xb0, 0xe6, 0x34, 0x13, 0xf2, 0xbf, 0x6b, 0x30, 0x4b, 0x10, 
0x30, 0xa4, 0x03, 0x0c, 0x03, 0x6a, 0xe6, 0x99, 0xee, 0xe0, 0xfa, 0x3c, 0x83, 0x65, 0xe4, 0x29, 0xbc, 0x62, 0xd1, 0x98, 0x03, 0x29, 0x52, 0xa0, 0xcd, 0x1a, 0xf1, 0x41, 0x89, 0xd2, 0x20, 0x8e, 0x93, 0x35, 0x82, 0x62, 0x11, 0xd0, 0x35, 0x32, 0x4c, 0x29, 0x87, 0x41, 0x89, 0xcf, 0x41, 0x04, 0x33, 0x90, 0x96, 0x04, 0xa6, 0xeb, 0x0f, 0x7a, 0x67, 0x6b, 0x48, 0x3b, 0x98, 0x1f, 0x41, 0x06, 0x74, 0x82, 0x2c, 0x1d, 0xd3, 0x6f, 0x00, 0x0b, 0x5e, 0x04, 0x79, 0x18, 0x71, 0x9c, 0x3f, 0x0c, 0x81, 0xa4, 0x88, 0x60, 0x0c, 0x3c, 0x0b, 0xb9, 0xc8, 0x8a, 0x27, 0x61, 0x1f, 0xc3, 0x46, 0xfb, 0x45, 0x95, 0x50, 0xe3, 0x38, 0x1e, 0x07, 0xcc, 0x81, 0xb5, 0x31, 0x07, 0xd6, 0xc8, 0x47, 0xdc, 0xa5, 0x35, 0x9e, 0xff, 0xd6, 0xdf, 0x19, 0xd3, 0xe9, 0xe5, 0x78, 0x6b, 0xf7, 0x5b, 0x77, 0xfb, 0xf5, 0x67, 0xd1, 0xb9, 0xdb, 0xa6, 0x7b, 0xee, 0xd6, 0xed, 0xf6, 0xef, 0x83, 0xc3, 0x9b, 0xc8, 0xb9, 0x7a, 0xbd, 0x37, 0xde, 0x5b, 0x4f, 0x7e, 0x1b, 0x27, 0x87, 0xeb, 0x0d, 0xd2, 0xb2, 0x8c, 0x48, 0x60, 0x01, 0x2c, 0xcd, 0xa6, 0x07, 0x8d, 0x6c, 0xc2, 0xb3, 0x8c, 0xa5, 0x6f, 0xa9, 0xeb, 0xc6, 0x79, 0x94, 0x7d, 0xe1, 0x5e, 0x09, 0xfa, 0xc6, 0xee, 0xde, 0xc6, 0xee, 0xf6, 0xee, 0x96, 0x1a, 0x4a, 0xe0, 0x1f, 0x4a, 0x67, 0xc0, 0xa3, 0x1b, 0x92, 0xb2, 0xe0, 0xa0, 0x21, 0x7c, 0xd0, 0x0d, 0x37, 0xcf, 0x08, 0x77, 0x11, 0x2b, 0x3f, 0x65, 0xa3, 0x83, 0x46, 0x8b, 0x87, 0xe3, 0xd6, 0x88, 0xde, 0x62, 0x5b, 0x13, 0xfe, 0x23, 0xd9, 0x5f, 0x8c, 0xa1, 0x49, 0x02, 0x6b, 0xca, 0xe2, 0xdc, 0xf5, 0x9d, 0x99, 0x61, 0xf5, 0x97, 0xcd, 0x24, 0x1a, 0xe3, 0xf8, 0x95, 0x17, 0x2f, 0xd0, 0x24, 0x90, 0xf2, 0xdc, 0xd9, 0x14, 0xc4, 0xc1, 0x67, 0x2c, 0xb3, 0x10, 0x5c, 0x21, 0x5a, 0x2c, 0x64, 0xe9, 0x98, 0x45, 0xee, 0xd4, 0xc9, 0x58, 0x98, 0x04, 0x20, 0xd1, 0x4d, 0x68, 0x06, 0x93, 0x82, 0x56, 0xe4, 0x85, 0x04, 0xb4, 0x14, 0x46, 0x08, 0xbc, 0x56, 0xa3, 0x56, 0x70, 0xcd, 0xcb, 0x07, 0x80, 0x56, 0x25, 0x71, 0x24, 0x40, 0x10, 0x1f, 0x35, 0x2c, 0x01, 0x71, 0xcb, 0xe4, 0x08, 0x12, 0x32, 0x8f, 0xd3, 0x83, 0x86, 0x6c, 0x51, 0x36, 0x00, 0x96, 0x7b, 0xda, 
0x1b, 0x90, 0xc1, 0x71, 0xef, 0xaa, 0x47, 0x0e, 0x7b, 0x64, 0xd0, 0xf9, 0x5b, 0x8f, 0x5c, 0x7c, 0xec, 0x5d, 0x91, 0x6e, 0xbf, 0x8f, 0x8b, 0x21, 0xfa, 0xdf, 0xf2, 0x79, 0x5c, 0x86, 0xfc, 0x8c, 0xd0, 0x1c, 0x65, 0xf4, 0x86, 0xc5, 0x20, 0x4b, 0x1a, 0x4f, 0x14, 0x38, 0x65, 0x4a, 0x88, 0x48, 0x5d, 0xe8, 0xfe, 0x15, 0x56, 0x1f, 0x83, 0x3c, 0x47, 0xfc, 0x5b, 0xda, 0xfc, 0x0a, 0x5d, 0xf6, 0x5b, 0xea, 0xbd, 0x5e, 0x53, 0xb9, 0x33, 0x9a, 0x58, 0xf1, 0xb6, 0xd5, 0xa2, 0x5f, 0xe9, 0x5d, 0x53, 0xc9, 0x2a, 0x4d, 0xb8, 0x68, 0x82, 0xbc, 0xcb, 0xb6, 0x56, 0xc0, 0x87, 0xa2, 0xf5, 0xf5, 0xf7, 0x9c, 0xa5, 0xd3, 0xd6, 0x46, 0xf3, 0x4d, 0x73, 0x43, 0xff},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x05, 0x8c, 0x3c, 0x57, 0x00, 0x00, 0xef, 0x06, 0xec, 0xcb, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xaa, 0xf5, 0x65, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0x15, 0xc0, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x25, 0x1b, 0x4f, 0x37, 0x6d, 0x68, 0x82, 0x25, 0x5c, 0x06, 0x1c, 0x31, 0xa1, 0x11, 0x0d, 0xa6, 0x19, 0x77, 0x45, 0xad, 0xf3, 0x0b, 0xe3, 0x5b, 0x82, 0x8c, 0xa1, 0x73, 0x79, 0x0d, 0xce, 0xa5, 0x3e, 0x96, 0xb3, 0xd7, 0xf8, 0x3f, 0x27, 0x4f, 0xc6, 0x29, 0xf5, 0xd8, 0xec, 0x94, 0x15, 0xf7, 0x22, 0x45, 0x5b, 0xe2, 0x00, 0x90, 0x4f, 0xce, 0x3a, 0x1f, 0x7a, 0xa4, 0xdb, 0xe9, 0x1e, 0xf7, 0x2a, 0x74, 0x6e, 0xa1, 0x0b, 0x02, 0xde, 0xc0, 0xbf, 0xfd, 0x61, 0xec, 0x4d, 0x8d, 0x17, 0xf3, 0xc1, 0x46, 0x24, 0x74, 0xcc, 0x90, 0x6f, 0x38, 0x7e, 0xa5, 0x9f, 0x81, 0xdd, 0x41, 0x13, 0x76, 0x14, 0xe7, 0xe0, 0x10, 0xba, 0x01, 0x77, 0x6f, 0xc8, 0x7b, 0x30, 0x28, 0x5e, 0xc0, 0xc7, 0x7e, 0x46, 0x06, 0x74, 0xfc, 0x96, 0x5c, 0x02, 0xb5, 0xc0, 0x98, 0x7a, 0x31, 0x89, 0xe2, 0x0c, 0xd8, 0x17, 0x02, 0x5b, 0x56, 0x3a, 0x68, 0x55, 0xd1, 0xfe, 0xa1, 0x01, 0x40, 0x08, 0x99, 0xcf, 0x05, 0xc9, 0xb0, 0x3f, 0x9a, 0xc8, 0x26, 0xf3, 0xf2, 0x95, 0xeb, 0xab, 0x53, 0xf5, 0x86, 0x91, 0x09, 0x1b, 0xe2, 0xc4, 0xda, 
0x08, 0x63, 0x0b, 0x74, 0x25, 0x30, 0x82, 0xdd, 0xa1, 0x81, 0x06, 0xb3, 0x9d, 0xc5, 0x64, 0x08, 0x16, 0x3b, 0xa0, 0x2e, 0xf3, 0xde, 0x4a, 0xa7, 0x08, 0x0c, 0x9b, 0x4c, 0x26, 0xcd, 0x1c, 0x90, 0x47, 0x70, 0xad, 0x95, 0x81, 0x9e, 0x82, 0x84, 0xb9, 0xc8, 0x8a, 0xde, 0xf0, 0x94, 0x4d, 0x18, 0x8b, 0x24, 0x58, 0xb9, 0xde, 0xb6, 0x74, 0x13, 0xfb, 0x2d, 0xf5, 0x0c, 0x23, 0xd0, 0xc0, 0x82, 0xad, 0x0f, 0x62, 0x58, 0x07, 0x3c, 0x24, 0x31, 0xf8, 0x16, 0x58, 0x2f, 0x4e, 0x8a, 0x83, 0xc0, 0xc4, 0x44, 0xe8, 0x6b, 0xa0, 0x67, 0x73, 0xa5, 0x8b, 0xde, 0x05, 0x6c, 0x17, 0x39, 0x02, 0x95, 0x7c, 0x4b, 0xd6, 0x77, 0x5a, 0x1b, 0x5b, 0xad, 0xcd, 0xf5, 0x8d, 0xed, 0x15, 0x19, 0x2c, 0x68, 0xf6, 0x61, 0xf8, 0x02, 0x36, 0x89, 0xdd, 0x65, 0xad, 0xaf, 0xf4, 0x96, 0xaa, 0x56, 0xa0, 0xeb, 0x2d, 0x4d, 0x09, 0xbd, 0x63, 0x01, 0x39, 0x20, 0x67, 0x34, 0xf3, 0x9b, 0x29, 0x60, 0x12, 0x87, 0x7f, 0xf9, 0x2b, 0x59, 0x25, 0x8d, 0xc6, 0x3b, 0xf5, 0x1a, 0xde, 0xc9, 0x2e, 0xff, 0x49, 0x36, 0xd6, 0xcb, 0xff, 0xde, 0xad, 0x40, 0x5c, 0x94, 0x87, 0xa0, 0x03, 0xcd, 0x49, 0x0a, 0xa6, 0xf4, 0x2f, 0xaf, 0xf6, 0xf9, 0x28, 0x45, 0xfa, 0x5a, 0x29, 0x06, 0x9a, 0x6c, 0x6f, 0xed, 0x6d, 0xbd, 0x7e, 0xbd, 0xd3, 0x1c, 0x05, 0xa2, 0xe9, 0x49, 0xb6, 0xb9, 0xc8, 0xb6, 0x66, 0xc4, 0xb2, 0x16, 0xd5, 0x5c, 0xe1, 0xef, 0x70, 0x84, 0xee, 0xf9, 0x4e, 0xe2, 0x7a, 0x99, 0xf2, 0x90, 0xae, 0xbf, 0x03, 0xbb, 0x7c, 0x80, 0x0c, 0x62, 0xeb, 0xef, 0xc0, 0xb3, 0x1d, 0xbc, 0x02, 0xbc, 0x28, 0xfc, 0xef, 0xd5, 0x2f, 0x0d, 0xa2, 0xe2, 0x85, 0xc6, 0x06, 0x68, 0x24, 0x43, 0xee, 0xcb, 0x47, 0x39, 0xff, 0x10, 0xba, 0xb2, 0xf4, 0xa0, 0x01, 0x96, 0x59, 0x2a, 0x2e, 0xf8, 0x75, 0x2e, 0x80, 0xf8, 0xd3, 0xb7, 0x51, 0x1c, 0x31, 0x14, 0x55, 0x85, 0x67, 0xfb, 0xd5, 0x5f, 0xdf, 0xad, 0x14, 0xd2, 0xbf, 0x1f, 0xc5, 0xf6, 0xf1, 0xdf, 0xbd, 0x90, 0x8d, 0xa7, 0x2c, 0x80, 0xc8, 0xd0, 0xe5, 0xa0, 0x51, 0x92, 0xfc, 0x8e, 0xd1, 0xe8, 0xd2, 0xea, 0x60, 0x69, 0xa5, 0x15, 0xa1, 0x95, 0xef, 0x81, 0x98, 0x7d, 0x87, 0xf6, 0x10, 0x15, 0x28, 0x82, 0x1f, 0xa9, 0xd9, 0x27, 0x4f, 
0x34, 0xa7, 0xd4, 0x8f, 0x63, 0x69, 0x99, 0x12, 0x0e, 0x42, 0xf2, 0x0b, 0xf7, 0x0e, 0x36, 0xb7, 0x77, 0x77, 0xde, 0xac, 0xef, 0xbd, 0xcc, 0x0e, 0x36, 0x17, 0x2c, 0x92, 0x06, 0xf0, 0xf7, 0x33, 0x8e, 0x24, 0x83, 0x94, 0xba, 0x37, 0x28, 0xcc, 0x97, 0x38, 0x5c, 0xfa, 0xc3, 0xfb, 0x65, 0xb6, 0xc2, 0x13, 0xda, 0xa4, 0x5e, 0x06, 0x13, 0x2b, 0x04, 0xee, 0x5a, 0x38, 0xfd, 0xc6, 0xfa, 0xf6, 0xee, 0xee, 0x4e, 0xd9, 0xac, 0xe9, 0x7f, 0xfb, 0x10, 0xd0, 0x10, 0xe8, 0xd0, 0x00, 0xf3, 0xf5, 0x89, 0xa6, 0xa8, 0x42, 0xd8, 0x0b, 0x5a, 0xb1, 0x8b, 0x24, 0xd0, 0x25, 0x2a, 0xfd, 0xa7, 0x94, 0x26, 0x72, 0xc9, 0x76, 0x00, 0xda, 0x02, 0x67, 0x02, 0xcd, 0x68, 0x89, 0x94, 0x95, 0x24, 0xfd, 0x1b, 0x9e, 0x90, 0x73, 0x7a, 0xab, 0xfc, 0xdf, 0x7e, 0x6e, 0x63, 0x6f, 0x9f, 0x7b, 0xc0, 0x22, 0x1c, 0x26, 0xa0, 0x8b, 0x13, 0xd1, 0xdb, 0x86, 0xf6, 0xf1, 0xe0, 0x50, 0xda, 0xfb, 0x54, 0x3b, 0x90, 0x3f, 0xd1, 0xdc, 0xe3, 0xe0, 0x4f, 0x99, 0xea, 0x21, 0xc1, 0x81, 0x7e, 0x03, 0x44, 0x3e, 0x96, 0x0a, 0xbd, 0xdf, 0xa2, 0x80, 0x1d, 0x0c, 0x99, 0x37, 0x58, 0x30, 0x0c, 0xb3, 0x86, 0xf1, 0x5d, 0x31, 0xb2, 0x2f, 0x9b, 0xee, 0x1d, 0x85, 0xfe, 0xd7, 0xd1, 0x41, 0x48, 0x31, 0xf0, 0x0c, 0x5a, 0x49, 0x57, 0xb5, 0x96, 0x86, 0xef, 0xb7, 0xf2, 0x40, 0xbb, 0x04, 0x29, 0x38, 0x95, 0x05, 0xeb, 0xf6, 0x01, 0x64, 0x17, 0x87, 0x60, 0x23, 0x14, 0x0d, 0x0c, 0xb9, 0x20, 0xe7, 0x70, 0x86, 0x34, 0xc5, 0x65, 0xcf, 0x34, 0x16, 0xd3, 0x03, 0x0c, 0x1d, 0x7c, 0x7c, 0x02, 0x0b, 0x06, 0x86, 0x56, 0x47, 0x12, 0x2f, 0x4c, 0x33, 0x4e, 0x5a, 0x7e, 0x65, 0xfb, 0x77, 0x34, 0xe5, 0x0a, 0xf2, 0xcf, 0x99, 0x07, 0xa9, 0x0a, 0x51, 0x26, 0xea, 0x49, 0x64, 0x69, 0x0a, 0xfc, 0x06, 0x46, 0x61, 0xb7, 0x2a, 0xf5, 0xd5, 0xb4, 0x15, 0x62, 0xb5, 0x44, 0x96, 0x7b, 0x80, 0x28, 0x46, 0x13, 0xb1, 0x09, 0xcb, 0x1d, 0xd3, 0xd8, 0x20, 0xed, 0xcb, 0xa2, 0x99, 0xf4, 0x75, 0xb3, 0x25, 0xdf, 0x02, 0x50, 0x6e, 0x9e, 0xa6, 0xf0, 0x50, 0x06, 0xd3, 0x55, 0x4d, 0xcb, 0x40, 0x8c, 0xa8, 0x9b, 0x07, 0xd9, 0x14, 0x06, 0xbc, 0x57, 0x4f, 0x0b, 0xa7, 0xa2, 0xa3, 0x11, 0xf4, 0xea, 
0xe3, 0xdf, 0x05, 0x7d, 0x54, 0xfc, 0x0c, 0x9d, 0x3a, 0xf2, 0x61, 0x41, 0xaf, 0x84, 0x22, 0x5e, 0xc2, 0x01, 0x8f, 0xe0, 0x8c, 0x68, 0xc8, 0x03, 0xce, 0xe4, 0xb2, 0x55, 0x33, 0x79, 0x49, 0xc3, 0xe4, 0x1d, 0x79, 0xaf, 0x5f, 0x94, 0xe4, 0x06, 0xfe, 0xa1, 0x4f, 0x07, 0xe1, 0xd1, 0xba, 0x25, 0x5b, 0xca, 0x4c, 0x9d, 0xe1, 0x1f, 0xca, 0x89, 0xee, 0x6b, 0xff, 0x9a, 0xce, 0x65, 0x11, 0xdb, 0x17, 0x09, 0xe4, 0x3f, 0x52, 0x23, 0x31, 0x9e, 0x73, 0x82, 0x78, 0x0c, 0x51, 0x70, 0x61, 0x98, 0x64, 0xa8, 0x8b, 0x8d, 0x8e, 0x8a, 0x00, 0xc7, 0x7c, 0xa4, 0x8d, 0x8d, 0x4c, 0xe1, 0x74, 0xf7, 0x16, 0x42, 0xd1, 0x21, 0x05, 0x39, 0x86, 0xd0, 0x43, 0xcb, 0x97, 0x15, 0x21, 0x5f, 0xb6, 0x19, 0xe9, 0x19, 0xd2, 0x28, 0x62, 0x28, 0xcc, 0x36, 0x8e, 0x31, 0x83, 0xb4, 0xce, 0x10, 0x13, 0xcc, 0xd4, 0x00, 0x94, 0x45, 0xdd, 0xc6, 0x3a, 0x38, 0x5c, 0x22, 0x53, 0x89, 0x80, 0xfc, 0x0d, 0x39, 0x4e, 0x23, 0x68, 0x59, 0xd0, 0xd0, 0xa9, 0x2a, 0xd2, 0xd6, 0xdf, 0x28, 0x81, 0xa9, 0x80, 0xbb, 0xb2, 0x41, 0x32, 0x39, 0xcc, 0xb3, 0x0c, 0x1e, 0xab, 0xc0, 0x0d, 0x56, 0x61, 0x3c, 0xe4, 0x90, 0x04, 0x0c, 0x55, 0x9f, 0x79, 0xca, 0x21, 0x15, 0xc9, 0xcc, 0xae, 0x4d, 0x6c, 0x58, 0x44, 0x34, 0xbf, 0xe4, 0x59, 0xf8, 0x45, 0x40, 0xf2, 0xe9, 0xb2, 0x03, 0xd3, 0xf8, 0x1f, 0x9b, 0x87, 0x45, 0x90, 0x0e, 0x3f, 0x4c, 0xcc, 0xf6, 0x12, 0xfb, 0x62, 0xf0, 0x9d, 0x87, 0x07, 0x9d, 0x24, 0x11, 0xf0, 0x4a, 0x4d, 0x2c, 0x5f, 0xb8, 0x20, 0x3c, 0x94, 0x8f, 0x23, 0xe9, 0x12, 0xe1, 0x95, 0x42, 0x4d, 0xc2, 0x1a, 0xb1, 0x34, 0xa5, 0x01, 0x3c, 0x8e, 0x20, 0x45, 0x84, 0x3f, 0x80, 0x1b, 0x9b, 0xd0, 0x69, 0xa3, 0xb4, 0x84, 0x46, 0x1b, 0x21, 0x4a, 0x61, 0x30, 0xdc, 0x44, 0x1a, 0xad, 0x48, 0xc9, 0x2b, 0x11, 0xc7, 0xda, 0x3d, 0x35, 0x18, 0x56, 0x2a, 0x59, 0x02, 0xeb, 0x6e, 0xb4, 0xcf, 0x58, 0x94, 0xcf, 0x42, 0x28, 0x0f, 0x2f, 0x64, 0xf2, 0xc5, 0x1c, 0xc2, 0xd7, 0x58, 0x6d, 0xad, 0x88, 0xf2, 0x0d, 0xd2, 0x14, 0x93, 0xc3, 0xf8, 0x6e, 0xc6, 0x3c, 0x95, 0x0c, 0x37, 0x4a, 0x3d, 0xe4, 0xe1, 0x61, 0xa9, 0x1d, 0x7f, 0x36, 0x74, 0x92, 0x5a, 0x6e, 0x51, 0xac, 
0x52, 0x2d, 0x98, 0xd3, 0x64, 0x7e, 0x0c, 0x63, 0xc6, 0x98, 0x8e, 0x60, 0xe8, 0x11, 0x47, 0xb8, 0x48, 0x09, 0x90, 0x47, 0x49, 0xae, 0x5d, 0xe7, 0x2b, 0xf0, 0x44, 0x60, 0x52, 0x5e, 0x29, 0x78, 0xaf, 0x30, 0xdb, 0x7d, 0x45, 0x6e, 0x41, 0xf9, 0xe1, 0x07, 0xd2, 0xfd, 0x55, 0x6b, 0xc9, 0x08, 0x88, 0x6f, 0x40, 0x7c, 0x2b, 0x63, 0xbe, 0x74, 0x55, 0xce, 0xad, 0xf2, 0xe5, 0xa5, 0x10, 0xc0, 0x74, 0xde, 0x4d, 0x8b, 0xe4, 0xe9, 0x29, 0xa0, 0xe2, 0x3c, 0x83, 0x57, 0x16, 0xc2, 0x5d, 0x18, 0x7c, 0x89, 0xe2, 0x2f, 0x5e, 0xe6, 0x55, 0x46, 0x56, 0x48, 0xec, 0xc8, 0xb6, 0x86, 0x71, 0xcc, 0x26, 0xeb, 0x57, 0xcc, 0x29, 0x7a, 0x0d, 0x21, 0x02, 0x19, 0xa7, 0x90, 0xa4, 0x7b, 0x36, 0xb2, 0xd2, 0x0c, 0x34, 0xa9, 0x81, 0x61, 0xc9, 0xef, 0x8d, 0x52, 0x50, 0x02, 0x91, 0x08, 0xff, 0x06, 0xcf, 0x9b, 0x10, 0xa5, 0xe1, 0x7e, 0x53, 0xc0, 0xa2, 0x31, 0x86, 0x3a, 0x9b, 0x3b, 0x3b, 0x0d, 0x8d, 0x65, 0x43, 0xf1, 0x44, 0x09, 0x7e, 0x0d, 0xb7, 0x61, 0x16, 0x19, 0x68, 0x22, 0x1f, 0x86, 0x3c, 0xd3, 0x96, 0x4a, 0x85, 0x65, 0x2a, 0xef, 0x88, 0x35, 0x96, 0x68, 0xb4, 0x14, 0x10, 0x69, 0x21, 0x51, 0x26, 0x0a, 0x53, 0x59, 0x95, 0x4d, 0x29, 0x7f, 0xd2, 0xa3, 0x1b, 0xe3, 0x3a, 0x57, 0x6c, 0x65, 0x24, 0x80, 0x0a, 0x81, 0x51, 0x4d, 0x52, 0x18, 0xbb, 0x79, 0xae, 0xd2, 0xf4, 0x6d, 0xd4, 0xdd},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x05, 0x8c, 0x3c, 0x58, 0x00, 0x00, 0xef, 0x06, 0xec, 0xca, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xaa, 0xfa, 0xbd, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0x64, 0x1e, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x25, 0x1b, 0x4f, 0x37, 0x6d, 0xc8, 0x30, 0x2e, 0x51, 0xd7, 0x4b, 0xe3, 0xc4, 0x8b, 0x27, 0x91, 0x83, 0x59, 0x33, 0x68, 0xe8, 0xe1, 0xc5, 0xf5, 0x40, 0x5a, 0xae, 0x22, 0x32, 0xb2, 0x5d, 0x6a, 0x93, 0xa6, 0x6c, 0x2c, 0x37, 0x5f, 0x68, 0xca, 0x29, 0x0c, 0xbf, 0x85, 0xa6, 0x24, 0x0e, 0x40, 0x5c, 0x8d, 0x63, 0x36, 0xbe, 0xb9, 0x00, 0xa5, 0x74, 0x19, 0xdd, 0xd7, 0x5c, 
0xac, 0xc0, 0x72, 0x5e, 0x40, 0x32, 0x8e, 0xbb, 0x8c, 0x8b, 0x7c, 0xa0, 0xec, 0x16, 0x72, 0x81, 0xbb, 0x5a, 0xd2, 0xc7, 0x49, 0x96, 0x41, 0xfc, 0x7c, 0xa6, 0xda, 0xb4, 0x83, 0xfb, 0x28, 0x5b, 0xef, 0x05, 0x02, 0xae, 0xd9, 0xb8, 0x49, 0x3e, 0xce, 0x53, 0x84, 0xf1, 0x1e, 0x9b, 0x8c, 0x8b, 0x54, 0x8d, 0x05, 0x08, 0xe9, 0x1a, 0x1f, 0xbe, 0x14, 0x3a, 0xa1, 0xa9, 0xa7, 0xe0, 0xfb, 0x71, 0x14, 0xa7, 0x00, 0xbe, 0x23, 0x9b, 0x34, 0xfc, 0x63, 0xd9, 0x78, 0x2f, 0x86, 0x90, 0x88, 0x66, 0x71, 0x3a, 0x6d, 0xb4, 0x8f, 0xd5, 0xc3, 0xbd, 0x9d, 0x39, 0x98, 0x66, 0x37, 0x93, 0xf3, 0xe1, 0xde, 0x55, 0x98, 0x04, 0x5c, 0xf8, 0xa1, 0x8c, 0x56, 0xda, 0x27, 0xf2, 0x9d, 0x9e, 0xb8, 0x53, 0x7d, 0xfb, 0xdd, 0x0b, 0xc4, 0xfd, 0x28, 0xee, 0xca, 0x09, 0x79, 0x34, 0x02, 0x72, 0x47, 0x72, 0x83, 0x57, 0x47, 0x29, 0xed, 0x13, 0xf9, 0x5a, 0xcf, 0x79, 0x52, 0x74, 0x20, 0xf7, 0x46, 0x2f, 0x0a, 0x34, 0xa4, 0xb3, 0x98, 0x5c, 0x00, 0xe6, 0x57, 0xfa, 0xe9, 0xde, 0xee, 0x72, 0x3f, 0x27, 0x15, 0x1a, 0x15, 0xb5, 0xa7, 0x89, 0xec, 0xec, 0xaa, 0x76, 0x8b, 0x83, 0x79, 0xf3, 0xdd, 0x4b, 0x0e, 0xa4, 0xd7, 0x10, 0x3e, 0x87, 0x8c, 0xe2, 0xd4, 0x3e, 0xdf, 0x8f, 0x1b, 0xc4, 0x5d, 0x0c, 0x99, 0xdf, 0x55, 0x0f, 0xb5, 0xb9, 0x4d, 0x6f, 0xab, 0x8f, 0x72, 0xf7, 0x00, 0x7a, 0xcb, 0x4d, 0x84, 0xbf, 0xfc, 0xfd, 0xaf, 0xd8, 0x9f, 0x14, 0xea, 0x64, 0x63, 0x35, 0x74, 0x9a, 0x3a, 0xd4, 0x2f, 0xbd, 0xfa, 0x43, 0x0d, 0x05, 0x17, 0x70, 0x0c, 0x31, 0xbd, 0x5c, 0xa4, 0xee, 0xdd, 0xce, 0x51, 0xef, 0xec, 0xa4, 0xdb, 0xff, 0x29, 0x2a, 0x6f, 0xb0, 0x7b, 0x80, 0xda, 0xdb, 0xae, 0x39, 0xee, 0x2e, 0xe3, 0x16, 0x56, 0x0e, 0x01, 0x86, 0x03, 0x46, 0x5e, 0x6f, 0x01, 0xb7, 0xaf, 0xcb, 0x2f, 0x48, 0xcf, 0xbc, 0x58, 0x0a, 0xd0, 0xc2, 0x42, 0xe9, 0x01, 0xc7, 0x37, 0x62, 0xd2, 0x8e, 0x80, 0x24, 0x97, 0x80, 0x7f, 0x30, 0x70, 0x95, 0x28, 0x5d, 0x96, 0xba, 0xcd, 0x9b, 0xea, 0xe1, 0x62, 0x65, 0xd1, 0xc0, 0x08, 0x87, 0x47, 0x39, 0xc8, 0x7a, 0x79, 0xde, 0xae, 0x6d, 0x7d, 0xc4, 0x8a, 0xcc, 0x93, 0x13, 0x27, 0x78, 0x56, 0x83, 0x5b, 0xed, 0x98, 0x03, 0xb4, 
0x3b, 0xba, 0x9d, 0x5c, 0x94, 0xdb, 0x97, 0x82, 0x33, 0x29, 0xcc, 0x6c, 0x06, 0xf3, 0x1d, 0xcb, 0xf4, 0x20, 0xca, 0x4c, 0xb3, 0xd0, 0x66, 0x28, 0x40, 0x70, 0x60, 0x40, 0x08, 0xd8, 0x1d, 0x15, 0x6f, 0x0a, 0x2a, 0xcb, 0x77, 0x4b, 0x31, 0x0c, 0xf8, 0x30, 0x05, 0x89, 0xc4, 0x35, 0x9e, 0x9a, 0xc7, 0x1f, 0xab, 0x67, 0x9e, 0xf6, 0x48, 0x0b, 0xf5, 0xec, 0xe8, 0xec, 0xa4, 0xdf, 0x3f, 0xb9, 0x38, 0xff, 0x29, 0x7a, 0x66, 0xb0, 0x03, 0x3d, 0xeb, 0x98, 0x67, 0xb2, 0x4c, 0xe3, 0xec, 0xa0, 0xaa, 0xc6, 0x15, 0x4b, 0xad, 0x69, 0x9c, 0x85, 0xbc, 0x14, 0xe0, 0x3c, 0x58, 0x1f, 0xee, 0x01, 0xf3, 0x70, 0x31, 0xb3, 0x53, 0x40, 0x9c, 0xc7, 0x22, 0x19, 0x22, 0xb0, 0x08, 0x28, 0x1a, 0x84, 0x32, 0x9f, 0xeb, 0x99, 0x56, 0xc8, 0x50, 0x4d, 0xeb, 0x52, 0x6c, 0x39, 0xfa, 0x90, 0x48, 0x6a, 0x1e, 0x7a, 0xb7, 0x02, 0xe5, 0x93, 0xf2, 0x8b, 0x47, 0x2c, 0x3f, 0x03, 0xef, 0x26, 0x20, 0x5d, 0x2a, 0xc3, 0x1a, 0xe8, 0xb6, 0x67, 0x5a, 0x3e, 0x1d, 0x41, 0xac, 0xe9, 0x51, 0x48, 0xba, 0x38, 0x2a, 0x6e, 0xa7, 0xfc, 0xd3, 0x86, 0x37, 0x11, 0x8d, 0x5c, 0xe9, 0x8f, 0xb9, 0xf7, 0x43, 0x95, 0x05, 0x53, 0x50, 0x70, 0xd3, 0x4e, 0xe6, 0x33, 0xc7, 0x1e, 0x05, 0x2e, 0x52, 0x1c, 0x99, 0x73, 0x9f, 0x9c, 0xe3, 0xb1, 0x0f, 0xe9, 0x5e, 0x9c, 0x9d, 0x5d, 0x9f, 0x9f, 0x0c, 0x3e, 0xff, 0x0c, 0x25, 0x9a, 0x8b, 0xf5, 0x72, 0xc7, 0x35, 0x7f, 0x98, 0x7d, 0xb2, 0xf6, 0x4f, 0x19, 0x43, 0x30, 0x80, 0x91, 0x8a, 0x29, 0x30, 0x60, 0x30, 0x9d, 0xac, 0x21, 0x34, 0x76, 0xb1, 0xd4, 0xed, 0x51, 0xd3, 0xde, 0x38, 0x1b, 0x9b, 0x85, 0x77, 0xa9, 0xce, 0x0d, 0xa1, 0xae, 0xdc, 0x21, 0x80, 0x89, 0xff, 0x06, 0xdd, 0x0a, 0x77, 0x53, 0x9f, 0xfd, 0xca, 0x74, 0xfc, 0x0e, 0x09, 0x9d, 0x8f, 0x96, 0xf5, 0x57, 0x85, 0x33, 0xb0, 0xae, 0x6a, 0x89, 0x0f, 0x98, 0x0f, 0x10, 0x92, 0xfb, 0x20, 0xf3, 0x1d, 0x0c, 0xbd, 0x20, 0x76, 0x96, 0x3f, 0x08, 0x86, 0x5f, 0x8f, 0x02, 0x82, 0x07, 0xe6, 0x92, 0x32, 0xe8, 0xf4, 0x72, 0x84, 0xd4, 0x49, 0xad, 0x6b, 0xea, 0xaa, 0xb6, 0x1f, 0xaa, 0x37, 0x88, 0x1e, 0x22, 0xb5, 0x48, 0x55, 0x50, 0x45, 0x3a, 0x57, 0x83, 0x9f, 0x12, 0xca, 
0x19, 0xdc, 0x5a, 0x9a, 0x4a, 0x4b, 0xb4, 0xc2, 0x76, 0x17, 0xae, 0x1f, 0xc7, 0x81, 0x13, 0x8f, 0x1c, 0xd3, 0xa4, 0x12, 0x98, 0x52, 0x2d, 0x42, 0xa3, 0xdd, 0x97, 0x9d, 0xcc, 0x19, 0x9f, 0x84, 0x8f, 0x07, 0x6f, 0x9d, 0x52, 0xa7, 0x87, 0xcf, 0x13, 0xe6, 0x82, 0xbb, 0x28, 0x15, 0xe9, 0x10, 0x32, 0x11, 0x3c, 0xf4, 0x27, 0x05, 0xfc, 0x33, 0x7c, 0xf9, 0x1d, 0x72, 0x3d, 0x7f, 0x3d, 0xe0, 0xce, 0x52, 0xc8, 0x2d, 0x83, 0x50, 0xae, 0x29, 0x63, 0x01, 0xbb, 0xe5, 0xca, 0xdc, 0x17, 0x33, 0x0e, 0x54, 0xb7, 0x35, 0xb0, 0xc7, 0x41, 0x28, 0x97, 0x35, 0xb0, 0xfd, 0x96, 0x2d, 0x0a, 0x90, 0xc7, 0x2d, 0x05, 0x0c, 0x19, 0xa5, 0x5c, 0xb4, 0x2f, 0x6d, 0x83, 0xa4, 0xd2, 0xb2, 0xf1, 0x40, 0x0a, 0x96, 0xa3, 0xb6, 0x9d, 0xa9, 0x87, 0xa7, 0x2c, 0x5c, 0xb2, 0x6e, 0xc6, 0xa8, 0x2b, 0x61, 0x38, 0x89, 0x70, 0x99, 0xc4, 0xda, 0xb3, 0x1f, 0xaa, 0x35, 0xb8, 0x65, 0x99, 0x0b, 0x10, 0xf0, 0x11, 0x5b, 0xa4, 0x38, 0xdd, 0xce, 0xd9, 0xe5, 0x75, 0x9f, 0x9c, 0x9e, 0xbc, 0xef, 0xfd, 0x0c, 0xdd, 0x29, 0x61, 0xb8, 0xdc, 0x9f, 0x94, 0x3b, 0x9b, 0x9d, 0x8f, 0x7b, 0xf7, 0x3a, 0xca, 0x03, 0xfc, 0x38, 0x17, 0x52, 0x5a, 0x40, 0x1c, 0x3d, 0xae, 0x0e, 0xd2, 0x8e, 0x55, 0x9b, 0xb6, 0x6b, 0x47, 0xb2, 0xf5, 0x3b, 0x04, 0xa1, 0x3c, 0x4d, 0x40, 0xe5, 0x5f, 0xb9, 0x7f, 0x08, 0x11, 0x79, 0x87, 0x9c, 0x9a, 0x5f, 0x0f, 0xc1, 0x51, 0xd9, 0x6e, 0x44, 0x51, 0xd0, 0x11, 0x43, 0x21, 0xd2, 0x16, 0x5c, 0x61, 0xd8, 0x97, 0x8d, 0x0f, 0x00, 0xe4, 0x06, 0xf9, 0x50, 0x38, 0x71, 0x3a, 0xa6, 0x11, 0xff, 0x26, 0x3d, 0x99, 0x71, 0x74, 0xae, 0x3e, 0x7a, 0x47, 0x11, 0x83, 0x3e, 0x6b, 0xe4, 0xa2, 0xdc, 0xc9, 0xba, 0x39, 0xd3, 0xed, 0x89, 0xd4, 0xd0, 0x67, 0x46, 0x8e, 0xc8, 0x65, 0xd6, 0x05, 0xca, 0xaf, 0x1a, 0x48, 0x5f, 0x35, 0xfc, 0x50, 0x6d, 0xd0, 0xd4, 0x15, 0x53, 0x91, 0xb1, 0x70, 0x91, 0x3e, 0x1c, 0xf7, 0x3a, 0xa7, 0x83, 0x63, 0xd2, 0xff, 0x8c, 0xf5, 0x5c, 0x3f, 0x43, 0x23, 0x2a, 0x58, 0x2e, 0xd7, 0x89, 0x6a, 0x77, 0xb0, 0x80, 0x0e, 0x16, 0x36, 0xaa, 0x9d, 0xaf, 0xf7, 0x71, 0x0a, 0xf1, 0x92, 0xfa, 0xf5, 0xe0, 0xe1, 0xaa, 0x05, 0xc3, 0x87, 
0xca, 0x86, 0x80, 0x06, 0x77, 0x6c, 0xdf, 0x56, 0xf6, 0x01, 0xbe, 0xc7, 0x7c, 0x56, 0xa7, 0x2e, 0xe5, 0xfd, 0xcb, 0x92, 0xfd, 0xea, 0xc0, 0x27, 0xef, 0x1d, 0x54, 0xc1, 0x8d, 0xf9, 0x2d, 0x82, 0xc2, 0xfd, 0x75, 0x27, 0x8b, 0x1d, 0x0c, 0x8f, 0x20, 0x41, 0x93, 0x8d, 0xe4, 0x10, 0x1a, 0x71, 0x83, 0xbb, 0x38, 0xe0, 0xfa, 0x41, 0x92, 0x0b, 0xf1, 0xa9, 0x3e, 0x46, 0x99, 0x2f, 0xb4, 0x57, 0xbd, 0x7e, 0xaf, 0x73, 0xd5, 0x3d, 0xfe, 0x19, 0xf2, 0x6a, 0x70, 0x5b, 0x2e, 0xaa, 0xb6, 0x67, 0x65, 0x6b, 0xc6, 0x51, 0x32, 0x5b, 0x0e, 0xfe, 0x2b, 0x5b, 0x34, 0x44, 0x49, 0xf1, 0xd2, 0xa8, 0xdf, 0x42, 0xaf, 0xa6, 0xec, 0x96, 0x74, 0xb5, 0x8c, 0xfd, 0x4a, 0xb7, 0x7f, 0x87, 0xe0, 0xda, 0x99, 0xe6, 0x4c, 0xf2, 0x61, 0x31, 0xfc, 0xf9, 0x30, 0xcc, 0x83, 0xb3, 0x68, 0x1f, 0xd7, 0x00, 0x22, 0xcb, 0x37, 0x74, 0xe7, 0xcf, 0xf0, 0x80, 0x0d, 0xa2, 0x47, 0xad, 0x1a, 0x33, 0xed, 0x88, 0x0b, 0xc8, 0xe4, 0x65, 0x22, 0x15, 0x6b, 0xa6, 0xab, 0xed, 0x95, 0xe2, 0xc5, 0x9c, 0x30, 0xf8, 0x47, 0x6c, 0x46, 0x65, 0x7e, 0xc0, 0x54, 0x4d, 0x51, 0x67, 0x70, 0x7c, 0xda, 0x1b, 0x98, 0xed, 0x5d, 0xd5, 0x57, 0x62, 0x60, 0xce, 0x93, 0xac, 0x7a, 0x88, 0x84, 0xba, 0x78, 0x5a, 0xae, 0x0e, 0xa7, 0xf4, 0x11, 0xd5, 0xcc, 0x01, 0x55, 0x97, 0x06, 0x41, 0x9c, 0x67, 0xf3, 0xcf, 0xa8, 0xf4, 0xb1, 0x93, 0xab, 0xfa, 0xe8, 0xa3, 0xd5, 0x99, 0x03, 0xd8, 0x0a, 0xaa, 0x18, 0xe7, 0x66, 0x0d, 0xd2, 0xfe, 0x88, 0x7f, 0x8b, 0x5a, 0x84, 0xfb, 0x07, 0x61, 0x4d, 0x2c, 0x16, 0x54, 0x74, 0xf0, 0xef, 0xbd, 0x83, 0xea, 0xf8, 0x38, 0x63, 0xac, 0x05, 0x89, 0xe2, 0x49, 0xe9, 0xb0, 0x5e, 0x1f, 0x97, 0xe3, 0x9b, 0x2c, 0xae, 0x9d, 0x99, 0xdb, 0xe3, 0xee, 0xae, 0x2d, 0x18, 0x5d, 0xed, 0xea, 0xb6, 0xd5, 0x53, 0x9a, 0x47, 0xae, 0xff, 0xb2, 0x74, 0xb6, 0xfe, 0x41, 0x1d, 0x79, 0xaf, 0x0e, 0x74, 0x4d, 0x69, 0xf9, 0x2c, 0x7d, 0xc2, 0x86, 0x78, 0x90, 0xab, 0x4e, 0xd1, 0x75, 0x39, 0x31, 0x98, 0x54, 0xb6, 0x7a, 0x1e, 0x4f, 0x56, 0xd5, 0xf1},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x05, 0x8c, 0x3c, 
0x59, 0x00, 0x00, 0xef, 0x06, 0xec, 0xc9, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x00, 0x15, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0xf6, 0x67, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x25, 0x1b, 0x4f, 0x37, 0x6d, 0xbf, 0xb4, 0xb1, 0x6c, 0x7f, 0x98, 0xb6, 0xda, 0xd0, 0x38, 0x7f, 0x61, 0x8a, 0xf9, 0xc5, 0xaf, 0x0a, 0x9b, 0x56, 0x6a, 0x6d, 0xb6, 0x16, 0x63, 0x4e, 0xdd, 0xc3, 0x8a, 0xee, 0x57, 0xef, 0x23, 0x0b, 0x2a, 0xa8, 0xaa, 0x6a, 0x60, 0x19, 0x86, 0xf3, 0xe0, 0x04, 0x87, 0xcc, 0x19, 0x05, 0x53, 0x79, 0x6a, 0x58, 0x21, 0x5a, 0xa5, 0x6e, 0xd2, 0x16, 0x70, 0xd7, 0x68, 0x67, 0xdb, 0x9d, 0x11, 0x70, 0xc2, 0xd9, 0x5c, 0xdf, 0x78, 0xfd, 0x72, 0x4e, 0x41, 0x42, 0x99, 0x58, 0x74, 0x04, 0x59, 0x13, 0xa7, 0x35, 0x72, 0x29, 0x14, 0x9c, 0x89, 0x4f, 0xe1, 0x3f, 0x1c, 0x40, 0xc1, 0x4f, 0xc7, 0xf5, 0x69, 0xa4, 0xab, 0x16, 0xe4, 0xb1, 0xee, 0xf6, 0xc6, 0xf6, 0xdd, 0xde, 0xeb, 0x99, 0xe2, 0x12, 0x0d, 0xc3, 0xe1, 0x21, 0x1d, 0x33, 0x51, 0xe0, 0xea, 0x60, 0xdd, 0x83, 0x33, 0xcc, 0xc7, 0xcd, 0xaf, 0xc9, 0x58, 0x9f, 0xe4, 0x7e, 0x82, 0x09, 0x08, 0x4e, 0x40, 0x60, 0x02, 0xa2, 0x26, 0xf8, 0x85, 0x9c, 0x01, 0x53, 0x09, 0xc5, 0x22, 0x50, 0xb0, 0x40, 0x64, 0x12, 0xa7, 0x81, 0xd7, 0x54, 0x15, 0x21, 0x17, 0x06, 0x58, 0x93, 0x9c, 0x82, 0x79, 0x88, 0xc8, 0x59, 0x9c, 0xca, 0x6a, 0x45, 0xaa, 0xeb, 0x61, 0xb0, 0x52, 0x59, 0x16, 0x2a, 0x0f, 0x2e, 0x2e, 0x4b, 0x25, 0x57, 0x5a, 0xf7, 0x4d, 0x05, 0x32, 0xb8, 0xda, 0x44, 0x9d, 0x3b, 0x97, 0x87, 0x1c, 0x5e, 0x0c, 0x06, 0x17, 0x67, 0xaa, 0x1a, 0x61, 0xee, 0xa8, 0x61, 0x0c, 0xb2, 0x13, 0x36, 0x6c, 0x29, 0x1c, 0x32, 0x99, 0xf2, 0xa8, 0x5e, 0x1b, 0xe3, 0x9a, 0x66, 0x5b, 0x4c, 0x4d, 0xfa, 0x01, 0xf7, 0x98, 0xf0, 0xe3, 0x49, 0xad, 0x0c, 0x4c, 0x9e, 0x1b, 0x0b, 0xf3, 0xb2, 0x51, 0x9f, 0x79, 0x14, 0xb0, 0x3b, 0xf9, 0x56, 0x15, 0x8a, 0x19, 0xd5, 0xd7, 0x6f, 0xd5, 0x38, 0xfb, 0xe6, 0x85, 0x35, 0x62, 0xd2, 0x9b, 0x9a, 0x32, 0xb1, 0x9a, 0xf2, 0x49, 0x19, 0xe0, 0x51, 0x6b, 0xd3, 0xfd, 0x47, 0x18, 
0xfc, 0xe3, 0x78, 0x19, 0xf7, 0x44, 0x12, 0x67, 0xb2, 0xf4, 0xb1, 0xb5, 0xe1, 0x7c, 0xa4, 0xb7, 0x10, 0xd0, 0xdf, 0x94, 0xd8, 0x77, 0x3a, 0x8d, 0x22, 0xa2, 0x9b, 0x89, 0x43, 0x2e, 0xc1, 0x0e, 0xab, 0x7b, 0x0c, 0x44, 0x3a, 0x78, 0x0e, 0xf6, 0x80, 0x08, 0x17, 0x03, 0x42, 0xe0, 0x99, 0x61, 0x93, 0x42, 0x2b, 0x29, 0xaf, 0xd1, 0x71, 0x69, 0xa2, 0x0f, 0xd4, 0x97, 0xa2, 0x0b, 0x06, 0x3f, 0x8e, 0xc6, 0xed, 0xf2, 0xd4, 0xfb, 0x2d, 0xdd, 0x28, 0x15, 0x5a, 0x55, 0x40, 0xcd, 0x81, 0xee, 0x88, 0x10, 0xb4, 0x02, 0x52, 0xf7, 0x7b, 0xd1, 0x2c, 0xd5, 0xba, 0xec, 0xb7, 0x12, 0x53, 0xf6, 0x56, 0x76, 0x0b, 0x0f, 0xa3, 0xaf, 0x97, 0x0e, 0x47, 0xbf, 0xfd, 0xfa, 0x70, 0xfa, 0x6e, 0x3a, 0xa7, 0xb9, 0xb8, 0xe1, 0x51, 0xb7, 0x5b, 0x22, 0xf0, 0xe1, 0xf5, 0xc9, 0xe9, 0x80, 0xbc, 0xbf, 0x00, 0xe1, 0xbc, 0xea, 0x75, 0xfe, 0x36, 0x38, 0xbe, 0xba, 0xb8, 0xfe, 0x70, 0xdc, 0x07, 0x5a, 0x63, 0x96, 0x2f, 0x55, 0xe2, 0x8c, 0x4d, 0x41, 0x00, 0x4d, 0x22, 0x05, 0x0b, 0x23, 0x0a, 0x0e, 0x4a, 0xe7, 0x08, 0x2c, 0x0a, 0x16, 0x91, 0x29, 0x5f, 0xfe, 0x34, 0x06, 0xd8, 0xf5, 0x68, 0x5a, 0x1f, 0xe6, 0x3c, 0xc8, 0x08, 0x44, 0x53, 0xe4, 0x10, 0x92, 0xb7, 0x1b, 0xbc, 0x97, 0x92, 0x8f, 0x7d, 0xf1, 0x48, 0x5e, 0x7c, 0xc7, 0x32, 0x9e, 0x8d, 0x41, 0x7c, 0xfd, 0xea, 0x1f, 0xe7, 0x0f, 0x67, 0xd0, 0x96, 0x73, 0x98, 0x07, 0x41, 0xd9, 0x78, 0x1d, 0x5f, 0x90, 0xcf, 0x17, 0xd7, 0xe4, 0xc3, 0xc5, 0xf9, 0x79, 0x87, 0x5c, 0x9e, 0x76, 0x3e, 0xff, 0x42, 0x74, 0xe5, 0x9a, 0x4a, 0x85, 0x45, 0x46, 0xba, 0x7e, 0x8a, 0xf5, 0x04, 0x89, 0x4f, 0x70, 0xf0, 0x2b, 0x41, 0xe4, 0x9e, 0x18, 0x49, 0xd1, 0xd0, 0x09, 0x82, 0xf5, 0x5b, 0x8a, 0x3a, 0x52, 0x06, 0x21, 0xd6, 0xff, 0xe0, 0xc7, 0x22, 0x1b, 0xe6, 0x22, 0x93, 0xe7, 0xd8, 0x4f, 0xe2, 0x97, 0x59, 0x9e, 0xe6, 0xc7, 0x27, 0x3f, 0x96, 0x46, 0x76, 0x1c, 0x47, 0x11, 0xc5, 0x3a, 0xfb, 0xe9, 0x2f, 0x8f, 0xe4, 0xd5, 0x7d, 0x2b, 0x7b, 0xf9, 0xa7, 0xad, 0xbd, 0x77, 0x8f, 0x5a, 0xdd, 0xf3, 0xe9, 0x99, 0x9b, 0xff, 0xb6, 0xd4, 0x0b, 0x15, 0x6c, 0xdc, 0x76, 0x2e, 0xf3, 0x61, 0xc0, 0xdd, 0xce, 0x68, 0x44, 
0x79, 0x2a, 0x4a, 0xfc, 0x3c, 0xeb, 0x0c, 0xba, 0xc7, 0x27, 0xe7, 0x1f, 0xc8, 0x59, 0xef, 0x7c, 0x70, 0x71, 0xd5, 0x27, 0x9d, 0xf3, 0x23, 0xd0, 0xbb, 0xeb, 0x93, 0x73, 0x54, 0x38, 0x0c, 0x18, 0xcc, 0xed, 0x02, 0x49, 0x09, 0x2d, 0x9f, 0x7a, 0x7f, 0xb1, 0xcf, 0x22, 0x0e, 0xba, 0xf0, 0x9e, 0x41, 0x04, 0x34, 0x11, 0x66, 0x2f, 0xfd, 0xa9, 0x2a, 0xa7, 0x97, 0xa6, 0xd9, 0x74, 0x46, 0x33, 0xd7, 0x47, 0xb3, 0x85, 0x87, 0x69, 0x71, 0xaa, 0xb6, 0x66, 0xd5, 0x95, 0xb4, 0x47, 0xb2, 0xf2, 0xbb, 0x16, 0xb3, 0x94, 0x63, 0x45, 0xd4, 0x54, 0x2a, 0x0c, 0x2c, 0xdc, 0xa2, 0x46, 0x41, 0x16, 0xf6, 0x2d, 0xac, 0x65, 0xad, 0x7a, 0x4a, 0xfd, 0x42, 0x16, 0x6b, 0xcd, 0xf1, 0x9c, 0xa6, 0x7a, 0x54, 0x27, 0x9a, 0xd8, 0xd6, 0x30, 0x37, 0xad, 0xe8, 0xff, 0x9b, 0xf8, 0x49, 0x16, 0x3f, 0xa2, 0xf4, 0xd6, 0x23, 0xc1, 0xc6, 0xcf, 0x8d, 0xaa, 0x24, 0x17, 0xa9, 0x2a, 0x3d, 0x46, 0x2e, 0xbc, 0x67, 0x14, 0xf7, 0xff, 0x81, 0x47, 0x20, 0x7c, 0x98, 0x2a, 0x97, 0x42, 0x25, 0x44, 0x7d, 0xa4, 0xdf, 0x3b, 0x42, 0xbd, 0x07, 0x2e, 0xfb, 0x5b, 0xed, 0xfa, 0xa8, 0xfd, 0x16, 0x34, 0x96, 0x83, 0x1c, 0x7b, 0xcf, 0xb5, 0x9c, 0x18, 0x16, 0x69, 0x1e, 0xe6, 0x16, 0x26, 0xe0, 0x51, 0x15, 0x5a, 0x8b, 0xf5, 0xe5, 0xe3, 0x71, 0x7c, 0xf1, 0xfb, 0x8c, 0x29, 0xd0, 0x78, 0x89, 0xd6, 0x27, 0x26, 0x02, 0x36, 0x95, 0x69, 0xc5, 0x10, 0x44, 0x7b, 0x6d, 0xf5, 0xef, 0x39, 0x4b, 0x40, 0x9f, 0x56, 0x4f, 0x41, 0x73, 0xd6, 0x56, 0x3b, 0x91, 0x97, 0xb2, 0xc9, 0xea, 0xaf, 0x74, 0xca, 0x5d, 0x7f, 0x35, 0xf1, 0xe3, 0x2c, 0xfe, 0x12, 0x72, 0xaf, 0x44, 0x58, 0x49, 0xad, 0xc4, 0x9f, 0x82, 0x41, 0x43, 0x72, 0xd9, 0x9c, 0x52, 0xf6, 0x2d, 0xeb, 0x7a, 0x25, 0x7c, 0xc4, 0x42, 0x48, 0x6b, 0x03, 0xfc, 0xed, 0xe5, 0xf8, 0x77, 0x20, 0xca, 0x04, 0x33, 0xda, 0x05, 0x55, 0xd4, 0x85, 0xcd, 0xdb, 0x85, 0x0d, 0x69, 0xd7, 0x90, 0x00, 0xfe, 0x22, 0xa7, 0x29, 0x8c, 0x11, 0x24, 0x02, 0xf6, 0x05, 0x53, 0xd4, 0x6c, 0x3a, 0x14, 0x90, 0x57, 0x67, 0x8c, 0x7c, 0x63, 0x69, 0xdc, 0x84, 0x94, 0x23, 0x65, 0xaf, 0xf0, 0xc6, 0xec, 0xb4, 0x49, 0x50, 0x79, 0x89, 0xd1, 0x5e, 0x5b, 
0x7e, 0x5e, 0x28, 0xf2, 0x63, 0x28, 0x7e, 0x7a, 0xf4, 0x29, 0xd9, 0x5a, 0x4c, 0xf1, 0x8f, 0x34, 0x80, 0x40, 0x8b, 0xad, 0xf6, 0x43, 0x9e, 0xf9, 0x35, 0x62, 0xea, 0x77, 0x44, 0xbe, 0x7b, 0x76, 0x52, 0x6a, 0xc4, 0xce, 0xd9, 0x04, 0x77, 0x74, 0x72, 0xa0, 0xcd, 0x45, 0x44, 0x8e, 0xec, 0x1d, 0x5f, 0x95, 0x51, 0xce, 0xa1, 0x6e, 0x05, 0xab, 0x35, 0x69, 0x20, 0xf1, 0xe2, 0x32, 0x97, 0xdb, 0xc8, 0xf1, 0x88, 0xf4, 0x27, 0x34, 0xcd, 0xfc, 0x10, 0x14, 0x04, 0xf8, 0xa3, 0xef, 0xf6, 0xba, 0x1c, 0xef, 0x2e, 0xe3, 0x1d, 0x8c, 0x29, 0xd0, 0x5f, 0x90, 0x8c, 0x51, 0x65, 0xa7, 0x41, 0x0d, 0xd5, 0xe6, 0xdd, 0xf3, 0x11, 0xfc, 0x03, 0xdf, 0xf8, 0x8d, 0x2d, 0x26, 0x78, 0x9f, 0x05, 0x23, 0xa0, 0xf7, 0x04, 0x90, 0x5f, 0x3d, 0x03, 0x41, 0x0e, 0x57, 0x0f, 0x21, 0x93, 0xe6, 0x37, 0x6b, 0xab, 0xc3, 0x69, 0xa5, 0xe1, 0xcb, 0x96, 0xb7, 0xf7, 0xfa, 0xcd, 0xde, 0xd6, 0xa6, 0xf3, 0x7a, 0x77, 0xe7, 0x8d, 0xb3, 0xbd, 0xb9, 0x3e, 0x72, 0xf6, 0x36, 0xd7, 0xb7, 0x9d, 0x8d, 0x37, 0x5b, 0x6f, 0x46, 0x2e, 0xdb, 0xa0, 0x8c, 0x6d, 0x3b, 0x49, 0x7a, 0x5b, 0x76, 0x90, 0x38, 0x9e, 0xa8, 0xf1, 0xd2, 0xf7, 0xe0, 0x11, 0x5c, 0x00, 0x2b, 0x7e, 0x76, 0xee, 0xe9, 0x55, 0x9a, 0x0b, 0x1f, 0xe4, 0x03, 0xcb, 0x60, 0x5e, 0xdc, 0x12, 0x05, 0xaa, 0xbe, 0xc7, 0xb8, 0xa2, 0xaf, 0xa7, 0x2e, 0xb6, 0xe8, 0x2a, 0x6c, 0x9c, 0x41, 0x15, 0x19, 0xd9, 0xc0, 0x68, 0xd3, 0x82, 0x81, 0x1f, 0x48, 0x6b, 0xbc, 0x22, 0x0d, 0x6e, 0x6d, 0x4a, 0xf4, 0xa1, 0x81, 0x32, 0x89, 0xe6, 0xf6, 0xc9, 0x72, 0x65, 0x29, 0x6e, 0x71, 0x94, 0xd7, 0xea, 0x52, 0x08, 0x87, 0x05, 0x0b, 0x74, 0x51, 0x70, 0xd9, 0x1f, 0xeb, 0xcb, 0x05, 0x28, 0x56, 0xb7, 0xc4, 0xe3, 0x02, 0xef, 0xed, 0x7b, 0x26, 0xdb, 0xff, 0x13, 0xa4, 0x27, 0xd0, 0xce, 0x61, 0xf0, 0x8c, 0xbd, 0x35, 0xfe, 0x77, 0x1e, 0xb0, 0x48, 0x56, 0x5b, 0x5b, 0x18, 0xe7, 0xf0, 0x73, 0xe1, 0xf8, 0xf2, 0xb6, 0x93, 0xa9, 0xc6, 0x57, 0x6b, 0x47, 0x99, 0x2d, 0xef, 0x5c, 0x94, 0x8b, 0xf2, 0x6d, 0x8f, 0xc2, 0xeb, 0xda, 0x1d, 0x35, 0x90, 0x7a, 0x67, 0xc4, 0xef, 0x1a, 0xd5, 0x3a, 0x7e, 0xe9, 0x99, 0x22, 0x36, 0x11, 0x9a, 
0xe9, 0xe8, 0x0d, 0x40, 0x1f, 0x45, 0x1a, 0xc7, 0xa1, 0xf4, 0x02, 0xaa, 0x15, 0xe8, 0xf7, 0x87, 0xfd, 0x72, 0xd8, 0x47, 0x4a, 0xf7, 0x12, 0xb4, 0x64, 0x44, 0x7a, 0x58, 0xe1, 0xc2, 0x53, 0x26, 0x6f, 0xe1, 0x09, 0x7c, 0xe5, 0x78, 0x11, 0x75, 0x42, 0x9a, 0xa0, 0x2d, 0x87, 0xd4, 0x3e, 0x0e, 0x1c, 0x9f, 0x05, 0x89, 0x70, 0x6c, 0x4e, 0x27, 0x1c, 0xe5, 0xf5, 0x1c, 0xb9, 0x59, 0x2b, 0x32, 0x55, 0x28, 0x3c, 0x71, 0xc6, 0x90, 0x68, 0x08, 0x07, 0xcf, 0x1e, 0x52, 0x36, 0xce, 0xf1, 0xc2, 0xb6, 0xd7, 0x68},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0x38, 0x2d, 0x40, 0x00, 0x40, 0x06, 0x65, 0x6e, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xaa, 0xf5, 0x65, 0x80, 0x10, 0x0f, 0xdd, 0x8b, 0x18, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xa9, 0xb3, 0xa1, 0x66, 0x25},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0x18, 0x50, 0x40, 0x00, 0x40, 0x06, 0x85, 0x4b, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xaa, 0xfa, 0xbd, 0x80, 0x10, 0x0f, 0xd5, 0x85, 0xc8, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xa9, 0xb3, 0xa1, 0x66, 0x25},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0xf1, 0x50, 0x40, 0x00, 0x40, 0x06, 0xac, 0x4a, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x00, 0x15, 0x80, 0x10, 0x0f, 0xaa, 0x80, 0x9b, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xa9, 0xb3, 0xa1, 0x66, 0x25},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0x9f, 0x83, 0x40, 0x00, 0x40, 0x06, 0xfe, 0x17, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x05, 0x6d, 0x80, 
0x10, 0x0f, 0x7f, 0x7b, 0x6e, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xa9, 0xb3, 0xa1, 0x66, 0x25},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0x9e, 0x4e, 0x40, 0x00, 0x40, 0x06, 0xff, 0x4c, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x05, 0x6d, 0x80, 0x10, 0x10, 0x00, 0x7a, 0xec, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xaa, 0xb3, 0xa1, 0x66, 0x25},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x05, 0x8c, 0x3c, 0x5a, 0x00, 0x00, 0xef, 0x06, 0xec, 0xc8, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x05, 0x6d, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0xde, 0x6c, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x2b, 0x1b, 0x4f, 0x37, 0xa9, 0x1f, 0x41, 0x30, 0xaf, 0x81, 0x11, 0x04, 0x46, 0x24, 0xb0, 0x22, 0x41, 0x14, 0xc6, 0x85, 0x16, 0xc0, 0x08, 0xc6, 0x2b, 0x12, 0x18, 0xc1, 0xa3, 0x0a, 0x0b, 0x8c, 0x94, 0xa2, 0x3f, 0xb9, 0x9d, 0x68, 0x76, 0x10, 0x1f, 0xb7, 0x56, 0xed, 0x4f, 0x55, 0xf1, 0x44, 0x88, 0x89, 0x92, 0x3e, 0x5d, 0x83, 0x34, 0xc9, 0x09, 0x64, 0xe4, 0x86, 0x4c, 0xd1, 0x69, 0x92, 0xde, 0xda, 0x75, 0xf0, 0x0a, 0xac, 0xd0, 0x51, 0xbb, 0xca, 0xae, 0x10, 0xd3, 0xfb, 0x73, 0x2b, 0x79, 0x6f, 0x56, 0x3c, 0x0f, 0xd6, 0x96, 0x43, 0x29, 0x1e, 0xd9, 0x78, 0xa6, 0x80, 0x1c, 0xad, 0xab, 0x23, 0xf2, 0x31, 0x88, 0x8c, 0xdc, 0x22, 0xdc, 0x71, 0x42, 0xf0, 0x5f, 0x18, 0x18, 0x8e, 0xf9, 0x28, 0x33, 0x87, 0x20, 0x4e, 0xe9, 0xa3, 0x08, 0x8d, 0xf6, 0x95, 0x84, 0x20, 0xf1, 0xb7, 0xf6, 0x19, 0x21, 0x10, 0x84, 0x40, 0xfe, 0xbc, 0x43, 0x34, 0x08, 0x82, 0x20, 0xcc, 0x91, 0x09, 0x84, 0x93, 0x16, 0xc4, 0x33, 0xaf, 0x48, 0xf0, 0xd4, 0xf9, 0xea, 0x8c, 0x52, 0x2a, 0xbf, 0xbd, 0x91, 0xc5, 0x9e, 0x07, 0x4e, 0x01, 0x02, 0xbd, 0x48, 0xc8, 0x98, 0xd1, 0x89, 0x62, 0x88, 0x2f, 0xf0, 0x82, 0xd5, 0x37, 0x86, 0x07, 0xf5, 0xae, 0xcf, 
0x30, 0xbc, 0x42, 0xc3, 0xae, 0x4f, 0xb1, 0x20, 0xb8, 0xc6, 0x8f, 0x05, 0xf0, 0x0c, 0xd4, 0xfc, 0xd7, 0x26, 0x58, 0x34, 0x84, 0x84, 0x7a, 0x2a, 0x21, 0x11, 0x84, 0x44, 0xce, 0x11, 0x08, 0xc4, 0xde, 0x00, 0x84, 0x00, 0xaf, 0x2c, 0x90, 0x67, 0x5e, 0x0b, 0x04, 0x82, 0x58, 0xf5, 0xed, 0x60, 0xbc, 0x8a, 0xd1, 0x2a, 0xfc, 0x74, 0x84, 0x13, 0x63, 0x0c, 0x0a, 0x5a, 0x13, 0xb1, 0x11, 0x07, 0xa3, 0xfd, 0x49, 0x75, 0x22, 0xb8, 0x21, 0x8d, 0xb1, 0x25, 0xfc, 0xfc, 0x9f, 0xff, 0xfa, 0x6f, 0x41, 0x64, 0x37, 0xa2, 0xbb, 0x3d, 0x33, 0x62, 0x34, 0x72, 0xa8, 0xe7, 0xe3, 0xe9, 0x39, 0x1d, 0x47, 0xb1, 0xe0, 0xc2, 0x49, 0x72, 0xd0, 0xe4, 0x31, 0x4f, 0x03, 0x81, 0x98, 0x86, 0xf8, 0xa1, 0x06, 0x1f, 0x72, 0x3a, 0x40, 0x14, 0x52, 0xd1, 0x1b, 0x79, 0xb2, 0xa3, 0xf0, 0xc6, 0xfc, 0x88, 0x06, 0xfa, 0x6c, 0x11, 0xcf, 0x15, 0xc1, 0xc4, 0xca, 0x52, 0xa7, 0xa3, 0xe3, 0x23, 0x62, 0xe1, 0x11, 0x84, 0x47, 0x24, 0x3c, 0x5c, 0x94, 0x02, 0x45, 0xd4, 0x58, 0xa2, 0xc6, 0x12, 0x04, 0xfc, 0xcc, 0xeb, 0x92, 0xe2, 0x1d, 0xe4, 0x68, 0xaa, 0xb0, 0x12, 0x31, 0xc1, 0x8f, 0x7a, 0x98, 0x72, 0x2c, 0x47, 0xd7, 0x99, 0x08, 0x07, 0x5a, 0xbd, 0x90, 0x02, 0x2f, 0xf0, 0x42, 0x39, 0x9e, 0x17, 0xe6, 0xe9, 0x98, 0xa1, 0x04, 0xa9, 0x6b, 0x7a, 0x39, 0x1a, 0x27, 0x3b, 0x9c, 0xe8, 0xe1, 0xc4, 0x0c, 0x27, 0x66, 0x38, 0x51, 0xc3, 0x89, 0x1e, 0x3e, 0x77, 0x29, 0xf6, 0x90, 0x63, 0xf9, 0x22, 0x1a, 0x85, 0xdf, 0x81, 0xd8, 0x07, 0xcb, 0x3c, 0xa5, 0xd5, 0x28, 0xac, 0xba, 0x02, 0x5f, 0x78, 0xca, 0xaa, 0x33, 0x60, 0xb7, 0xea, 0xa4, 0xd8, 0xba, 0x83, 0x63, 0x3c, 0x28, 0x8c, 0xd4, 0xa5, 0x83, 0x8a, 0x43, 0x58, 0xd1, 0x18, 0x2e, 0xde, 0x00, 0xf0, 0xed, 0xd0, 0x02, 0x3d, 0x48, 0xda, 0x5a, 0x72, 0x8e, 0xd6, 0xe6, 0xfa, 0xce, 0xeb, 0xf5, 0x37, 0x33, 0x71, 0x52, 0x31, 0xa8, 0xb5, 0xb9, 0xf5, 0xe6, 0x0d, 0x78, 0x08, 0xac, 0x4d, 0x65, 0x69, 0x29, 0xc8, 0xe9, 0x79, 0x78, 0x73, 0x84, 0x9c, 0x35, 0xc9, 0x11, 0x23, 0x57, 0xa0, 0x83, 0xe0, 0xf4, 0xc4, 0x1a, 0x39, 0x3b, 0xb2, 0x17, 0xb5, 0x37, 0x37, 0x5e, 0xe3, 0x25, 0xec, 0x72, 0x66, 0xff, 0x5d, 0xb8, 
0x0d, 0x2e, 0x8e, 0x3a, 0x9f, 0xdf, 0x92, 0x85, 0x13, 0x16, 0x13, 0x68, 0x4e, 0x3d, 0x9d, 0x28, 0xbb, 0x1b, 0x3b, 0x7b, 0xf7, 0x13, 0x65, 0x77, 0x2e, 0x51, 0xba, 0xc0, 0x6e, 0x40, 0x0c, 0x23, 0x27, 0xc9, 0xf7, 0xe7, 0x23, 0x85, 0xc2, 0x68, 0x63, 0xbd, 0xb5, 0xb1, 0x43, 0x48, 0x6d, 0x9a, 0x7f, 0x07, 0x01, 0xf6, 0xd6, 0x37, 0x96, 0x10, 0x60, 0x77, 0x1e, 0x01, 0x2e, 0x4d, 0x0e, 0x80, 0x55, 0xbc, 0x47, 0x6c, 0x08, 0xde, 0xfd, 0x39, 0x89, 0x20, 0xb1, 0x42, 0x22, 0xec, 0x11, 0x32, 0x67, 0xaa, 0x19, 0x42, 0x48, 0x9d, 0xfd, 0xc3, 0x43, 0xa7, 0x5a, 0xa0, 0xb5, 0x65, 0xe5, 0xbb, 0x5f, 0x6f, 0xf1, 0x8b, 0x4d, 0x60, 0x4f, 0xe5, 0x87, 0x62, 0x2c, 0x30, 0x16, 0x79, 0x25, 0x4d, 0x86, 0x50, 0x22, 0x62, 0x2e, 0xba, 0xae, 0xcc, 0xd7, 0x49, 0x4f, 0x59, 0xa1, 0x75, 0x04, 0x5d, 0x3e, 0x5c, 0x2c, 0x6d, 0xcc, 0x8c, 0xa8, 0xcb, 0x86, 0x71, 0x7c, 0x23, 0xbf, 0x51, 0x80, 0x68, 0x0f, 0xe5, 0xa6, 0x56, 0x2d, 0x70, 0x36, 0xbd, 0xe4, 0xed, 0x08, 0xf9, 0x24, 0xbf, 0xfd, 0xa3, 0xc3, 0x5a, 0xd3, 0x56, 0xbd, 0x66, 0x3d, 0x77, 0x4a, 0xfd, 0xdd, 0x21, 0x3b, 0x5b, 0x6d, 0x1e, 0xfd, 0x1a, 0x14, 0x54, 0x3d, 0x94, 0x67, 0xd1, 0x4d, 0x0f, 0x98, 0x04, 0x37, 0x80, 0xf2, 0x21, 0xb3, 0x5c, 0xa8, 0x4d, 0xa2, 0x5f, 0x37, 0xda, 0x9f, 0xd5, 0x43, 0x79, 0x12, 0x68, 0x1a, 0x40, 0xd3, 0x03, 0x26, 0x41, 0xe2, 0xe1, 0xe9, 0x3d, 0x45, 0xeb, 0xbf, 0x68, 0x3d, 0xb6, 0x03, 0xd6, 0xd1, 0xeb, 0xc7, 0xf2, 0x74, 0xb6, 0x71, 0xd9, 0x84, 0x42, 0xcf, 0x28, 0x22, 0x9a, 0xb8, 0x3e, 0xcd, 0xd4, 0xd7, 0x76, 0x3c, 0x4f, 0x4e, 0x1a, 0x8f, 0x46, 0x1c, 0xa5, 0xa4, 0x36, 0xb9, 0xe9, 0x0b, 0xc9, 0x9c, 0x7e, 0x2a, 0x4f, 0x6d, 0xda, 0xaa, 0x33, 0x5b, 0x8f, 0x54, 0x24, 0x29, 0x95, 0x6c, 0x45, 0xee, 0x13, 0x2e, 0xc8, 0x58, 0x66, 0x36, 0x14, 0x8b, 0xbd, 0xc3, 0x9f, 0xbe, 0x2b, 0xa8, 0xae, 0xc9, 0x3b, 0xb8, 0xf7, 0x2e, 0x4b, 0x03, 0x87, 0x31, 0xc4, 0xbf, 0x58, 0xc5, 0x99, 0xf2, 0xb0, 0xd8, 0x16, 0x84, 0x2c, 0xfc, 0x0e, 0xbc, 0xb6, 0xb6, 0x52, 0x72, 0xe7, 0xb4, 0xf8, 0x86, 0x92, 0x41, 0xca, 0xd1, 0xb0, 0xf6, 0xf0, 0x6a, 0xed, 0x03, 0x76, 0x09, 
0x47, 0x14, 0xc2, 0xd2, 0x62, 0x4c, 0xe9, 0xda, 0xff, 0xa5, 0xc2, 0x86, 0x48, 0x6c, 0x64, 0x9a, 0x0c, 0xd8, 0xac, 0x91, 0x89, 0xcf, 0x03, 0xcc, 0x64, 0x70, 0x4f, 0x15, 0x5f, 0x4f, 0x18, 0x2c, 0x5d, 0x5d, 0x01, 0x66, 0x33, 0xfb, 0x86, 0xc4, 0x99, 0x73, 0x1c, 0xfb, 0x62, 0xf6, 0xe0, 0xbb, 0x7a, 0x60, 0x5a, 0xd9, 0x3b, 0x8c, 0x63, 0xcc, 0x59, 0x92, 0x5c, 0xf8, 0xc5, 0x07, 0x47, 0x34, 0x33, 0xc9, 0x9c, 0x53, 0x5a, 0xd2, 0x3b, 0x3f, 0x92, 0x9f, 0xf4, 0x9b, 0xd3, 0x03, 0x8f, 0x7e, 0xd5, 0x6b, 0xf5, 0x4d, 0x93, 0x1a, 0x12, 0xd5, 0x0f, 0x98, 0xa8, 0x76, 0x65, 0xe0, 0xd4, 0x97, 0xb0, 0xaa, 0x87, 0xb9, 0x15, 0xd3, 0xa7, 0xcf, 0xfa, 0x1f, 0x61, 0xc1, 0xcc, 0x6d, 0x69, 0x75, 0x76, 0x12, 0x11, 0x63, 0xa5, 0x1a, 0xb3, 0x46, 0xad, 0xde, 0xc5, 0xde, 0xb6, 0x5f, 0x6a, 0xbe, 0xea, 0x93, 0x68, 0x23, 0xd5, 0x98, 0x31, 0x68, 0xb5, 0x0e, 0x0b, 0x67, 0x98, 0xb1, 0x5d, 0xf5, 0x19, 0xb4, 0x85, 0x6a, 0xcc, 0x58, 0xb3, 0x5a, 0x87, 0x85, 0x33, 0x2c, 0x30, 0x5c, 0xf5, 0x79, 0xac, 0x69, 0x6a, 0xcc, 0x31, 0x66, 0x33, 0x9d, 0xe6, 0xcf, 0xf6, 0x30, 0xab, 0x55, 0x9f, 0xd9, 0x58, 0xa6, 0xc6, 0xac, 0x21, 0xab, 0x77, 0x91, 0xd3, 0x1a, 0x99, 0x55, 0x3b, 0xe5, 0x52, 0x9a, 0xab, 0x82, 0xa4, 0x24, 0xbc, 0xe0, 0xbb, 0xfa, 0x59, 0xfb, 0x2a, 0x86, 0x56, 0x83, 0xe2, 0xab, 0x18, 0xa5, 0x3b, 0xe9, 0x95, 0x81, 0xb6, 0xc6, 0xce, 0x5e, 0x6e, 0xaf, 0x7e, 0xcc, 0xc2, 0x6c, 0xa3, 0xcb, 0x74, 0x08, 0xab, 0xf6, 0x44, 0xa5, 0x40, 0xa8, 0xdc, 0xa7, 0x80, 0x54, 0xa9, 0x7e, 0x29, 0xf5, 0x48, 0xe3, 0x89, 0x33, 0x0a, 0x72, 0xee, 0xcd, 0x14, 0x19, 0x95, 0x7b, 0xa1, 0x41, 0x77, 0x36, 0xe7, 0xd5, 0x21, 0x51, 0xd3, 0x45, 0xa5, 0x3e, 0xf8, 0x05, 0x8f, 0xe2, 0xf3, 0x1d, 0x33, 0xdd, 0x61, 0x00, 0x9f, 0x19, 0xe0, 0x4c, 0x7c, 0x59, 0x30, 0xb8, 0xdf, 0x9a, 0x57, 0x7e, 0x54, 0xff, 0x2e, 0x85, 0x2d, 0xcd, 0xa9, 0x7f, 0x95, 0x62, 0x16, 0x5f, 0xa2, 0xe9, 0xb9, 0xa0, 0x84, 0x8a, 0x85, 0x76, 0x97, 0x4b, 0x96, 0xa9, 0x39, 0xfa, 0xc3, 0x27, 0xe0, 0x34, 0x31, 0xd1, 0xd2, 0x85, 0x9e, 0x2c, 0x9c, 0x33, 0xb4, 0x38, 0xcc, 0x28, 0xa6, 0x98, 0xc3, 
0x0a, 0xdb, 0x7f, 0x5e, 0x61, 0x95, 0x26, 0xdf, 0xdc, 0x5b, 0xce, 0xe5, 0x6d, 0x55, 0x5d, 0x98, 0x39, 0x77, 0x3c, 0x7e, 0xdc, 0x0e, 0x7d, 0xad, 0xbd, 0x14, 0xad, 0xbe, 0xed, 0x32, 0xb7, 0xe3, 0x2c, 0x21, 0x55, 0xf3, 0x7c, 0xdc, 0x96, 0xa3, 0x5c, 0xd2, 0x76, 0x8f, 0xa7, 0x10, 0x1f, 0xc6, 0xe9, 0x74, 0x4e, 0x5c, 0xfa, 0xd0, 0x25, 0x1c, 0x19, 0x18, 0xdf, 0xbd, 0x08, 0x95, 0xae, 0x3f, 0x80, 0xd6, 0xb2, 0x88, 0xc7, 0xcd, 0x1e, 0x4f, 0x65, 0x35, 0xee, 0xdf, 0x8e, 0x60, 0x89, 0xb2, 0x58, 0x5f, 0x8b, 0x65, 0x90, 0x69, 0x41, 0xd9, 0x2e, 0x0d, 0x20, 0x3c, 0xa7, 0xa9, 0x68, 0x99, 0xb2, 0xc8, 0x47, 0x2f, 0xc4, 0x5e, 0x0b, 0x33, 0xb0, 0x7e, 0xa8, 0xe0, 0xb4, 0x42, 0x9a, 0x08, 0x47, 0xc9, 0x8c, 0xbc, 0x4e, 0x90, 0x50, 0xb9, 0x03, 0xf5, 0x78, 0x99, 0x39, 0x03, 0x40, 0x6b, 0xe4, 0xc8, 0x42, 0xfa, 0xe1, 0xf2, 0x8f, 0xfe, 0x47, 0x7e, 0xd5, 0x53, 0xed, 0xd7, 0x40, 0xd0, 0x2a, 0xad, 0x46, 0xc1, 0x2c, 0x7d, 0x83, 0x41, 0xaf, 0xf0, 0xd1, 0x0b, 0xbc, 0x54, 0xe3, 0x74, 0xed, 0xcc, 0xa0, 0x32, 0xc9, 0x13, 0xc5, 0xf0, 0x51, 0xd2, 0x68, 0xfc, 0xac, 0xfd, 0x0a, 0xea, 0x13, 0x14, 0xbd, 0x67, 0x60, 0x3c, 0x1b, 0xb7, 0xaa, 0xd5, 0x9c, 0xa6, 0x6d},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x01, 0xb7, 0x3c, 0x5b, 0x00, 0x00, 0xef, 0x06, 0xf0, 0x9c, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x0a, 0xc5, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0x13, 0x1f, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x2b, 0x1b, 0x4f, 0x37, 0xa9, 0xce, 0xd7, 0x61, 0xe6, 0xf9, 0x88, 0x31, 0x83, 0x90, 0x3a, 0xba, 0x8d, 0x83, 0x5b, 0x36, 0xeb, 0x03, 0xef, 0xf3, 0x12, 0x5d, 0xc9, 0x5a, 0xfb, 0x85, 0x94, 0x25, 0x7e, 0xa2, 0x3c, 0xcd, 0x93, 0x3c, 0x85, 0x61, 0x87, 0xfa, 0x4c, 0xf1, 0x53, 0xf8, 0xa0, 0xbf, 0x73, 0x5c, 0x1c, 0x9c, 0x9a, 0x72, 0x70, 0xfd, 0xb1, 0xe3, 0x9f, 0xe5, 0x4d, 0x44, 0x2e, 0x30, 0x93, 0x78, 0xc2, 0xd2, 0xfa, 0x0a, 0x82, 0xbe, 0xa9, 0xfd, 0xb3, 0xd6, 0x91, 0xa5, 0x39, 0x93, 0xa9, 0xc2, 
0x13, 0x56, 0x32, 0x00, 0x18, 0xaa, 0x0c, 0x88, 0xf4, 0x41, 0xed, 0x99, 0xba, 0x6a, 0xff, 0xe3, 0xd7, 0x03, 0x82, 0x9b, 0x43, 0xdc, 0xca, 0xd2, 0x27, 0xac, 0xe5, 0xa3, 0x81, 0xc1, 0x45, 0xf8, 0x7f, 0x43, 0xf7, 0x41, 0x4a, 0xa6, 0xf8, 0xd1, 0x1d, 0x88, 0xf6, 0x1e, 0xa9, 0xfc, 0x3d, 0xdc, 0xdd, 0x23, 0x1f, 0x59, 0xf4, 0x10, 0xdd, 0xaf, 0x4c, 0xf3, 0x64, 0xe5, 0x07, 0x6e, 0xa8, 0x73, 0xc1, 0xe2, 0x58, 0x50, 0x9d, 0x0a, 0x96, 0x52, 0xba, 0x47, 0x72, 0x66, 0x49, 0x39, 0xe6, 0x8f, 0xf5, 0xa7, 0xd2, 0xcf, 0xc8, 0xc3, 0x84, 0xa7, 0x18, 0x37, 0xc9, 0x9f, 0xcb, 0x80, 0x46, 0xea, 0x22, 0xe6, 0x8f, 0x89, 0xd8, 0x10, 0x5f, 0x8d, 0x3a, 0x26, 0xa2, 0x78, 0x67, 0xd9, 0xdc, 0x13, 0x95, 0xd5, 0x13, 0x90, 0xc8, 0x26, 0xfe, 0xf4, 0xf1, 0xab, 0x91, 0x97, 0x9a, 0x2f, 0xd3, 0xd8, 0xcb, 0xdd, 0x27, 0x79, 0xfe, 0xd9, 0xd6, 0x85, 0xca, 0x73, 0x6f, 0x53, 0xad, 0xa1, 0xae, 0x6f, 0x65, 0x4d, 0x0b, 0xd8, 0x18, 0xf7, 0x0d, 0x1f, 0x97, 0x9f, 0x96, 0x37, 0x19, 0xdd, 0x38, 0x99, 0xa6, 0x58, 0xdd, 0xd9, 0x68, 0xbf, 0xc4, 0xe7, 0x77, 0xb8, 0x47, 0x47, 0xae, 0xd8, 0x58, 0x96, 0x70, 0xe8, 0xcb, 0xec, 0x73, 0xf7, 0xfe, 0xec, 0x57, 0xff, 0xe3, 0x91, 0x6b, 0xbf, 0xf9, 0xaf, 0xb6, 0x6b, 0x17, 0xfd, 0x3f, 0x02, 0xa8, 0xbd, 0xca, 0x59, 0xea, 0x16, 0x69, 0x2e, 0x68, 0x45, 0x28, 0x6c, 0x8a, 0x2b, 0x7f, 0xe1, 0x35, 0xf5, 0x1c, 0xef, 0x1d, 0x0d, 0xf0, 0x17, 0x02, 0xbc, 0x16, 0xac, 0xc6, 0x87, 0x7b, 0x29, 0xb6, 0x32, 0xd3, 0xc7, 0xec, 0x6b, 0xd5, 0xb6, 0xb7, 0x4a, 0x7b, 0x0f, 0xaa, 0xb1, 0x2f, 0x3f, 0xf1, 0x2b, 0x48, 0xf9, 0x5b, 0xd7, 0xf6, 0x53, 0xe5, 0xea, 0xc3, 0x94, 0x5e, 0xed, 0x23, 0xe7, 0x33, 0xdd, 0x92, 0x20, 0x1f, 0xf3, 0xa8, 0xfe, 0x2d, 0xf4, 0x99, 0x6e, 0xe3, 0x20, 0x1e, 0xd2, 0x60, 0x59, 0xaf, 0xd2, 0x17, 0xeb, 0x97, 0xf4, 0x54, 0x57, 0xc0, 0x6a, 0xbd, 0x56, 0xf4, 0x57, 0xc0, 0xe1, 0xaf, 0xfa, 0x3f, 0x1b, 0xf9, 0x5f, 0xa8, 0xe7, 0x2a, 0x0d, 0x7d, 0x64, 0x00, 0x00},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0x35, 0xdd, 0x40, 0x00, 
0x40, 0x06, 0x67, 0xbe, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x0a, 0xc5, 0x80, 0x10, 0x0f, 0xd5, 0x75, 0xa7, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xbc, 0xb3, 0xa1, 0x66, 0x2b},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0xea, 0x8c, 0x40, 0x00, 0x40, 0x06, 0xb3, 0x0e, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x0c, 0x48, 0x80, 0x10, 0x0f, 0xf3, 0x74, 0x06, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x37, 0xbc, 0xb3, 0xa1, 0x66, 0x2b},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x00, 0x34, 0x38, 0x2d, 0x40, 0x00, 0xef, 0x06, 0xb6, 0x4d, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x0c, 0x48, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0x64, 0x9b, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x2b, 0x1b, 0x4f, 0x37, 0xaa},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x00, 0x34, 0x18, 0x50, 0x40, 0x00, 0xef, 0x06, 0xd6, 0x2a, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x0c, 0x48, 0x4d, 0xa6, 0xae, 0x36, 0x80, 0x18, 0x1f, 0x68, 0x64, 0x9b, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0xb3, 0xa1, 0x66, 0x2b, 0x1b, 0x4f, 0x37, 0xaa},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x00, 0x28, 0x1b, 0xf7, 0x40, 0x00, 0x30, 0x06, 0x91, 0x90, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x0c, 0x48, 0x4d, 0xa6, 0xae, 0x36, 0x50, 0x11, 0x00, 0x7a, 0x29, 0x6e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0xfd, 0x9b, 0x40, 0x00, 0x40, 0x06, 0x9f, 
0xff, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x0c, 0x49, 0x80, 0x10, 0x10, 0x00, 0x60, 0x9c, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x4f, 0x4b, 0x18, 0xb3, 0xa1, 0x66, 0x2b},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x28, 0xab, 0x91, 0x00, 0x00, 0x40, 0x06, 0x32, 0x16, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x35, 0xfe, 0xab, 0x0c, 0x49, 0x50, 0x10, 0x10, 0x00, 0x19, 0xe9, 0x00, 0x00},\n\t\t{0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x08, 0x00, 0x45, 0x00, 0x00, 0x34, 0xaf, 0x96, 0x40, 0x00, 0x40, 0x06, 0xee, 0x04, 0x0a, 0x01, 0x0a, 0x4c, 0xa4, 0x43, 0xe4, 0x98, 0xe1, 0xa1, 0x00, 0x50, 0x4d, 0xa6, 0xae, 0x36, 0xfe, 0xab, 0x0c, 0x49, 0x80, 0x11, 0x10, 0x00, 0x9a, 0xcb, 0x00, 0x00, 0x01, 0x01, 0x08, 0x0a, 0x1b, 0x51, 0x10, 0xe6, 0xb3, 0xa1, 0x66, 0x2b},\n\t\t{0xb8, 0xe8, 0x56, 0x32, 0x0b, 0xde, 0x4a, 0x1d, 0x70, 0xcf, 0xa6, 0xe5, 0x08, 0x00, 0x45, 0x20, 0x00, 0x28, 0x00, 0x00, 0x40, 0x00, 0x30, 0x06, 0xad, 0x87, 0xa4, 0x43, 0xe4, 0x98, 0x0a, 0x01, 0x0a, 0x4c, 0x00, 0x50, 0xe1, 0xa1, 0xfe, 0xab, 0x0c, 0x49, 0x13, 0x1c, 0x26, 0x5c, 0x50, 0x04, 0x00, 0x00, 0xec, 0x58, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}}\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/rpcwrapper/interfaces.go",
    "content": "package rpcwrapper\n\nimport \"context\"\n\n// RPCClient is the client interface\ntype RPCClient interface {\n\tNewRPCClient(contextID string, channel string, rpcSecret string) error\n\tGetRPCClient(contextID string) (*RPCHdl, error)\n\tRemoteCall(contextID string, methodName string, req *Request, resp *Response) error\n\tDestroyRPCClient(contextID string)\n\tContextList() []string\n\tCheckValidity(req *Request, secret string) bool\n}\n\n// RPCServer is the server interface\ntype RPCServer interface {\n\tStartServer(ctx context.Context, protocol string, path string, handler interface{}) error\n\tProcessMessage(req *Request, secret string) bool\n\tCheckValidity(req *Request, secret string) bool\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/rpcwrapper/mockrpcwrapper/mockrpcwrapper.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/internal/enforcer/utils/rpcwrapper/interfaces.go\n\n// Package mockrpcwrapper is a generated GoMock package.\npackage mockrpcwrapper\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\trpcwrapper \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n)\n\n// MockRPCClient is a mock of RPCClient interface\n// nolint\ntype MockRPCClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockRPCClientMockRecorder\n}\n\n// MockRPCClientMockRecorder is the mock recorder for MockRPCClient\n// nolint\ntype MockRPCClientMockRecorder struct {\n\tmock *MockRPCClient\n}\n\n// NewMockRPCClient creates a new mock instance\n// nolint\nfunc NewMockRPCClient(ctrl *gomock.Controller) *MockRPCClient {\n\tmock := &MockRPCClient{ctrl: ctrl}\n\tmock.recorder = &MockRPCClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockRPCClient) EXPECT() *MockRPCClientMockRecorder {\n\treturn m.recorder\n}\n\n// NewRPCClient mocks base method\n// nolint\nfunc (m *MockRPCClient) NewRPCClient(contextID, channel, rpcSecret string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NewRPCClient\", contextID, channel, rpcSecret)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// NewRPCClient indicates an expected call of NewRPCClient\n// nolint\nfunc (mr *MockRPCClientMockRecorder) NewRPCClient(contextID, channel, rpcSecret interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NewRPCClient\", reflect.TypeOf((*MockRPCClient)(nil).NewRPCClient), contextID, channel, rpcSecret)\n}\n\n// GetRPCClient mocks base method\n// nolint\nfunc (m *MockRPCClient) GetRPCClient(contextID string) (*rpcwrapper.RPCHdl, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetRPCClient\", 
contextID)\n\tret0, _ := ret[0].(*rpcwrapper.RPCHdl)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// GetRPCClient indicates an expected call of GetRPCClient\n// nolint\nfunc (mr *MockRPCClientMockRecorder) GetRPCClient(contextID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetRPCClient\", reflect.TypeOf((*MockRPCClient)(nil).GetRPCClient), contextID)\n}\n\n// RemoteCall mocks base method\n// nolint\nfunc (m *MockRPCClient) RemoteCall(contextID, methodName string, req *rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"RemoteCall\", contextID, methodName, req, resp)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// RemoteCall indicates an expected call of RemoteCall\n// nolint\nfunc (mr *MockRPCClientMockRecorder) RemoteCall(contextID, methodName, req, resp interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RemoteCall\", reflect.TypeOf((*MockRPCClient)(nil).RemoteCall), contextID, methodName, req, resp)\n}\n\n// DestroyRPCClient mocks base method\n// nolint\nfunc (m *MockRPCClient) DestroyRPCClient(contextID string) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"DestroyRPCClient\", contextID)\n}\n\n// DestroyRPCClient indicates an expected call of DestroyRPCClient\n// nolint\nfunc (mr *MockRPCClientMockRecorder) DestroyRPCClient(contextID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DestroyRPCClient\", reflect.TypeOf((*MockRPCClient)(nil).DestroyRPCClient), contextID)\n}\n\n// ContextList mocks base method\n// nolint\nfunc (m *MockRPCClient) ContextList() []string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContextList\")\n\tret0, _ := ret[0].([]string)\n\treturn ret0\n}\n\n// ContextList indicates an expected call of ContextList\n// nolint\nfunc (mr *MockRPCClientMockRecorder) ContextList() 
*gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContextList\", reflect.TypeOf((*MockRPCClient)(nil).ContextList))\n}\n\n// CheckValidity mocks base method\n// nolint\nfunc (m *MockRPCClient) CheckValidity(req *rpcwrapper.Request, secret string) bool {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CheckValidity\", req, secret)\n\tret0, _ := ret[0].(bool)\n\treturn ret0\n}\n\n// CheckValidity indicates an expected call of CheckValidity\n// nolint\nfunc (mr *MockRPCClientMockRecorder) CheckValidity(req, secret interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CheckValidity\", reflect.TypeOf((*MockRPCClient)(nil).CheckValidity), req, secret)\n}\n\n// MockRPCServer is a mock of RPCServer interface\n// nolint\ntype MockRPCServer struct {\n\tctrl     *gomock.Controller\n\trecorder *MockRPCServerMockRecorder\n}\n\n// MockRPCServerMockRecorder is the mock recorder for MockRPCServer\n// nolint\ntype MockRPCServerMockRecorder struct {\n\tmock *MockRPCServer\n}\n\n// NewMockRPCServer creates a new mock instance\n// nolint\nfunc NewMockRPCServer(ctrl *gomock.Controller) *MockRPCServer {\n\tmock := &MockRPCServer{ctrl: ctrl}\n\tmock.recorder = &MockRPCServerMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockRPCServer) EXPECT() *MockRPCServerMockRecorder {\n\treturn m.recorder\n}\n\n// StartServer mocks base method\n// nolint\nfunc (m *MockRPCServer) StartServer(ctx context.Context, protocol, path string, handler interface{}) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"StartServer\", ctx, protocol, path, handler)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// StartServer indicates an expected call of StartServer\n// nolint\nfunc (mr *MockRPCServerMockRecorder) StartServer(ctx, protocol, path, handler interface{}) *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"StartServer\", reflect.TypeOf((*MockRPCServer)(nil).StartServer), ctx, protocol, path, handler)\n}\n\n// ProcessMessage mocks base method\n// nolint\nfunc (m *MockRPCServer) ProcessMessage(req *rpcwrapper.Request, secret string) bool {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ProcessMessage\", req, secret)\n\tret0, _ := ret[0].(bool)\n\treturn ret0\n}\n\n// ProcessMessage indicates an expected call of ProcessMessage\n// nolint\nfunc (mr *MockRPCServerMockRecorder) ProcessMessage(req, secret interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ProcessMessage\", reflect.TypeOf((*MockRPCServer)(nil).ProcessMessage), req, secret)\n}\n\n// CheckValidity mocks base method\n// nolint\nfunc (m *MockRPCServer) CheckValidity(req *rpcwrapper.Request, secret string) bool {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CheckValidity\", req, secret)\n\tret0, _ := ret[0].(bool)\n\treturn ret0\n}\n\n// CheckValidity indicates an expected call of CheckValidity\n// nolint\nfunc (mr *MockRPCServerMockRecorder) CheckValidity(req, secret interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CheckValidity\", reflect.TypeOf((*MockRPCServer)(nil).CheckValidity), req, secret)\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/rpcwrapper/rpc_handle.go",
"content": "package rpcwrapper\n\nimport (\n\t\"context\"\n\t\"crypto/hmac\"\n\t\"crypto/sha256\"\n\t\"encoding/binary\"\n\t\"encoding/gob\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"net/rpc\"\n\t\"os\"\n\t\"strconv\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/mitchellh/hashstructure\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/usertokens/oidc\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/usertokens/pkitokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n)\n\n// RPCHdl is a per-client handle\ntype RPCHdl struct {\n\tClient  *rpc.Client\n\tChannel string\n\tSecret  string\n}\n\n// RPCWrapper is a struct which holds state for all RPC sessions\ntype RPCWrapper struct {\n\trpcClientMap *cache.Cache\n\tserver       *rpc.Server\n\tsync.Mutex\n}\n\n// NewRPCWrapper creates a new rpcwrapper\nfunc NewRPCWrapper() *RPCWrapper {\n\n\tRegisterTypes()\n\n\treturn &RPCWrapper{\n\t\trpcClientMap: cache.NewCache(\"RPCWrapper\"),\n\t}\n}\n\nconst (\n\tmaxRetries     = 10000\n\tenvRetryString = \"REMOTE_RPCRETRIES\"\n)\n\n// NewRPCClient creates a new RPC client over the given channel, retrying the dial until the remote server is available\nfunc (r *RPCWrapper) NewRPCClient(contextID string, channel string, sharedsecret string) error {\n\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tmax := maxRetries\n\tretries := os.Getenv(envRetryString)\n\tif len(retries) > 0 {\n\t\tmax, _ = strconv.Atoi(retries)\n\t}\n\n\tnumRetries := 0\n\tclient, err := rpc.DialHTTP(\"unix\", channel)\n\tfor err != nil {\n\t\tnumRetries++\n\t\tif numRetries >= max {\n\t\t\treturn err\n\t\t}\n\n\t\ttime.Sleep(5 * time.Millisecond)\n\t\tclient, err = rpc.DialHTTP(\"unix\", channel)\n\t}\n\n\tr.rpcClientMap.AddOrUpdate(contextID, &RPCHdl{Client: client, Channel: channel, Secret: sharedsecret})\n\n\treturn nil\n\n}\n\n// GetRPCClient gets a handle to the rpc client for the contextID (the enforcer in the container)\nfunc (r *RPCWrapper) 
GetRPCClient(contextID string) (*RPCHdl, error) {\n\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tval, err := r.rpcClientMap.Get(contextID)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn val.(*RPCHdl), nil\n}\n\n// RemoteCall is a wrapper around rpc.Call that also ensures message integrity by adding an HMAC\nfunc (r *RPCWrapper) RemoteCall(contextID string, methodName string, req *Request, resp *Response) error {\n\n\trpcClient, err := r.GetRPCClient(contextID)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tdigest := hmac.New(sha256.New, []byte(rpcClient.Secret))\n\thash, err := payloadHash(req.Payload)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif _, err := digest.Write(hash); err != nil {\n\t\treturn err\n\t}\n\n\treq.HashAuth = digest.Sum(nil)\n\n\treturn rpcClient.Client.Call(methodName, req, resp)\n}\n\n// CheckValidity checks if the received message is valid\nfunc (r *RPCWrapper) CheckValidity(req *Request, secret string) bool {\n\n\tdigest := hmac.New(sha256.New, []byte(secret))\n\n\thash, err := payloadHash(req.Payload)\n\tif err != nil {\n\t\treturn false\n\t}\n\n\tif _, err := digest.Write(hash); err != nil {\n\t\treturn false\n\t}\n\n\treturn hmac.Equal(req.HashAuth, digest.Sum(nil))\n}\n\n// NewRPCServer returns an RPCServer interface\nfunc NewRPCServer() RPCServer {\n\n\treturn &RPCWrapper{\n\t\tserver: rpc.NewServer(),\n\t}\n}\n\n// StartServer starts a server and waits for new connections; it blocks until the given context is cancelled\nfunc (r *RPCWrapper) StartServer(ctx context.Context, protocol string, path string, handler interface{}) error {\n\n\tif len(path) == 0 {\n\t\tzap.L().Fatal(\"Sock param not passed in environment\")\n\t}\n\n\t// Register RPC Type\n\tRegisterTypes()\n\n\t// Register handlers\n\tif err := r.server.Register(handler); err != nil {\n\t\treturn err\n\t}\n\n\tr.server.HandleHTTP(rpc.DefaultRPCPath, rpc.DefaultDebugPath)\n\n\t// removing old path in case it exists already - error if we can't remove it\n\tif _, err := os.Stat(path); err == nil 
{\n\n\t\tzap.L().Debug(\"Socket path already exists: removing\", zap.String(\"path\", path))\n\n\t\tif rerr := os.Remove(path); rerr != nil {\n\t\t\treturn fmt.Errorf(\"unable to delete existing socket path %s: %s\", path, rerr)\n\t\t}\n\t}\n\n\t// Get listener\n\tlisten, err := net.Listen(protocol, path)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tgo http.Serve(listen, nil) // nolint\n\n\t<-ctx.Done()\n\n\tif merr := listen.Close(); merr != nil {\n\t\tzap.L().Warn(\"Connection already closed\", zap.Error(merr))\n\t}\n\n\tif _, err = os.Stat(path); !os.IsNotExist(err) {\n\t\tif err := os.Remove(path); err != nil {\n\t\t\tzap.L().Warn(\"failed to remove old path\", zap.Error(err))\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// DestroyRPCClient calls Close on the RPC client and cleans up the connection\nfunc (r *RPCWrapper) DestroyRPCClient(contextID string) {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\trpcHdl, err := r.rpcClientMap.Get(contextID)\n\tif err != nil {\n\t\treturn\n\t}\n\n\tif err = rpcHdl.(*RPCHdl).Client.Close(); err != nil {\n\t\tzap.L().Warn(\"Failed to close channel\",\n\t\t\tzap.String(\"contextID\", contextID),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tif err = os.Remove(rpcHdl.(*RPCHdl).Channel); err != nil {\n\t\tzap.L().Debug(\"Failed to remove channel - already closed\",\n\t\t\tzap.String(\"contextID\", contextID),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tif err = r.rpcClientMap.Remove(contextID); err != nil {\n\t\tzap.L().Warn(\"Failed to remove item from cache\",\n\t\t\tzap.String(\"contextID\", contextID),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n}\n\n// ContextList returns the list of active contexts managed by the rpcwrapper\nfunc (r *RPCWrapper) ContextList() []string {\n\tkeylist := r.rpcClientMap.KeyList()\n\tcontextArray := []string{}\n\tfor _, key := range keylist {\n\t\tif kstring, ok := key.(string); ok {\n\t\t\tcontextArray = append(contextArray, kstring)\n\t\t}\n\t}\n\treturn contextArray\n}\n\n// ProcessMessage checks if the given request is valid\nfunc (r 
*RPCWrapper) ProcessMessage(req *Request, secret string) bool {\n\n\treturn r.CheckValidity(req, secret)\n}\n\n// payloadHash returns the hash of the payload\nfunc payloadHash(payload interface{}) ([]byte, error) {\n\thash, err := hashstructure.Hash(payload, nil)\n\tif err != nil {\n\t\treturn []byte{}, err\n\t}\n\n\tbuf := make([]byte, 8)\n\tbinary.BigEndian.PutUint64(buf, hash)\n\treturn buf, nil\n}\n\n// RegisterTypes registers types that are exchanged between the controller and the remote enforcer\nfunc RegisterTypes() {\n\t// TODO: figure out why the RegisterName() calls are written as *(&x{}) when the Register() calls are just &x{}\n\tgob.Register(&secrets.RPCSecrets{})\n\tgob.Register(&pkitokens.PKIJWTVerifier{})\n\tgob.Register(&oidc.TokenVerifier{})\n\tgob.Register(&collector.CounterReport{})\n\tgob.Register(&collector.PingReport{})\n\tgob.Register(&collector.DNSRequestReport{})\n\tgob.Register(&collector.PacketReport{})\n\tgob.Register(&collector.ConnectionExceptionReport{})\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.Init_Request_Payload\", *(&InitRequestPayload{}))                                // nolint:staticcheck // SA4001: *&x will be simplified to x. 
It will not copy x.\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.Enforce_Payload\", *(&EnforcePayload{}))                                         // nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.UnEnforce_Payload\", *(&UnEnforcePayload{}))                                     // nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.Stats_Payload\", *(&StatsPayload{}))                                             // nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.UpdateSecrets_Payload\", *(&UpdateSecretsPayload{}))                             // nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.SetTargetNetworks_Payload\", *(&SetTargetNetworksPayload{}))                     // nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.EnableIPTablesPacketTracing_PayLoad\", *(&EnableIPTablesPacketTracingPayLoad{})) // nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.EnableDatapathPacketTracing_PayLoad\", *(&EnableDatapathPacketTracingPayLoad{})) // nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.Report_Payload\", *(&ReportPayload{}))                                           // nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.SetLogLevel_Payload\", *(&SetLogLevelPayload{}))                                 // 
nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.TokenRequest_Payload\", *(&TokenRequestPayload{}))                               // nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.TokenResponse_Payload\", *(&TokenResponsePayload{}))                             // nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.Ping_Payload\", *(&PingPayload{}))                                               // nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.DebugCollect_Payload\", *(&DebugCollectPayload{}))                               // nolint:staticcheck\n\tgob.RegisterName(\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper.DebugCollectResponse_Payload\", *(&DebugCollectResponsePayload{}))               // nolint:staticcheck\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/rpcwrapper/rpc_handlemock.go",
    "content": "package rpcwrapper\n\nimport (\n\t\"context\"\n\t\"net/rpc\"\n\t\"sync\"\n\t\"testing\"\n)\n\n// MockRPCHdl is mock of rpchdl\ntype MockRPCHdl struct {\n\tClient  *rpc.Client\n\tChannel string\n}\n\ntype mockedMethods struct {\n\tNewRPCClientMock     func(contextID string, channel string, secret string) error\n\tGetRPCClientMock     func(contextID string) (*RPCHdl, error)\n\tRemoteCallMock       func(contextID string, methodName string, req *Request, resp *Response) error\n\tDestroyRPCClientMock func(contextID string)\n\tStartServerMock      func(ctx context.Context, protocol string, path string, handler interface{}) error\n\tProcessMessageMock   func(req *Request, secret string) bool\n\tContextListMock      func() []string\n\tCheckValidityMock    func(req *Request, secret string) bool\n}\n\n// TestRPCClient is a RPC Client used for test\ntype TestRPCClient interface {\n\tRPCClient\n\tMockNewRPCClient(t *testing.T, impl func(contextID string, channel string, secret string) error)\n\tMockGetRPCClient(t *testing.T, impl func(contextID string) (*RPCHdl, error))\n\tMockRemoteCall(t *testing.T, impl func(contextID string, methodName string, req *Request, resp *Response) error)\n\tMockDestroyRPCClient(t *testing.T, impl func(contextID string))\n\tMockContextList(t *testing.T, impl func() []string)\n\tMockCheckValidity(t *testing.T, impl func(req *Request, secret string) bool)\n}\n\n// TestRPCServer is a RPC Server used for test\ntype TestRPCServer interface {\n\tRPCServer\n\tMockStartServer(t *testing.T, impl func(ctx context.Context, protocol string, path string, handler interface{}) error)\n\tMockProcessMessage(t *testing.T, impl func(req *Request, secret string) bool)\n\tMockCheckValidity(t *testing.T, impl func(req *Request, secret string) bool)\n}\n\ntype testRPC struct {\n\tmocks       map[*testing.T]*mockedMethods\n\tlock        *sync.Mutex\n\tcurrentTest *testing.T\n}\n\n// NewTestRPCServer is a Test RPC Server\nfunc NewTestRPCServer() 
TestRPCServer {\n\treturn &testRPC{\n\t\tlock:  &sync.Mutex{},\n\t\tmocks: map[*testing.T]*mockedMethods{},\n\t}\n}\n\n// NewTestRPCClient is a Test RPC Client\nfunc NewTestRPCClient() TestRPCClient {\n\treturn &testRPC{\n\t\tlock:  &sync.Mutex{},\n\t\tmocks: map[*testing.T]*mockedMethods{},\n\t}\n}\n\n// MockNewRPCClient mocks the NewRPCClient function\nfunc (m *testRPC) MockNewRPCClient(t *testing.T, impl func(contextID string, channel string, secret string) error) {\n\tm.currentMocks(t).NewRPCClientMock = impl\n}\n\n// MockGetRPCClient mocks the GetRPCClient function\nfunc (m *testRPC) MockGetRPCClient(t *testing.T, impl func(contextID string) (*RPCHdl, error)) {\n\tm.currentMocks(t).GetRPCClientMock = impl\n}\n\n// MockRemoteCall mocks the RemoteCall function\nfunc (m *testRPC) MockRemoteCall(t *testing.T, impl func(contextID string, methodName string, req *Request, resp *Response) error) {\n\tm.currentMocks(t).RemoteCallMock = impl\n}\n\n// MockDestroyRPCClient mocks the DestroyRPCClient function\nfunc (m *testRPC) MockDestroyRPCClient(t *testing.T, impl func(contextID string)) {\n\tm.currentMocks(t).DestroyRPCClientMock = impl\n}\n\n// MockStartServer mocks the StartServer function\nfunc (m *testRPC) MockStartServer(t *testing.T, impl func(ctx context.Context, protocol string, path string, handler interface{}) error) {\n\tm.currentMocks(t).StartServerMock = impl\n\n}\n\n// MockProcessMessage mocks the ProcessMessage function\nfunc (m *testRPC) MockProcessMessage(t *testing.T, impl func(req *Request, secret string) bool) {\n\tm.currentMocks(t).ProcessMessageMock = impl\n}\n\n// MockContextList mocks the ContextList function\nfunc (m *testRPC) MockContextList(t *testing.T, impl func() []string) {\n\tm.currentMocks(t).ContextListMock = impl\n}\n\n// MockCheckValidity mocks the CheckValidity function\nfunc (m *testRPC) MockCheckValidity(t *testing.T, impl func(req *Request, secret string) bool) {\n\tm.currentMocks(t).CheckValidityMock = impl\n}\n\n// NewRPCClient 
implements the interface with a mock\nfunc (m *testRPC) NewRPCClient(contextID string, channel string, secret string) error {\n\tif mock := m.currentMocks(nil); mock != nil && mock.NewRPCClientMock != nil {\n\t\treturn mock.NewRPCClientMock(contextID, channel, secret)\n\t}\n\treturn nil\n}\n\n// GetRPCClient implements the interface with a mock\nfunc (m *testRPC) GetRPCClient(contextID string) (*RPCHdl, error) {\n\tif mock := m.currentMocks(nil); mock != nil && mock.GetRPCClientMock != nil {\n\t\treturn mock.GetRPCClientMock(contextID)\n\t}\n\treturn nil, nil\n}\n\n// RemoteCall implements the interface with a mock\nfunc (m *testRPC) RemoteCall(contextID string, methodName string, req *Request, resp *Response) error {\n\tif mock := m.currentMocks(nil); mock != nil && mock.RemoteCallMock != nil {\n\t\treturn mock.RemoteCallMock(contextID, methodName, req, resp)\n\t}\n\treturn nil\n}\n\n// DestroyRPCClient implements the interface with a mock\nfunc (m *testRPC) DestroyRPCClient(contextID string) {\n\tif mock := m.currentMocks(nil); mock != nil && mock.DestroyRPCClientMock != nil {\n\t\tmock.DestroyRPCClientMock(contextID)\n\t\treturn\n\t}\n}\n\n// CheckValidity implements the interface with a mock\nfunc (m *testRPC) CheckValidity(req *Request, secret string) bool {\n\tif mock := m.currentMocks(nil); mock != nil && mock.CheckValidityMock != nil {\n\t\treturn mock.CheckValidityMock(req, secret)\n\t}\n\treturn false\n}\n\n// StartServer implements the interface with a mock\nfunc (m *testRPC) StartServer(ctx context.Context, protocol string, path string, handler interface{}) error {\n\tif mock := m.currentMocks(nil); mock != nil && mock.StartServerMock != nil {\n\t\treturn mock.StartServerMock(ctx, protocol, path, handler)\n\t}\n\treturn nil\n}\n\n// ProcessMessage implements the interface with a mock\nfunc (m *testRPC) ProcessMessage(req *Request, secret string) bool {\n\tif mock := m.currentMocks(nil); mock != nil && mock.ProcessMessageMock != nil {\n\t\treturn 
mock.ProcessMessageMock(req, secret)\n\t}\n\treturn true\n}\n\n// ContextList implements the interface with a mock\nfunc (m *testRPC) ContextList() []string {\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.ContextListMock != nil {\n\t\treturn mock.ContextListMock()\n\t}\n\treturn []string{}\n}\n\n// currentMocks returns the list of current mocks\nfunc (m *testRPC) currentMocks(t *testing.T) *mockedMethods {\n\tm.lock.Lock()\n\tdefer m.lock.Unlock()\n\n\tif t == nil {\n\t\tt = m.currentTest\n\t} else {\n\t\tm.currentTest = t\n\t}\n\n\tmocks := m.mocks[t]\n\tif mocks == nil {\n\t\tmocks = &mockedMethods{}\n\t\tm.mocks[t] = mocks\n\t}\n\n\treturn mocks\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/rpcwrapper/rpc_handletest.go",
    "content": "package rpcwrapper\n\nimport (\n\t\"testing\"\n\t\"time\"\n)\n\n// Not mocking system libraries.\n// We create an actual RPC client/server without using the wrapper to test our implementation.\n\nconst (\n\tdefaultchannel = \"/tmp/test.sock\"\n)\n\n// TestNewRPCClient verifies RPC client behavior when no server is listening\nfunc TestNewRPCClient(t *testing.T) {\n\n\t// Test without an RPC server\n\trpchdl := NewRPCWrapper()\n\tresp := make(chan error, 1)\n\tgo asyncRpcclient(resp, rpchdl)\n\tselect {\n\tcase r := <-resp:\n\t\tif r == nil {\n\t\t\tt.Errorf(\"client connected in the absence of an RPC server\")\n\t\t}\n\tcase <-time.After(1 * time.Second):\n\t\tt.Errorf(\"RPCClient blocked and does not return\")\n\n\t}\n\terr := rpchdl.NewRPCClient(\"12345\", defaultchannel, \"mysecret\")\n\tif err == nil {\n\t\tt.Errorf(\"no error returned when there is no server\")\n\t}\n\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/rpcwrapper/rpc_testhelper.go",
    "content": "package rpcwrapper\n\n// asyncRpcclient attempts to create a new RPC client and reports the result on the resp channel.\nfunc asyncRpcclient(resp chan<- error, rpchdl *RPCWrapper) {\n\terr := rpchdl.NewRPCClient(\"12345\", defaultchannel, \"mysecret\")\n\tresp <- err\n}\n"
  },
  {
    "path": "controller/internal/enforcer/utils/rpcwrapper/types.go",
    "content": "package rpcwrapper\n\nimport (\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packettracing\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// CaptureType identifies the type of iptables implementation that should be used\ntype CaptureType int\n\nconst (\n\t// IPTables forces an IPTables implementation\n\tIPTables CaptureType = iota\n\t// IPSets forces an IPSet implementation\n\tIPSets\n)\n\n// PayloadType is the type of payload in the request.\ntype PayloadType int\n\n// Payload report types.\nconst (\n\tPacketReport PayloadType = iota\n\tDNSReport\n\tCounterReport\n\tPingReport\n\tConnectionExceptionReport\n)\n\n// Request is the request message for every RPC call; HashAuth carries the HMAC of the payload\ntype Request struct {\n\tHashAuth    []byte\n\tPayloadType PayloadType\n\tPayload     interface{}\n}\n\n// Exported constants from the package\nconst (\n\tSUCCESS = 0\n)\n\n//Response is the response for every RPC call. 
This is used to carry the status of the actual function call\n//made on the remote end\ntype Response struct {\n\tStatus  string\n\tPayload interface{} `json:\",omitempty\"`\n}\n\n//InitRequestPayload Payload for enforcer init request\ntype InitRequestPayload struct {\n\tMutualAuth             bool                   `json:\",omitempty\"`\n\tPacketLogs             bool                   `json:\",omitempty\"`\n\tValidity               time.Duration          `json:\",omitempty\"`\n\tServerID               string                 `json:\",omitempty\"`\n\tExternalIPCacheTimeout time.Duration          `json:\",omitempty\"`\n\tSecrets                secrets.RPCSecrets     `json:\",omitempty\"`\n\tConfiguration          *runtime.Configuration `json:\",omitempty\"`\n\tBinaryTokens           bool                   `json:\",omitempty\"`\n\tIsBPFEnabled           bool                   `json:\",omitempty\"`\n\tIPv6Enabled            bool                   `json:\",omitempty\"`\n\tIPTablesLockfile       string                 `json:\",omitempty\"`\n\tServiceMeshType        policy.ServiceMesh     `json:\",omitempty\"`\n}\n\n// UpdateSecretsPayload payload for the update secrets request to remote enforcers\ntype UpdateSecretsPayload struct {\n\tSecrets secrets.RPCSecrets `json:\",omitempty\"`\n}\n\n// EnforcePayload Payload for enforce request\ntype EnforcePayload struct {\n\tContextID string                 `json:\",omitempty\"`\n\tPolicy    *policy.PUPolicyPublic `json:\",omitempty\"`\n\tSecrets   secrets.RPCSecrets     `json:\",omitempty\"`\n}\n\n//UnEnforcePayload payload for unenforce request\ntype UnEnforcePayload struct {\n\tContextID string `json:\",omitempty\"`\n}\n\n//SetLogLevelPayload payload for set log level request\ntype SetLogLevelPayload struct {\n\tLevel constants.LogLevel `json:\",omitempty\"`\n}\n\n//StatsPayload is the payload carried by the stats reporting from the remote enforcer\ntype StatsPayload struct {\n\tFlows map[uint64]*collector.FlowRecord 
`json:\",omitempty\"`\n\tUsers map[string]*collector.UserRecord `json:\",omitempty\"`\n}\n\n// ReportPayload is the generic report from remote enforcer\ntype ReportPayload struct {\n\tType    PayloadType\n\tPayload interface{}\n}\n\n//SetTargetNetworksPayload carries the payload for target networks\ntype SetTargetNetworksPayload struct {\n\tConfiguration *runtime.Configuration `json:\",omitempty\"`\n}\n\n// EnableIPTablesPacketTracingPayLoad is the payload message to enable iptable trace in remote containers\ntype EnableIPTablesPacketTracingPayLoad struct {\n\tIPTablesPacketTracing bool          `json:\",omitempty\"`\n\tInterval              time.Duration `json:\",omitempty\"`\n\tContextID             string        `json:\",omitempty\"`\n}\n\n// EnableDatapathPacketTracingPayLoad is the payload to enable nfq packet tracing in the remote container\ntype EnableDatapathPacketTracingPayLoad struct {\n\tDirection packettracing.TracingDirection `json:\",omitempty\"`\n\tInterval  time.Duration                  `json:\",omitempty\"`\n\tContextID string                         `json:\",omitempty\"`\n}\n\n// TokenRequestPayload carries the payload for issuing tokens.\ntype TokenRequestPayload struct {\n\tContextID        string                  `json:\",omitempty\"`\n\tAudience         string                  `json:\",omitempty\"`\n\tValidity         time.Duration           `json:\",omitempty\"`\n\tServiceTokenType common.ServiceTokenType `json:\",omitempty\"`\n}\n\n// TokenResponsePayload returns the issued token.\ntype TokenResponsePayload struct {\n\tToken string `json:\",omitempty\"`\n}\n\n// PingPayload represents the payload for ping config.\ntype PingPayload struct {\n\tContextID  string\n\tPingConfig *policy.PingConfig\n}\n\n// DebugCollectPayload is the payload for the DebugCollect request.\ntype DebugCollectPayload struct {\n\tContextID    string\n\tPcapFilePath string\n\tPcapFilter   string\n\tCommandExec  string\n}\n\n// DebugCollectResponsePayload is the payload 
for the DebugCollect response.\ntype DebugCollectResponsePayload struct {\n\tContextID     string\n\tPID           int\n\tCommandOutput string\n}\n"
  },
  {
    "path": "controller/internal/processmon/interfaces.go",
    "content": "package processmon\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// ProcessManager interface exposes methods implemented by a processmon\ntype ProcessManager interface {\n\tKillRemoteEnforcer(contextID string, force bool) error\n\tLaunchRemoteEnforcer(contextID string, refPid int, refNsPath string, arg string, statssecret string, procMountPoint string, enforcerType policy.EnforcerType) (bool, error)\n}\n"
  },
  {
    "path": "controller/internal/processmon/mockprocessmon/mockprocessmon.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/internal/processmon/interfaces.go\n\n// Package mockprocessmon is a generated GoMock package.\npackage mockprocessmon\n\nimport (\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tpolicy \"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// MockProcessManager is a mock of ProcessManager interface\n// nolint\ntype MockProcessManager struct {\n\tctrl     *gomock.Controller\n\trecorder *MockProcessManagerMockRecorder\n}\n\n// MockProcessManagerMockRecorder is the mock recorder for MockProcessManager\n// nolint\ntype MockProcessManagerMockRecorder struct {\n\tmock *MockProcessManager\n}\n\n// NewMockProcessManager creates a new mock instance\n// nolint\nfunc NewMockProcessManager(ctrl *gomock.Controller) *MockProcessManager {\n\tmock := &MockProcessManager{ctrl: ctrl}\n\tmock.recorder = &MockProcessManagerMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockProcessManager) EXPECT() *MockProcessManagerMockRecorder {\n\treturn m.recorder\n}\n\n// KillRemoteEnforcer mocks base method\n// nolint\nfunc (m *MockProcessManager) KillRemoteEnforcer(contextID string, force bool) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"KillRemoteEnforcer\", contextID, force)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// KillRemoteEnforcer indicates an expected call of KillRemoteEnforcer\n// nolint\nfunc (mr *MockProcessManagerMockRecorder) KillRemoteEnforcer(contextID, force interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"KillRemoteEnforcer\", reflect.TypeOf((*MockProcessManager)(nil).KillRemoteEnforcer), contextID, force)\n}\n\n// LaunchRemoteEnforcer mocks base method\n// nolint\nfunc (m *MockProcessManager) LaunchRemoteEnforcer(contextID string, refPid int, refNsPath, arg, statssecret, procMountPoint string, 
enforcerType policy.EnforcerType) (bool, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"LaunchRemoteEnforcer\", contextID, refPid, refNsPath, arg, statssecret, procMountPoint, enforcerType)\n\tret0, _ := ret[0].(bool)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// LaunchRemoteEnforcer indicates an expected call of LaunchRemoteEnforcer\n// nolint\nfunc (mr *MockProcessManagerMockRecorder) LaunchRemoteEnforcer(contextID, refPid, refNsPath, arg, statssecret, procMountPoint, enforcerType interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"LaunchRemoteEnforcer\", reflect.TypeOf((*MockProcessManager)(nil).LaunchRemoteEnforcer), contextID, refPid, refNsPath, arg, statssecret, procMountPoint, enforcerType)\n}\n"
  },
  {
    "path": "controller/internal/processmon/processmon.go",
    "content": "// +build linux darwin\n\n// Package processmon is to manage and monitor remote enforcers.\npackage processmon\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/internal/logging/masterlog\"\n\t\"go.aporeto.io/enforcerd/internal/utils\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/env\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\tprocessMonitorCacheName = \"ProcessMonitorCache\"\n\tsecretLength            = 32\n)\n\nvar (\n\t// netNSPath holds the directory to ensure ip netns command works\n\tnetNSPath = \"/var/run/netns/\"\n\n\t// execCommand is used to fake exec commands in tests.\n\texecCommand = exec.Command\n)\n\n// RemoteMonitor is an instance of processMonitor\ntype RemoteMonitor struct {\n\t// netNSPath made configurable to enable running tests\n\tnetNSPath string\n\t// activeProcesses holds a cache of the currently active processes.\n\tactiveProcesses *cache.Cache\n\t// childExitStatus is a channel to monitor the exit status of processes.\n\tchildExitStatus chan exitStatus\n\t// logWithID is the ID for log files if logging to file.\n\tlogWithID bool\n\t// logLevel is the level of logs for remote command.\n\tlogLevel string\n\t// logFormat selects the format of the logs for the remote.\n\tlogFormat string\n\t// compressedTags instructs the remotes to use compressed tags.\n\tcompressedTags claimsheader.CompressionType\n\t// numQueues is 
the number of nfqueues\n\tnumQueues string\n\t// runtimeErrorChannel is the channel to communicate errors to the policy engine.\n\truntimeErrorChannel chan *policy.RuntimeError\n\t// rpc is the rpc client to communicate with the remotes.\n\trpc rpcwrapper.RPCClient\n\n\tsync.Mutex\n}\n\n// processInfo stores per process information\ntype processInfo struct {\n\tcontextID string\n\tprocess   *os.Process\n\tsync.Mutex\n}\n\n// exitStatus captures the exit status of a process\ntype exitStatus struct {\n\tpid int // exited PID\n\t// The contextID is optional and is primarily used by remote enforcer\n\t// processes to represent the namespace in which the process was running\n\tcontextID  string\n\texitStatus error\n}\n\n// New is a method to create a new remote process monitor.\nfunc New(ctx context.Context, p *env.RemoteParameters, c chan *policy.RuntimeError, r rpcwrapper.RPCClient, numQueues int) ProcessManager {\n\n\tnumQueuesStr := strconv.Itoa(numQueues)\n\n\tm := &RemoteMonitor{\n\t\tnetNSPath:           netNSPath,\n\t\tactiveProcesses:     cache.NewCache(processMonitorCacheName),\n\t\tchildExitStatus:     make(chan exitStatus, 100),\n\t\tlogWithID:           p.LogWithID,\n\t\tlogLevel:            p.LogLevel,\n\t\tlogFormat:           p.LogFormat,\n\t\tcompressedTags:      p.CompressedTags,\n\t\tnumQueues:           numQueuesStr,\n\t\truntimeErrorChannel: c,\n\t\trpc:                 r,\n\t}\n\n\tgo m.collectChildExitStatus(ctx)\n\n\treturn m\n}\n\n// LaunchRemoteEnforcer prepares the environment and launches the process. 
If the process\n// is already launched, it will notify the caller, so that it can avoid any\n// new initialization.\nfunc (p *RemoteMonitor) LaunchRemoteEnforcer(\n\tcontextID string,\n\trefPid int,\n\trefNSPath string,\n\targ string,\n\tstatsServerSecret string,\n\tprocMountPoint string,\n\tenforcerType policy.EnforcerType,\n) (bool, error) {\n\t// Locking here to get the processInfo to avoid race conditions\n\t// where multiple LaunchProcess happen for the same context.\n\tp.Lock()\n\tif _, err := p.activeProcesses.Get(contextID); err == nil {\n\t\tp.Unlock()\n\t\treturn false, nil\n\t}\n\n\tprocInfo := &processInfo{\n\t\tcontextID: contextID,\n\t}\n\tp.activeProcesses.AddOrUpdate(contextID, procInfo)\n\tp.Unlock()\n\n\t// We will lock the procInfo here, so a kill will have to wait and avoid any race.\n\tprocInfo.Lock()\n\tdefer procInfo.Unlock()\n\n\tvar err error\n\tdefer func() {\n\t\t// If we encountered an error we remove it from the cache. We will\n\t\t// not be monitoring it any more. Caller is responsible for re-launching.\n\t\tif err != nil {\n\t\t\tp.Lock()\n\t\t\tdefer p.Unlock()\n\t\t\tp.activeProcesses.Remove(contextID) // nolint errcheck\n\t\t}\n\t}()\n\n\t// We check if the NetNsPath was given as a parameter.\n\t// If it was we will use it. 
Otherwise we will determine it based on the PID.\n\tnsPath := refNSPath\n\tif refNSPath == \"\" {\n\t\tnsPath = filepath.Join(procMountPoint, strconv.Itoa(refPid), \"ns\", \"net\")\n\t}\n\n\tvar hoststat os.FileInfo\n\tif hoststat, err = os.Stat(filepath.Join(procMountPoint, \"1\", \"ns\", \"net\")); err != nil {\n\t\treturn false, err\n\t}\n\n\tvar pidstat os.FileInfo\n\tif pidstat, err = os.Stat(nsPath); err != nil {\n\t\treturn false, fmt.Errorf(\"container pid %d not found: %s\", refPid, err)\n\t}\n\n\tif pidstat.Sys().(*syscall.Stat_t).Ino == hoststat.Sys().(*syscall.Stat_t).Ino {\n\t\terr = fmt.Errorf(\"refused to launch a remote enforcer in host namespace\")\n\t\treturn false, err\n\t}\n\n\tif _, err = os.Stat(p.netNSPath); err != nil {\n\t\terr = os.MkdirAll(p.netNSPath, os.ModeDir)\n\t\tif err != nil {\n\t\t\tzap.L().Warn(\"could not create netns directory\", zap.Error(err))\n\t\t}\n\t}\n\n\t// A symlink is created from /var/run/netns/<context> to the NetNSPath\n\tcontextFile := filepath.Join(p.netNSPath, strings.Replace(contextID, \"/\", \"_\", -1))\n\t// Remove the context file if it already exists.\n\tif removeErr := os.RemoveAll(contextFile); removeErr != nil {\n\t\tzap.L().Warn(\"Failed to remove namespace link\",\n\t\t\tzap.Error(removeErr))\n\t}\n\n\t// NOTE: This is used by 'enforcer cleanup' command.\n\tif err = os.Symlink(nsPath, contextFile); err != nil {\n\t\tzap.L().Warn(\"Failed to create symlink for use by ip netns\", zap.Error(err))\n\t}\n\n\tvar randomkeystring string\n\trandomkeystring, err = crypto.GenerateRandomString(secretLength)\n\tif err != nil {\n\t\t// This is a more serious failure. 
We can't reliably control the remote enforcer\n\t\treturn false, fmt.Errorf(\"unable to generate secret: %s\", err)\n\t}\n\n\tnewEnvVars := p.getLaunchProcessEnvVars(\n\t\tprocMountPoint,\n\t\tcontextID,\n\t\trandomkeystring,\n\t\tstatsServerSecret,\n\t\trefPid,\n\t\trefNSPath,\n\t\tenforcerType,\n\t)\n\n\tcmd, cmdName, cmdArgs := getLaunchProcessCmd(arg)\n\tcmd.Env = append(os.Environ(), newEnvVars...)\n\n\tif reader, err := getCmdReader(cmd); err == nil {\n\t\tmasterlog.HandleRemoteLog(reader)\n\t} else {\n\t\tzap.L().Error(\"Failed to get reader for remote logs\")\n\t}\n\n\tif err = cmd.Start(); err != nil {\n\t\t// Cleanup resources\n\t\tif err1 := os.Remove(contextFile); err1 != nil {\n\t\t\tzap.L().Warn(\"Failed to clean up netns path\", zap.Error(err1))\n\t\t}\n\t\tzap.L().Error(\"Unable to start remote enforcer binary\", zap.Error(err))\n\t\treturn false, fmt.Errorf(\"Unable to start remote enforcer binary: %s\", err)\n\t}\n\n\tzap.L().Debug(\"Remote enforcer launched\",\n\t\tzap.String(\"command\", cmdName),\n\t\tzap.Strings(\"args\", cmdArgs),\n\t\tzap.Int(\"remotePid\", cmd.Process.Pid),\n\t)\n\n\tprocInfo.process = cmd.Process\n\tgo func() {\n\t\tstatus := cmd.Wait()\n\t\tp.childExitStatus <- exitStatus{\n\t\t\tpid:        cmd.Process.Pid,\n\t\t\tcontextID:  contextID,\n\t\t\texitStatus: status,\n\t\t}\n\t}()\n\tif err = p.rpc.NewRPCClient(contextID, contextID2SocketPath(contextID), randomkeystring); err != nil {\n\t\treturn false, fmt.Errorf(\"failed to establish rpc channel: %s\", err)\n\t}\n\n\treturn true, nil\n}\n\n// KillRemoteEnforcer sends an rpc to the process to exit, failing which it will kill the process\nfunc (p *RemoteMonitor) KillRemoteEnforcer(contextID string, force bool) error {\n\n\tp.Lock()\n\ts, err := p.activeProcesses.Get(contextID)\n\tif err != nil {\n\t\tp.Unlock()\n\t\treturn fmt.Errorf(\"unable to find process for context: %s\", contextID)\n\t}\n\n\tp.activeProcesses.Remove(contextID) // nolint 
errcheck\n\tp.Unlock()\n\n\tprocInfo, ok := s.(*processInfo)\n\tif !ok {\n\t\treturn fmt.Errorf(\"internal error - invalid type for process\")\n\t}\n\n\tprocInfo.Lock()\n\tdefer procInfo.Unlock()\n\n\tif procInfo.process == nil {\n\t\treturn fmt.Errorf(\"cannot find process for context: %s\", contextID)\n\t}\n\n\treq := &rpcwrapper.Request{\n\t\tPayload: procInfo.process.Pid,\n\t}\n\tresp := &rpcwrapper.Response{}\n\n\tc := make(chan error, 1)\n\tgo func() {\n\t\tc <- p.rpc.RemoteCall(contextID, remoteenforcer.EnforcerExit, req, resp)\n\t}()\n\n\tselect {\n\tcase err := <-c:\n\t\tif err != nil && force {\n\t\t\tzap.L().Error(\"Failed to stop gracefully - forcing kill\",\n\t\t\t\tzap.Error(err))\n\t\t\tprocInfo.process.Kill() // nolint\n\t\t}\n\tcase <-time.After(5 * time.Second):\n\t\tif force {\n\t\t\tzap.L().Error(\"Time out on terminating remote enforcer - forcing kill\")\n\t\t\tprocInfo.process.Kill() // nolint\n\t\t}\n\t}\n\n\tp.rpc.DestroyRPCClient(contextID)\n\n\treturn nil\n}\n\n// collectChildExitStatus is an async function which collects status for all launched child processes\nfunc (p *RemoteMonitor) collectChildExitStatus(ctx context.Context) {\n\n\tdefer func() {\n\t\tif r := recover(); r != nil {\n\t\t\tzap.L().Error(\"Policy engine has possibly closed the channel\")\n\t\t\treturn\n\t\t}\n\t}()\n\n\tfor {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn\n\n\t\tcase es := <-p.childExitStatus:\n\n\t\t\t// if we've already been removed from the activeProcesses then it's an expected exit.\n\t\t\tif err := p.activeProcesses.Remove(es.contextID); err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// ... otherwise the exit was unexpected. 
Stop the RPC client connection to\n\t\t\t// the remote enforcer and generate a runtime alarm\n\t\t\tp.rpc.DestroyRPCClient(es.contextID)\n\n\t\t\tif p.runtimeErrorChannel != nil {\n\t\t\t\tif es.exitStatus != nil {\n\t\t\t\t\tzap.L().Error(\"Remote enforcer exited with an error\",\n\t\t\t\t\t\tzap.String(\"nativeContextID\", es.contextID),\n\t\t\t\t\t\tzap.Int(\"childPid\", es.pid),\n\t\t\t\t\t\tzap.Error(es.exitStatus),\n\t\t\t\t\t)\n\t\t\t\t\tp.runtimeErrorChannel <- &policy.RuntimeError{\n\t\t\t\t\t\tContextID: es.contextID,\n\t\t\t\t\t\tError:     fmt.Errorf(\"remote enforcer terminated: childPid: %d, contextID: %s, exitStatus: %s\", es.pid, es.contextID, es.exitStatus.Error()),\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tzap.L().Warn(\"Remote enforcer exited\",\n\t\t\t\t\t\tzap.String(\"nativeContextID\", es.contextID),\n\t\t\t\t\t\tzap.Int(\"childPid\", es.pid),\n\t\t\t\t\t)\n\t\t\t\t\tp.runtimeErrorChannel <- &policy.RuntimeError{\n\t\t\t\t\t\tContextID: es.contextID,\n\t\t\t\t\t\tError:     fmt.Errorf(\"remote enforcer terminated: childPid: %d, contextID: %s\", es.pid, es.contextID),\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc getCmdReader(cmd *exec.Cmd) (io.ReadCloser, error) {\n\n\treader, err := cmd.StdoutPipe()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tcmd.Stderr = cmd.Stdout\n\n\treturn reader, nil\n}\n\n// getLaunchProcessCmd returns the os.exec object used to launch the remote enforcerd\nfunc getLaunchProcessCmd(arg string) (cmd *exec.Cmd, cmdName string, cmdArgs []string) {\n\tcmdArgs = strings.Fields(arg)\n\tcmdName = constants.RemoteEnforcerPath\n\treturn execCommand(cmdName, cmdArgs...), cmdName, cmdArgs\n}\n\n// getLaunchProcessEnvVars returns a slice of env variable strings where each string is in the form of key=value\nfunc (p *RemoteMonitor) getLaunchProcessEnvVars(\n\tprocMountPoint string,\n\tcontextID string,\n\trandomkeystring string,\n\tstatsServerSecret string,\n\trefPid int,\n\trefNSPath string,\n\tenforcerType 
policy.EnforcerType,\n) []string {\n\n\tnewEnvVars := []string{\n\t\tconstants.EnvMountPoint + \"=\" + procMountPoint,\n\t\tconstants.EnvContextSocket + \"=\" + contextID2SocketPath(contextID),\n\t\tconstants.EnvStatsChannel + \"=\" + constants.StatsChannel,\n\t\tconstants.EnvDebugChannel + \"=\" + constants.DebugChannel,\n\t\tconstants.EnvRPCClientSecret + \"=\" + randomkeystring,\n\t\tconstants.EnvStatsSecret + \"=\" + statsServerSecret,\n\t\tconstants.EnvContainerPID + \"=\" + strconv.Itoa(refPid),\n\t\tconstants.EnvLogLevel + \"=\" + p.logLevel,\n\t\tconstants.EnvLogFormat + \"=\" + p.logFormat,\n\t\tconstants.EnvEnforcerType + \"=\" + enforcerType.String(),\n\t\tconstants.EnvEnforcerdToolsDir + \"=\" + utils.GetEnforcerdStartupDir(),\n\t\tconstants.EnvEnforcerdNFQueues + \"=\" + p.numQueues,\n\t}\n\n\tnewEnvVars = append(newEnvVars, constants.EnvCompressedTags+\"=\"+strconv.Itoa(int(p.compressedTags)))\n\n\tif p.logWithID {\n\t\tnewEnvVars = append(newEnvVars, constants.EnvLogID+\"=\"+contextID)\n\t}\n\n\t// If the PURuntime specified a NSPath, then it is added as a new env var also.\n\tif refNSPath != \"\" {\n\t\tnewEnvVars = append(newEnvVars, constants.EnvNSPath+\"=\"+refNSPath)\n\t}\n\n\treturn newEnvVars\n}\n\n// contextID2SocketPath returns the socket path to use for a given context\nfunc contextID2SocketPath(contextID string) string {\n\n\tif contextID == \"\" {\n\t\tzap.L().Fatal(\"contextID is empty\")\n\t}\n\n\thash, err := policy.Fnv32Hash(contextID)\n\tif err != nil {\n\t\tzap.L().Fatal(\"unable to hash contextID\", zap.Error(err))\n\t}\n\n\tpath := filepath.Join(constants.SocketsPath, strings.Replace(hash, \"/\", \"_\", -1)+\".sock\")\n\n\t// Validation enforced by Linux.\n\t// Ref https://man7.org/linux/man-pages/man7/unix.7.html.\n\tif len(path) > 107 {\n\t\tzap.L().Fatal(\"socket path too long\", zap.String(\"path\", path))\n\t}\n\n\treturn path\n}\n"
  },
  {
    "path": "controller/internal/processmon/processmon_linux_test.go",
    "content": "// +build linux,!rhel6\n\npackage processmon\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/env\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nconst (\n\ttestDirBase = \"/tmp\"\n)\n\nfunc TestLaunchProcess(t *testing.T) {\n\n\tConvey(\"Given a test setup for launch\", t, func() {\n\t\t// Use refPid 1 (init), which is guaranteed to be there.\n\t\t// The normal case should launch a process.\n\n\t\trefPid := 1\n\t\tdir, _ := os.Getwd()\n\t\trefNSPath := \"\"\n\n\t\terr := os.MkdirAll(\"/tmp/1/ns/net\", os.ModePerm)\n\t\tSo(err, ShouldBeNil)\n\t\tdefer func() {\n\t\t\tos.RemoveAll(\"/tmp/1/ns/net\") // nolint errcheck\n\t\t}()\n\n\t\terr = os.Chdir(\"testbinary\")\n\t\tSo(err, ShouldBeNil)\n\t\tdefer os.Chdir(dir) // nolint\n\n\t\tbuildCmd := fmt.Sprintf(\"GOOS=%s GOARCH=%s go build\", runtime.GOOS, runtime.GOARCH)\n\n\t\terr = exec.Command(\"bash\", \"-c\", buildCmd).Run()\n\t\tSo(err, ShouldBeNil)\n\n\t\terr = exec.Command(\"cp\", \"-p\", filepath.Join(dir, \"testbinary/testbinary\"), testDirBase).Run()\n\t\tSo(err, ShouldBeNil)\n\n\t\tctx, cancel := context.WithCancel(context.TODO())\n\t\tdefer cancel()\n\n\t\terrChannel := make(chan *policy.RuntimeError)\n\t\tdefer cleanupErrChannel(errChannel)\n\n\t\trpchdl := rpcwrapper.NewTestRPCClient()\n\t\tcontextID := \"pu1\"\n\n\t\tpm := New(ctx, &env.RemoteParameters{}, errChannel, rpchdl, 0)\n\t\tp, ok := pm.(*RemoteMonitor)\n\t\tSo(ok, ShouldBeTrue)\n\n\t\tConvey(\"if the process is already activated, then it should return with initialize false and no error\", func() {\n\t\t\tp.activeProcesses.AddOrUpdate(contextID, &processInfo{})\n\n\t\t\tinitialize, err := p.LaunchRemoteEnforcer(contextID, refPid, refNSPath, \"\", \"mysecret\", 
testDirBase, policy.EnforcerMapping)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(initialize, ShouldBeFalse)\n\t\t})\n\n\t\tConvey(\"if the process is not already activated and stat fails, it should error and cleanup\", func() {\n\t\t\tinitialize, err := p.LaunchRemoteEnforcer(contextID, refPid, \"\", \"\", \"my secret\", \"/badpath\", policy.EnforcerMapping)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(initialize, ShouldBeFalse)\n\n\t\t\t_, err = p.activeProcesses.Get(contextID)\n\t\t\tSo(err, ShouldNotBeNil)\n\n\t\t})\n\n\t\tConvey(\"if the process is not already activated and pid stat fails, it should error and cleanup\", func() {\n\t\t\tinitialize, err := p.LaunchRemoteEnforcer(contextID, 10000, refNSPath, \"\", \"my secret\", \"/badpath\", policy.EnforcerMapping)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(initialize, ShouldBeFalse)\n\n\t\t\t_, err = p.activeProcesses.Get(contextID)\n\t\t\tSo(err, ShouldNotBeNil)\n\n\t\t})\n\n\t\tConvey(\"if the process is not already activated and this is the host namespace, it should fail and cleanup\", func() {\n\t\t\trpchdl.MockGetRPCClient(t, func(string) (*rpcwrapper.RPCHdl, error) {\n\t\t\t\treturn nil, nil\n\t\t\t})\n\t\t\tinitialize, err := p.LaunchRemoteEnforcer(contextID, refPid, refNSPath, \"\", \"my secret\", testDirBase, policy.EnforcerMapping)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(initialize, ShouldBeFalse)\n\n\t\t\t_, err = p.activeProcesses.Get(contextID)\n\t\t\tSo(err, ShouldNotBeNil)\n\n\t\t})\n\n\t\tConvey(\"if the process is not already activated and the namespace is there\", func() {\n\t\t\trpchdl.MockGetRPCClient(t, func(string) (*rpcwrapper.RPCHdl, error) {\n\t\t\t\treturn nil, nil\n\t\t\t})\n\t\t\tpid := launchContainer(testDirBase)\n\t\t\tdefer killContainer()\n\n\t\t\texecCommand = fakeExecCommand\n\t\t\tinitialize, err := p.LaunchRemoteEnforcer(contextID, pid, refNSPath, \"\", \"my secret\", testDirBase, policy.EnforcerMapping)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(initialize, ShouldBeTrue)\n\n\t\t\t_, err = 
p.activeProcesses.Get(contextID)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t})\n\n\t})\n\n}\n"
  },
  {
    "path": "controller/internal/processmon/processmon_test.go",
    "content": "// +build !windows\n\npackage processmon\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"math/rand\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/env\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nfunc launchContainer(path string) int {\n\n\tvar out, out2 bytes.Buffer\n\truncmd := exec.Command(\"docker\", \"run\", \"-d\", \"--name=testprocessmon\", \"nginx\")\n\truncmd.Stdout = &out\n\truncmd.Run() // nolint: errcheck\n\truncmd1 := exec.Command(\"docker\", \"inspect\", \"testprocessmon\")\n\truncmd1.Stdout = &out2\n\truncmd1.Run() // nolint: errcheck\n\treader := bytes.NewReader(out2.Bytes())\n\tscanner := bufio.NewScanner(reader)\n\tfor scanner.Scan() {\n\t\tif strings.Contains(scanner.Text(), \"Pid\") {\n\t\t\ta := strings.Split(scanner.Text(), \":\")[1]\n\t\t\tpid, _ := strconv.Atoi(strings.TrimSpace(a[:len(a)-1]))\n\t\t\tif err := os.MkdirAll(filepath.Join(path, fmt.Sprintf(\"%d/ns/net\", pid)), os.ModePerm); err != nil {\n\t\t\t\treturn 0\n\t\t\t}\n\t\t\treturn pid\n\t\t}\n\t}\n\treturn 0\n}\n\nfunc killContainer() {\n\n\tkillcmd := exec.Command(\"docker\", \"rm\", \"-f\", \"testprocessmon\")\n\tkillcmd.Run() // nolint: errcheck\n}\n\nfunc fakeExecCommand(command string, args ...string) *exec.Cmd {\n\tcs := []string{\"-test.run=TestCmdHelper\", \"--\", command}\n\tcs = append(cs, args...)\n\tcmd := exec.Command(os.Args[0], cs...)\n\tcmd.Env = []string{\"GO_WANT_HELPER_PROCESS=1\"}\n\treturn cmd\n}\n\n// cleanup and close the errChannel properly to prevent data race\nfunc cleanupErrChannel(errChannel chan *policy.RuntimeError) {\nforLoop:\n\tfor {\n\t\tselect {\n\t\tcase <-errChannel:\n\t\t\tbreak forLoop\n\t\tcase <-time.After(2 * time.Second):\n\t\t\tbreak 
forLoop\n\t\t}\n\t}\n\tclose(errChannel)\n}\n\nfunc TestCmdHelper(t *testing.T) {\n\tif os.Getenv(\"GO_WANT_HELPER_PROCESS\") != \"1\" {\n\t\treturn\n\t}\n\n\tos.Exit(0)\n}\n\nfunc Test_KillRemoteEnforcer(t *testing.T) {\n\tConvey(\"Given a test setup for kill \", t, func() {\n\t\tctx, cancel := context.WithCancel(context.TODO())\n\t\tdefer cancel()\n\n\t\terrChannel := make(chan *policy.RuntimeError)\n\t\tdefer cleanupErrChannel(errChannel)\n\n\t\trpchdl := rpcwrapper.NewTestRPCClient()\n\t\tcontextID := \"abcd\"\n\n\t\tpm := New(ctx, &env.RemoteParameters{}, errChannel, rpchdl, 0)\n\t\tp, ok := pm.(*RemoteMonitor)\n\t\tSo(ok, ShouldBeTrue)\n\n\t\tConvey(\"if the process is not already activated, I should get an error\", func() {\n\t\t\terr := p.KillRemoteEnforcer(contextID, false)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"if the process info is empty, it should error and should remove it from cache\", func() {\n\t\t\tp.activeProcesses.AddOrUpdate(contextID, &processInfo{\n\t\t\t\tcontextID: contextID,\n\t\t\t})\n\n\t\t\terr := p.KillRemoteEnforcer(contextID, false)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t_, err = p.activeProcesses.Get(contextID)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"if the RPC call succeeds, it should complete with no errors\", func() {\n\t\t\tp.activeProcesses.AddOrUpdate(contextID, &processInfo{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tprocess:   &os.Process{},\n\t\t\t})\n\n\t\t\trpchdl.MockRemoteCall(t, func(contextID string, name string, req *rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\t\t\t\treturn nil\n\t\t\t})\n\t\t\trpchdl.MockDestroyRPCClient(t, func(string) {\n\t\t\t})\n\n\t\t\terr := p.KillRemoteEnforcer(contextID, false)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\t_, err = p.activeProcesses.Get(contextID)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"if the RPC call fails, it should complete with no errors after issuing a kill and it's not force\", func() 
{\n\t\t\tp.activeProcesses.AddOrUpdate(contextID, &processInfo{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tprocess:   &os.Process{},\n\t\t\t})\n\n\t\t\trpchdl.MockRemoteCall(t, func(contextID string, name string, req *rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\t\t\t\treturn fmt.Errorf(\"some error\")\n\t\t\t})\n\t\t\trpchdl.MockDestroyRPCClient(t, func(string) {\n\t\t\t})\n\n\t\t\terr := p.KillRemoteEnforcer(contextID, false)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\t_, err = p.activeProcesses.Get(contextID)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"if the RPC call fails, it should complete with no errors after issuing a kill and it is force\", func() {\n\t\t\tp.activeProcesses.AddOrUpdate(contextID, &processInfo{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tprocess:   &os.Process{},\n\t\t\t})\n\n\t\t\trpchdl.MockRemoteCall(t, func(contextID string, name string, req *rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\t\t\t\treturn fmt.Errorf(\"some error\")\n\t\t\t})\n\t\t\trpchdl.MockDestroyRPCClient(t, func(string) {\n\t\t\t})\n\n\t\t\terr := p.KillRemoteEnforcer(contextID, false)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\t_, err = p.activeProcesses.Get(contextID)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"if the RPC call times out, it should complete with no errors\", func() {\n\t\t\tp.activeProcesses.AddOrUpdate(contextID, &processInfo{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tprocess:   &os.Process{},\n\t\t\t})\n\n\t\t\trpchdl.MockRemoteCall(t, func(contextID string, name string, req *rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\t\t\t\ttime.Sleep(10 * time.Second)\n\t\t\t\treturn fmt.Errorf(\"time-out-error\")\n\t\t\t})\n\t\t\trpchdl.MockDestroyRPCClient(t, func(string) {\n\t\t\t})\n\n\t\t\terr := p.KillRemoteEnforcer(contextID, false)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\t_, err = p.activeProcesses.Get(contextID)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t})\n}\n\nfunc Test_CollectExitStatus(t *testing.T) {\n\tConvey(\"Given a test 
setup for collecting exit status \", t, func() {\n\t\tctx, cancel := context.WithCancel(context.TODO())\n\t\tdefer cancel()\n\n\t\terrChannel := make(chan *policy.RuntimeError)\n\t\tdefer cleanupErrChannel(errChannel)\n\n\t\trand.Seed(time.Now().UnixNano())\n\n\t\trpchdl := rpcwrapper.NewTestRPCClient()\n\t\tpid := rand.Intn(299999) + 1\n\t\tcontextID := strconv.Itoa(rand.Intn(1000000000) + 1)\n\n\t\tpm := New(ctx, &env.RemoteParameters{}, errChannel, rpchdl, 0)\n\t\tp, ok := pm.(*RemoteMonitor)\n\t\tSo(ok, ShouldBeTrue)\n\n\t\tConvey(\"When I call collectChildExitStatus in the background, I should get the errors in the channel\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.TODO())\n\t\t\tdefer cancel()\n\t\t\tp.activeProcesses.AddOrUpdate(contextID, &processInfo{\n\t\t\t\tcontextID: contextID,\n\t\t\t\tprocess:   &os.Process{},\n\t\t\t})\n\n\t\t\tgo p.collectChildExitStatus(ctx)\n\n\t\t\tp.childExitStatus <- exitStatus{\n\t\t\t\tpid:        pid,\n\t\t\t\tcontextID:  contextID,\n\t\t\t\texitStatus: fmt.Errorf(\"process error\"),\n\t\t\t}\n\n\t\t\treceivedError := <-errChannel\n\n\t\t\tSo(receivedError, ShouldNotBeNil)\n\t\t\tSo(receivedError.ContextID, ShouldEqual, contextID)\n\t\t\tSo(receivedError.Error, ShouldNotBeNil)\n\t\t\tSo(receivedError.Error.Error(), ShouldResemble, \"remote enforcer terminated: childPid: \"+strconv.Itoa(pid)+\", contextID: \"+contextID+\", exitStatus: process error\")\n\t\t\t_, err := p.activeProcesses.Get(contextID)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t})\n}\n"
  },
  {
    "path": "controller/internal/processmon/processmon_windows.go",
    "content": "// +build windows\n\n// Package processmon is to manage and monitor remote enforcers.\npackage processmon\n\nimport (\n\t\"context\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/env\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\ntype remoteMonitor struct {\n}\n\n// New is a method to create a new remote process monitor.\nfunc New(ctx context.Context, p *env.RemoteParameters, c chan *policy.RuntimeError, r rpcwrapper.RPCClient, numNfqueues int) ProcessManager {\n\treturn &remoteMonitor{}\n}\n\n// LaunchRemoteEnforcer for Windows: does nothing right now\nfunc (p *remoteMonitor) LaunchRemoteEnforcer(\n\tcontextID string,\n\trefPid int,\n\trefNSPath string,\n\targ string,\n\tstatsServerSecret string,\n\tprocMountPoint string,\n\tenforcerType policy.EnforcerType,\n) (bool, error) {\n\treturn true, nil\n}\n\n// KillRemoteEnforcer for Windows: does nothing right now\nfunc (p *remoteMonitor) KillRemoteEnforcer(contextID string, force bool) error {\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/processmon/testbinary/testbinary.go",
    "content": "package main\n\nimport \"fmt\"\n\nfunc main() {\n\tfmt.Println(\"Test Binary Executed\")\n}\n"
  },
  {
    "path": "controller/internal/supervisor/interfaces.go",
    "content": "package supervisor\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// A Supervisor implements the node control plane that captures the packets.\ntype Supervisor interface {\n\n\t// Supervise adds a new supervised processing unit.\n\tSupervise(contextID string, puInfo *policy.PUInfo) error\n\n\t// Unsupervise unsupervises the given PU\n\tUnsupervise(contextID string) error\n\n\t// Run starts the Supervisor.\n\tRun(ctx context.Context) error\n\n\t// SetTargetNetworks sets the target networks of the supervisor\n\tSetTargetNetworks(cfg *runtime.Configuration) error\n\n\t// CleanUp requests the supervisor to clean up all ACLs\n\tCleanUp() error\n\n\t// EnableIPTablesPacketTracing enables ip tables packet tracing\n\tEnableIPTablesPacketTracing(ctx context.Context, contextID string, interval time.Duration) error\n}\n\n// Implementor is the interface of the implementation based on iptables, ipsets, remote etc\ntype Implementor interface {\n\n\t// ConfigureRules configures the rules in the ACLs and datapath\n\tConfigureRules(version int, contextID string, containerInfo *policy.PUInfo) error\n\n\t// UpdateRules updates the rules with a new version\n\tUpdateRules(version int, contextID string, containerInfo *policy.PUInfo, oldContainerInfo *policy.PUInfo) error\n\n\t// DeleteRules deletes the rules for the given PU\n\tDeleteRules(version int, context string, tcpPorts, udpPorts string, mark string, uid string, containerInfo *policy.PUInfo) error\n\n\t// SetTargetNetworks sets the target networks of the supervisor\n\tSetTargetNetworks(cfg *runtime.Configuration) error\n\n\t// Run initializes any defaults\n\tRun(ctx context.Context) error\n\n\t// CleanUp requests the implementor to clean up all ACLs\n\tCleanUp() error\n\n\t// ACLProvider returns the ACL provider used by the implementor\n\tACLProvider() 
[]provider.IptablesProvider\n\n\t// CreateCustomRulesChain creates a custom rules chain if it doesn't exist\n\tCreateCustomRulesChain() error\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/acls.go",
    "content": "package iptablesctrl\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"strconv\"\n\t\"strings\"\n\t\"text/template\"\n\n\t\"github.com/kballard/go-shellquote\"\n\t\"github.com/mattn/go-shellwords\"\n\tmgrconstants \"go.aporeto.io/cns-agent-mgr/pkg/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\tmarkconstants \"go.aporeto.io/enforcerd/trireme-lib/utils/constants\"\n\t\"go.aporeto.io/gaia/protocols\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\tnumPackets   = \"100\"\n\tinitialCount = \"99\"\n)\n\nvar (\n\tcnsAgentBootPid    int\n\tcnsAgentMgrPid     int\n\tgetEnforcerPID     = func() int { return os.Getpid() }\n\tgetCnsAgentMgrPID  = func() int { return cnsAgentMgrPid }\n\tgetCnsAgentBootPID = func() int { return cnsAgentBootPid }\n)\n\nfunc init() {\n\tcnsAgentBootPid = -1\n\tif mgrconstants.IsManagedByCnsAgentManager() {\n\t\tcnsAgentBootPid = discoverCnsAgentBootPID()\n\t}\n\tcnsAgentMgrPid = -1\n\tif mgrconstants.IsManagedByCnsAgentManager() {\n\t\tcnsAgentMgrPid = os.Getppid()\n\t}\n}\n\ntype rulesInfo struct {\n\tRejectObserveApply    [][]string\n\tRejectNotObserved     [][]string\n\tRejectObserveContinue [][]string\n\n\tAcceptObserveApply    [][]string\n\tAcceptNotObserved     [][]string\n\tAcceptObserveContinue [][]string\n\tReverseRules          [][]string\n}\n\n// cgroupChainRules provides the rules for redirecting to a processing unit\n// specific chain for Linux processes, based on the cgroups and net_cls\n// configuration.\nfunc (i *iptables) cgroupChainRules(cfg *ACLInfo) [][]string {\n\n\tif legacyRules, ok := i.legacyPuChainRules(cfg); ok {\n\t\treturn legacyRules\n\t}\n\n\ttmpl := template.Must(template.New(cgroupCaptureTemplate).Funcs(template.FuncMap{\n\t\t\"isUDPPorts\": func() bool {\n\t\t\treturn cfg.UDPPorts != \"0\"\n\t\t},\n\t\t\"isTCPPorts\": func() bool {\n\t\t\treturn cfg.TCPPorts != 
\"0\"\n\t\t},\n\t\t\"isHostPU\": func() bool {\n\t\t\treturn cfg.AppSection == HostModeOutput && cfg.NetSection == HostModeInput\n\t\t},\n\t\t\"isProcessPU\": func() bool {\n\t\t\treturn cfg.PUType == common.LinuxProcessPU || cfg.PUType == common.WindowsProcessPU\n\t\t},\n\t\t\"isIPV6Enabled\": func() bool {\n\t\t\t// icmpv6 rules are needed for ipv6\n\t\t\treturn cfg.needICMPRules\n\t\t},\n\t}).Parse(cgroupCaptureTemplate))\n\n\trules, err := extractRulesFromTemplate(tmpl, cfg)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to extract rules\", zap.Error(err))\n\t}\n\trules = append(rules, i.proxyRules(cfg)...)\n\trules = append(rules, i.proxyDNSRules(cfg)...)\n\treturn rules\n}\n\n// containerChainRules provides the list of rules that are used to send traffic to\n// a particular chain\nfunc (i *iptables) containerChainRules(cfg *ACLInfo) [][]string {\n\ttmpl := template.Must(template.New(containerChainTemplate).Parse(containerChainTemplate))\n\n\trules, err := extractRulesFromTemplate(tmpl, cfg)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to extract rules\", zap.Error(err))\n\t}\n\trules = append(rules, i.istioRules(cfg)...)\n\tif i.serviceMeshType == policy.None {\n\t\trules = append(rules, i.proxyRules(cfg)...)\n\t}\n\trules = append(rules, i.proxyDNSRules(cfg)...)\n\treturn rules\n}\n\nfunc (i *iptables) istioRules(cfg *ACLInfo) [][]string {\n\tif i.serviceMeshType == policy.Istio {\n\t\ttmpl := template.Must(template.New(istioChainTemplate).Funcs(template.FuncMap{\n\t\t\t\"IstioUID\": func() string {\n\t\t\t\treturn IstioUID\n\t\t\t},\n\t\t}).Parse(istioChainTemplate))\n\t\trules, err := extractRulesFromTemplate(tmpl, cfg)\n\t\tif err != nil {\n\t\t\tzap.L().Warn(\"unable to extract rules\", zap.Error(err))\n\t\t}\n\t\tzap.L().Debug(\"configured Istio: \", zap.Any(\" rules \", rules))\n\t\treturn rules\n\t}\n\treturn nil\n}\n\n// proxyRules creates the rules that allow traffic to go through if it is handled\n// by the services.\nfunc (i *iptables) 
proxyRules(cfg *ACLInfo) [][]string {\n\n\ttmpl := template.Must(template.New(proxyChainTemplate).Funcs(template.FuncMap{\n\t\t\"isCgroupSet\": func() bool {\n\t\t\treturn cfg.CgroupMark != \"\"\n\t\t},\n\t\t\"enableDNSProxy\": func() bool {\n\t\t\treturn cfg.DNSServerIP != \"\"\n\t\t},\n\t}).Parse(proxyChainTemplate))\n\n\trules, err := extractRulesFromTemplate(tmpl, cfg)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to extract rules\", zap.Error(err))\n\t}\n\treturn rules\n}\n\nfunc (i *iptables) proxyDNSRules(cfg *ACLInfo) [][]string {\n\ttmpl := template.Must(template.New(proxyDNSChainTemplate).Funcs(template.FuncMap{\n\t\t\"isCgroupSet\": func() bool {\n\t\t\treturn cfg.CgroupMark != \"\"\n\t\t},\n\t\t\"enableDNSProxy\": func() bool {\n\t\t\treturn cfg.DNSServerIP != \"\"\n\t\t},\n\t}).Parse(proxyDNSChainTemplate))\n\n\trules, err := extractRulesFromTemplate(tmpl, cfg)\n\tif err != nil {\n\t\tzap.L().Warn(\"proxyDNSRules unable to extract rules\", zap.Error(err))\n\t}\n\treturn rules\n}\n\n// extractPreNetworkACLRules creates the rules that come before ACL rules.\nfunc (i *iptables) extractPreNetworkACLRules(cfg *ACLInfo) [][]string {\n\n\ttmpl := template.Must(template.New(preNetworkACLRuleTemplate).Funcs(template.FuncMap{\n\t\t\"Increment\": func(i int) int {\n\t\t\treturn i + 1\n\t\t},\n\t}).Parse(preNetworkACLRuleTemplate))\n\n\trules, err := extractRulesFromTemplate(tmpl, cfg)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to extract rules\", zap.Error(err))\n\t}\n\treturn rules\n}\n\n// trapRules provides the packet capture rules that are defined for each processing unit.\nfunc (i *iptables) trapRules(cfg *ACLInfo, isHostPU bool, appAnyRules, netAnyRules [][]string) [][]string {\n\n\toutputMark, _ := strconv.Atoi(cfg.PacketMark)\n\toutputMark = outputMark * cfg.NumNFQueues\n\n\ttmpl := template.Must(template.New(packetCaptureTemplate).Funcs(template.FuncMap{\n\t\t\"windowsAllIpsetName\": func() string {\n\t\t\treturn i.ipsetmanager.GetIPsetPrefix() + 
\"WindowsAllIPs\"\n\t\t},\n\t\t\"packetMark\": func() string {\n\t\t\toutputMark, _ := strconv.Atoi(cfg.PacketMark)\n\t\t\toutputMark = outputMark * cfg.NumNFQueues\n\t\t\treturn strconv.Itoa(outputMark)\n\t\t},\n\t\t\"getOutputMark\": func() string {\n\t\t\tm := strconv.Itoa(outputMark)\n\t\t\toutputMark++\n\t\t\treturn m\n\t\t},\n\t\t\"queueBalance\": func() string {\n\t\t\treturn fmt.Sprintf(\"0:%d\", cfg.NumNFQueues-1)\n\t\t},\n\t\t\"isNotContainerPU\": func() bool {\n\t\t\treturn cfg.PUType != common.ContainerPU\n\t\t},\n\t\t\"needDnsRules\": func() bool {\n\t\t\treturn isHostPU\n\t\t},\n\t\t\"needICMP\": func() bool {\n\t\t\treturn cfg.needICMPRules\n\t\t},\n\t\t\"appAnyRules\": func() [][]string {\n\t\t\treturn appAnyRules\n\t\t},\n\t\t\"netAnyRules\": func() [][]string {\n\t\t\treturn netAnyRules\n\t\t},\n\t\t\"joinRule\": func(rule []string) string {\n\t\t\treturn strings.Join(rule, \" \")\n\t\t},\n\t\t\"isBPFEnabled\": func() bool {\n\t\t\treturn i.bpf != nil\n\t\t},\n\t\t\"isHostPU\": func() bool {\n\t\t\treturn isHostPU\n\t\t},\n\t\t\"Increment\": func(i int) int {\n\t\t\treturn i + 1\n\t\t},\n\t\t\"isAppDrop\": func() bool {\n\t\t\treturn strings.EqualFold(cfg.AppDefaultAction, \"DROP\")\n\t\t},\n\t\t\"isNetDrop\": func() bool {\n\t\t\treturn strings.EqualFold(cfg.NetDefaultAction, \"DROP\")\n\t\t},\n\t}).Parse(packetCaptureTemplate))\n\n\trules, err := extractRulesFromTemplate(tmpl, cfg)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to extract rules\", zap.Error(err))\n\t}\n\n\treturn rules\n}\n\n// getProtocolAnyRules returns app any acls and net any acls.\nfunc (i *iptables) getProtocolAnyRules(cfg *ACLInfo, appRules, netRules []aclIPset) ([][]string, [][]string, error) {\n\n\tappAnyRules, _ := extractProtocolAnyRules(appRules)\n\tnetAnyRules, _ := extractProtocolAnyRules(netRules)\n\n\tsortedAppAnyRulesBuckets := i.sortACLsInBuckets(cfg, cfg.AppChain, cfg.NetChain, appAnyRules, true)\n\tsortedNetAnyRulesBuckets := i.sortACLsInBuckets(cfg, 
cfg.NetChain, cfg.AppChain, netAnyRules, false)\n\n\tsortedAppAnyRules, err := extractACLsFromTemplate(sortedAppAnyRulesBuckets)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"unable to extract app protocol any rules: %v\", err)\n\t}\n\n\tsortedNetAnyRules, err := extractACLsFromTemplate(sortedNetAnyRulesBuckets)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"unable to extract net protocol any rules: %v\", err)\n\t}\n\n\tsortedAppAnyRules = transformACLRules(sortedAppAnyRules, cfg, sortedAppAnyRulesBuckets, true)\n\tsortedNetAnyRules = transformACLRules(sortedNetAnyRules, cfg, sortedNetAnyRulesBuckets, false)\n\n\treturn sortedAppAnyRules, sortedNetAnyRules, nil\n}\n\nfunc extractACLsFromTemplate(rulesBucket *rulesInfo) ([][]string, error) {\n\n\ttmpl := template.Must(template.New(acls).Funcs(template.FuncMap{\n\t\t\"joinRule\": func(rule []string) string {\n\t\t\treturn shellquote.Join(rule...)\n\t\t},\n\t}).Parse(acls))\n\n\taclRules, err := extractRulesFromTemplate(tmpl, *rulesBucket)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to extract rules from template: %s\", err)\n\t}\n\n\treturn aclRules, nil\n}\n\n// extractProtocolAnyRules extracts protocol any rules from the set and returns\n// protocol any rules and all other rules without any.\nfunc extractProtocolAnyRules(rules []aclIPset) (anyRules []aclIPset, otherRules []aclIPset) {\n\n\tfor _, rule := range rules {\n\t\tfor _, proto := range rule.Protocols {\n\n\t\t\tif proto != constants.AllProtoString {\n\t\t\t\totherRules = append(otherRules, rule)\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tanyRules = append(anyRules, rule)\n\t\t}\n\t}\n\n\treturn anyRules, otherRules\n}\n\n// processRulesFromList is a generic helper that parses a set of rules and sends the corresponding\n// ACL commands.\nfunc (i *iptables) processRulesFromList(rulelist [][]string, methodType string) error {\n\tvar err error\n\tfor _, cr := range rulelist {\n\t\t// HACK: Adding a retry loop to avoid iptables error of \"invalid 
argument\"\n\t\t// once in a while iptables fails transiently with this error, so retry a few times.\n\tL:\n\t\tfor retry := 0; retry < 3; retry++ {\n\t\t\tswitch methodType {\n\t\t\tcase \"Append\":\n\t\t\t\tif err = i.impl.Append(cr[0], cr[1], cr[2:]...); err == nil {\n\t\t\t\t\tbreak L\n\t\t\t\t}\n\t\t\tcase \"Insert\":\n\t\t\t\t// use a distinct variable name so the assignment below does not shadow err\n\t\t\t\torder, convErr := strconv.Atoi(cr[2])\n\t\t\t\tif convErr != nil {\n\t\t\t\t\tzap.L().Error(\"Incorrect format for iptables insert\")\n\t\t\t\t\treturn errors.New(\"invalid format\")\n\t\t\t\t}\n\t\t\t\tif err = i.impl.Insert(cr[0], cr[1], order, cr[3:]...); err == nil {\n\t\t\t\t\tbreak L\n\t\t\t\t}\n\n\t\t\tcase \"Delete\":\n\t\t\t\tif err = i.impl.Delete(cr[0], cr[1], cr[2:]...); err == nil {\n\t\t\t\t\tbreak L\n\t\t\t\t}\n\n\t\t\tdefault:\n\t\t\t\treturn errors.New(\"invalid method type\")\n\t\t\t}\n\t\t}\n\t\tif err != nil && methodType != \"Delete\" {\n\t\t\treturn fmt.Errorf(\"unable to %s rule for table %s and chain %s with error %s\", methodType, cr[0], cr[1], err)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// addChainRules implements all the iptable rules that redirect traffic to a chain\nfunc (i *iptables) addChainRules(cfg *ACLInfo) error {\n\tif i.mode != constants.LocalServer {\n\t\treturn i.processRulesFromList(i.containerChainRules(cfg), \"Append\")\n\t}\n\n\treturn i.processRulesFromList(i.cgroupChainRules(cfg), \"Append\")\n}\n\n// addPacketTrap adds the necessary iptables rules to capture control packets to user space\nfunc (i *iptables) addPacketTrap(cfg *ACLInfo, isHostPU bool, appAnyRules, netAnyRules [][]string) error {\n\n\treturn i.processRulesFromList(i.trapRules(cfg, isHostPU, appAnyRules, netAnyRules), \"Append\")\n}\n\n// programExtensionsRules programs iptable rules for the given extensions\nfunc (i *iptables) programExtensionsRules(contextID string, rule *aclIPset, chain, proto, ipMatchDirection, nfLogGroup string) error {\n\n\trulesspec := []string{\n\t\t\"-p\", proto,\n\t\t\"-m\", \"set\", \"--match-set\", rule.ipset, ipMatchDirection,\n\t}\n\n\tfor _, ext := range 
rule.Extensions {\n\t\tif rule.Policy.Action&policy.Log > 0 {\n\t\t\tif err := i.programNflogExtensionRule(contextID, rule, rulesspec, ext, chain, nfLogGroup); err != nil {\n\t\t\t\treturn fmt.Errorf(\"unable to program nflog extension: %v\", err)\n\t\t\t}\n\t\t}\n\n\t\targs, err := shellwords.Parse(ext)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"unable to parse extension %s: %v\", ext, err)\n\t\t}\n\n\t\textRulesSpec := append(rulesspec, args...)\n\t\tif err := i.impl.Append(appPacketIPTableContext, chain, extRulesSpec...); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to program extension rules: %v\", err)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// WARNING: The extension must always contain the action at the end;\n// otherwise the function returns an error.\nfunc (i *iptables) programNflogExtensionRule(contextID string, rule *aclIPset, rulesspec []string, ext string, chain, nfLogGroup string) error {\n\n\tparts := strings.SplitN(ext, \" -j \", 2)\n\tif len(parts) != 2 {\n\t\treturn fmt.Errorf(\"invalid extension format: %s\", ext)\n\t}\n\tfilter, target := parts[0], parts[1]\n\n\tif filter == \"\" || target == \"\" {\n\t\treturn fmt.Errorf(\"filter or target is empty: %s\", ext)\n\t}\n\n\tfilterArgs, err := shellwords.Parse(filter)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to parse extension %s: %v\", ext, err)\n\t}\n\n\taction := \"3\"\n\tif target == \"DROP\" {\n\t\taction = \"6\"\n\t}\n\n\tdefaultNflogSuffix := []string{\"-m\", \"state\", \"--state\", \"NEW\",\n\t\t\"-j\", \"NFLOG\", \"--nflog-group\", nfLogGroup, \"--nflog-prefix\", rule.Policy.LogPrefixAction(contextID, action)}\n\tfilterArgs = append(filterArgs, defaultNflogSuffix...)\n\n\tnflogRulesspec := append(rulesspec, filterArgs...)\n\treturn i.impl.Append(appPacketIPTableContext, chain, nflogRulesspec...)\n}\n\n// sortACLsInBuckets will process all the rules and add them in a list of buckets\n// based on their priority. 
We need an explicit order of these buckets\n// in order to support observation only of ACL actions. The parameters\n// must provide the chain and whether it is App or Net ACLs so that the rules\n// can be created accordingly.\nfunc (i *iptables) sortACLsInBuckets(cfg *ACLInfo, chain string, reverseChain string, rules []aclIPset, isAppACLs bool) *rulesInfo {\n\n\trulesBucket := &rulesInfo{\n\t\tRejectObserveApply:    [][]string{},\n\t\tRejectNotObserved:     [][]string{},\n\t\tRejectObserveContinue: [][]string{},\n\t\tAcceptObserveApply:    [][]string{},\n\t\tAcceptNotObserved:     [][]string{},\n\t\tAcceptObserveContinue: [][]string{},\n\t\tReverseRules:          [][]string{},\n\t}\n\n\tdirection := \"src\"\n\treverse := \"dst\"\n\tnflogGroup := \"11\"\n\tif isAppACLs {\n\t\tdirection = \"dst\"\n\t\treverse = \"src\"\n\t\tnflogGroup = \"10\"\n\t}\n\n\tfor _, rule := range rules {\n\n\t\tfor _, proto := range rule.Protocols {\n\n\t\t\tif !i.impl.ProtocolAllowed(proto) {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tif i.aclSkipProto(proto) {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tacls, r := i.generateACLRules(cfg, &rule, chain, reverseChain, nflogGroup, proto, direction, reverse, isAppACLs)\n\t\t\trulesBucket.ReverseRules = append(rulesBucket.ReverseRules, r...)\n\n\t\t\tif testReject(rule.Policy) && testObserveApply(rule.Policy) {\n\t\t\t\trulesBucket.RejectObserveApply = append(rulesBucket.RejectObserveApply, acls...)\n\t\t\t}\n\n\t\t\tif testReject(rule.Policy) && testNotObserved(rule.Policy) {\n\t\t\t\trulesBucket.RejectNotObserved = append(rulesBucket.RejectNotObserved, acls...)\n\t\t\t}\n\n\t\t\tif testReject(rule.Policy) && testObserveContinue(rule.Policy) {\n\t\t\t\trulesBucket.RejectObserveContinue = append(rulesBucket.RejectObserveContinue, acls...)\n\t\t\t}\n\n\t\t\tif testAccept(rule.Policy) && testObserveContinue(rule.Policy) {\n\t\t\t\trulesBucket.AcceptObserveContinue = append(rulesBucket.AcceptObserveContinue, acls...)\n\t\t\t}\n\n\t\t\tif testAccept(rule.Policy) 
&& testNotObserved(rule.Policy) {\n\t\t\t\trulesBucket.AcceptNotObserved = append(rulesBucket.AcceptNotObserved, acls...)\n\t\t\t}\n\n\t\t\tif testAccept(rule.Policy) && testObserveApply(rule.Policy) {\n\t\t\t\trulesBucket.AcceptObserveApply = append(rulesBucket.AcceptObserveApply, acls...)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn rulesBucket\n}\n\n// addExternalACLs adds a set of rules to the external services that are initiated\n// by an application. The allow rules are inserted with highest priority.\nfunc (i *iptables) addExternalACLs(cfg *ACLInfo, chain string, reverseChain string, rules []aclIPset, isAppAcls bool) error {\n\n\t_, rules = extractProtocolAnyRules(rules)\n\n\trulesBucket := i.sortACLsInBuckets(cfg, chain, reverseChain, rules, isAppAcls)\n\n\taclRules, err := extractACLsFromTemplate(rulesBucket)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to extract rules from template: %s\", err)\n\t}\n\n\taclRules = transformACLRules(aclRules, cfg, rulesBucket, isAppAcls)\n\n\tif err := i.processRulesFromList(aclRules, \"Append\"); err != nil {\n\t\treturn fmt.Errorf(\"unable to install rules (isAppAcls=%v): %s\", isAppAcls, err)\n\t}\n\n\treturn nil\n}\n\nfunc (i *iptables) addPreNetworkACLRules(cfg *ACLInfo) error {\n\n\trules := i.extractPreNetworkACLRules(cfg)\n\n\tif err := i.processRulesFromList(rules, \"Append\"); err != nil {\n\t\treturn fmt.Errorf(\"unable to install network SYN rule: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// deleteChainRules deletes the rules that send traffic to our chain\nfunc (i *iptables) deleteChainRules(cfg *ACLInfo) error {\n\n\tif i.mode != constants.LocalServer {\n\t\treturn i.processRulesFromList(i.containerChainRules(cfg), \"Delete\")\n\t}\n\n\treturn i.processRulesFromList(i.cgroupChainRules(cfg), \"Delete\")\n}\n\n// setGlobalRules installs the global rules\nfunc (i *iptables) setGlobalRules() error {\n\tcfg, err := i.newACLInfo(0, \"\", nil, 0)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t_, _, excludedNetworkName := 
i.ipsetmanager.GetIPsetNamesForTargetAndExcludedNetworks()\n\n\tinputMark, _ := strconv.Atoi(cfg.DefaultInputMark) //nolint\n\toutputMark := 0\n\n\ttmpl := template.Must(template.New(globalRules).Funcs(template.FuncMap{\n\t\t\"isIstioEnabled\": func() bool {\n\t\t\treturn i.serviceMeshType == policy.Istio\n\t\t},\n\t\t\"IstioRedirPort\": func() string {\n\t\t\treturn IstioRedirPort\n\t\t},\n\t\t\"getInputMark\": func() string {\n\t\t\tm := strconv.Itoa(inputMark)\n\t\t\tinputMark++\n\t\t\treturn m\n\t\t},\n\t\t\"getOutputMark\": func() string {\n\t\t\tm := strconv.Itoa(outputMark)\n\t\t\toutputMark++\n\t\t\treturn m\n\t\t},\n\t\t\"queueBalance\": func() string {\n\t\t\treturn fmt.Sprintf(\"0:%d\", cfg.NumNFQueues-1)\n\t\t},\n\t\t\"isLocalServer\": func() bool {\n\t\t\treturn i.mode == constants.LocalServer\n\t\t},\n\t\t\"isBPFEnabled\": func() bool {\n\t\t\treturn i.bpf != nil\n\t\t},\n\t\t\"enableDNSProxy\": func() bool {\n\t\t\treturn cfg.DNSServerIP != \"\"\n\t\t},\n\t\t\"Increment\": func(i int) int {\n\t\t\treturn i + 1\n\t\t},\n\t\t\"EnforcerPID\": func() string {\n\t\t\treturn strconv.Itoa(getEnforcerPID())\n\t\t},\n\t\t\"CnsAgentMgrPID\": func() string {\n\t\t\treturn strconv.Itoa(getCnsAgentMgrPID())\n\t\t},\n\t\t\"CnsAgentBootPID\": func() string {\n\t\t\treturn strconv.Itoa(getCnsAgentBootPID())\n\t\t},\n\t\t\"isManagedByCnsAgentManager\": func() bool {\n\t\t\treturn getCnsAgentBootPID() > 0\n\t\t},\n\t\t\"isIPv4\": func() bool {\n\t\t\treturn i.impl.IPVersion() == IPV4\n\t\t},\n\t\t\"windowsDNSServerName\": func() string {\n\t\t\treturn i.ipsetmanager.GetIPsetPrefix() + \"WindowsDNSServer\"\n\t\t},\n\t\t\"isKubernetesPU\": func() bool {\n\t\t\treturn cfg.PUType == common.KubernetesPU\n\t\t},\n\t\t\"needICMP\": func() bool {\n\t\t\treturn cfg.needICMPRules\n\t\t},\n\t}).Parse(globalRules))\n\n\trules, err := extractRulesFromTemplate(tmpl, cfg)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to extract rules\", zap.Error(err))\n\t}\n\n\tif err := 
i.processRulesFromList(rules, \"Append\"); err != nil {\n\t\treturn fmt.Errorf(\"unable to install global rules: %s\", err)\n\t}\n\n\t// Insert the Istio nat rule into the OUTPUT chain of the nat table.\n\t// This avoids a loop in the datapath: Envoy packets that we have already\n\t// processed must be accepted.\n\tif i.serviceMeshType == policy.Istio {\n\t\terr = i.impl.Insert(appProxyIPTableContext,\n\t\t\tipTableSectionOutput, 1,\n\t\t\t\"-p\", \"tcp\",\n\t\t\t\"-m\", \"mark\", \"--mark\", strconv.Itoa(markconstants.IstioPacketMark),\n\t\t\t\"-j\", \"ACCEPT\")\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"unable to add Istio accept for marked packets: %s\", err)\n\t\t}\n\t}\n\n\t// nat rules cannot be templated, since they interfere with Docker.\n\terr = i.impl.Insert(appProxyIPTableContext,\n\t\tipTableSectionPreRouting, 1,\n\t\t\"-p\", \"tcp\",\n\t\t\"-m\", \"addrtype\", \"--dst-type\", \"LOCAL\",\n\t\t\"-m\", \"set\", \"!\", \"--match-set\", excludedNetworkName, \"src\",\n\t\t\"-j\", natProxyInputChain)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to add default allow for marked packets at net: %s\", err)\n\t}\n\n\terr = i.impl.Insert(appProxyIPTableContext,\n\t\tipTableSectionOutput, 1,\n\t\t\"-m\", \"set\", \"!\", \"--match-set\", excludedNetworkName, \"dst\",\n\t\t\"-j\", natProxyOutputChain)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to add default allow for marked packets at output: %s\", err)\n\t}\n\n\treturn nil\n}\n\nfunc (i *iptables) removeGlobalHooks(cfg *ACLInfo) error {\n\n\ttmpl := template.Must(template.New(globalHooks).Funcs(template.FuncMap{\n\t\t\"isLocalServer\": func() bool {\n\t\t\treturn i.mode == constants.LocalServer\n\t\t},\n\t}).Parse(globalHooks))\n\n\trules, err := extractRulesFromTemplate(tmpl, cfg)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to create trireme chains: %s\", err)\n\t}\n\n\ti.processRulesFromList(rules, \"Delete\") // nolint\n\treturn 
nil\n}\n\nfunc (i *iptables) generateACLRules(cfg *ACLInfo, rule *aclIPset, chain string, reverseChain string, nfLogGroup, proto, ipMatchDirection string, reverseDirection string, isAppACLs bool) ([][]string, [][]string) {\n\n\tiptRules := [][]string{}\n\treverseRules := [][]string{}\n\n\ttargetTCPName, targetUDPName, _ := i.ipsetmanager.GetIPsetNamesForTargetAndExcludedNetworks()\n\n\tobserveContinue := rule.Policy.ObserveAction.ObserveContinue()\n\tcontextID := cfg.ContextID\n\n\tbaseRule := func(proto string) []string {\n\n\t\tiptRule := []string{appPacketIPTableContext, chain}\n\n\t\tif splits := strings.Split(proto, \"/\"); strings.ToUpper(splits[0]) == protocols.L4ProtocolICMP || strings.ToUpper(splits[0]) == protocols.L4ProtocolICMP6 {\n\t\t\tiptRule = append(iptRule, icmpRule(proto, rule.Ports)...)\n\n\t\t\tif strings.ToUpper(splits[0]) == protocols.L4ProtocolICMP6 {\n\t\t\t\tproto = \"icmpv6\"\n\t\t\t} else {\n\t\t\t\tproto = \"icmp\"\n\t\t\t}\n\t\t}\n\n\t\tiptRule = append(iptRule, []string{\n\t\t\t\"-p\", proto,\n\t\t\t\"-m\", \"set\", \"--match-set\", rule.ipset, ipMatchDirection,\n\t\t}...)\n\n\t\tif proto == constants.UDPProtoNum || proto == constants.UDPProtoString {\n\t\t\tudpRule := generateUDPACLRule()\n\t\t\tiptRule = append(iptRule, udpRule...)\n\t\t}\n\n\t\tif proto == constants.TCPProtoNum || proto == constants.TCPProtoString {\n\t\t\tstateMatch := []string{\"-m\", \"state\", \"--state\", \"NEW\"}\n\t\t\tiptRule = append(iptRule, stateMatch...)\n\t\t}\n\n\t\t// add the target network condition if tcp and not a reject action and is the app chain\n\t\tif (rule.Policy.Action&policy.Reject == 0 && isAppACLs) && (proto == constants.TCPProtoNum || proto == constants.TCPProtoString) {\n\t\t\ttargetNet := []string{\"-m\", \"set\", \"!\", \"--match-set\", targetTCPName, ipMatchDirection}\n\t\t\tiptRule = append(iptRule, targetNet...)\n\t\t}\n\n\t\t// add the target network condition if udp and not a reject action and is the app chain\n\t\tif 
(rule.Policy.Action&policy.Reject == 0 && isAppACLs) && (proto == constants.UDPProtoNum || proto == constants.UDPProtoString) {\n\n\t\t\ttargetUDPClause := targetUDPNetworkClause(rule, targetUDPName, ipMatchDirection)\n\t\t\tif len(targetUDPClause) > 0 {\n\t\t\t\tiptRule = append(iptRule, targetUDPClause...)\n\t\t\t}\n\n\t\t}\n\t\t// port match is required only for tcp and udp protocols\n\t\tif proto == constants.TCPProtoNum || proto == constants.UDPProtoNum || proto == constants.TCPProtoString || proto == constants.UDPProtoString {\n\n\t\t\tportMatchSet := []string{\"--match\", \"multiport\", \"--dports\", strings.Join(rule.Ports, \",\")}\n\t\t\tiptRule = append(iptRule, portMatchSet...)\n\t\t}\n\n\t\treturn iptRule\n\t}\n\n\tif err := i.programExtensionsRules(contextID, rule, chain, proto, ipMatchDirection, nfLogGroup); err != nil {\n\t\tzap.L().Warn(\"unable to program extension rules\",\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\t// If log or observeContinue\n\tif rule.Policy.Action&policy.Log != 0 || observeContinue {\n\t\tstate := []string{}\n\t\tif proto == constants.TCPProtoNum || proto == constants.UDPProtoNum || proto == constants.TCPProtoString || proto == constants.UDPProtoString {\n\t\t\tstate = []string{\"-m\", \"state\", \"--state\", \"NEW\"}\n\t\t}\n\n\t\tnflog := append(state, []string{\"-j\", \"NFLOG\", \"--nflog-group\", nfLogGroup, \"--nflog-prefix\", rule.Policy.LogPrefix(contextID)}...)\n\t\tnfLogRule := append(baseRule(proto), nflog...)\n\n\t\tiptRules = append(iptRules, nfLogRule)\n\t}\n\n\tif !observeContinue {\n\t\tif (rule.Policy.Action & policy.Accept) != 0 {\n\t\t\tif proto == constants.UDPProtoNum || proto == constants.UDPProtoString {\n\t\t\t\tconnmarkClause := connmarkUDPConnmarkClause()\n\t\t\t\tif len(connmarkClause) > 0 {\n\t\t\t\t\tconnmarkRule := append(baseRule(proto), connmarkClause...)\n\t\t\t\t\tiptRules = append(iptRules, connmarkRule)\n\t\t\t\t}\n\t\t\t}\n\t\t\tacceptRule := append(baseRule(proto), []string{\"-j\", 
\"ACCEPT\"}...)\n\t\t\tiptRules = append(iptRules, acceptRule)\n\t\t}\n\n\t\tif rule.Policy.Action&policy.Reject != 0 {\n\t\t\treject := []string{\"-j\", \"DROP\"}\n\t\t\trejectRule := append(baseRule(proto), reject...)\n\t\t\tiptRules = append(iptRules, rejectRule)\n\t\t}\n\n\t\tif rule.Policy.Action&policy.Accept != 0 && (proto == constants.UDPProtoNum || proto == constants.UDPProtoString) {\n\t\t\treverseRules = append(reverseRules, []string{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\treverseChain,\n\t\t\t\t\"-p\", proto,\n\t\t\t\t\"-m\", \"set\", \"--match-set\", rule.ipset, reverseDirection,\n\t\t\t\t\"-m\", \"state\", \"--state\", \"ESTABLISHED\",\n\t\t\t\t\"-m\", \"connmark\", \"--mark\", strconv.Itoa(int(markconstants.DefaultExternalConnMark)),\n\t\t\t\t\"-j\", \"ACCEPT\",\n\t\t\t})\n\t\t}\n\t}\n\n\treturn iptRules, reverseRules\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/acls_darwin.go",
    "content": "package iptablesctrl\n\nfunc (i *iptables) aclSkipProto(proto string) bool {\n\treturn false\n}\n\nfunc (i *iptables) legacyPuChainRules(cfg *ACLInfo) ([][]string, bool) {\n\treturn nil, false\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/acls_linux.go",
    "content": "// +build !rhel6\n\npackage iptablesctrl\n\nfunc (i *iptables) aclSkipProto(proto string) bool {\n\treturn false\n}\n\nfunc (i *iptables) legacyPuChainRules(cfg *ACLInfo) ([][]string, bool) {\n\treturn nil, false\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/acls_nonwindows.go",
    "content": "// +build !windows\n\npackage iptablesctrl\n\nimport (\n\t\"fmt\"\n\t\"strconv\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\tmarkconstants \"go.aporeto.io/enforcerd/trireme-lib/utils/constants\"\n\t\"go.uber.org/zap\"\n)\n\n// discoverCnsAgentBootPID is only used in Windows rules\nvar discoverCnsAgentBootPID = func() int {\n\treturn -1\n}\n\n// addContainerChain adds a chain for the specific container and redirects traffic there.\n// This significantly simplifies management and makes the iptables rules more readable.\n// All rules related to a container are contained within the dedicated chain.\nfunc (i *iptables) addContainerChain(cfg *ACLInfo) error {\n\n\tappChain := cfg.AppChain\n\tnetChain := cfg.NetChain\n\tif err := i.impl.NewChain(appPacketIPTableContext, appChain); err != nil {\n\t\treturn fmt.Errorf(\"unable to add chain %s of context %s: %s\", appChain, appPacketIPTableContext, err)\n\t}\n\n\t// if err := i.impl.NewChain(appProxyIPTableContext, appChain); err != nil {\n\t// \treturn fmt.Errorf(\"unable to add chain %s of context %s: %s\", appChain, appPacketIPTableContext, err)\n\t// }\n\n\tif err := i.impl.NewChain(netPacketIPTableContext, netChain); err != nil {\n\t\treturn fmt.Errorf(\"unable to add netchain %s of context %s: %s\", netChain, netPacketIPTableContext, err)\n\t}\n\n\treturn nil\n}\n\n// deletePUChains removes all the container specific chains and basic rules\nfunc (i *iptables) deletePUChains(cfg *ACLInfo) error {\n\n\tif err := i.impl.ClearChain(appPacketIPTableContext, cfg.AppChain); err != nil {\n\t\tzap.L().Warn(\"Failed to clear the container ack packets chain\",\n\t\t\tzap.String(\"appChain\", cfg.AppChain),\n\t\t\tzap.String(\"context\", appPacketIPTableContext),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tif err := i.impl.DeleteChain(appPacketIPTableContext, cfg.AppChain); err != nil {\n\t\tzap.L().Warn(\"Failed to delete the container ack packets chain\",\n\t\t\tzap.String(\"appChain\", 
cfg.AppChain),\n\t\t\tzap.String(\"context\", appPacketIPTableContext),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tif err := i.impl.ClearChain(netPacketIPTableContext, cfg.NetChain); err != nil {\n\t\tzap.L().Warn(\"Failed to clear the container net packets chain\",\n\t\t\tzap.String(\"netChain\", cfg.NetChain),\n\t\t\tzap.String(\"context\", netPacketIPTableContext),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tif err := i.impl.DeleteChain(netPacketIPTableContext, cfg.NetChain); err != nil {\n\t\tzap.L().Warn(\"Failed to delete the container net packets chain\",\n\t\t\tzap.String(\"netChain\", cfg.NetChain),\n\t\t\tzap.String(\"context\", netPacketIPTableContext),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\treturn nil\n}\n\nfunc transformACLRules(aclRules [][]string, cfg *ACLInfo, rulesBucket *rulesInfo, isAppAcls bool) [][]string {\n\t// pass through on linux\n\treturn aclRules\n}\n\nfunc (i *iptables) platformInit() error {\n\treturn nil\n}\n\nfunc (i *iptables) cleanACLs() error { // nolint\n\tcfg, err := i.newACLInfo(0, \"\", nil, 0)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// First clear the nat rules\n\tif err := i.removeGlobalHooks(cfg); err != nil {\n\t\tzap.L().Error(\"unable to remove nat proxy rules\")\n\t}\n\n\t// Clean all rules with TRI- sub\n\ti.impl.ResetRules(\"TRI-\") // nolint: errcheck\n\t// Always return nil here. No reason to block anything if cleans fail.\n\treturn nil\n}\n\nfunc generateUDPACLRule() []string {\n\treturn []string{\"-m\", \"string\", \"!\", \"--string\", packet.UDPAuthMarker, \"--algo\", \"bm\", \"--to\", \"128\"}\n}\n\nfunc targetUDPNetworkClause(rule *aclIPset, targetUDPName string, ipMatchDirection string) []string {\n\treturn []string{\"-m\", \"set\", \"!\", \"--match-set\", targetUDPName, ipMatchDirection}\n}\n\nfunc connmarkUDPConnmarkClause() []string {\n\treturn []string{\"-j\", \"CONNMARK\", \"--set-mark\", strconv.Itoa(int(markconstants.DefaultExternalConnMark))}\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/acls_rhel6.go",
    "content": "// +build rhel6\n\npackage iptablesctrl\n\nimport (\n\t\"strings\"\n\t\"text/template\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/gaia/protocols\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\ttcpProto  = \"tcp\"\n\ticmpProto = \"icmp\"\n\tudpProto  = \"udp\"\n)\n\nfunc (i *iptables) aclSkipProto(proto string) bool {\n\tif splits := strings.Split(proto, \"/\"); strings.ToUpper(splits[0]) == protocols.L4ProtocolICMP || strings.ToUpper(splits[0]) == protocols.L4ProtocolICMP6 {\n\t\treturn true\n\t}\n\treturn false\n}\n\n// This refers to the pu chain rules for pus in older distros like RH 6.9/Ubuntu 14.04. The rules\n// consider source ports to identify packets from the process.\nfunc (i *iptables) legacyPuChainRules(cfg *ACLInfo) ([][]string, bool) {\n\tif !(cfg.PUType == common.HostNetworkPU || cfg.PUType == common.HostPU) {\n\t\treturn nil, false\n\t}\n\n\tiptableCgroupSection := cfg.AppSection\n\tiptableNetSection := cfg.NetSection\n\trules := [][]string{}\n\tif cfg.TCPPorts != \"0\" {\n\t\trules = append(rules, [][]string{\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-p\", icmpProto,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"Server-specific-chain\",\n\t\t\t\t\"-j\", \"MARK\", \"--set-mark\", cfg.PacketMark,\n\t\t\t},\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-p\", tcpProto,\n\t\t\t\t\"-m\", \"multiport\",\n\t\t\t\t\"--source-ports\", cfg.TCPPorts,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"Server-specific-chain\",\n\t\t\t\t\"-j\", \"MARK\", \"--set-mark\", cfg.PacketMark,\n\t\t\t},\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-p\", tcpProto,\n\t\t\t\t\"-m\", \"multiport\",\n\t\t\t\t\"--source-ports\", cfg.TCPPorts,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", 
\"Server-specific-chain\",\n\t\t\t\t\"-j\", cfg.AppChain,\n\t\t\t},\n\t\t\t{\n\t\t\t\tnetPacketIPTableContext,\n\t\t\t\tiptableNetSection,\n\t\t\t\t\"-p\", tcpProto,\n\t\t\t\t\"-m\", \"multiport\",\n\t\t\t\t\"--destination-ports\", cfg.TCPPorts,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"Container-specific-chain\",\n\t\t\t\t\"-j\", cfg.NetChain,\n\t\t\t}}...)\n\t}\n\n\tif cfg.UDPPorts != \"0\" {\n\t\trules = append(rules, [][]string{\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-p\", udpProto,\n\t\t\t\t\"-m\", \"multiport\",\n\t\t\t\t\"--source-ports\", cfg.UDPPorts,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"Server-specific-chain\",\n\t\t\t\t\"-j\", \"MARK\", \"--set-mark\", cfg.PacketMark,\n\t\t\t},\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-p\", udpProto, \"-m\", \"mark\", \"--mark\", cfg.PacketMark,\n\t\t\t\t\"-m\", \"addrtype\", \"--src-type\", \"LOCAL\",\n\t\t\t\t\"-m\", \"addrtype\", \"--dst-type\", \"LOCAL\",\n\t\t\t\t\"-m\", \"state\", \"--state\", \"NEW\",\n\t\t\t\t\"-j\", \"NFLOG\", \"--nflog-group\", \"10\",\n\t\t\t\t\"--nflog-prefix\", policy.DefaultLogPrefix(cfg.ContextID, policy.Accept),\n\t\t\t},\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"traffic-same-pu\",\n\t\t\t\t\"-p\", udpProto, \"-m\", \"mark\", \"--mark\", cfg.PacketMark,\n\t\t\t\t\"-m\", \"addrtype\", \"--src-type\", \"LOCAL\",\n\t\t\t\t\"-m\", \"addrtype\", \"--dst-type\", \"LOCAL\",\n\t\t\t\t\"-j\", \"ACCEPT\",\n\t\t\t},\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-p\", udpProto,\n\t\t\t\t\"-m\", \"multiport\",\n\t\t\t\t\"--source-ports\", cfg.UDPPorts,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"Server-specific-chain\",\n\t\t\t\t\"-j\", cfg.AppChain,\n\t\t\t},\n\t\t\t{\n\t\t\t\tnetPacketIPTableContext,\n\t\t\t\tiptableNetSection,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", 
\"traffic-same-pu\",\n\t\t\t\t\"-p\", udpProto, \"-m\", \"mark\", \"--mark\", cfg.PacketMark,\n\t\t\t\t\"-m\", \"addrtype\", \"--src-type\", \"LOCAL\",\n\t\t\t\t\"-m\", \"addrtype\", \"--dst-type\", \"LOCAL\",\n\t\t\t\t\"-j\", \"ACCEPT\",\n\t\t\t},\n\t\t\t{\n\t\t\t\tnetPacketIPTableContext,\n\t\t\t\tiptableNetSection,\n\t\t\t\t\"-p\", udpProto,\n\t\t\t\t\"-m\", \"multiport\",\n\t\t\t\t\"--destination-ports\", cfg.UDPPorts,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"Container-specific-chain\",\n\t\t\t\t\"-j\", cfg.NetChain,\n\t\t\t}}...)\n\t}\n\n\tif cfg.PUType == common.HostPU {\n\t\t// Add a capture all traffic rule for host pu. This traps all traffic going out\n\t\t// of the box.\n\n\t\trules = append(rules, []string{\n\t\t\tappPacketIPTableContext,\n\t\t\tiptableCgroupSection,\n\t\t\t\"-m\", \"comment\", \"--comment\", \"capture all outgoing traffic\",\n\t\t\t\"-j\", cfg.AppChain,\n\t\t})\n\t\trules = append(rules, []string{\n\t\t\tnetPacketIPTableContext,\n\t\t\tiptableNetSection,\n\t\t\t\"-m\", \"comment\", \"--comment\", \"capture all outgoing traffic\",\n\t\t\t\"-j\", cfg.NetChain,\n\t\t})\n\t}\n\n\treturn append(rules, i.legacyProxyRules(cfg.TCPPorts, cfg.ProxyPort, cfg.DestIPSet, cfg.SrvIPSet, cfg.PacketMark, cfg.DNSProxyPort, cfg.DNSServerIP)...), true\n}\n\nfunc (i *iptables) legacyProxyRules(tcpPorts, proxyPort, destSetName, srvSetName, cgroupMark, dnsProxyPort, dnsServerIP string) [][]string {\n\n\taclInfo := ACLInfo{\n\t\tMangleTable:         appPacketIPTableContext,\n\t\tNatTable:            appProxyIPTableContext,\n\t\tMangleProxyAppChain: proxyOutputChain,\n\t\tMangleProxyNetChain: proxyInputChain,\n\t\tNatProxyNetChain:    natProxyInputChain,\n\t\tNatProxyAppChain:    natProxyOutputChain,\n\t\tCgroupMark:          cgroupMark,\n\t\tDestIPSet:           destSetName,\n\t\tSrvIPSet:            srvSetName,\n\t\tProxyPort:           proxyPort,\n\t\tProxyMark:           constants.ProxyMark,\n\t\tTCPPorts:            tcpPorts,\n\t\tDNSProxyPort:       
 dnsProxyPort,\n\t\tDNSServerIP:         dnsServerIP,\n\t}\n\n\ttmpl := template.Must(template.New(legacyProxyRules).Funcs(template.FuncMap{\n\t\t\"isCgroupSet\": func() bool {\n\t\t\treturn cgroupMark != \"\"\n\t\t},\n\t\t\"enableDNSProxy\": func() bool {\n\t\t\treturn dnsServerIP != \"\"\n\t\t},\n\t}).Parse(legacyProxyRules))\n\n\trules, err := extractRulesFromTemplate(tmpl, aclInfo)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to extract rules\", zap.Error(err))\n\t}\n\treturn rules\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/acls_windows.go",
    "content": "// +build windows\n\npackage iptablesctrl\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\t\"syscall\"\n\t\"unsafe\"\n\n\t\"github.com/kballard/go-shellquote\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\twinipt \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/windows\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.uber.org/zap\"\n)\n\n// discoverCnsAgentBootPID finds parent's parent pid on Windows.\n// needs to happen early, in case our mgr parent is relaunched.\nvar discoverCnsAgentBootPID = func() int {\n\tpppid, err := getGrandparentPid()\n\tif err != nil {\n\t\tzap.L().Error(\"Could not get CnsAgentBootPID\", zap.Error(err))\n\t\treturn -1\n\t}\n\treturn pppid\n}\n\nfunc getGrandparentPid() (int, error) {\n\tppid := os.Getppid()\n\tif ppid <= 0 {\n\t\treturn -1, fmt.Errorf(\"getGrandparentPid failed to get ppid\")\n\t}\n\t// from getProcessEntry in syscall_windows.go\n\tsnapshot, err := syscall.CreateToolhelp32Snapshot(syscall.TH32CS_SNAPPROCESS, 0)\n\tif err != nil {\n\t\treturn -1, err\n\t}\n\tdefer syscall.CloseHandle(snapshot) // nolint: errcheck\n\tvar procEntry syscall.ProcessEntry32\n\tprocEntry.Size = uint32(unsafe.Sizeof(procEntry))\n\tif err = syscall.Process32First(snapshot, &procEntry); err != nil {\n\t\treturn -1, err\n\t}\n\tfor {\n\t\tif procEntry.ProcessID == uint32(ppid) {\n\t\t\treturn int(procEntry.ParentProcessID), nil\n\t\t}\n\t\terr = syscall.Process32Next(snapshot, &procEntry)\n\t\tif err != nil {\n\t\t\treturn -1, err\n\t\t}\n\t}\n}\n\nfunc (i *iptables) aclSkipProto(proto string) bool {\n\treturn false\n}\n\nfunc (i *iptables) legacyPuChainRules(cfg *ACLInfo) ([][]string, bool) {\n\treturn nil, false\n}\n\n// create ipsets needed for Windows rules\nfunc (i *iptables) platformInit() error {\n\n\tipset := 
ipsetmanager.IPsetProvider()\n\n\tcfg, err := i.newACLInfo(0, \"\", nil, 0)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\texistingSets, err := ipset.ListIPSets()\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tsetExists := func(s string) bool {\n\t\tfor _, existing := range existingSets {\n\t\t\tif existing == s {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t\treturn false\n\t}\n\n\tif !setExists(\"TRI-v4-WindowsAllIPs\") {\n\t\tallIPsV4, err := ipset.NewIpset(\"TRI-v4-WindowsAllIPs\", \"hash:net\", nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\terr = allIPsV4.Add(\"0.0.0.0/0\", 0)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif !setExists(\"TRI-v6-WindowsAllIPs\") {\n\t\tallIPsV6, err := ipset.NewIpset(\"TRI-v6-WindowsAllIPs\", \"hash:net\", nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\terr = allIPsV6.Add(\"::/0\", 0)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif cfg.DNSServerIP != \"\" {\n\t\t// TRI-v4-WindowsDNSServer is used in a global rule that applies to both IPv4/IPv6,\n\t\t// but we need the TRI-v4 prefix on the name so that it is properly cleaned up\n\t\tif !setExists(\"TRI-v4-WindowsDNSServer\") {\n\t\t\tdnsIPSet, err := ipset.NewIpset(\"TRI-v4-WindowsDNSServer\", \"hash:net\", nil)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tswitch cfg.DNSServerIP {\n\t\t\tcase IPv4DefaultIP, IPv6DefaultIP:\n\t\t\t\t// in the case of an all-network range (which is the default value), we need to allow all for ipv4+ipv6\n\t\t\t\terr = dnsIPSet.Add(IPv4DefaultIP, 0)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\terr = dnsIPSet.Add(IPv6DefaultIP, 0)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\tdefault:\n\t\t\t\t// for now, we add only the configured DNS server IP\n\t\t\t\terr = dnsIPSet.Add(cfg.DNSServerIP, 0)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// addContainerChain for Windows\nfunc (i *iptables) addContainerChain(cfg *ACLInfo) error {\n\tappChain := 
cfg.AppChain\n\tnetChain := cfg.NetChain\n\tif err := i.impl.NewChain(appPacketIPTableContext, appChain); err != nil {\n\t\treturn fmt.Errorf(\"unable to add chain %s of context %s: %s\", appChain, appPacketIPTableContext, err)\n\t}\n\tif err := i.impl.NewChain(netPacketIPTableContext, netChain); err != nil {\n\t\treturn fmt.Errorf(\"unable to add netchain %s of context %s: %s\", netChain, netPacketIPTableContext, err)\n\t}\n\treturn nil\n}\n\n// deletePUChains removes all the container specific chains and basic rules\nfunc (i *iptables) deletePUChains(cfg *ACLInfo) error {\n\n\tif err := i.impl.ClearChain(appPacketIPTableContext, cfg.AppChain); err != nil {\n\t\tzap.L().Warn(\"Failed to clear the container ack packets chain\",\n\t\t\tzap.String(\"appChain\", cfg.AppChain),\n\t\t\tzap.String(\"context\", appPacketIPTableContext),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tif err := i.impl.DeleteChain(appPacketIPTableContext, cfg.AppChain); err != nil {\n\t\tzap.L().Warn(\"Failed to delete the container ack packets chain\",\n\t\t\tzap.String(\"appChain\", cfg.AppChain),\n\t\t\tzap.String(\"context\", appPacketIPTableContext),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tif err := i.impl.ClearChain(netPacketIPTableContext, cfg.NetChain); err != nil {\n\t\tzap.L().Warn(\"Failed to clear the container net packets chain\",\n\t\t\tzap.String(\"netChain\", cfg.NetChain),\n\t\t\tzap.String(\"context\", netPacketIPTableContext),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tif err := i.impl.DeleteChain(netPacketIPTableContext, cfg.NetChain); err != nil {\n\t\tzap.L().Warn(\"Failed to delete the container net packets chain\",\n\t\t\tzap.String(\"netChain\", cfg.NetChain),\n\t\t\tzap.String(\"context\", netPacketIPTableContext),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\treturn nil\n}\n\n// try to merge two acl rules (one log and one accept/drop) into one for Windows\nfunc makeTerminatingRuleFromPair(aclRule1, aclRule2 []string) *winipt.WindowsRuleSpec {\n\n\tif aclRule1 == nil || aclRule2 == 
nil {\n\t\treturn nil\n\t}\n\twinRuleSpec1, err := winipt.ParseRuleSpec(aclRule1[2:]...)\n\tif err != nil {\n\t\treturn nil\n\t}\n\twinRuleSpec2, err := winipt.ParseRuleSpec(aclRule2[2:]...)\n\tif err != nil {\n\t\treturn nil\n\t}\n\n\t// save action/log properties, as long as one rule is an action and the other is nflog\n\taction := 0\n\tlogPrefix := \"\"\n\tgroupID := 0\n\tif action == 0 && winRuleSpec1.Action != 0 && winRuleSpec2.Log {\n\t\taction = winRuleSpec1.Action\n\t\tlogPrefix = winRuleSpec2.LogPrefix\n\t\tgroupID = winRuleSpec2.GroupID\n\t}\n\tif action == 0 && winRuleSpec2.Action != 0 && winRuleSpec1.Log {\n\t\taction = winRuleSpec2.Action\n\t\tlogPrefix = winRuleSpec1.LogPrefix\n\t\tgroupID = winRuleSpec1.GroupID\n\t}\n\tif action == 0 {\n\t\treturn nil\n\t}\n\n\t// if one is nflog and one is another action, and they are otherwise equal, then combine into one rule\n\twinRuleSpec1.Log = false\n\twinRuleSpec1.LogPrefix = \"\"\n\twinRuleSpec1.GroupID = 0\n\twinRuleSpec1.Action = 0\n\twinRuleSpec2.Log = false\n\twinRuleSpec2.LogPrefix = \"\"\n\twinRuleSpec2.GroupID = 0\n\twinRuleSpec2.Action = 0\n\tif winRuleSpec1.Equal(winRuleSpec2) {\n\t\twinRuleSpec1.Log = true\n\t\twinRuleSpec1.LogPrefix = logPrefix\n\t\twinRuleSpec1.GroupID = groupID\n\t\twinRuleSpec1.Action = action\n\t\treturn winRuleSpec1\n\t}\n\treturn nil\n}\n\n// take a parsed acl rule and clean it up, returning an acl rule in []string format\nfunc processWindowsACLRule(table, _ string, winRuleSpec *winipt.WindowsRuleSpec, cfg *ACLInfo, isAppAcls bool) ([]string, error) {\n\tvar chain string\n\tif isAppAcls {\n\t\tchain = cfg.AppChain\n\t} else {\n\t\tchain = cfg.NetChain\n\t}\n\tswitch cfg.PUType {\n\tcase common.HostPU:\n\tcase common.HostNetworkPU:\n\t\tif isAppAcls {\n\t\t\treturn nil, nil\n\t\t}\n\t\tswitch winRuleSpec.Protocol {\n\t\tcase packet.IPProtocolTCP:\n\t\tcase packet.IPProtocolUDP:\n\t\tdefault:\n\t\t\treturn nil, nil\n\t\t}\n\tcase 
common.WindowsProcessPU:\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"unexpected Windows PU: %v\", cfg.PUType)\n\t}\n\trulespec, _ := winipt.MakeRuleSpecText(winRuleSpec, false)\n\n\trule, err := shellquote.Split(rulespec)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn append([]string{table, chain}, rule...), nil\n}\n\n// while not strictly necessary now for Windows, we still try to combine a log (non-terminating rule) and another terminating rule.\nfunc transformACLRules(aclRules [][]string, cfg *ACLInfo, rulesBucket *rulesInfo, isAppAcls bool) [][]string {\n\n\t// find the reverse rules and remove them.\n\t// note: we assume that reverse rules are the ones we add for UDP established reverse flows.\n\t// we handle this in the windows driver so we don't need a rule for it.\n\t// again: our driver assumes that all UDP acl rules will have a reverse flow added.\n\tif rulesBucket != nil {\n\t\tfor _, rr := range rulesBucket.ReverseRules {\n\t\t\trevTable, revChain := rr[0], rr[1]\n\t\t\trevRule, err := winipt.ParseRuleSpec(rr[2:]...)\n\t\t\tif err != nil {\n\t\t\t\tzap.L().Error(\"transformACLRules failed to parse reverse rule\", zap.Error(err))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tfound := false\n\t\t\tfor i, r := range aclRules {\n\t\t\t\trule, err := winipt.ParseRuleSpec(r[2:]...)\n\t\t\t\tif err != nil {\n\t\t\t\t\tzap.L().Error(\"transformACLRules failed to parse rule\", zap.Error(err))\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\ttable, chain := r[0], r[1]\n\t\t\t\tif table == revTable && chain == revChain && rule.Equal(revRule) {\n\t\t\t\t\tfound = true\n\t\t\t\t\taclRules = append(aclRules[:i], aclRules[i+1:]...)\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !found {\n\t\t\t\tzap.L().Warn(\"transformACLRules could not find reverse rule\")\n\t\t\t}\n\t\t}\n\t}\n\n\tvar result [][]string\n\n\t// now in the loop, compare successive rules to see if they are equal, disregarding their action or log properties.\n\t// if they are, then combine them into one 
rule.\n\tvar aclRule1, aclRule2 []string\n\tfor i := 0; i < len(aclRules) || aclRule1 != nil; i++ {\n\t\tif aclRule1 == nil {\n\t\t\taclRule1 = aclRules[i]\n\t\t\ti++\n\t\t}\n\t\tif i < len(aclRules) {\n\t\t\taclRule2 = aclRules[i]\n\t\t}\n\t\ttable, chain := aclRule1[0], aclRule1[1]\n\t\twinRule := makeTerminatingRuleFromPair(aclRule1, aclRule2)\n\t\tif winRule == nil {\n\t\t\t// not combinable, so work on rule 1\n\t\t\tvar err error\n\t\t\twinRule, err = winipt.ParseRuleSpec(aclRule1[2:]...)\n\t\t\taclRule1 = aclRule2\n\t\t\taclRule2 = nil\n\t\t\tif err != nil {\n\t\t\t\tzap.L().Error(\"transformACLRules failed\", zap.Error(err))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t} else {\n\t\t\taclRule1 = nil\n\t\t\taclRule2 = nil\n\t\t}\n\t\t// process rule\n\t\txformedRule, err := processWindowsACLRule(table, chain, winRule, cfg, isAppAcls)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"transformACLRules failed\", zap.Error(err))\n\t\t\tcontinue\n\t\t}\n\t\tif xformedRule != nil {\n\t\t\tresult = append(result, xformedRule)\n\t\t}\n\t}\n\n\tif result == nil {\n\t\tresult = [][]string{}\n\t}\n\treturn result\n}\n\nfunc (i *iptables) cleanACLs() error { // nolint\n\tcfg, err := i.newACLInfo(0, \"\", nil, 0)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// First clear the nat rules\n\tif err := i.removeGlobalHooks(cfg); err != nil {\n\t\tzap.L().Error(\"unable to remove nat proxy rules\")\n\t}\n\n\t// Clean Application Rules/Chains\n\ti.cleanACLSection(appPacketIPTableContext, constants.ChainPrefix)\n\ti.cleanACLSection(appProxyIPTableContext, constants.ChainPrefix)\n\n\ti.impl.Commit() // nolint\n\n\t// Always return nil here. 
No reason to block anything if cleans fail.\n\treturn nil\n}\n\n// cleanACLSection flushes and deletes all chains with Prefix - Trireme\nfunc (i *iptables) cleanACLSection(context, chainPrefix string) {\n\n\trules, err := i.impl.ListChains(context)\n\tif err != nil {\n\t\tzap.L().Warn(\"Failed to list chains\",\n\t\t\tzap.String(\"context\", context),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tfor _, rule := range rules {\n\t\tif strings.Contains(rule, chainPrefix) {\n\t\t\tif err := i.impl.ClearChain(context, rule); err != nil {\n\t\t\t\tzap.L().Warn(\"Can not clear the chain\",\n\t\t\t\t\tzap.String(\"context\", context),\n\t\t\t\t\tzap.String(\"section\", rule),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t}\n\t}\n\n\tfor _, rule := range rules {\n\t\tif strings.Contains(rule, chainPrefix) {\n\t\t\tif err := i.impl.DeleteChain(context, rule); err != nil {\n\t\t\t\tzap.L().Warn(\"Can not delete the chain\",\n\t\t\t\t\tzap.String(\"context\", context),\n\t\t\t\t\tzap.String(\"section\", rule),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc generateUDPACLRule() []string {\n\t//-m\", \"string\", \"!\", \"--string\", packet.UDPAuthMarker, \"--offset\", \"4\"\n\treturn []string{\"-m\", \"string\", \"--string\", \"!\", packet.UDPAuthMarker, \"--offset\", \"6\"}\n}\n\nfunc targetUDPNetworkClause(rule *aclIPset, targetUDPName string, ipMatchDirection string) []string {\n\tif !strings.Contains(strings.Join(rule.Ports, \",\"), \"53\") {\n\t\treturn []string{\"-m\", \"set\", \"!\", \"--match-set\", targetUDPName, ipMatchDirection}\n\t}\n\treturn []string{}\n}\n\nfunc connmarkUDPConnmarkClause() []string {\n\treturn []string{}\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/acls_windows_test.go",
    "content": "// +build windows\n\npackage iptablesctrl\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/kballard/go-shellquote\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/windows\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n)\n\nconst (\n\tsampleTCPPorts = \"80,443\"\n\tsampleUDPPorts = \"\"\n)\n\nfunc TestTransformACLRuleHost(t *testing.T) {\n\n\tConvey(\"When I parse some acl rules\", t, func() {\n\n\t\tvar aclRules [][]string\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-hostZ7PbqL-0 -p 6 -m set --match-set TRI-v4-ext-cUDEx1114Z2xd dst -m state --state NEW -m set ! --match-set TRI-v4-TargetTCP dst --match multiport --dports 1:65535 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 531138568:5d6044b9e99572000149d650:5d60448a884e46000145cf67:6\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-hostZ7PbqL-0 -p 6 -m set --match-set TRI-v4-ext-cUDEx1114Z2xd dst -m state --state NEW -m set ! --match-set TRI-v4-TargetTCP dst --match multiport --dports 1:65535 -j DROP\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-hostZ7PbqL-0 -p 17 -m set --match-set TRI-v4-TargetUDP src --match multiport --dports 80,443,8080:8443 -j ACCEPT\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-hostZ7PbqL-0 -p 6 -m set --match-set TRI-v4-ext-z4QRD1114Z2xd dst -m state --state NEW -m set ! --match-set TRI-v4-TargetTCP dst --match multiport --dports 2323 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 531138568:5d9e2e2d8431510001bcc931:5d61b8f4884e46000146bcd9:3\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-hostZ7PbqL-0 -p 6 -m set --match-set TRI-v4-ext-z4QRD1114Z2xd dst -m state --state NEW -m set ! 
--match-set TRI-v4-TargetTCP dst --match multiport --dports 2323 -j ACCEPT\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-hostZ7PbqL-0 -p 6 -m state --state NEW -m set --match-set TRI-v4-TargetTCP dst --match multiport --dports 2323 -j ACCEPT\", \" \"))\n\n\t\taclInfo := &ACLInfo{}\n\t\taclInfo.TCPPorts = sampleTCPPorts\n\t\taclInfo.UDPPorts = sampleUDPPorts\n\t\taclInfo.PUType = common.HostPU\n\n\t\txformedRules := transformACLRules(aclRules, aclInfo, nil, true)\n\n\t\tConvey(\"Adjacent like ones should be merged\", func() {\n\n\t\t\tSo(xformedRules, ShouldHaveLength, 4)\n\n\t\t\t// check combined rule 1 and 2\n\t\t\t// OUTPUT HostSvcRules-OUTPUT -p 6 --dports 1:65535 -m set --match-set TRI-v4-ext-cUDEx1114Z2xd dstIP,dstPort -m set ! --match-set TRI-v4-TargetTCP dstIP,dstPort -j DROP -j NFLOG --nflog-group 0 --nflog-prefix 531138568:5d6044b9e99572000149d650:5d60448a884e46000145cf67:6\n\t\t\trs, err := windows.ParseRuleSpec(xformedRules[0][2:]...)\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(rs.Protocol, ShouldEqual, 6)\n\t\t\tSo(rs.Action, ShouldEqual, frontman.FilterActionBlock)\n\t\t\tSo(rs.Log, ShouldBeTrue)\n\t\t\tSo(rs.TCPFlagsSpecified, ShouldBeTrue)\n\t\t\tSo(rs.TCPFlags, ShouldEqual, 2)\n\t\t\tSo(rs.TCPFlagsMask, ShouldEqual, 18)\n\t\t\tSo(rs.LogPrefix, ShouldEqual, \"531138568:5d6044b9e99572000149d650:5d60448a884e46000145cf67:6\")\n\t\t\tSo(rs.MatchDstPort, ShouldHaveLength, 1)\n\t\t\tSo(rs.MatchDstPort[0].Start, ShouldEqual, 1)\n\t\t\tSo(rs.MatchDstPort[0].End, ShouldEqual, 65535)\n\t\t\tSo(rs.MatchSet, ShouldHaveLength, 2)\n\t\t\tSo(rs.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-v4-ext-cUDEx1114Z2xd\")\n\t\t\tSo(rs.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[0].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[0].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[0].MatchSetDstPort, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[1].MatchSetName, ShouldEqual, 
\"TRI-v4-TargetTCP\")\n\t\t\tSo(rs.MatchSet[1].MatchSetNegate, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[1].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[1].MatchSetSrcPort, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[1].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[1].MatchSetDstPort, ShouldBeTrue)\n\n\t\t\t// check singular rule 3\n\t\t\t// OUTPUT TRI-App-hostZ7PbqL-0 -p 17 -m set --match-set TRI-v4-TargetUDP src --match multiport --dports 80,443,8080:8443 -j ACCEPT\n\t\t\trs, err = windows.ParseRuleSpec(xformedRules[1][2:]...)\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(rs.Protocol, ShouldEqual, 17)\n\t\t\tSo(rs.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(rs.Log, ShouldBeFalse)\n\t\t\tSo(rs.MatchDstPort, ShouldHaveLength, 3)\n\t\t\tSo(rs.MatchDstPort[0].Start, ShouldEqual, 80)\n\t\t\tSo(rs.MatchDstPort[0].End, ShouldEqual, 80)\n\t\t\tSo(rs.MatchDstPort[1].Start, ShouldEqual, 443)\n\t\t\tSo(rs.MatchDstPort[1].End, ShouldEqual, 443)\n\t\t\tSo(rs.MatchDstPort[2].Start, ShouldEqual, 8080)\n\t\t\tSo(rs.MatchDstPort[2].End, ShouldEqual, 8443)\n\t\t\tSo(rs.MatchSet, ShouldHaveLength, 1)\n\n\t\t\t// check combined rule 4 and 5\n\t\t\t// OUTPUT HostSvcRules-OUTPUT -p 6 --dports 2323 -m set --match-set TRI-v4-ext-z4QRD1114Z2xd dstIP,dstPort -m set ! 
--match-set TRI-v4-TargetTCP dstIP,dstPort -j ACCEPT -j NFLOG --nflog-group 0 --nflog-prefix 531138568:5d9e2e2d8431510001bcc931:5d61b8f4884e46000146bcd9:3\n\t\t\trs, err = windows.ParseRuleSpec(xformedRules[2][2:]...)\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(rs.Protocol, ShouldEqual, 6)\n\t\t\tSo(rs.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(rs.Log, ShouldBeTrue)\n\t\t\tSo(rs.LogPrefix, ShouldEqual, \"531138568:5d9e2e2d8431510001bcc931:5d61b8f4884e46000146bcd9:3\")\n\t\t\tSo(rs.MatchDstPort, ShouldHaveLength, 1)\n\t\t\tSo(rs.MatchDstPort[0].Start, ShouldEqual, 2323)\n\t\t\tSo(rs.MatchDstPort[0].End, ShouldEqual, 2323)\n\t\t\tSo(rs.MatchSet, ShouldHaveLength, 2)\n\t\t\tSo(rs.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-v4-ext-z4QRD1114Z2xd\")\n\t\t\tSo(rs.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[0].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[0].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[0].MatchSetDstPort, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[1].MatchSetName, ShouldEqual, \"TRI-v4-TargetTCP\")\n\t\t\tSo(rs.MatchSet[1].MatchSetNegate, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[1].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[1].MatchSetSrcPort, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[1].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[1].MatchSetDstPort, ShouldBeTrue)\n\n\t\t\t// check last rule 6\n\t\t\t// OUTPUT TRI-App-hostZ7PbqL-0 -p 6 -m state --state NEW -m set --match-set TRI-v4-TargetTCP dst --match multiport --dports 2323 -j ACCEPT\n\t\t\trs, err = windows.ParseRuleSpec(xformedRules[3][2:]...)\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(rs.Protocol, ShouldEqual, 6)\n\t\t\tSo(rs.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(rs.Log, ShouldBeFalse)\n\t\t\tSo(rs.TCPFlagsSpecified, ShouldBeTrue)\n\t\t\tSo(rs.TCPFlags, ShouldEqual, 2)\n\t\t\tSo(rs.TCPFlagsMask, ShouldEqual, 18)\n\t\t\tSo(rs.MatchDstPort, ShouldHaveLength, 
1)\n\t\t\tSo(rs.MatchDstPort[0].Start, ShouldEqual, 2323)\n\t\t\tSo(rs.MatchDstPort[0].End, ShouldEqual, 2323)\n\t\t\tSo(rs.MatchSet, ShouldHaveLength, 1)\n\n\t\t})\n\n\t})\n\n}\n\nfunc TestTransformACLRuleHostNet(t *testing.T) {\n\n\tConvey(\"When I parse a set of net acl rules for host pu\", t, func() {\n\n\t\tvar aclRules [][]string\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-Net-hostZ7PbqL-0 -p 6 -m set --match-set TRI-v6-ext-cUDEx1114Z2xd src -m state --state NEW -m set ! --match-set TRI-v6-TargetTCP src --match multiport --dports 1:65535 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 3617624947:5d6967333561e000018a3a65:5d60448a884e46000145cf67:3\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-Net-hostZ7PbqL-0 -p 6 -m set --match-set TRI-v6-ext-cUDEx1114Z2xd src -m state --state NEW -m set ! --match-set TRI-v6-TargetTCP src --match multiport --dports 1:65535 -j ACCEPT\", \" \"))\n\n\t\taclInfo := &ACLInfo{}\n\t\taclInfo.TCPPorts = sampleTCPPorts\n\t\taclInfo.UDPPorts = sampleUDPPorts\n\t\taclInfo.PUType = common.HostPU\n\n\t\txformedRules := transformACLRules(aclRules, aclInfo, nil, false)\n\n\t\tConvey(\"They should be merged to one rule for the HostPU-INPUT chain\", func() {\n\n\t\t\tSo(xformedRules, ShouldHaveLength, 1)\n\n\t\t\t// check combined rule 1 and 2\n\t\t\t// OUTPUT HostPU-INPUT -p 6 --dports 1:65535 -m set --match-set TRI-v6-ext-cUDEx1114Z2xd srcIP,srcPort -m set ! 
--match-set TRI-v6-TargetTCP srcIP,srcPort -j ACCEPT -j NFLOG --nflog-group 0 --nflog-prefix 3617624947:5d6967333561e000018a3a65:5d60448a884e46000145cf67:3\n\t\t\trs, err := windows.ParseRuleSpec(xformedRules[0][2:]...)\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(rs.Protocol, ShouldEqual, 6)\n\t\t\tSo(rs.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(rs.Log, ShouldBeTrue)\n\t\t\tSo(rs.LogPrefix, ShouldEqual, \"3617624947:5d6967333561e000018a3a65:5d60448a884e46000145cf67:3\")\n\t\t\tSo(rs.TCPFlagsSpecified, ShouldBeTrue)\n\t\t\tSo(rs.TCPFlags, ShouldEqual, 2)\n\t\t\tSo(rs.TCPFlagsMask, ShouldEqual, 18)\n\t\t\tSo(rs.MatchDstPort, ShouldHaveLength, 1)\n\t\t\tSo(rs.MatchDstPort[0].Start, ShouldEqual, 1)\n\t\t\tSo(rs.MatchDstPort[0].End, ShouldEqual, 65535)\n\t\t\tSo(rs.MatchSet, ShouldHaveLength, 2)\n\t\t\tSo(rs.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-v6-ext-cUDEx1114Z2xd\")\n\t\t\tSo(rs.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[0].MatchSetSrcIP, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[0].MatchSetSrcPort, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[0].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[0].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[1].MatchSetName, ShouldEqual, \"TRI-v6-TargetTCP\")\n\t\t\tSo(rs.MatchSet[1].MatchSetNegate, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[1].MatchSetSrcIP, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[1].MatchSetSrcPort, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[1].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[1].MatchSetDstPort, ShouldBeFalse)\n\n\t\t})\n\t})\n\n}\n\nfunc TestTransformACLRuleHostSvc(t *testing.T) {\n\n\tConvey(\"When I parse some acl rules for a host service\", t, func() {\n\n\t\tvar aclRules [][]string\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-1114oqLQAD-0 -p 6 -m set --match-set TRI-v4-ext-cUDEx1114Z2xd dst -m state --state NEW -m set ! 
--match-set TRI-v4-TargetTCP dst --match multiport --dports 1:65535 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 531138568:5d6044b9e99572000149d650:5d60448a884e46000145cf67:6\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-1114oqLQAD-0 -p 6 -m set --match-set TRI-v4-ext-cUDEx1114Z2xd dst -m state --state NEW -m set ! --match-set TRI-v4-TargetTCP dst --match multiport --dports 1:65535 -j DROP\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-1114oqLQAD-0 -p 17 -m set --match-set TRI-v4-TargetUDP src --match multiport --dports 80,443,8080:8443 -j ACCEPT\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-1114oqLQAD-0 -m set --match-set TRI-v4-ext-z4QRD1114Z2xd dst -m state --state NEW -m set ! --match-set TRI-v4-TargetTCP dst --match multiport --dports 2323 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 531138568:5d9e2e2d8431510001bcc931:5d61b8f4884e46000146bcd9:3\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-1114oqLQAD-0 -m set --match-set TRI-v4-ext-z4QRD1114Z2xd dst -m state --state NEW -m set ! --match-set TRI-v4-TargetTCP dst --match multiport --dports 2323 -j ACCEPT\", \" \"))\n\n\t\taclInfo := &ACLInfo{}\n\t\taclInfo.TCPPorts = sampleTCPPorts\n\t\taclInfo.UDPPorts = sampleUDPPorts\n\t\taclInfo.PUType = common.HostNetworkPU\n\n\t\txformedRules := transformACLRules(aclRules, aclInfo, nil, true)\n\n\t\tConvey(\"No outgoing rules are kept for host-service PU\", func() {\n\n\t\t\tSo(xformedRules, ShouldHaveLength, 0)\n\n\t\t})\n\n\t})\n\n}\n\nfunc TestTransformACLRuleHostSvcNet(t *testing.T) {\n\n\tConvey(\"When I parse a set of net acl rules for a host svc pu\", t, func() {\n\n\t\tvar aclRules [][]string\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-Net-1114oqLQAD-0 -p 6 -m set --match-set TRI-v4-ext-cUDEx1114Z2xd src -m state --state NEW -m set ! 
--match-set TRI-v4-TargetTCP src --match multiport --dports 1:65535 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 531138568:5d6967333561e000018a3a65:5d60448a884e46000145cf67:3\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-Net-1114oqLQAD-0 -p 6 -m set --match-set TRI-v4-ext-cUDEx1114Z2xd src -m state --state NEW -m set ! --match-set TRI-v4-TargetTCP src --match multiport --dports 1:65535 -j ACCEPT\", \" \"))\n\t\t// protocol any rules for input on host-svc should be dropped\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-Net-1114oqLQAD-0 -m set --match-set TRI-v4-ext-dxxgXBWCQy0= src -m state --state NEW -m set ! --match-set TRI-v4-TargetTCP src --match multiport --dports 1:65535 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 187906336:5e2b46b82e67d60001766eda:5dfd1e479facec0001e5936b:3\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-Net-1114oqLQAD-0 -m set --match-set TRI-v4-ext-dxxgXBWCQy0= src -m state --state NEW -m set ! --match-set TRI-v4-TargetTCP src --match multiport --dports 1:65535 -j ACCEPT\", \" \"))\n\n\t\taclInfo := &ACLInfo{}\n\t\taclInfo.TCPPorts = sampleTCPPorts\n\t\taclInfo.UDPPorts = sampleUDPPorts\n\t\taclInfo.PUType = common.HostNetworkPU\n\n\t\txformedRules := transformACLRules(aclRules, aclInfo, nil, false)\n\n\t\tConvey(\"They should be merged to one rule for the HostSvcRules-INPUT chain and should have the PU's ports\", func() {\n\n\t\t\tSo(xformedRules, ShouldHaveLength, 1)\n\n\t\t\t// check combined rule 1 and 2\n\t\t\t// dports should be replaced with PU's ports\n\t\t\t// OUTPUT HostSvcRules-INPUT -p 6 --dports 80,443 -m set --match-set TRI-v4-ext-cUDEx1114Z2xd srcIP,srcPort -m set ! 
--match-set TRI-v4-TargetTCP srcIP,srcPort -j ACCEPT -j NFLOG --nflog-group 0 --nflog-prefix 531138568:5d6967333561e000018a3a65:5d60448a884e46000145cf67:3\n\t\t\trs, err := windows.ParseRuleSpec(xformedRules[0][2:]...)\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(rs.Protocol, ShouldEqual, 6)\n\t\t\tSo(rs.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(rs.Log, ShouldBeTrue)\n\t\t\tSo(rs.LogPrefix, ShouldEqual, \"531138568:5d6967333561e000018a3a65:5d60448a884e46000145cf67:3\")\n\t\t\tSo(rs.TCPFlagsSpecified, ShouldBeTrue)\n\t\t\tSo(rs.TCPFlags, ShouldEqual, 2)\n\t\t\tSo(rs.TCPFlagsMask, ShouldEqual, 18)\n\t\t\tSo(rs.MatchSet, ShouldHaveLength, 2)\n\t\t\tSo(rs.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-v4-ext-cUDEx1114Z2xd\")\n\t\t\tSo(rs.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[0].MatchSetSrcIP, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[0].MatchSetSrcPort, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[0].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[0].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[1].MatchSetName, ShouldEqual, \"TRI-v4-TargetTCP\")\n\t\t\tSo(rs.MatchSet[1].MatchSetNegate, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[1].MatchSetSrcIP, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[1].MatchSetSrcPort, ShouldBeTrue)\n\t\t\tSo(rs.MatchSet[1].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(rs.MatchSet[1].MatchSetDstPort, ShouldBeFalse)\n\n\t\t})\n\n\t})\n\n}\n\nfunc TestTransformACLRuleIcmp(t *testing.T) {\n\n\tConvey(\"When I parse a set of net acl rules with an icmp rule\", t, func() {\n\n\t\tvar aclRules [][]string\n\n\t\trule, err := shellquote.Split(\"OUTPUT TRI-Net-hostZ7PbqL-0 -p 1 --icmp-type 3/0:2,6 -j NFLOG --nflog-group 11 --nflog-prefix \\\"3617624947:5d6967333561e000018a3a65:5d60448a884e46000145cf67:incoming n_3484738895:3\\\"\")\n\t\tSo(err, ShouldBeNil)\n\n\t\taclRules = append(aclRules, rule)\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-Net-hostZ7PbqL-0 -p 1 --icmp-type 3/0:2,6 -j ACCEPT\", \" \"))\n\t\taclRules = 
append(aclRules, strings.Split(\"OUTPUT TRI-Net-hostZ7PbqL-0 -p 1 --icmp-type 8/0:3,5 -j NFLOG --nflog-group 11 --nflog-prefix 3617624947:5d6967333561e000018a3a65:5d60448a884e46000145cf67:3\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-Net-hostZ7PbqL-0 -p 1 --icmp-type 8/0:3 -j ACCEPT\", \" \"))\n\n\t\taclInfo := &ACLInfo{}\n\t\taclInfo.TCPPorts = sampleTCPPorts\n\t\taclInfo.UDPPorts = sampleUDPPorts\n\t\taclInfo.PUType = common.HostPU\n\n\t\txformedRules := transformACLRules(aclRules, aclInfo, nil, false)\n\n\t\tConvey(\"They should be merged to one rule for the HostPU-INPUT chain\", func() {\n\n\t\t\tSo(xformedRules, ShouldHaveLength, 3)\n\n\t\t\t// check combined rule 1 and 2\n\t\t\t// OUTPUT HostPU-INPUT -p 1 --icmp-type 3/0:2,6 -j ACCEPT -j NFLOG --nflog-group 11 --nflog-prefix 3617624947:5d6967333561e000018a3a65:5d60448a884e46000145cf67:3\n\t\t\trs, err := windows.ParseRuleSpec(xformedRules[0][2:]...)\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(rs.Protocol, ShouldEqual, 1)\n\t\t\tSo(rs.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(rs.IcmpMatch, ShouldHaveLength, 2)\n\t\t\tSo(rs.IcmpMatch[0].IcmpType, ShouldEqual, 3)\n\t\t\tSo(rs.IcmpMatch[0].IcmpCodeRange.Start, ShouldEqual, 0)\n\t\t\tSo(rs.IcmpMatch[0].IcmpCodeRange.End, ShouldEqual, 2)\n\t\t\tSo(rs.IcmpMatch[1].IcmpType, ShouldEqual, 3)\n\t\t\tSo(rs.IcmpMatch[1].IcmpCodeRange.Start, ShouldEqual, 6)\n\t\t\tSo(rs.IcmpMatch[1].IcmpCodeRange.End, ShouldEqual, 6)\n\t\t\tSo(rs.Log, ShouldBeTrue)\n\t\t\tSo(rs.LogPrefix, ShouldEqual, \"3617624947:5d6967333561e000018a3a65:5d60448a884e46000145cf67:incoming n_3484738895:3\")\n\n\t\t\t// rules 3 and 4 should not be combined (they differ by icmp code)\n\t\t\trs, err = windows.ParseRuleSpec(xformedRules[1][2:]...)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(rs.Protocol, ShouldEqual, 1)\n\t\t\tSo(rs.Action, ShouldEqual, frontman.FilterActionContinue)\n\t\t\tSo(rs.IcmpMatch, ShouldHaveLength, 2)\n\t\t\tSo(rs.IcmpMatch[0].IcmpType, ShouldEqual, 
8)\n\t\t\tSo(rs.IcmpMatch[0].IcmpCodeRange.Start, ShouldEqual, 0)\n\t\t\tSo(rs.IcmpMatch[0].IcmpCodeRange.End, ShouldEqual, 3)\n\t\t\tSo(rs.IcmpMatch[1].IcmpType, ShouldEqual, 8)\n\t\t\tSo(rs.IcmpMatch[1].IcmpCodeRange.Start, ShouldEqual, 5)\n\t\t\tSo(rs.IcmpMatch[1].IcmpCodeRange.End, ShouldEqual, 5)\n\t\t\tSo(rs.Log, ShouldBeTrue)\n\t\t\tSo(rs.LogPrefix, ShouldEqual, \"3617624947:5d6967333561e000018a3a65:5d60448a884e46000145cf67:3\")\n\n\t\t\trs, err = windows.ParseRuleSpec(xformedRules[2][2:]...)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(rs.Protocol, ShouldEqual, 1)\n\t\t\tSo(rs.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(rs.IcmpMatch, ShouldHaveLength, 1)\n\t\t\tSo(rs.IcmpMatch[0].IcmpType, ShouldEqual, 8)\n\t\t\tSo(rs.IcmpMatch[0].IcmpCodeRange.Start, ShouldEqual, 0)\n\t\t\tSo(rs.IcmpMatch[0].IcmpCodeRange.End, ShouldEqual, 3)\n\t\t\tSo(rs.Log, ShouldBeFalse)\n\t\t\tSo(rs.LogPrefix, ShouldEqual, \"\")\n\n\t\t})\n\t})\n\n\tConvey(\"When I parse a set of app acl rules with an icmp rule\", t, func() {\n\n\t\tvar aclRules [][]string\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-hostZ7PbqL-0 -p 1 --icmp-type 8/1:3 -j NFLOG --nflog-group 10 --nflog-prefix 531138568:5d6044b9e99572000149d650:5d60448a884e46000145cf67:6\", \" \"))\n\t\taclRules = append(aclRules, strings.Split(\"OUTPUT TRI-App-hostZ7PbqL-0 -p 1 --icmp-type 8/1:3 -j DROP\", \" \"))\n\n\t\taclInfo := &ACLInfo{}\n\t\taclInfo.TCPPorts = sampleTCPPorts\n\t\taclInfo.UDPPorts = sampleUDPPorts\n\t\taclInfo.PUType = common.HostPU\n\n\t\txformedRules := transformACLRules(aclRules, aclInfo, nil, true)\n\n\t\tConvey(\"They should be merged to one rule for the HostPU-OUTPUT chain\", func() {\n\n\t\t\tSo(xformedRules, ShouldHaveLength, 1)\n\n\t\t\t// check combined rule 1 and 2\n\t\t\t// OUTPUT HostPU-OUTPUT -p 1 --icmp-type 8/1:3 -j DROP -j NFLOG --nflog-group 10 --nflog-prefix 531138568:5d6044b9e99572000149d650:5d60448a884e46000145cf67:6\n\t\t\trs, err := 
windows.ParseRuleSpec(xformedRules[0][2:]...)\n\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(rs.Protocol, ShouldEqual, 1)\n\t\t\tSo(rs.Action, ShouldEqual, frontman.FilterActionBlock)\n\t\t\tSo(rs.IcmpMatch, ShouldHaveLength, 1)\n\t\t\tSo(rs.IcmpMatch[0].IcmpType, ShouldEqual, 8)\n\t\t\tSo(rs.IcmpMatch[0].IcmpCodeRange.Start, ShouldEqual, 1)\n\t\t\tSo(rs.IcmpMatch[0].IcmpCodeRange.End, ShouldEqual, 3)\n\t\t\tSo(rs.Log, ShouldBeTrue)\n\t\t\tSo(rs.LogPrefix, ShouldEqual, \"531138568:5d6044b9e99572000149d650:5d60448a884e46000145cf67:6\")\n\n\t\t})\n\t})\n\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/comparators.go",
    "content": "package iptablesctrl\n\nimport \"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\nfunc testObserveContinue(p *policy.FlowPolicy) bool {\n\treturn p.ObserveAction.ObserveContinue()\n}\n\nfunc testNotObserved(p *policy.FlowPolicy) bool {\n\treturn !p.ObserveAction.Observed()\n}\n\nfunc testObserveApply(p *policy.FlowPolicy) bool {\n\treturn p.ObserveAction.ObserveApply()\n}\n\nfunc testReject(p *policy.FlowPolicy) bool {\n\treturn (p.Action&policy.Reject != 0)\n}\n\nfunc testAccept(p *policy.FlowPolicy) bool {\n\treturn (p.Action&policy.Accept != 0)\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/constants_nonwindows.go",
    "content": "// +build !windows\n\npackage iptablesctrl\n\nconst (\n\tipTableSectionOutput     = \"OUTPUT\"\n\tipTableSectionPreRouting = \"PREROUTING\"\n\tappPacketIPTableContext  = \"mangle\"\n\tnetPacketIPTableContext  = \"mangle\"\n\tappProxyIPTableContext   = \"nat\"\n\n\tcustomQOSChainNFHook = \"POSTROUTING\"\n\tcustomQOSChainTable  = \"mangle\"\n\t// CustomQOSChain is the name of the chain where users can install custom QOS rules\n\tCustomQOSChain = \"POST-CUSTOM-QOS\"\n)\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/constants_windows.go",
    "content": "// +build windows\n\npackage iptablesctrl\n\nconst (\n\tipTableSectionOutput     = \"OUTPUT\"\n\tipTableSectionPreRouting = \"PREROUTING\"\n\tappPacketIPTableContext  = \"OUTPUT\"\n\tappProxyIPTableContext   = \"OUTPUT\"\n\tcustomQOSChainNFHook     = \"POSTROUTING\"\n\tcustomQOSChainTable      = \"mangle\"\n\t// CustomQOSChain is the name of the chain where users can install custom QOS rules\n\tCustomQOSChain          = \"POST-CUSTOM-QOS\"\n\tnetPacketIPTableContext = \"INPUT\"\n)\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/icmp_linux.go",
    "content": "// +build !rhel6\n\npackage iptablesctrl\n\n/*\n#cgo linux LDFLAGS: -L/tmp -lpcap\n#include<string.h>\n#include<stdlib.h>\n#include<pcap.h>\n\nchar bpf_program[1500];\n\nchar *compileBPF(const char *expr) {\n\tstruct bpf_program program;\n\tstruct bpf_insn *ins;\n        char buf[100];\n\tint i, dlt = DLT_RAW;\n\n\tif (pcap_compile_nopcap(65535, dlt, &program, expr, 1,\n\t\t\t\tPCAP_NETMASK_UNKNOWN)) {\n\t\treturn NULL;\n\t}\n\n        if (program.bf_len > 63) {\n               return NULL;\n        }\n\n\tsprintf(bpf_program, \"%d,\", program.bf_len);\n\tins = program.bf_insns;\n\n\n\tfor (i = 0; i < program.bf_len-1; ++ins, ++i) {\n                sprintf(buf, \"%u %u %u %u,\", ins->code, ins->jt, ins->jf, ins->k);\n                strcat(bpf_program, buf);\n        }\n\n        sprintf(buf, \"%u %u %u %u\", ins->code, ins->jt, ins->jf, ins->k);\n        strcat(bpf_program, buf);\n\tpcap_freecode(&program);\n\treturn bpf_program;\n}\n*/\nimport \"C\"\nimport (\n\t\"strings\"\n\t\"sync\"\n\t\"unsafe\"\n\n\t\"go.aporeto.io/gaia/protocols\"\n)\n\nfunc icmpRule(icmpTypeCode string, policyRestrictions []string) []string {\n\tbytecode := getBPFCode(icmpTypeCode, policyRestrictions)\n\treturn []string{\"-m\", \"bpf\", \"--bytecode\", bytecode}\n}\n\nfunc getICMPv6() string {\n\n\tgenString := func(icmpType string, icmpCode string) string {\n\t\treturn \"(icmp6[0] == \" + icmpType + \" and icmp6[1] == \" + icmpCode + \")\"\n\t}\n\n\trouterSolicitation := genString(\"133\", \"0\")\n\trouterAdvertisement := genString(\"134\", \"0\")\n\tneighborSolicitation := genString(\"135\", \"0\")\n\tneighborAdvertisement := genString(\"136\", \"0\")\n\tinverseNeighborSolicitation := genString(\"141\", \"0\")\n\tinverseNeighborAdvertisement := genString(\"142\", \"0\")\n\n\ts := []string{routerSolicitation,\n\t\trouterAdvertisement,\n\t\tneighborSolicitation,\n\t\tneighborAdvertisement,\n\t\tinverseNeighborSolicitation,\n\t\tinverseNeighborAdvertisement}\n\n\treturn 
strings.Join(s, \" or \")\n}\n\nvar lock sync.Mutex\n\nfunc compileExprToBPF(expr string) string {\n\tlock.Lock()\n\tdefer lock.Unlock()\n\n\tcExpr := C.CString(expr)\n\tdefer C.free(unsafe.Pointer(cExpr))\n\n\tbpfString := C.compileBPF(cExpr)\n\n\treturn C.GoString(bpfString)\n}\n\nfunc getBPFCode(icmpTypeCode string, policyRestriction []string) string {\n\tbytecode := compileExprToBPF(generateExpr(icmpTypeCode, policyRestriction))\n\n\t// bpf can return empty bytecodes as it is smart to know the expression\n\t// doesn't have a match. eg. 'icmp and icmp6'. In that case generate\n\t// and expression which doesn't match anything but bpf still generates byte code for.\n\tif bytecode == \"\" {\n\t\tbytecode = compileExprToBPF(\"icmp[0] > 5 and icmp[0] < 5\")\n\t}\n\n\treturn bytecode\n}\n\nfunc generateExpr(icmpTypeCode string, policyRestriction []string) string {\n\n\tprocessList := func(vs []string, f func(string) string) []string {\n\t\tvals := make([]string, len(vs))\n\n\t\tfor i, v := range vs {\n\t\t\tvals[i] = f(v)\n\t\t}\n\n\t\treturn vals\n\t}\n\n\tleafProcessElement := func(f func() string) string {\n\t\treturn \"(\" + f() + \")\"\n\t}\n\n\tgenProto := func(val string) string {\n\t\treturn leafProcessElement(func() string { return val })\n\t}\n\n\tgenType := func(proto, icmpType string) string {\n\n\t\tswitch strings.ToUpper(proto) {\n\t\tcase protocols.L4ProtocolICMP:\n\t\t\treturn leafProcessElement(func() string { return \"icmp[0] == \" + icmpType })\n\t\tdefault:\n\t\t\treturn leafProcessElement(func() string { return \"icmp6[0] == \" + icmpType })\n\t\t}\n\t}\n\n\tgenCodes := func(proto, val string) string {\n\n\t\tgenCode := func(v string) string {\n\n\t\t\tswitch splits := strings.Split(v, \":\"); len(splits) {\n\t\t\tcase 1:\n\t\t\t\tswitch strings.ToUpper(proto) {\n\t\t\t\tcase protocols.L4ProtocolICMP:\n\t\t\t\t\treturn leafProcessElement(func() string { return \"icmp[1] == \" + v })\n\t\t\t\tdefault:\n\t\t\t\t\treturn leafProcessElement(func() 
string { return \"icmp6[1] == \" + v })\n\t\t\t\t}\n\t\t\tdefault:\n\t\t\t\tmin := splits[0]\n\t\t\t\tmax := splits[1]\n\n\t\t\t\tswitch strings.ToUpper(proto) {\n\t\t\t\tcase protocols.L4ProtocolICMP:\n\t\t\t\t\tminExpr := leafProcessElement(func() string { return \"icmp[1] >= \" + min })\n\t\t\t\t\tmaxExpr := leafProcessElement(func() string { return \"icmp[1] <= \" + max })\n\t\t\t\t\treturn leafProcessElement(func() string { return minExpr + \" and \" + maxExpr })\n\t\t\t\tdefault:\n\t\t\t\t\tminExpr := leafProcessElement(func() string { return \"icmp6[1] >= \" + min })\n\t\t\t\t\tmaxExpr := leafProcessElement(func() string { return \"icmp6[1] <= \" + max })\n\t\t\t\t\treturn leafProcessElement(func() string { return minExpr + \" and \" + maxExpr })\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tsplits := strings.Split(val, \",\")\n\t\tvals := processList(splits, genCode)\n\n\t\treturn leafProcessElement(func() string { return strings.Join(vals, \"or\") })\n\t}\n\n\tprocessSingleTypeCode := func(icmpTypeCode string) string {\n\t\texpr := \"\"\n\t\tsplits := strings.Split(icmpTypeCode, \"/\")\n\t\tproto := splits[0]\n\n\t\tfor i, val := range splits {\n\t\t\tswitch i {\n\t\t\tcase 0:\n\t\t\t\texpr = genProto(val)\n\t\t\tcase 1:\n\t\t\t\texpr = leafProcessElement(func() string { return expr + \" and \" + genType(proto, val) })\n\t\t\tcase 2:\n\t\t\t\texpr = leafProcessElement(func() string { return expr + \" and \" + genCodes(proto, val) })\n\t\t\t}\n\t\t}\n\n\t\treturn expr\n\t}\n\n\tcombined := []string{}\n\tbpfExprForPolicyRestriction := strings.Join(processList(policyRestriction, processSingleTypeCode), \" or \")\n\n\tif bpfExprForPolicyRestriction != \"\" {\n\t\tbpfExprForPolicyRestriction = leafProcessElement(func() string { return bpfExprForPolicyRestriction })\n\t\tcombined = []string{bpfExprForPolicyRestriction}\n\t}\n\n\tbpfExprForExtNet := processSingleTypeCode(icmpTypeCode)\n\n\tcombined = append(combined, bpfExprForExtNet)\n\treturn leafProcessElement(func() 
string { return strings.Join(combined, \" and \") })\n}\n\nvar icmpAllow = func() string {\n\treturn compileExprToBPF(getICMPv6())\n}\n\nfunc allowICMPv6(cfg *ACLInfo) {\n\tcfg.ICMPv6Allow = icmpAllow()\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/icmp_linux_test.go",
    "content": "// +build !windows,!rhel6\n\npackage iptablesctrl\n\nimport \"testing\"\n\nfunc Test_getICMPv6(t *testing.T) {\n\ttests := []struct {\n\t\tname string\n\t\twant string\n\t}{\n\t\t{\"icmpv6DefaultAllow\", \"(icmp6[0] == 133 and icmp6[1] == 0) or (icmp6[0] == 134 and icmp6[1] == 0) or (icmp6[0] == 135 and icmp6[1] == 0) or (icmp6[0] == 136 and icmp6[1] == 0) or (icmp6[0] == 141 and icmp6[1] == 0) or (icmp6[0] == 142 and icmp6[1] == 0)\"},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := getICMPv6(); got != tt.want {\n\t\t\t\tt.Errorf(\"getICMPv6() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_generateExpr(t *testing.T) {\n\ttype args struct {\n\t\ticmpTypeCode      string\n\t\tpolicyRestriction []string\n\t}\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t\twant string\n\t}{\n\t\t{\"1\", args{\"icmp\", []string{}}, \"((icmp))\"},\n\t\t{\"2\", args{\"icmp6\", []string{}}, \"((icmp6))\"},\n\t\t{\"3\", args{\"icmp/1\", []string{}}, \"(((icmp) and (icmp[0] == 1)))\"},\n\t\t{\"4\", args{\"icmp6/1\", []string{}}, \"(((icmp6) and (icmp6[0] == 1)))\"},\n\t\t{\"5\", args{\"icmp/255\", []string{}}, \"(((icmp) and (icmp[0] == 255)))\"},\n\t\t{\"6\", args{\"icmp/1/2\", []string{}}, \"((((icmp) and (icmp[0] == 1)) and ((icmp[1] == 2))))\"},\n\t\t{\"7\", args{\"icmp/1/2,3,4\", []string{}}, \"((((icmp) and (icmp[0] == 1)) and ((icmp[1] == 2)or(icmp[1] == 3)or(icmp[1] == 4))))\"},\n\t\t{\"8\", args{\"icmp/1/0:255\", []string{}}, \"((((icmp) and (icmp[0] == 1)) and (((icmp[1] >= 0) and (icmp[1] <= 255)))))\"},\n\t\t{\"9\", args{\"icmp/1/1,2,3:255\", []string{}}, \"((((icmp) and (icmp[0] == 1)) and ((icmp[1] == 1)or(icmp[1] == 2)or((icmp[1] >= 3) and (icmp[1] <= 255)))))\"},\n\t\t{\"10\", args{\"icmp6/255\", []string{}}, \"(((icmp6) and (icmp6[0] == 255)))\"},\n\t\t{\"11\", args{\"icmp6/1/2\", []string{}}, \"((((icmp6) and (icmp6[0] == 1)) and ((icmp6[1] == 2))))\"},\n\t\t{\"12\", 
args{\"icmp6/1/2,3,4\", []string{}}, \"((((icmp6) and (icmp6[0] == 1)) and ((icmp6[1] == 2)or(icmp6[1] == 3)or(icmp6[1] == 4))))\"},\n\t\t{\"13\", args{\"icmp6/1/0:255\", []string{}}, \"((((icmp6) and (icmp6[0] == 1)) and (((icmp6[1] >= 0) and (icmp6[1] <= 255)))))\"},\n\t\t{\"14\", args{\"icmp6/1/1,2,3:255\", []string{}}, \"((((icmp6) and (icmp6[0] == 1)) and ((icmp6[1] == 1)or(icmp6[1] == 2)or((icmp6[1] >= 3) and (icmp6[1] <= 255)))))\"},\n\t\t{\"15\", args{\"icmp\", []string{\"icmp/1/1\"}}, \"(((((icmp) and (icmp[0] == 1)) and ((icmp[1] == 1)))) and (icmp))\"},\n\t\t{\"16\", args{\"icmp6\", []string{\"icmp6/1/0:255\"}}, \"(((((icmp6) and (icmp6[0] == 1)) and (((icmp6[1] >= 0) and (icmp6[1] <= 255))))) and (icmp6))\"},\n\t\t{\"17\", args{\"icmp/1\", []string{\"icmp\", \"icmp6\"}}, \"(((icmp) or (icmp6)) and ((icmp) and (icmp[0] == 1)))\"},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := generateExpr(tt.args.icmpTypeCode, tt.args.policyRestriction); got != tt.want {\n\t\t\t\tt.Errorf(\"generateExpr() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/icmp_rhel6.go",
    "content": "// +build rhel6 darwin\n\npackage iptablesctrl\n\nfunc icmpRule(icmpTypeCode string, policyRestrictions []string) []string {\n\treturn []string{}\n}\n\nfunc allowICMPv6(cfg *ACLInfo) {\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/icmp_windows.go",
    "content": "// +build windows\n\npackage iptablesctrl\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/windows\"\n\t\"go.uber.org/zap\"\n)\n\nfunc allowICMPv6(cfg *ACLInfo) {\n\t// appropriate rules are in rules_windows.go already\n}\n\nfunc icmpRule(icmpTypeCode string, policyRestrictions []string) []string {\n\truleSub, err := windows.ReduceIcmpProtoString(icmpTypeCode, policyRestrictions)\n\tif err != nil {\n\t\tzap.L().Debug(\"could not formulate ICMP rule\", zap.Error(err))\n\t\t// we cannot return empty because it will match all icmp\n\t\truleSub = windows.GetIcmpNoMatch()\n\t}\n\treturn ruleSub\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/instance.go",
"content": "package iptablesctrl\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ebpf\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\t//IPV4 version for ipv4\n\tIPV4 = iota\n\t//IPV6 version for ipv6\n\tIPV6\n)\n\n//Instance is the structure holding the ipv4 and ipv6 handles\ntype Instance struct {\n\tiptv4 *iptables\n\tiptv6 *iptables\n}\n\n// SetTargetNetworks updates the target networks. There are three different\n// types of target networks:\n//   - TCPTargetNetworks for TCP traffic (by default 0.0.0.0/0)\n//   - UDPTargetNetworks for UDP traffic (by default empty)\n//   - ExcludedNetworks that are always ignored (by default empty)\nfunc (i *Instance) SetTargetNetworks(c *runtime.Configuration) error {\n\n\tif err := i.iptv4.SetTargetNetworks(c); err != nil {\n\t\treturn err\n\t}\n\n\tif err := i.iptv6.SetTargetNetworks(c); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\n// Run starts the iptables controller\nfunc (i *Instance) Run(ctx context.Context) error {\n\n\tif err := i.iptv4.Run(ctx); err != nil {\n\t\treturn err\n\t}\n\n\tif err := i.iptv6.Run(ctx); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\n// ConfigureRules implements the ConfigureRules interface. It will create the\n// port sets and then it will call install rules to create all the ACLs for\n// the given chains. PortSets are only created here. 
Updates will use the\n// exact same logic.\nfunc (i *Instance) ConfigureRules(version int, contextID string, pu *policy.PUInfo) error {\n\tif err := i.iptv4.ConfigureRules(version, contextID, pu); err != nil {\n\t\treturn err\n\t}\n\n\tif err := i.iptv6.ConfigureRules(version, contextID, pu); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\n// DeleteRules implements the DeleteRules interface. This is responsible\n// for cleaning all ACLs and associated chains, as well as all the sets\n// that we have created. Note that this only clears up the state\n// for a given processing unit.\nfunc (i *Instance) DeleteRules(version int, contextID string, tcpPorts, udpPorts string, mark string, username string, containerInfo *policy.PUInfo) error {\n\n\tif err := i.iptv4.DeleteRules(version, contextID, tcpPorts, udpPorts, mark, username, containerInfo); err != nil {\n\t\tzap.L().Warn(\"Delete rules for iptables v4 returned error\")\n\t}\n\n\tif err := i.iptv6.DeleteRules(version, contextID, tcpPorts, udpPorts, mark, username, containerInfo); err != nil {\n\t\tzap.L().Warn(\"Delete rules for iptables v6 returned error\")\n\t}\n\n\treturn nil\n}\n\n// UpdateRules implements the update part of the interface. Update will call\n// install rules to install the new rules and then it will delete the old rules.\n// For installations that do not have the latest iptables-restore, we order\n// the operations so that the switch is almost atomic, by creating the new rules\n// first. 
For the latest kernel versions, iptables-restore will update all the rules\n// in one shot.\nfunc (i *Instance) UpdateRules(version int, contextID string, containerInfo *policy.PUInfo, oldContainerInfo *policy.PUInfo) error {\n\n\tif err := i.iptv4.UpdateRules(version, contextID, containerInfo, oldContainerInfo); err != nil {\n\t\treturn err\n\t}\n\n\tif err := i.iptv6.UpdateRules(version, contextID, containerInfo, oldContainerInfo); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\n// CleanUp requires the implementor to clean up all ACLs and destroy all\n// the IP sets.\nfunc (i *Instance) CleanUp() error {\n\n\tif err := i.iptv4.CleanUp(); err != nil {\n\t\tzap.L().Error(\"Failed to cleanup ipv4 rules\")\n\t}\n\n\tif err := i.iptv6.CleanUp(); err != nil {\n\t\tzap.L().Error(\"Failed to cleanup ipv6 rules\")\n\t}\n\n\treturn nil\n}\n\n// CreateCustomRulesChain creates a custom rules chain if it doesn't exist\nfunc (i *Instance) CreateCustomRulesChain() error {\n\tnonbatchedv4tableprovider, _ := provider.NewGoIPTablesProviderV4([]string{}, CustomQOSChain)\n\tnonbatchedv6tableprovider, _ := provider.NewGoIPTablesProviderV6([]string{}, CustomQOSChain)\n\terr := nonbatchedv4tableprovider.NewChain(customQOSChainTable, CustomQOSChain)\n\tif err != nil {\n\t\tzap.L().Debug(\"Chain already exists\", zap.Error(err))\n\n\t}\n\tpostroutingchainrulesv4, err := nonbatchedv4tableprovider.ListRules(customQOSChainTable, customQOSChainNFHook)\n\tif err != nil {\n\t\tzap.L().Error(\"ListRules returned error\", zap.Error(err))\n\t\treturn err\n\t}\n\tcheckCustomRulesv4 := func() bool {\n\t\tfor _, rule := range postroutingchainrulesv4 {\n\t\t\tif strings.Contains(rule, CustomQOSChain) {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t\treturn false\n\t}\n\tif !checkCustomRulesv4() {\n\t\tif err := nonbatchedv4tableprovider.Insert(customQOSChainTable, customQOSChainNFHook, 1,\n\t\t\t\"-m\", \"addrtype\",\n\t\t\t\"--src-type\", \"LOCAL\",\n\t\t\t\"-j\", CustomQOSChain,\n\t\t); err != nil 
{\n\t\t\tzap.L().Debug(\"Unable to create ipv4 custom rule\", zap.Error(err))\n\t\t}\n\t}\n\n\terr = nonbatchedv6tableprovider.NewChain(customQOSChainTable, CustomQOSChain)\n\tif err != nil {\n\t\tzap.L().Debug(\"Chain already exists\", zap.Error(err))\n\t}\n\tpostroutingchainrulesv6, err := nonbatchedv6tableprovider.ListRules(customQOSChainTable, customQOSChainNFHook)\n\tif err != nil {\n\t\treturn err\n\t}\n\tcheckCustomRulesv6 := func() bool {\n\t\tfor _, rule := range postroutingchainrulesv6 {\n\t\t\tif strings.Contains(rule, CustomQOSChain) {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t\treturn false\n\t}\n\tif !checkCustomRulesv6() {\n\t\tif err := nonbatchedv6tableprovider.Append(customQOSChainTable, customQOSChainNFHook,\n\t\t\t\"-m\", \"addrtype\",\n\t\t\t\"--src-type\", \"LOCAL\",\n\t\t\t\"-j\", CustomQOSChain,\n\t\t); err != nil {\n\t\t\tzap.L().Debug(\"Unable to create ipv6 custom rule\", zap.Error(err))\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// NewInstance creates a new iptables controller instance\nfunc NewInstance(fqc fqconfig.FilterQueue, mode constants.ModeType, ipv6Enabled bool, ebpf ebpf.BPFModule, iptablesLockfile string, serviceMeshType policy.ServiceMesh) (*Instance, error) {\n\n\t// our iptables binary `aporeto-iptables` uses the environment variable XT_LOCK_NAME\n\t// to set the iptables lockfile. 
Standard iptables does not look at this environment variable.\n\tif iptablesLockfile != \"\" {\n\t\tif err := os.Setenv(\"XT_LOCK_NAME\", iptablesLockfile); err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to set XT_LOCK_NAME: %s\", err)\n\t\t}\n\t}\n\n\tipv4Impl, err := GetIPv4Impl()\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to create ipv4 instance: %s\", err)\n\t}\n\n\tipsetV4 := ipsetmanager.V4()\n\tiptInstanceV4 := createIPInstance(ipv4Impl, ipsetV4, fqc, mode, ebpf, serviceMeshType)\n\n\tipv6Impl, err := GetIPv6Impl(ipv6Enabled)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to create ipv6 instance: %s\", err)\n\t}\n\n\tipsetV6 := ipsetmanager.V6()\n\tiptInstanceV6 := createIPInstance(ipv6Impl, ipsetV6, fqc, mode, ebpf, serviceMeshType)\n\n\treturn newInstanceWithProviders(iptInstanceV4, iptInstanceV6)\n}\n\n// newInstanceWithProviders is called after ipt and ips have been created. This helps\n// with unit testing, by making it possible to mock the providers.\nfunc newInstanceWithProviders(iptv4 *iptables, iptv6 *iptables) (*Instance, error) {\n\n\ti := &Instance{\n\t\tiptv4: iptv4,\n\t\tiptv6: iptv6,\n\t}\n\n\treturn i, nil\n}\n\n// ACLProvider returns the current ACL provider that can be re-used by other entities.\nfunc (i *Instance) ACLProvider() []provider.IptablesProvider {\n\treturn []provider.IptablesProvider{i.iptv4.impl, i.iptv6.impl}\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/ipsets.go",
"content": "package iptablesctrl\n\nimport (\n\t\"fmt\"\n\t\"strconv\"\n\n\tprovider \"go.aporeto.io/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\n// updateTargetNetworks updates the set of target networks. It tries to minimize\n// reads/writes to the ipset structures.\nfunc (i *iptables) updateTargetNetworks(set provider.Ipset, old, new []string) error {\n\n\tdeleteMap := map[string]bool{}\n\tfor _, net := range old {\n\t\tdeleteMap[net] = true\n\t}\n\n\tfor _, net := range new {\n\t\tif _, ok := deleteMap[net]; ok {\n\t\t\tdeleteMap[net] = false\n\t\t\tcontinue\n\t\t}\n\t\tif err := i.aclmanager.AddToIPset(set, net); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to update target set: %s\", err)\n\t\t}\n\t}\n\n\tfor net, toDelete := range deleteMap {\n\t\tif toDelete {\n\t\t\tif err := i.aclmanager.DelFromIPset(set, net); err != nil {\n\t\t\t\tzap.L().Debug(\"unable to remove network from set\", zap.Error(err))\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\n// createProxySets creates new proxy target sets -- ipportset is a list of {ip,port}\nfunc (i *iptables) createProxySets(portSetName string) error {\n\tdestSetName, srvSetName := i.getSetNames(portSetName)\n\n\t_, err := i.ipset.NewIpset(destSetName, \"hash:net,port\", i.impl.GetIPSetParam())\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to create ipset for %s: %s\", destSetName, err)\n\t}\n\n\t// create ipset for port match\n\t_, err = i.ipset.NewIpset(srvSetName, proxySetPortIpsetType, nil)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to create ipset for %s: %s\", srvSetName, err)\n\t}\n\n\treturn nil\n}\n\nfunc (i *iptables) updateProxySet(policy *policy.PUPolicy, portSetName string) error {\n\n\tipFilter := i.impl.IPFilter()\n\tdstSetName, srvSetName := i.getSetNames(portSetName)\n\tvipTargetSet := i.ipset.GetIpset(dstSetName)\n\tif ferr := vipTargetSet.Flush(); ferr != nil {\n\t\tzap.L().Warn(\"Unable to flush the vip proxy set\")\n\t}\n\n\tfor _, 
dependentService := range policy.DependentServices() {\n\t\taddresses := dependentService.NetworkInfo.Addresses\n\t\tmin, max := dependentService.NetworkInfo.Ports.Range()\n\t\tfor _, addr := range addresses {\n\t\t\tif ipFilter(addr.IP) {\n\t\t\t\tfor i := int(min); i <= int(max); i++ {\n\t\t\t\t\tpair := addr.String() + \",\" + strconv.Itoa(i)\n\t\t\t\t\tif err := vipTargetSet.Add(pair, 0); err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"unable to add dependent ip %s to target networks ipset: %s\", pair, err)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tsrvTargetSet := i.ipset.GetIpset(srvSetName)\n\tif ferr := srvTargetSet.Flush(); ferr != nil {\n\t\tzap.L().Warn(\"Unable to flush the pip proxy set\")\n\t}\n\n\tfor _, exposedService := range policy.ExposedServices() {\n\t\tmin, max := exposedService.PrivateNetworkInfo.Ports.Range()\n\t\tfor i := int(min); i <= int(max); i++ {\n\t\t\tif err := srvTargetSet.Add(strconv.Itoa(i), 0); err != nil {\n\t\t\t\tzap.L().Error(\"Failed to add vip\", zap.Error(err))\n\t\t\t\treturn fmt.Errorf(\"unable to add port %d to target ports ipset: %s\", i, err)\n\t\t\t}\n\t\t}\n\t\tif exposedService.PublicNetworkInfo != nil {\n\t\t\tmin, max := exposedService.PublicNetworkInfo.Ports.Range()\n\t\t\tfor i := int(min); i <= int(max); i++ {\n\t\t\t\tif err := srvTargetSet.Add(strconv.Itoa(i), 0); err != nil {\n\t\t\t\t\tzap.L().Error(\"Failed to add VIP for public network\", zap.Error(err))\n\t\t\t\t\treturn fmt.Errorf(\"Failed to program VIP: %s\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\n// getSetNames returns a pair of strings representing the proxy set names\nfunc (i *iptables) getSetNames(portSetName string) (string, string) {\n\treturn portSetName + \"-dst\", portSetName + \"-srv\"\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/iptables.go",
    "content": "package iptablesctrl\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"strconv\"\n\t\"text/template\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ebpf\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\tmainAppChain        = constants.ChainPrefix + \"App\"\n\tmainNetChain        = constants.ChainPrefix + \"Net\"\n\tappChainPrefix      = constants.ChainPrefix + \"App-\"\n\tnetChainPrefix      = constants.ChainPrefix + \"Net-\"\n\tnatProxyOutputChain = constants.ChainPrefix + \"Redir-App\"\n\tnatProxyInputChain  = constants.ChainPrefix + \"Redir-Net\"\n\tproxyOutputChain    = constants.ChainPrefix + \"Prx-App\"\n\tproxyInputChain     = constants.ChainPrefix + \"Prx-Net\"\n\tistioChain          = constants.ChainPrefix + \"Istio\"\n\n\t// TriremeInput represent the chain that contains pu input rules.\n\tTriremeInput = constants.ChainPrefix + \"Pid-Net\"\n\t// TriremeOutput represent the chain that contains pu output rules.\n\tTriremeOutput = constants.ChainPrefix + \"Pid-App\"\n\n\t// NetworkSvcInput represent the chain that contains NetworkSvc input rules.\n\tNetworkSvcInput = constants.ChainPrefix + \"Svc-Net\"\n\n\t// NetworkSvcOutput represent the chain that contains NetworkSvc output rules.\n\tNetworkSvcOutput = constants.ChainPrefix + \"Svc-App\"\n\n\t// HostModeInput represent the chain that contains Hostmode input rules.\n\tHostModeInput = constants.ChainPrefix + \"Hst-Net\"\n\n\t// HostModeOutput represent the chain that contains Hostmode output rules.\n\tHostModeOutput = 
constants.ChainPrefix + \"Hst-App\"\n\t// NfqueueOutput represents the chain that contains the nfqueue output rules\n\tNfqueueOutput = constants.ChainPrefix + \"Nfq-OUT\"\n\t// NfqueueInput represents the chain that contains the nfqueue input rules\n\tNfqueueInput = constants.ChainPrefix + \"Nfq-IN\"\n\t// IstioUID is the UID of the istio-proxy(envoy) that is used in the iptables to identify the\n\t// envoy generated traffic\n\tIstioUID = \"1337\"\n\t// IstioRedirPort is the port where the App traffic from the output chain\n\t// is redirected into Istio-proxy; we need to accept this traffic as we don't want to come between\n\t// APP --> Envoy traffic.\n\tIstioRedirPort = \"15001\"\n)\n\ntype iptables struct {\n\timpl            IPImpl\n\tfqc             fqconfig.FilterQueue\n\tmode            constants.ModeType\n\tipsetmanager    ipsetmanager.IPSetManager\n\tbpf             ebpf.BPFModule\n\tserviceMeshType policy.ServiceMesh\n}\n\n// IPImpl interface is to be used by the iptables implementors like ipv4 and ipv6.\ntype IPImpl interface {\n\tprovider.IptablesProvider\n\tIPVersion() int\n\tProtocolAllowed(proto string) bool\n\tIPFilter() func(net.IP) bool\n\tGetDefaultIP() string\n\tNeedICMP() bool\n}\n\ntype ipFilter func(net.IP) bool\n\nfunc createIPInstance(impl IPImpl, ipsetmanager ipsetmanager.IPSetManager, fqc fqconfig.FilterQueue, mode constants.ModeType, ebpf ebpf.BPFModule, serviceMeshType policy.ServiceMesh) *iptables {\n\n\treturn &iptables{\n\t\timpl:            impl,\n\t\tfqc:             fqc,\n\t\tmode:            mode,\n\t\tipsetmanager:    ipsetmanager,\n\t\tbpf:             ebpf,\n\t\tserviceMeshType: serviceMeshType,\n\t}\n}\n\nfunc (i *iptables) SetTargetNetworks(c *runtime.Configuration) error {\n\tif c == nil {\n\t\treturn nil\n\t}\n\n\ttcp := c.TCPTargetNetworks\n\tudp := c.UDPTargetNetworks\n\texcluded := c.ExcludedNetworks\n\n\t// If there are no target networks, capture all traffic\n\tif len(tcp) == 0 {\n\t\ttcp = []string{IPv4DefaultIP, 
IPv6DefaultIP}\n\t}\n\n\treturn i.ipsetmanager.UpdateIPsetsForTargetAndExcludedNetworks(tcp, udp, excluded)\n}\n\nfunc (i *iptables) Run(ctx context.Context) error {\n\n\t// Clean any previous ACLs. This is needed in case we crashed at some\n\t// earlier point or there are other ACLs that create conflicts. We\n\t// try to clean only ACLs related to Trireme.\n\tif err := i.cleanACLs(); err != nil {\n\t\treturn fmt.Errorf(\"Unable to clean previous acls while starting the supervisor: %s\", err)\n\t}\n\n\tif err := i.ipsetmanager.DestroyAllIPsets(); err != nil {\n\t\tzap.L().Debug(\"ipset destroy all ipset returned error\", zap.Error(err))\n\t}\n\n\tif err := i.ipsetmanager.CreateIPsetsForTargetAndExcludedNetworks(); err != nil {\n\t\tif err1 := i.ipsetmanager.DestroyAllIPsets(); err1 != nil {\n\t\t\tzap.L().Debug(\"ipset destroy all ipset returned error\", zap.Error(err1))\n\t\t}\n\t\treturn fmt.Errorf(\"unable to create target network ipsets: %s\", err)\n\t}\n\n\t// Windows needs to initialize some ipsets\n\tif err := i.platformInit(); err != nil {\n\t\treturn err\n\t}\n\n\t// Initialize all the global Trireme chains. There are several global chains\n\t// that apply to all PUs:\n\t// Tri-App/Tri-Net are the main chains for the egress/ingress directions\n\t// UID related chains for any UID PUs.\n\t// Host, Service, Pid chains for the different modes of operation (host mode, pu mode, host service).\n\t// The priority is explicit (Pid activations take precedence over Service activations and Host Services)\n\tif err := i.initializeChains(); err != nil {\n\t\treturn fmt.Errorf(\"Unable to initialize chains: %s\", err)\n\t}\n\n\t// Insert the global ACLs. These are the main ACLs that will direct traffic from\n\t// the INPUT/OUTPUT chains to the Trireme chains. They also include the main\n\t// rules of the main chains. 
These rules are never touched again, unless\n\t// we gracefully terminate.\n\tif err := i.setGlobalRules(); err != nil {\n\t\treturn fmt.Errorf(\"failed to install global rules: %s\", err)\n\t}\n\n\tif err := i.impl.Commit(); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc (i *iptables) ConfigureRules(version int, contextID string, pu *policy.PUInfo) error {\n\tvar err error\n\tvar cfg *ACLInfo\n\n\t// First we create an IPSet for destination matching ports. This only\n\t// applies to Linux type PUs. A port set is associated with every PU,\n\t// and packets matching this destination get associated with the context\n\t// of the PU.\n\tif i.mode != constants.RemoteContainer {\n\t\tif err = i.ipsetmanager.CreateServerPortSet(contextID); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// Create the proxy sets. These are the target sets that will match\n\t// traffic towards the L4 and L7 services. There are two sets created\n\t// for every PU in this context (for outgoing and incoming traffic).\n\t// The outgoing sets capture all traffic towards specific destinations\n\t// as proxied traffic. Incoming sets correspond to the listening\n\t// services.\n\t// Create proxy sets only if there is no serviceMesh.\n\tif i.serviceMeshType == policy.None {\n\t\tif err := i.ipsetmanager.CreateProxySets(contextID); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// We create the generic ACL object that is used for all the templates.\n\tcfg, err = i.newACLInfo(version, contextID, pu, pu.Runtime.PUType())\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// At this point we can install all the ACL rules that will direct\n\t// traffic to user space, allow for external access or direct\n\t// traffic towards the proxies\n\tif err = i.installRules(cfg, pu); err != nil {\n\t\treturn err\n\t}\n\n\t// We commit the ACLs at the end. Note, that some of the ACLs in the\n\t// NAT table are not committed as a group. 
The commit function only\n\t// applies when newer versions of iptables are installed (1.6.2 and above).\n\tif err = i.impl.Commit(); err != nil {\n\t\tzap.L().Error(\"unable to configure rules\", zap.Error(err))\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc (i *iptables) DeleteRules(version int, contextID string, tcpPorts, udpPorts string, mark string, username string, containerInfo *policy.PUInfo) error {\n\tcfg, err := i.newACLInfo(version, contextID, nil, containerInfo.Runtime.PUType())\n\tif err != nil {\n\t\tzap.L().Error(\"unable to create cleanup configuration\", zap.Error(err))\n\t\treturn err\n\t}\n\tif i.mode == constants.LocalServer {\n\t\tcfg.PacketMark = mark\n\t}\n\tcfg.UDPPorts = udpPorts\n\tcfg.TCPPorts = tcpPorts\n\tcfg.CgroupMark = mark\n\tcfg.Mark = mark\n\n\tcfg.PUType = containerInfo.Runtime.PUType()\n\tcfg.ProxyPort = containerInfo.Policy.ServicesListeningPort()\n\tcfg.DNSProxyPort = containerInfo.Policy.DNSProxyPort()\n\t// We clean up the chain rules first, so that we can delete the chains.\n\t// If any rule is not deleted, then the chain will show as busy.\n\tif err := i.deleteChainRules(cfg); err != nil {\n\t\tzap.L().Warn(\"Failed to clean rules\", zap.Error(err))\n\t}\n\n\t// We can now delete the chains we have created for this PU. Note that\n\t// in every case we only create two chains for every PU. All other\n\t// chains are global.\n\tif err = i.deletePUChains(cfg); err != nil {\n\t\tzap.L().Warn(\"Failed to clean container chains while deleting the rules\", zap.Error(err))\n\t}\n\n\t// We call commit to update all the changes, before destroying the ipsets.\n\t// References must be deleted for ipset deletion to succeed.\n\tif err := i.impl.Commit(); err != nil {\n\t\tzap.L().Warn(\"Failed to commit ACL changes\", zap.Error(err))\n\t}\n\n\tif i.mode != constants.RemoteContainer {\n\t\t// We delete the set that captures all destination ports of the\n\t\t// PU. 
This only holds for Linux PUs.\n\t\tif err := i.ipsetmanager.DestroyServerPortSet(contextID); err != nil {\n\t\t\tzap.L().Warn(\"Failed to remove port set\")\n\t\t}\n\t}\n\n\t// If serviceMesh is enabled, don't destroy the proxySets as we have not created them.\n\tif i.serviceMeshType == policy.None {\n\t\t// We delete the proxy port sets that were created for this PU.\n\t\ti.ipsetmanager.DestroyProxySets(contextID)\n\t}\n\treturn nil\n}\n\nfunc (i *iptables) UpdateRules(version int, contextID string, containerInfo *policy.PUInfo, oldContainerInfo *policy.PUInfo) error {\n\tpolicyrules := containerInfo.Policy\n\tif policyrules == nil {\n\t\treturn errors.New(\"policy rules cannot be nil\")\n\t}\n\n\t// We cache the old config and we use it to delete the previous\n\t// rules. Every time we update the policy the version changes to\n\t// its binary complement.\n\tnewCfg, err := i.newACLInfo(version, contextID, containerInfo, containerInfo.Runtime.PUType())\n\tif err != nil {\n\t\treturn err\n\t}\n\n\toldCfg, err := i.newACLInfo(version^1, contextID, oldContainerInfo, containerInfo.Runtime.PUType())\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Install all the new rules. The hooks to the new chains are appended\n\t// and do not take effect yet.\n\tif err := i.installRules(newCfg, containerInfo); err != nil {\n\t\treturn err\n\t}\n\n\t// Remove mapping from old chain. 
By removing the old hooks the new\n\t// hooks take priority.\n\tif err := i.deleteChainRules(oldCfg); err != nil {\n\t\treturn err\n\t}\n\n\t// Delete the old chains, since there are no references any more.\n\tif err := i.deletePUChains(oldCfg); err != nil {\n\t\treturn err\n\t}\n\n\t// Commit all actions in one iptables-restore invocation.\n\tif err := i.impl.Commit(); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc (i *iptables) CleanUp() error {\n\n\tif err := i.cleanACLs(); err != nil {\n\t\tzap.L().Error(\"Failed to clean acls while stopping the supervisor\", zap.Error(err))\n\t}\n\n\tif err := i.ipsetmanager.DestroyAllIPsets(); err != nil {\n\t\tzap.L().Error(\"Failed to clean up ipsets\", zap.Error(err))\n\t}\n\n\ti.ipsetmanager.Reset()\n\n\treturn nil\n}\n\n// InitializeChains initializes the chains.\nfunc (i *iptables) initializeChains() error {\n\n\tcfg, err := i.newACLInfo(0, \"\", nil, 0)\n\tif err != nil {\n\t\treturn err\n\t}\n\ttmpl := template.Must(template.New(triremChains).Funcs(template.FuncMap{\n\t\t\"isLocalServer\": func() bool {\n\t\t\treturn i.mode == constants.LocalServer\n\t\t},\n\t\t\"isIstioEnabled\": func() bool {\n\t\t\treturn i.serviceMeshType == policy.Istio\n\t\t},\n\t}).Parse(triremChains))\n\n\trules, err := extractRulesFromTemplate(tmpl, cfg)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to create trireme chains: %s\", err)\n\t}\n\tfor _, rule := range rules {\n\t\tif len(rule) != 4 {\n\t\t\tcontinue\n\t\t}\n\t\tif err := i.impl.NewChain(rule[1], rule[3]); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// configureContainerRules adds the chain rules for a container.\n// We separate in different methods to keep track of the changes\n// independently.\nfunc (i *iptables) configureContainerRules(cfg *ACLInfo) error {\n\treturn i.addChainRules(cfg)\n}\n\n// configureLinuxRules adds the chain rules for a linux process or a UID process.\nfunc (i *iptables) configureLinuxRules(cfg *ACLInfo) error 
{\n\n\t// These checks are for rather unusual error scenarios. We should\n\t// never see errors here. But better safe than sorry.\n\tif cfg.CgroupMark == \"\" {\n\t\treturn errors.New(\"no mark value found\")\n\t}\n\n\tif cfg.TCPPortSet == \"\" {\n\t\treturn fmt.Errorf(\"port set was not found for the contextID. This should not happen\")\n\t}\n\n\treturn i.addChainRules(cfg)\n}\n\ntype aclIPset struct {\n\tipset string\n\t*policy.IPRule\n}\n\nfunc (i *iptables) getACLIPSets(ipRules policy.IPRuleList) []aclIPset {\n\n\tipsets := i.ipsetmanager.GetACLIPsetsNames(ipRules)\n\n\taclIPsets := make([]aclIPset, 0)\n\n\tfor i, ipset := range ipsets {\n\t\tif len(ipset) > 0 {\n\t\t\taclIPsets = append(aclIPsets, aclIPset{ipset, &ipRules[i]})\n\t\t}\n\t}\n\n\treturn aclIPsets\n}\n\n// Install rules will install all the rules and update the port sets.\nfunc (i *iptables) installRules(cfg *ACLInfo, containerInfo *policy.PUInfo) error {\n\n\tpolicyrules := containerInfo.Policy\n\n\t// update the proxy set only if there is no serviceMesh enabled.\n\tif i.serviceMeshType == policy.None {\n\t\tif err := i.updateProxySet(cfg.ContextID, containerInfo.Policy); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tappACLIPset := i.getACLIPSets(policyrules.ApplicationACLs())\n\tnetACLIPset := i.getACLIPSets(policyrules.NetworkACLs())\n\n\t// Install the PU specific chain first.\n\tif err := i.addContainerChain(cfg); err != nil {\n\t\treturn err\n\t}\n\n\t// If it's a remote container, configure container rules.\n\tif i.mode == constants.RemoteContainer {\n\t\tif err := i.configureContainerRules(cfg); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// If it's a Linux process, configure the Linux rules.\n\tif i.mode == constants.LocalServer {\n\t\tif err := i.configureLinuxRules(cfg); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tisHostPU := extractors.IsHostPU(containerInfo.Runtime, i.mode)\n\n\tif err := i.addPreNetworkACLRules(cfg); err != nil {\n\t\treturn err\n\t}\n\n\tif err := 
i.addExternalACLs(cfg, cfg.AppChain, cfg.NetChain, appACLIPset, true); err != nil {\n\t\treturn err\n\t}\n\n\tif err := i.addExternalACLs(cfg, cfg.NetChain, cfg.AppChain, netACLIPset, false); err != nil {\n\t\treturn err\n\t}\n\n\tappAnyRules, netAnyRules, err := i.getProtocolAnyRules(cfg, appACLIPset, netACLIPset)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn i.addPacketTrap(cfg, isHostPU, appAnyRules, netAnyRules)\n}\n\nfunc (i *iptables) updateProxySet(contextID string, policy *policy.PUPolicy) error {\n\ti.ipsetmanager.FlushProxySets(contextID)\n\n\tfor _, dependentService := range policy.DependentServices() {\n\t\taddresses := dependentService.NetworkInfo.Addresses\n\t\tmin, max := dependentService.NetworkInfo.Ports.Range()\n\n\t\tfor addrS := range addresses {\n\t\t\t_, addr, _ := net.ParseCIDR(addrS)\n\t\t\tfor port := int(min); port <= int(max); port++ {\n\t\t\t\tif err := i.ipsetmanager.AddIPPortToDependentService(contextID, addr, strconv.Itoa(port)); err != nil {\n\t\t\t\t\treturn fmt.Errorf(\"unable to add port %v to dependent networks ipset: %v\", port, err)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tfor _, exposedService := range policy.ExposedServices() {\n\t\tmin, max := exposedService.PrivateNetworkInfo.Ports.Range()\n\t\tfor port := int(min); port <= int(max); port++ {\n\t\t\tif err := i.ipsetmanager.AddPortToExposedService(contextID, strconv.Itoa(port)); err != nil {\n\t\t\t\tzap.L().Error(\"Failed to add vip\", zap.Error(err))\n\t\t\t\treturn fmt.Errorf(\"unable to add port %d to exposed ports ipset: %s\", port, err)\n\t\t\t}\n\t\t}\n\n\t\tif exposedService.PublicNetworkInfo != nil {\n\t\t\tmin, max := exposedService.PublicNetworkInfo.Ports.Range()\n\t\t\tfor port := int(min); port <= int(max); port++ {\n\t\t\t\tif err := i.ipsetmanager.AddPortToExposedService(contextID, strconv.Itoa(port)); err != nil {\n\t\t\t\t\tzap.L().Error(\"Failed to add VIP for public network\", zap.Error(err))\n\t\t\t\t\treturn fmt.Errorf(\"Failed to program VIP: 
%s\", err)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/iptablesV4_test.go",
    "content": "// +build !windows,!rhel6\n\npackage iptablesctrl\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/aporeto-inc/go-ipset/ipset\"\n\t\"github.com/magiconair/properties/assert\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\ttacls \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/acls\"\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\nfunc TestNewInstanceV4(t *testing.T) {\n\tConvey(\"When I create a new iptables instance\", t, func() {\n\t\tConvey(\"If I create a remote implementation and iptables exists\", func() {\n\t\t\tips := ipsetmanager.NewTestIpsetProvider()\n\t\t\tiptv4 := provider.NewTestIptablesProvider()\n\t\t\tiptv6 := provider.NewTestIptablesProvider()\n\n\t\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.RemoteContainer, policy.None)\n\t\t\tConvey(\"It should succeed\", func() {\n\t\t\t\tSo(i, ShouldNotBeNil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n\n\tConvey(\"When I create a new iptables instance\", t, func() {\n\t\tConvey(\"If I create a Linux server implementation and iptables exists\", func() {\n\t\t\tips := ipsetmanager.NewTestIpsetProvider()\n\t\t\tiptv4 := provider.NewTestIptablesProvider()\n\t\t\tiptv6 := provider.NewTestIptablesProvider()\n\n\t\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\t\tConvey(\"It should succeed\", func() {\n\t\t\t\tSo(i, ShouldNotBeNil)\n\t\t\t\tSo(err, 
ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n\tConvey(\"When I create a new iptables instance, with Istio serviceMeshType\", t, func() {\n\t\tConvey(\"If I create a Linux server implementation and iptables exists with Istio\", func() {\n\t\t\tips := ipsetmanager.NewTestIpsetProvider()\n\t\t\tiptv4 := provider.NewTestIptablesProvider()\n\t\t\tiptv6 := provider.NewTestIptablesProvider()\n\n\t\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.Istio)\n\t\t\tConvey(\"It should succeed\", func() {\n\t\t\t\tSo(i, ShouldNotBeNil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(i.iptv4.serviceMeshType, ShouldEqual, policy.Istio)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc Test_NegativeConfigureRulesV4(t *testing.T) {\n\tConvey(\"Given a valid instance\", t, func() {\n\t\tips := ipsetmanager.NewTestIpsetProvider()\n\t\tiptv4 := provider.NewTestIptablesProvider()\n\t\tiptv6 := provider.NewTestIptablesProvider()\n\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tctx, cancel := context.WithCancel(context.Background())\n\t\tdefer cancel()\n\n\t\terr = i.Run(ctx)\n\t\tSo(err, ShouldBeNil)\n\n\t\tcfg := &runtime.Configuration{}\n\t\ti.SetTargetNetworks(cfg) // nolint\n\t\tSo(err, ShouldBeNil)\n\n\t\tipl := policy.ExtendedMap{}\n\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\"Context\",\n\t\t\t\"/ns1\",\n\t\t\tpolicy.Police,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tipl,\n\t\t\t0,\n\t\t\t0,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\t[]string{},\n\t\t\tpolicy.EnforcerMapping,\n\t\t\tpolicy.Reject|policy.Log,\n\t\t\tpolicy.Reject|policy.Log,\n\t\t)\n\t\tcontainerinfo := policy.NewPUInfo(\"Context\",\n\t\t\t\"/ns1\", common.ContainerPU)\n\t\tcontainerinfo.Policy = policyrules\n\t\tcontainerinfo.Runtime = policy.NewPURuntimeWithDefaults()\n\t\tcontainerinfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: \"10\",\n\t\t})\n\n\t\tConvey(\"When I 
configure the rules with no errors, it should succeed\", func() {\n\t\t\terr := i.iptv4.ConfigureRules(1,\n\t\t\t\t\"ID\", containerinfo)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I configure the rules and the proxy set fails, it should error\", func() {\n\t\t\tips.MockNewIpset(t, func(name, hash string, p *ipset.Params) (ipsetmanager.Ipset, error) {\n\t\t\t\treturn nil, fmt.Errorf(\"error\")\n\t\t\t})\n\t\t\terr := i.iptv4.ConfigureRules(1,\n\t\t\t\t\"ID\", containerinfo)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I configure the rules and acls fail, it should error\", func() {\n\t\t\tiptv4.MockAppend(t, func(table, chain string, rulespec ...string) error {\n\t\t\t\treturn fmt.Errorf(\"error\")\n\t\t\t})\n\t\t\terr := i.iptv4.ConfigureRules(1,\n\t\t\t\t\"ID\", containerinfo)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I configure the rules and commit fails, it should error\", func() {\n\t\t\tiptv4.MockCommit(t, func() error {\n\t\t\t\treturn fmt.Errorf(\"error\")\n\t\t\t})\n\t\t\terr := i.iptv4.ConfigureRules(1,\n\t\t\t\t\"ID\", containerinfo)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t})\n}\n\nvar (\n\texpectedGlobalMangleChainsV4 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 
--queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-j TRI-Prx-App\",\n\t\t\t\"-m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\", \"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\", \"-j TRI-Pid-App\", \"-j TRI-Svc-App\", \"-j TRI-Hst-App\",\n\t\t},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\",\n\t\t\t\"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\", \"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-j TRI-Pid-Net\", \"-j TRI-Svc-Net\", \"-j TRI-Hst-Net\"},\n\t\t\"TRI-Pid-App\": {},\n\t\t\"TRI-Pid-Net\": {},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\t}\n\n\texpectedGlobalMangleChainsV4Istio = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-j TRI-Istio\",\n\t\t\t\"-m set ! 
--match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-Istio\": {},\n\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-m mark --mark 1073741922 -j ACCEPT\", \"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\", \"-j TRI-Svc-App\", \"-j TRI-Hst-App\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\", \"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-j TRI-Pid-Net\", \"-j TRI-Svc-Net\", \"-j TRI-Hst-Net\",\n\t\t\t\"-p tcp --dport 15001 -m addrtype --dst-type LOCAL -m addrtype --src-type LOCAL -j ACCEPT\"},\n\t\t\"TRI-Pid-App\": {},\n\t\t\"TRI-Pid-Net\": {},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\t}\n\n\texpectedGlobalNATChainsV4Istio = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v4-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! 
--match-set TRI-v4-Excluded dst -j TRI-Redir-App\",\n\t\t\t\"-p tcp -m mark --mark 68 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j RETURN\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t}\n\texpectedMangleAfterPUInsertV4Istio = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-j TRI-Istio\",\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-Istio\": {},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\", \"-j TRI-Svc-App\", \"-j TRI-Hst-App\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\", \"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-j TRI-Pid-Net\", \"-j TRI-Svc-Net\", \"-j TRI-Hst-Net\", \"-p tcp --dport 15001 -m addrtype --dst-type LOCAL -m addrtype --src-type LOCAL -j ACCEPT\"},\n\t\t\"TRI-Pid-App\": {\n\t\t\t\"-m cgroup --cgroup 10 -m comment --comment PU-Chain -j MARK --set-mark 10\",\n\t\t\t\"-m mark --mark 10 -m comment --comment PU-Chain -j TRI-App-pu1N7uS6--0\"},\n\t\t\"TRI-Pid-Net\": {\n\t\t\t\"-p tcp -m multiport --destination-ports 9000 -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--0\",\n\t\t\t\"-p udp -m multiport --destination-ports 5000 -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--0\",\n\t\t},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\n\t\t\"TRI-Net-pu1N7uS6--0\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= src -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! 
--string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v4-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= src -j NFLOG --nflog-group 11 --nflog-prefix 913787369:123a:a3:6\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= src -j DROP\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= src -j NFLOG --nflog-group 11 --nflog-prefix 913787369:123a:a3:3\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= src -j ACCEPT\",\n\t\t\t\"-s 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s 0.0.0.0/0 -m state ! --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s 0.0.0.0/0 -j DROP\",\n\t\t},\n\t\t\"TRI-App-pu1N7uS6--0\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j CONNMARK --set-mark 61167\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! 
--match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j ACCEPT\", \"-m bpf --bytecode 7,48 0 0 0,84 0 0 240,21 0 3 64,48 0 0 9,21 0 1 1,6 0 0 65535,6 0 0 0 -p icmp -m set --match-set TRI-v4-ext-w5frVvhsnpU= dst -j ACCEPT\", \"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= dst -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-p udp -m set --match-set TRI-v4-TargetUDP dst -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-m mark --mark 40 -j NFQUEUE --queue-num 0 --queue-bypass\", \"-m mark --mark 41 -j NFQUEUE --queue-num 1 --queue-bypass\", \"-m mark --mark 42 -j NFQUEUE --queue-num 2 --queue-bypass\", \"-m mark --mark 43 -j NFQUEUE --queue-num 3 --queue-bypass\", \"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\", \"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= dst -j NFLOG --nflog-group 10 --nflog-prefix 913787369:123a:a3:6\", \"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= dst -j DROP\", \"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= dst -j NFLOG --nflog-group 10 --nflog-prefix 913787369:123a:a3:3\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= dst -j ACCEPT\", \"-d 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\", \"-d 0.0.0.0/0 -m state ! --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\", \"-d 0.0.0.0/0 -j DROP\"},\n\t}\n\texpectedGlobalNATChainsV4 = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! 
--match-set TRI-v4-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-Redir-App\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j RETURN\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedMangleAfterPUInsertV4 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\", \"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\", \"-j TRI-Svc-App\", \"-j TRI-Hst-App\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\", \"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-j TRI-Pid-Net\", \"-j TRI-Svc-Net\", \"-j TRI-Hst-Net\"},\n\t\t\"TRI-Pid-App\": {\n\t\t\t\"-m cgroup --cgroup 10 -m comment --comment PU-Chain -j MARK --set-mark 10\",\n\t\t\t\"-m mark --mark 10 -m comment --comment PU-Chain -j TRI-App-pu1N7uS6--0\"},\n\t\t\"TRI-Pid-Net\": {\n\t\t\t\"-p tcp -m multiport --destination-ports 9000 -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--0\",\n\t\t\t\"-p udp -m multiport --destination-ports 5000 -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--0\",\n\t\t},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\n\t\t\"TRI-Net-pu1N7uS6--0\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= src -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! 
--string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v4-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= src -j NFLOG --nflog-group 11 --nflog-prefix 913787369:123a:a3:6\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= src -j DROP\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= src -j NFLOG --nflog-group 11 --nflog-prefix 913787369:123a:a3:3\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= src -j ACCEPT\",\n\t\t\t\"-s 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s 0.0.0.0/0 -m state ! --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s 0.0.0.0/0 -j DROP\",\n\t\t},\n\t\t\"TRI-App-pu1N7uS6--0\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j CONNMARK --set-mark 61167\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! 
--match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j ACCEPT\", \"-m bpf --bytecode 7,48 0 0 0,84 0 0 240,21 0 3 64,48 0 0 9,21 0 1 1,6 0 0 65535,6 0 0 0 -p icmp -m set --match-set TRI-v4-ext-w5frVvhsnpU= dst -j ACCEPT\", \"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= dst -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-p udp -m set --match-set TRI-v4-TargetUDP dst -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-m mark --mark 40 -j NFQUEUE --queue-num 0 --queue-bypass\", \"-m mark --mark 41 -j NFQUEUE --queue-num 1 --queue-bypass\", \"-m mark --mark 42 -j NFQUEUE --queue-num 2 --queue-bypass\", \"-m mark --mark 43 -j NFQUEUE --queue-num 3 --queue-bypass\", \"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\", \"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\", \"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= dst -j NFLOG --nflog-group 10 --nflog-prefix 913787369:123a:a3:6\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= dst -j DROP\", \"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= dst -j NFLOG --nflog-group 10 --nflog-prefix 913787369:123a:a3:3\", \"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= dst -j ACCEPT\",\n\t\t\t\"-d 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\", \"-d 0.0.0.0/0 -m state ! 
--state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\", \"-d 0.0.0.0/0 -j DROP\"},\n\t}\n\n\texpectedMangleAfterPUInsertWithLogV4 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-m mark --mark 1073741922 -j ACCEPT\", \"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\", \"-j TRI-Svc-App\", \"-j TRI-Hst-App\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\", \"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-j TRI-Pid-Net\", \"-j TRI-Svc-Net\", \"-j TRI-Hst-Net\"},\n\t\t\"TRI-Pid-App\": {\n\t\t\t\"-m cgroup --cgroup 10 -m comment --comment PU-Chain -j MARK --set-mark 10\",\n\t\t\t\"-m mark --mark 10 -m comment --comment PU-Chain -j TRI-App-pu1N7uS6--0\"},\n\t\t\"TRI-Pid-Net\": {\n\t\t\t\"-p tcp -m multiport --destination-ports 9000 -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--0\",\n\t\t\t\"-p udp -m multiport --destination-ports 5000 -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--0\",\n\t\t},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\n\t\t\"TRI-Net-pu1N7uS6--0\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= src -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v4-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-s 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s 0.0.0.0/0 -m state ! 
--state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s 0.0.0.0/0 -j DROP\",\n\t\t},\n\n\t\t\"TRI-App-pu1N7uS6--0\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:2:s2:3\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j CONNMARK --set-mark 61167\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j ACCEPT\", \"-m bpf --bytecode 7,48 0 0 0,84 0 0 240,21 0 3 64,48 0 0 9,21 0 1 1,6 0 0 65535,6 0 0 0 -p icmp -m set --match-set TRI-v4-ext-w5frVvhsnpU= dst -j ACCEPT\", \"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= dst -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-p udp -m set --match-set TRI-v4-TargetUDP dst -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-m mark --mark 40 -j NFQUEUE --queue-num 0 --queue-bypass\", \"-m mark --mark 41 -j NFQUEUE --queue-num 1 --queue-bypass\", \"-m mark --mark 42 -j NFQUEUE --queue-num 2 --queue-bypass\", \"-m mark --mark 43 -j NFQUEUE --queue-num 3 --queue-bypass\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\", \"-p udp 
-m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\", \"-d 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-d 0.0.0.0/0 -m state ! --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\", \"-d 0.0.0.0/0 -j DROP\"},\n\t}\n\n\texpectedMangleAfterPUInsertWithExtensionsV4 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\", \"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\", \"-j TRI-Svc-App\", \"-j TRI-Hst-App\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-j TRI-Pid-Net\", \"-j TRI-Svc-Net\", \"-j TRI-Hst-Net\"},\n\t\t\"TRI-Pid-App\": {\n\t\t\t\"-m cgroup --cgroup 10 -m comment --comment PU-Chain -j MARK --set-mark 10\",\n\t\t\t\"-m mark --mark 10 -m comment --comment PU-Chain -j TRI-App-pu1N7uS6--0\"},\n\t\t\"TRI-Pid-Net\": {\n\t\t\t\"-p tcp -m multiport --destination-ports 9000 -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--0\",\n\t\t\t\"-p udp -m multiport --destination-ports 5000 -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--0\",\n\t\t},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\n\t\t\"TRI-Net-pu1N7uS6--0\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= src -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v4-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-s 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s 0.0.0.0/0 -m state ! 
--state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s 0.0.0.0/0 -j DROP\",\n\t\t},\n\t\t\"TRI-App-pu1N7uS6--0\": {\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst --match multiport --dports 443 -m bpf --bytecode 20,0 0 0 0,177 0 0 0,12 0 0 0,7 0 0 0,72 0 0 4,53 0 13 29,135 0 0 0,4 0 0 8,7 0 0 0,72 0 0 2,84 0 0 64655,21 0 7 0,72 0 0 4,21 0 5 1,64 0 0 6,21 0 3 0,72 0 0 10,37 1 0 1,6 0 0 0,6 0 0 65535 -j DROP\", \"-p TCP -m set --match-set TRI-v4-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j CONNMARK --set-mark 61167\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j ACCEPT\", \"-m bpf --bytecode 7,48 0 0 0,84 0 0 240,21 0 3 64,48 0 0 9,21 0 1 1,6 0 0 65535,6 0 0 0 -p icmp -m set --match-set TRI-v4-ext-w5frVvhsnpU= dst -j ACCEPT\", \"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= dst -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-p udp -m set --match-set TRI-v4-TargetUDP dst -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-m mark --mark 40 -j NFQUEUE --queue-num 0 --queue-bypass\", \"-m mark --mark 41 -j NFQUEUE --queue-num 1 --queue-bypass\", \"-m mark --mark 42 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 43 -j NFQUEUE --queue-num 3 --queue-bypass\", \"-p tcp -m state --state ESTABLISHED -m comment --comment 
TCP-Established-Connections -j ACCEPT\", \"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\",\n\t\t\t\"-d 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\", \"-d 0.0.0.0/0 -m state ! --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\", \"-d 0.0.0.0/0 -j DROP\"},\n\t}\n\n\texpectedMangleAfterPUInsertWithExtensionsAndLogV4 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-m mark --mark 1073741922 -j ACCEPT\", \"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\", \"-j TRI-Svc-App\", \"-j TRI-Hst-App\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-j TRI-Pid-Net\", \"-j TRI-Svc-Net\", \"-j TRI-Hst-Net\"},\n\t\t\"TRI-Pid-App\": {\n\t\t\t\"-m cgroup --cgroup 10 -m comment --comment PU-Chain -j MARK --set-mark 10\",\n\t\t\t\"-m mark --mark 10 -m comment --comment PU-Chain -j TRI-App-pu1N7uS6--0\"},\n\t\t\"TRI-Pid-Net\": {\n\t\t\t\"-p tcp -m multiport --destination-ports 9000 -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--0\",\n\t\t\t\"-p udp -m multiport --destination-ports 5000 -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--0\",\n\t\t},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\n\t\t\"TRI-Net-pu1N7uS6--0\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= src -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v4-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-s 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s 0.0.0.0/0 -m state ! 
--state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s 0.0.0.0/0 -j DROP\",\n\t\t},\n\t\t\"TRI-App-pu1N7uS6--0\": {\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst --match multiport --dports 443 -m bpf --bytecode 20,0 0 0 0,177 0 0 0,12 0 0 0,7 0 0 0,72 0 0 4,53 0 13 29,135 0 0 0,4 0 0 8,7 0 0 0,72 0 0 2,84 0 0 64655,21 0 7 0,72 0 0 4,21 0 5 1,64 0 0 6,21 0 3 0,72 0 0 10,37 1 0 1,6 0 0 0,6 0 0 65535 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:2:s2:6\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst --match multiport --dports 443 -m bpf --bytecode 20,0 0 0 0,177 0 0 0,12 0 0 0,7 0 0 0,72 0 0 4,53 0 13 29,135 0 0 0,4 0 0 8,7 0 0 0,72 0 0 2,84 0 0 64655,21 0 7 0,72 0 0 4,21 0 5 1,64 0 0 6,21 0 3 0,72 0 0 10,37 1 0 1,6 0 0 0,6 0 0 65535 -j DROP\", \"-p TCP -m set --match-set TRI-v4-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:2:s2:3\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j CONNMARK --set-mark 61167\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! 
--match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j ACCEPT\", \"-m bpf --bytecode 7,48 0 0 0,84 0 0 240,21 0 3 64,48 0 0 9,21 0 1 1,6 0 0 65535,6 0 0 0 -p icmp -m set --match-set TRI-v4-ext-w5frVvhsnpU= dst -j ACCEPT\", \"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= dst -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-p udp -m set --match-set TRI-v4-TargetUDP dst -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-m mark --mark 40 -j NFQUEUE --queue-num 0 --queue-bypass\", \"-m mark --mark 41 -j NFQUEUE --queue-num 1 --queue-bypass\", \"-m mark --mark 42 -j NFQUEUE --queue-num 2 --queue-bypass\", \"-m mark --mark 43 -j NFQUEUE --queue-num 3 --queue-bypass\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\", \"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\", \"-d 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-d 0.0.0.0/0 -m state ! --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\", \"-d 0.0.0.0/0 -j DROP\"},\n\t}\n\n\texpectedNATAfterPUInsertV4 = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v4-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-Redir-App\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j RETURN\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j REDIRECT --to-ports 0\",\n\t\t\t\"-d 0.0.0.0/0 -p udp --dport 53 -m mark ! 
--mark 0x40 -m cgroup --cgroup 10 -j CONNMARK --save-mark\",\n\t\t\t\"-d 0.0.0.0/0 -p udp --dport 53 -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv dst -m mark ! --mark 0x40 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"POSTROUTING\": {\n\t\t\t\"-p udp -m addrtype --src-type LOCAL -m multiport --source-ports 5000 -j ACCEPT\",\n\t\t},\n\t}\n\texpectedNATAfterPUInsertV4Istio = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v4-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-Redir-App\",\n\t\t\t\"-p tcp -m mark --mark 68 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j RETURN\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j REDIRECT --to-ports 0\",\n\t\t\t\"-d 0.0.0.0/0 -p udp --dport 53 -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j CONNMARK --save-mark\",\n\t\t\t\"-d 0.0.0.0/0 -p udp --dport 53 -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv dst -m mark ! 
--mark 0x40 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"POSTROUTING\": {\n\t\t\t\"-p udp -m addrtype --src-type LOCAL -m multiport --source-ports 5000 -j ACCEPT\",\n\t\t},\n\t}\n\texpectedMangleAfterPUUpdateV4 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\", \"-j TRI-Svc-App\", \"-j TRI-Hst-App\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\", \"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-j TRI-Pid-Net\", \"-j TRI-Svc-Net\", \"-j TRI-Hst-Net\"},\n\t\t\"TRI-Pid-App\": {\n\t\t\t\"-m cgroup --cgroup 10 -m comment --comment PU-Chain -j MARK --set-mark 10\",\n\t\t\t\"-m mark --mark 10 -m comment --comment PU-Chain -j TRI-App-pu1N7uS6--1\"},\n\t\t\"TRI-Pid-Net\": {\n\t\t\t\"-p tcp -m set --match-set TRI-v4-ProcPort-pu19gtV dst -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--1\",\n\t\t},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\n\t\t\"TRI-Net-pu1N7uS6--1\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v4-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-s 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s 0.0.0.0/0 -m state ! 
--state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s 0.0.0.0/0 -j DROP\"},\n\n\t\t\"TRI-App-pu1N7uS6--1\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-p udp -m set --match-set TRI-v4-TargetUDP dst -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-m mark --mark 40 -j NFQUEUE --queue-num 0 --queue-bypass\", \"-m mark --mark 41 -j NFQUEUE --queue-num 1 --queue-bypass\", \"-m mark --mark 42 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 43 -j NFQUEUE --queue-num 3 --queue-bypass\", \"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\", \"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\",\n\t\t\t\"-d 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\", \"-d 0.0.0.0/0 -m state ! 
--state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\", \"-d 0.0.0.0/0 -j DROP\"},\n\t}\n)\n\nfunc Test_OperationWithLinuxServicesV4(t *testing.T) {\n\tConvey(\"Given an iptables controller with a memory backend \", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"0.0.0.0/0\"},\n\t\t\tUDPTargetNetworks: []string{\"10.0.0.0/8\"},\n\t\t\tExcludedNetworks:  []string{\"127.0.0.1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and ipsets\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tSo(len(t), ShouldEqual, 2)\n\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\tSo(expectedGlobalMangleChainsV4, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalMangleChainsV4[chain])\n\t\t\t}\n\n\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\tSo(expectedGlobalNATChainsV4, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalNATChainsV4[chain])\n\t\t\t}\n\n\t\t\tConvey(\"When I configure a new set of rules, the ACLs must be correct\", func() {\n\t\t\t\tappACLs := 
policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"60.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     nil,\n\t\t\t\t\t\tProtocols: []string{constants.AllProtoString},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s2\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"50.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{},\n\t\t\t\t\t\tProtocols: []string{\"icmp\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"3\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"60.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     nil,\n\t\t\t\t\t\tProtocols: []string{constants.AllProtoString},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t\t\tRuleName:  \"rockstars forev\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"60.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     nil,\n\t\t\t\t\t\tProtocols: 
[]string{constants.AllProtoString},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s4\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"60.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     nil,\n\t\t\t\t\t\tProtocols: []string{constants.AllProtoString},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\"Context\",\n\t\t\t\t\t\"/ns1\",\n\t\t\t\t\tpolicy.Police,\n\t\t\t\t\tappACLs,\n\t\t\t\t\tnetACLs,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tipl,\n\t\t\t\t\t0,\n\t\t\t\t\t0,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\t[]string{},\n\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t)\n\t\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\"/ns1\", common.LinuxProcessPU)\n\t\t\t\tpuInfo.Policy = 
policyrules\n\t\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t})\n\n\t\t\t\tudpPortSpec, err := portspec.NewPortSpecFromString(\"5000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\ttcpPortSpec, err := portspec.NewPortSpecFromString(\"9000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\tpuInfo.Runtime.SetServices([]common.Service{\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    udpPortSpec,\n\t\t\t\t\t\tProtocol: 17,\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    tcpPortSpec,\n\t\t\t\t\t\tProtocol: 6,\n\t\t\t\t\t},\n\t\t\t\t})\n\n\t\t\t\tvar iprules policy.IPRuleList\n\n\t\t\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\t\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\t\t\ti.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\t\t\t\terr = i.iptv4.ConfigureRules(0,\n\t\t\t\t\t\"pu1\", puInfo)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\terr = i.iptv4.ipsetmanager.AddPortToServerPortSet(\"pu1\",\n\t\t\t\t\t\"8080\")\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\n\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\tSo(expectedMangleAfterPUInsertV4, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedMangleAfterPUInsertV4[chain])\n\t\t\t\t}\n\n\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\tSo(expectedNATAfterPUInsertV4, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedNATAfterPUInsertV4[chain])\n\t\t\t\t}\n\n\t\t\t\tConvey(\"When I update the policy, the update must result in correct state\", func() {\n\t\t\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\t\tPolicyID:  
\"1\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}\n\t\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}\n\t\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\t\"Context\",\n\t\t\t\t\t\t\"/ns1\",\n\t\t\t\t\t\tpolicy.Police,\n\t\t\t\t\t\tappACLs,\n\t\t\t\t\t\tnetACLs,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tipl,\n\t\t\t\t\t\t0,\n\t\t\t\t\t\t0,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\t[]string{},\n\t\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\t)\n\t\t\t\t\tpuInfoUpdated := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\t\"/ns1\", common.LinuxProcessPU)\n\t\t\t\t\tpuInfoUpdated.Policy = policyrules\n\t\t\t\t\tpuInfoUpdated.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t\t})\n\n\t\t\t\t\tvar iprules policy.IPRuleList\n\n\t\t\t\t\tiprules = append(iprules, puInfoUpdated.Policy.ApplicationACLs()...)\n\t\t\t\t\tiprules = append(iprules, puInfoUpdated.Policy.NetworkACLs()...)\n\n\t\t\t\t\ti.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\n\t\t\t\t\terr := i.iptv4.UpdateRules(1,\n\t\t\t\t\t\t\"pu1\", puInfoUpdated, puInfo)\n\t\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\t\ti.iptv4.ipsetmanager.DestroyUnusedIPsets()\n\n\t\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\t\tSo(expectedMangleAfterPUUpdateV4, ShouldContainKey, chain)\n\t\t\t\t\t\tSo(rules, ShouldResemble, 
expectedMangleAfterPUUpdateV4[chain])\n\t\t\t\t\t}\n\n\t\t\t\t\tConvey(\"When I delete the same rule, the chains must be restored in the global state\", func() {\n\t\t\t\t\t\terr := i.iptv4.ipsetmanager.DeletePortFromServerPortSet(\"pu1\",\n\t\t\t\t\t\t\t\"8080\")\n\t\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t\t\terr = i.iptv4.DeleteRules(1,\n\t\t\t\t\t\t\t\"pu1\",\n\t\t\t\t\t\t\t\"0\",\n\t\t\t\t\t\t\t\"5000\",\n\t\t\t\t\t\t\t\"10\",\n\t\t\t\t\t\t\t\"\", puInfoUpdated)\n\t\t\t\t\t\ti.iptv4.ipsetmanager.RemoveExternalNets(\"pu1\")\n\t\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\t\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\t\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\t\t\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\t\t\tSo(expectedGlobalMangleChainsV4, ShouldContainKey, chain)\n\t\t\t\t\t\t\tSo(rules, ShouldResemble, expectedGlobalMangleChainsV4[chain])\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\t\t\tif len(rules) > 0 {\n\t\t\t\t\t\t\t\tSo(expectedGlobalNATChainsV4, ShouldContainKey, chain)\n\t\t\t\t\t\t\t\tSo(rules, ShouldResemble, expectedGlobalNATChainsV4[chain])\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t})\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc Test_OperationWithLinuxServicesV4Istio(t *testing.T) {\n\tConvey(\"Given an iptables controller with a memory backend \", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"0.0.0.0/0\"},\n\t\t\tUDPTargetNetworks: []string{\"10.0.0.0/8\"},\n\t\t\tExcludedNetworks:  []string{\"127.0.0.1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: 
map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.Istio)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and ipsets\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tSo(len(t), ShouldEqual, 2)\n\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\tSo(expectedGlobalMangleChainsV4Istio, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalMangleChainsV4Istio[chain])\n\t\t\t}\n\n\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\tSo(expectedGlobalNATChainsV4Istio, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalNATChainsV4Istio[chain])\n\t\t\t}\n\t\t\tConvey(\"When I configure a new PU with ISTIO and new ACLs, all rules must be correct\", func() {\n\t\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"60.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     nil,\n\t\t\t\t\t\tProtocols: []string{constants.AllProtoString},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     
[]string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s2\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"50.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{},\n\t\t\t\t\t\tProtocols: []string{\"icmp\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"3\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"60.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     nil,\n\t\t\t\t\t\tProtocols: []string{constants.AllProtoString},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"60.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     nil,\n\t\t\t\t\t\tProtocols: []string{constants.AllProtoString},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s4\",\n\t\t\t\t\t\t\tPolicyID:  
\"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"60.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     nil,\n\t\t\t\t\t\tProtocols: []string{constants.AllProtoString},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\"Context\",\n\t\t\t\t\t\"/ns1\",\n\t\t\t\t\tpolicy.Police,\n\t\t\t\t\tappACLs,\n\t\t\t\t\tnetACLs,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tipl,\n\t\t\t\t\t0,\n\t\t\t\t\t0,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\t[]string{},\n\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t)\n\t\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\"/ns1\", common.LinuxProcessPU)\n\t\t\t\tpuInfo.Policy = policyrules\n\t\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t})\n\n\t\t\t\tudpPortSpec, err := portspec.NewPortSpecFromString(\"5000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\ttcpPortSpec, err := portspec.NewPortSpecFromString(\"9000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\tpuInfo.Runtime.SetServices([]common.Service{\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    udpPortSpec,\n\t\t\t\t\t\tProtocol: 17,\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    tcpPortSpec,\n\t\t\t\t\t\tProtocol: 6,\n\t\t\t\t\t},\n\t\t\t\t})\n\n\t\t\t\tvar iprules policy.IPRuleList\n\n\t\t\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\t\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\t\t\ti.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\n\t\t\t\terr = i.iptv4.ConfigureRules(0,\n\t\t\t\t\t\"pu1\", puInfo)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\terr = 
i.iptv4.ipsetmanager.AddPortToServerPortSet(\"pu1\",\n\t\t\t\t\t\"8080\")\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\n\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\tSo(expectedMangleAfterPUInsertV4Istio, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedMangleAfterPUInsertV4Istio[chain])\n\t\t\t\t}\n\n\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\tSo(expectedNATAfterPUInsertV4Istio, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedNATAfterPUInsertV4Istio[chain])\n\t\t\t\t}\n\t\t\t})\n\t\t})\n\t})\n}\nfunc Test_Extensions1V4(t *testing.T) {\n\tConvey(\"Given an iptables controller with a memory backend with extensions in policy and log disabled\", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"0.0.0.0/0\"},\n\t\t\tUDPTargetNetworks: []string{\"10.0.0.0/8\"},\n\t\t\tExcludedNetworks:  []string{\"127.0.0.1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and ipsets and proper extensions should be configured\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tSo(len(t), ShouldEqual, 2)\n\t\t\tSo(t[\"mangle\"], 
ShouldNotBeNil)\n\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\n\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\tSo(expectedGlobalMangleChainsV4, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalMangleChainsV4[chain])\n\t\t\t}\n\n\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\tSo(expectedGlobalNATChainsV4, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalNATChainsV4[chain])\n\t\t\t}\n\n\t\t\tConvey(\"When I configure a new set of rules, the ACLs must be correct\", func() {\n\t\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s2\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t\tExtensions: []string{\"--match multiport --dports 443 -m bpf --bytecode 20,0 0 0 0,177 0 0 0,12 0 0 0,7 0 0 0,72 0 0 4,53 0 13 29,135 0 0 0,4 0 0 8,7 0 0 0,72 0 0 2,84 0 0 64655,21 0 7 0,72 0 0 4,21 0 5 1,64 0 0 6,21 0 3 0,72 0 0 10,37 1 0 1,6 0 0 0,6 0 0 65535 -j DROP\"},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"50.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{},\n\t\t\t\t\t\tProtocols: []string{\"icmp\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"3\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: 
[]string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s4\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\"Context\",\n\t\t\t\t\t\"/ns1\",\n\t\t\t\t\tpolicy.Police,\n\t\t\t\t\tappACLs,\n\t\t\t\t\tnetACLs,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tipl,\n\t\t\t\t\t0,\n\t\t\t\t\t0,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\t[]string{},\n\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t)\n\t\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\"/ns1\", common.LinuxProcessPU)\n\t\t\t\tpuInfo.Policy = policyrules\n\t\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t})\n\n\t\t\t\tudpPortSpec, err := portspec.NewPortSpecFromString(\"5000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\ttcpPortSpec, err := portspec.NewPortSpecFromString(\"9000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\tpuInfo.Runtime.SetServices([]common.Service{\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    udpPortSpec,\n\t\t\t\t\t\tProtocol: 17,\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    tcpPortSpec,\n\t\t\t\t\t\tProtocol: 6,\n\t\t\t\t\t},\n\t\t\t\t})\n\n\t\t\t\tvar iprules policy.IPRuleList\n\t\t\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\t\t\tiprules = append(iprules, 
puInfo.Policy.NetworkACLs()...)\n\n\t\t\t\ti.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\n\t\t\t\terr = i.iptv4.ConfigureRules(0,\n\t\t\t\t\t\"pu1\", puInfo)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\terr = i.iptv4.ipsetmanager.AddPortToServerPortSet(\"pu1\",\n\t\t\t\t\t\"8080\")\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\n\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\tSo(expectedMangleAfterPUInsertWithExtensionsV4, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedMangleAfterPUInsertWithExtensionsV4[chain])\n\t\t\t\t}\n\n\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\tSo(expectedNATAfterPUInsertV4, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedNATAfterPUInsertV4[chain])\n\t\t\t\t}\n\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc Test_Extensions2V4(t *testing.T) {\n\n\tConvey(\"Given an iptables controller with a memory backend with bad extensions in policy and log enabled\", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"0.0.0.0/0\"},\n\t\t\tUDPTargetNetworks: []string{\"10.0.0.0/8\"},\n\t\t\tExcludedNetworks:  []string{\"127.0.0.1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and ipsets and proper drop extension should be configured\", func() {\n\t\t\tctx, cancel := 
context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tSo(len(t), ShouldEqual, 2)\n\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\n\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\tSo(expectedGlobalMangleChainsV4, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalMangleChainsV4[chain])\n\t\t\t}\n\n\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\tSo(expectedGlobalNATChainsV4, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalNATChainsV4[chain])\n\t\t\t}\n\n\t\t\tConvey(\"When I configure a new set of rules, the ACLs must be correct\", func() {\n\t\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"s2\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t\tExtensions: []string{\" -j DROP\"},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"50.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{},\n\t\t\t\t\t\tProtocols: []string{\"icmp\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"3\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tnetACLs := 
policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s4\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\"Context\",\n\t\t\t\t\t\"/ns1\",\n\t\t\t\t\tpolicy.Police,\n\t\t\t\t\tappACLs,\n\t\t\t\t\tnetACLs,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tipl,\n\t\t\t\t\t0,\n\t\t\t\t\t0,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\t[]string{},\n\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t)\n\t\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\"/ns1\", common.LinuxProcessPU)\n\t\t\t\tpuInfo.Policy = policyrules\n\t\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t})\n\n\t\t\t\tudpPortSpec, err := portspec.NewPortSpecFromString(\"5000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\ttcpPortSpec, err := portspec.NewPortSpecFromString(\"9000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\tpuInfo.Runtime.SetServices([]common.Service{\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    udpPortSpec,\n\t\t\t\t\t\tProtocol: 17,\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    tcpPortSpec,\n\t\t\t\t\t\tProtocol: 6,\n\t\t\t\t\t},\n\t\t\t\t})\n\n\t\t\t\tvar iprules policy.IPRuleList\n\t\t\t\tiprules = append(iprules, 
puInfo.Policy.ApplicationACLs()...)\n\t\t\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\t\t\ti.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\n\t\t\t\terr = i.iptv4.ConfigureRules(0,\n\t\t\t\t\t\"pu1\", puInfo)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\terr = i.iptv4.ipsetmanager.AddPortToServerPortSet(\"pu1\",\n\t\t\t\t\t\"8080\")\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\n\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\tSo(expectedMangleAfterPUInsertWithLogV4, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedMangleAfterPUInsertWithLogV4[chain])\n\t\t\t\t}\n\n\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\tSo(expectedNATAfterPUInsertV4, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedNATAfterPUInsertV4[chain])\n\t\t\t\t}\n\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc Test_Extensions3V4(t *testing.T) {\n\n\tConvey(\"Given an iptables controller with a memory backend with extensions in policy and log enabled\", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"0.0.0.0/0\"},\n\t\t\tUDPTargetNetworks: []string{\"10.0.0.0/8\"},\n\t\t\tExcludedNetworks:  []string{\"127.0.0.1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and ipsets and proper drop extension should be configured\", func() {\n\t\t\tctx, cancel 
:= context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tSo(len(t), ShouldEqual, 2)\n\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\n\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\tSo(expectedGlobalMangleChainsV4, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalMangleChainsV4[chain])\n\t\t\t}\n\n\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\tSo(expectedGlobalNATChainsV4, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalNATChainsV4[chain])\n\t\t\t}\n\n\t\t\tConvey(\"When I configure a new set of rules, the ACLs must be correct\", func() {\n\t\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t// Log enabled.\n\t\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"s2\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t\tExtensions: []string{\"--match multiport --dports 443  -m bpf --bytecode 20,0 0 0 0,177 0 0 0,12 0 0 0,7 0 0 0,72 0 0 4,53 0 13 29,135 0 0 0,4 0 0 8,7 0 0 0,72 0 0 2,84 0 0 64655,21 0 7 0,72 0 0 4,21 0 5 1,64 0 0 6,21 0 3 0,72 0 0 10,37 1 0 1,6 0 0 0,6 0 0 65535 -j DROP\"},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"50.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     
[]string{},\n\t\t\t\t\t\tProtocols: []string{\"icmp\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"3\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s4\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\"Context\",\n\t\t\t\t\t\"/ns1\",\n\t\t\t\t\tpolicy.Police,\n\t\t\t\t\tappACLs,\n\t\t\t\t\tnetACLs,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tipl,\n\t\t\t\t\t0,\n\t\t\t\t\t0,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\t[]string{},\n\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t)\n\t\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\"/ns1\", common.LinuxProcessPU)\n\t\t\t\tpuInfo.Policy = policyrules\n\t\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t})\n\n\t\t\t\tudpPortSpec, err := portspec.NewPortSpecFromString(\"5000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\ttcpPortSpec, err := portspec.NewPortSpecFromString(\"9000\", nil)\n\t\t\t\tSo(err, 
ShouldBeNil)\n\n\t\t\t\tpuInfo.Runtime.SetServices([]common.Service{\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    udpPortSpec,\n\t\t\t\t\t\tProtocol: 17,\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    tcpPortSpec,\n\t\t\t\t\t\tProtocol: 6,\n\t\t\t\t\t},\n\t\t\t\t})\n\n\t\t\t\tvar iprules policy.IPRuleList\n\t\t\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\t\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\t\t\ti.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\n\t\t\t\terr = i.iptv4.ConfigureRules(0,\n\t\t\t\t\t\"pu1\", puInfo)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\terr = i.iptv4.ipsetmanager.AddPortToServerPortSet(\"pu1\",\n\t\t\t\t\t\"8080\")\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\n\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\tSo(expectedMangleAfterPUInsertWithExtensionsAndLogV4, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedMangleAfterPUInsertWithExtensionsAndLogV4[chain])\n\t\t\t\t}\n\n\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\tSo(expectedNATAfterPUInsertV4, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedNATAfterPUInsertV4[chain])\n\t\t\t\t}\n\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc Test_OperationNomatchIpsetsV4(t *testing.T) {\n\tConvey(\"Given an iptables controller with a memory backend \", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"0.0.0.0/0\",\n\t\t\t\t\"!10.10.10.0/24\",\n\t\t\t\t\"!10.0.0.0/8\",\n\t\t\t\t\"10.10.0.0/16\"},\n\t\t\tUDPTargetNetworks: []string{\"10.0.0.0/8\"},\n\t\t\tExcludedNetworks:  []string{\"127.0.0.1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, 
[]string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right ipsets\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tSo(ips.sets, ShouldContainKey,\n\t\t\t\t\"TRI-v4-TargetTCP\")\n\t\t\tSo(ips.sets[\"TRI-v4-TargetTCP\"].set, ShouldContainKey,\n\t\t\t\t\"10.0.0.0/8\")\n\t\t\tSo(ips.sets[\"TRI-v4-TargetTCP\"].set[\"10.0.0.0/8\"], ShouldBeTrue)\n\t\t\tSo(ips.sets[\"TRI-v4-TargetTCP\"].set, ShouldContainKey,\n\t\t\t\t\"10.10.0.0/16\")\n\t\t\tSo(ips.sets[\"TRI-v4-TargetTCP\"].set[\"10.10.0.0/16\"], ShouldBeFalse)\n\t\t\tSo(ips.sets[\"TRI-v4-TargetTCP\"].set, ShouldContainKey,\n\t\t\t\t\"0.0.0.0/1\")\n\t\t\tSo(ips.sets[\"TRI-v4-TargetTCP\"].set, ShouldContainKey,\n\t\t\t\t\"128.0.0.0/1\")\n\n\t\t\t// update target networks\n\t\t\tcfgNew := &runtime.Configuration{\n\t\t\t\tTCPTargetNetworks: []string{\"0.0.0.0/0\",\n\t\t\t\t\t\"!10.10.0.0/16\"},\n\t\t\t\tUDPTargetNetworks: []string{},\n\t\t\t\tExcludedNetworks:  []string{\"127.0.0.1\"},\n\t\t\t}\n\t\t\ti.SetTargetNetworks(cfgNew) // nolint\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tSo(ips.sets, ShouldContainKey,\n\t\t\t\t\"TRI-v4-TargetTCP\")\n\t\t\tSo(ips.sets[\"TRI-v4-TargetTCP\"].set, ShouldNotContainKey,\n\t\t\t\t\"10.0.0.0/8\")\n\t\t\tSo(ips.sets[\"TRI-v4-TargetTCP\"].set, ShouldContainKey,\n\t\t\t\t\"10.10.0.0/16\")\n\t\t\tSo(ips.sets[\"TRI-v4-TargetTCP\"].set[\"10.10.0.0/16\"], ShouldBeTrue)\n\t\t\tSo(ips.sets[\"TRI-v4-TargetTCP\"].set, ShouldContainKey,\n\t\t\t\t\"0.0.0.0/1\")\n\t\t\tSo(ips.sets[\"TRI-v4-TargetTCP\"].set, 
ShouldContainKey,\n\t\t\t\t\"128.0.0.0/1\")\n\n\t\t})\n\t})\n}\n\nfunc Test_OperationNomatchIpsetsInExternalNetworksV4(t *testing.T) {\n\tConvey(\"Given an iptables controller with a memory backend \", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"0.0.0.0/0\",\n\t\t\t\t\"!10.10.10.0/24\",\n\t\t\t\t\"!10.0.0.0/8\",\n\t\t\t\t\"10.10.0.0/16\"},\n\t\t\tUDPTargetNetworks: []string{\"10.0.0.0/8\"},\n\t\t\tExcludedNetworks:  []string{\"127.0.0.1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right ipsets\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t// Setup external networks\n\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\tpolicy.IPRule{\n\t\t\t\t\tAddresses: []string{\"10.0.0.0/8\",\n\t\t\t\t\t\t\"!10.0.0.0/16\",\n\t\t\t\t\t\t\"!10.0.2.0/24\",\n\t\t\t\t\t\t\"10.0.2.7\"},\n\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\tServiceID: \"a1\",\n\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t}\n\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\tpolicy.IPRule{\n\t\t\t\t\tAddresses: 
[]string{\"0.0.0.0/0\",\n\t\t\t\t\t\t\"!10.0.0.0/8\",\n\t\t\t\t\t\t\"10.0.0.0/16\",\n\t\t\t\t\t\t\"!10.0.2.8\"},\n\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\tServiceID: \"a2\",\n\t\t\t\t\t\tPolicyID:  \"123b\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t}\n\n\t\t\tpolicyRules := policy.NewPUPolicy(\"Context\",\n\t\t\t\t\"/ns1\", policy.Police, appACLs, netACLs, nil, nil, nil, nil, nil, nil, nil, 20992, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject|policy.Log, policy.Reject|policy.Log)\n\n\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\"/ns1\", common.HostPU)\n\t\t\tpuInfo.Policy = policyRules\n\t\t\tpuInfo.Runtime = policy.NewPURuntimeWithDefaults()\n\t\t\tpuInfo.Runtime.SetPUType(common.HostPU)\n\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\tCgroupMark: \"10\",\n\t\t\t})\n\n\t\t\t// configure rules\n\t\t\tvar iprules policy.IPRuleList\n\t\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\t\terr = i.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\terr = i.ConfigureRules(0,\n\t\t\t\t\"pu1\", puInfo)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t// Check ipsets\n\t\t\tsetName := i.iptv4.ipsetmanager.GetACLIPsetsNames(appACLs[0:1])[0]\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.0.0/8\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.0.0/16\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.2.0/24\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.2.7\")\n\t\t\tSo(ips.sets[setName].set[\"10.0.0.0/8\"], ShouldBeFalse)\n\t\t\tSo(ips.sets[setName].set[\"10.0.0.0/16\"], ShouldBeTrue)\n\t\t\tSo(ips.sets[setName].set[\"10.0.2.0/24\"], ShouldBeTrue)\n\t\t\tSo(ips.sets[setName].set[\"10.0.2.7\"], 
ShouldBeFalse)\n\n\t\t\tsetName = i.iptv4.ipsetmanager.GetACLIPsetsNames(netACLs[0:1])[0]\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"0.0.0.0/1\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"128.0.0.0/1\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.0.0/8\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.0.0/16\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.2.8\")\n\t\t\tSo(ips.sets[setName].set[\"0.0.0.0/1\"], ShouldBeFalse)\n\t\t\tSo(ips.sets[setName].set[\"128.0.0.0/1\"], ShouldBeFalse)\n\t\t\tSo(ips.sets[setName].set[\"10.0.0.0/8\"], ShouldBeTrue)\n\t\t\tSo(ips.sets[setName].set[\"10.0.0.0/16\"], ShouldBeFalse)\n\t\t\tSo(ips.sets[setName].set[\"10.0.2.8\"], ShouldBeTrue)\n\n\t\t\t// Reconfigure external networks\n\t\t\tappACLs = policy.IPRuleList{\n\t\t\t\tpolicy.IPRule{\n\t\t\t\t\tAddresses: []string{\"10.0.0.0/8\",\n\t\t\t\t\t\t\"!10.0.0.0/16\",\n\t\t\t\t\t\t\"10.0.2.0/24\",\n\t\t\t\t\t\t\"!10.0.2.7\"},\n\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\tServiceID: \"a1\",\n\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t}\n\t\t\tnetACLs = policy.IPRuleList{\n\t\t\t\tpolicy.IPRule{\n\t\t\t\t\tAddresses: []string{\"0.0.0.0/0\",\n\t\t\t\t\t\t\"10.0.0.0/8\",\n\t\t\t\t\t\t\"!10.0.2.0/24\"},\n\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\tServiceID: \"a2\",\n\t\t\t\t\t\tPolicyID:  \"123b\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t}\n\n\t\t\tpolicyRules = policy.NewPUPolicy(\"Context\",\n\t\t\t\t\"/ns1\", policy.Police, appACLs, netACLs, nil, nil, nil, nil, nil, nil, nil, 20992, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject|policy.Log, 
policy.Reject|policy.Log)\n\n\t\t\tpuInfoUpdated := policy.NewPUInfo(\"Context\",\n\t\t\t\t\"/ns1\", common.HostPU)\n\t\t\tpuInfoUpdated.Policy = policyRules\n\t\t\tpuInfoUpdated.Runtime = policy.NewPURuntimeWithDefaults()\n\t\t\tpuInfoUpdated.Runtime.SetPUType(common.HostPU)\n\t\t\tpuInfoUpdated.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\tCgroupMark: \"10\",\n\t\t\t})\n\n\t\t\t// Reconfigure rules\n\t\t\tiprules = nil\n\t\t\tiprules = append(iprules, puInfoUpdated.Policy.ApplicationACLs()...)\n\t\t\tiprules = append(iprules, puInfoUpdated.Policy.NetworkACLs()...)\n\t\t\terr = i.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\terr = i.UpdateRules(1,\n\t\t\t\t\"pu1\", puInfoUpdated, puInfo)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\ti.iptv4.ipsetmanager.DestroyUnusedIPsets()\n\n\t\t\t// Check ipsets again\n\t\t\tsetName = i.iptv4.ipsetmanager.GetACLIPsetsNames(appACLs[0:1])[0]\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.0.0/8\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.0.0/16\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.2.0/24\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.2.7\")\n\t\t\tSo(ips.sets[setName].set[\"10.0.0.0/8\"], ShouldBeFalse)\n\t\t\tSo(ips.sets[setName].set[\"10.0.0.0/16\"], ShouldBeTrue)\n\t\t\tSo(ips.sets[setName].set[\"10.0.2.0/24\"], ShouldBeFalse)\n\t\t\tSo(ips.sets[setName].set[\"10.0.2.7\"], ShouldBeTrue)\n\n\t\t\tsetName = i.iptv4.ipsetmanager.GetACLIPsetsNames(netACLs[0:1])[0]\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"0.0.0.0/1\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"128.0.0.0/1\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.0.0/8\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"10.0.2.0/24\")\n\t\t\tSo(ips.sets[setName].set, ShouldNotContainKey,\n\t\t\t\t\"10.0.2.8\")\n\t\t\tSo(ips.sets[setName].set[\"0.0.0.0/1\"], 
ShouldBeFalse)\n\t\t\tSo(ips.sets[setName].set[\"128.0.0.0/1\"], ShouldBeFalse)\n\t\t\tSo(ips.sets[setName].set[\"10.0.0.0/8\"], ShouldBeFalse)\n\t\t\tSo(ips.sets[setName].set[\"10.0.2.0/24\"], ShouldBeTrue)\n\n\t\t\t// Configure and check acl cache\n\t\t\taclCache := tacls.NewACLCache()\n\t\t\terr = aclCache.AddRuleList(puInfoUpdated.Policy.ApplicationACLs())\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tdefaultFlowPolicy := &policy.FlowPolicy{Action: policy.Reject | policy.Log, PolicyID: \"default\", ServiceID: \"default\"}\n\n\t\t\treport, _, err := aclCache.GetMatchingAction(net.ParseIP(\"10.0.2.7\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(report.Action, ShouldEqual, policy.Reject|policy.Log)\n\n\t\t\treport, _, err = aclCache.GetMatchingAction(net.ParseIP(\"10.0.2.8\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(report.Action, ShouldEqual, policy.Accept|policy.Log)\n\n\t\t\treport, _, err = aclCache.GetMatchingAction(net.ParseIP(\"10.0.3.1\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(report.Action, ShouldEqual, policy.Reject|policy.Log)\n\n\t\t\treport, _, err = aclCache.GetMatchingAction(net.ParseIP(\"10.1.3.1\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(report.Action, ShouldEqual, policy.Accept|policy.Log)\n\n\t\t\treport, _, err = aclCache.GetMatchingAction(net.ParseIP(\"11.1.3.1\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(report.Action, ShouldEqual, policy.Reject|policy.Log)\n\n\t\t})\n\t})\n}\n\nvar (\n\texpectedContainerGlobalMangleChainsV4Istio = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 
-j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-j TRI-Istio\",\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-Istio\": {},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\", \"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\", \"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-p tcp --dport 15001 -m addrtype --dst-type LOCAL -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t},\n\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedContainerGlobalMangleChainsV4 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! 
--match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p udp -j ACCEPT\"},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedContainerGlobalNATChainsV4 = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v4-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! 
--match-set TRI-v4-Excluded dst -j TRI-Redir-App\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j RETURN\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedContainerGlobalNATChainsV4Istio = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v4-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-Redir-App\",\n\t\t\t\"-p tcp -m mark --mark 68 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j RETURN\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedContainerMangleAfterPUInsertV4 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! 
--match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\", \"-m comment --comment Container-specific-chain -j TRI-App-pu1N7uS6--0\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m comment --comment Container-specific-chain -j TRI-Net-pu1N7uS6--0\"},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Net-pu1N7uS6--0\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= src -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v4-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-s 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s 0.0.0.0/0 -m state ! --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s 0.0.0.0/0 -j DROP\",\n\t\t},\n\t\t\"TRI-App-pu1N7uS6--0\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! 
--string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j CONNMARK --set-mark 61167\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j ACCEPT\", \"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= dst -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -j TRI-Nfq-OUT\", \"-p udp -m set --match-set TRI-v4-TargetUDP dst -j TRI-Nfq-OUT\", \"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\", \"-d 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-d 0.0.0.0/0 -m state ! 
--state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\", \"-d 0.0.0.0/0 -j DROP\"},\n\t}\n\n\texpectedContainerMangleAfterPUInsertV4Istio = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-j TRI-Istio\",\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-Istio\": {\n\t\t\t\"-p tcp -m owner ! --uid-owner 1337 -j ACCEPT\",\n\t\t\t\"-p tcp -m owner --uid-owner 1337 -m addrtype --dst-type LOCAL -m addrtype --src-type LOCAL -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m owner --uid-owner 1337 -m addrtype --dst-type LOCAL -j ACCEPT\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\", \"-m comment --comment Container-specific-chain -j TRI-App-pu1N7uS6--0\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-p tcp --dport 15001 -m addrtype --dst-type LOCAL -m addrtype --src-type LOCAL -j ACCEPT\", \"-m comment --comment Container-specific-chain -j TRI-Net-pu1N7uS6--0\"},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Net-pu1N7uS6--0\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= src -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! 
--string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v4-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-s 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s 0.0.0.0/0 -m state ! --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s 0.0.0.0/0 -j DROP\",\n\t\t},\n\t\t\"TRI-App-pu1N7uS6--0\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j CONNMARK --set-mark 61167\", \"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j ACCEPT\", \"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= dst -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\", \"-m set --match-set TRI-v4-TargetTCP dst -p tcp -j TRI-Nfq-OUT\", \"-p udp -m set --match-set TRI-v4-TargetUDP dst -j TRI-Nfq-OUT\", \"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\", \"-d 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-d 0.0.0.0/0 -m state ! 
--state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\", \"-d 0.0.0.0/0 -j DROP\"},\n\t}\n\texpectedContainerNATAfterPUInsertV4 = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v4-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-Redir-App\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j RETURN\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j REDIRECT --to-ports 0\",\n\t\t\t\"-d 0.0.0.0/0 -p udp --dport 53 -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j CONNMARK --save-mark\",\n\t\t\t\"-d 0.0.0.0/0 -p udp --dport 53 -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv dst -m mark ! --mark 0x40 -j REDIRECT --to-ports 0\",\n\t\t},\n\t}\n\texpectedContainerNATAfterPUInsertV4Istio = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v4-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-Redir-App\",\n\t\t\t\"-p tcp -m mark --mark 68 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j RETURN\",\n\t\t\t\"-d 0.0.0.0/0 -p udp --dport 53 -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j CONNMARK --save-mark\",\n\t\t\t\"-d 0.0.0.0/0 -p udp --dport 53 -m mark ! 
--mark 0x40 -m cgroup --cgroup 10 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t}\n)\n\nfunc Test_OperationWithContainersV4(t *testing.T) {\n\tConvey(\"Given an iptables controller with a memory backend for containers \", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"0.0.0.0/0\"},\n\t\t\tUDPTargetNetworks: []string{\"10.0.0.0/8\"},\n\t\t\tExcludedNetworks:  []string{\"127.0.0.1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.RemoteContainer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and sets\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tSo(len(t), ShouldEqual, 2)\n\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\n\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\tSo(expectedContainerGlobalMangleChainsV4, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedContainerGlobalMangleChainsV4[chain])\n\t\t\t}\n\n\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\tSo(expectedContainerGlobalNATChainsV4, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedContainerGlobalNATChainsV4[chain])\n\t\t\t}\n\n\t\t\tConvey(\"When I 
configure a new set of rules, the ACLs must be correct\", func() {\n\t\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s2\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s4\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\tpolicyrules := 
policy.NewPUPolicy(\n\t\t\t\t\t\"Context\",\n\t\t\t\t\t\"/ns1\",\n\t\t\t\t\tpolicy.Police,\n\t\t\t\t\tappACLs,\n\t\t\t\t\tnetACLs,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tipl,\n\t\t\t\t\t0,\n\t\t\t\t\t0,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\t[]string{},\n\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t)\n\t\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\"/ns1\", common.ContainerPU)\n\t\t\t\tpuInfo.Policy = policyrules\n\t\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t})\n\n\t\t\t\tvar iprules policy.IPRuleList\n\t\t\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\t\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\t\t\ti.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\n\t\t\t\terr := i.iptv4.ConfigureRules(0,\n\t\t\t\t\t\"pu1\", puInfo)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\n\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\tSo(expectedContainerMangleAfterPUInsertV4, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedContainerMangleAfterPUInsertV4[chain])\n\t\t\t\t}\n\n\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\tSo(expectedContainerNATAfterPUInsertV4, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedContainerNATAfterPUInsertV4[chain])\n\t\t\t\t}\n\n\t\t\t\tConvey(\"When I delete the same rule, the chains must be restored in the global state\", func() {\n\t\t\t\t\terr := i.iptv4.DeleteRules(0,\n\t\t\t\t\t\t\"pu1\",\n\t\t\t\t\t\t\"0\",\n\t\t\t\t\t\t\"0\",\n\t\t\t\t\t\t\"10\",\n\t\t\t\t\t\t\"\", puInfo)\n\t\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tprintTable(t)\n\t\t\t\t\t}\n\n\t\t\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\t\t\tSo(t[\"nat\"], 
ShouldNotBeNil)\n\n\t\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\t\tSo(expectedContainerGlobalMangleChainsV4, ShouldContainKey, chain)\n\t\t\t\t\t\tSo(rules, ShouldResemble, expectedContainerGlobalMangleChainsV4[chain])\n\t\t\t\t\t}\n\n\t\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\t\tSo(expectedContainerGlobalNATChainsV4, ShouldContainKey, chain)\n\t\t\t\t\t\tSo(rules, ShouldResemble, expectedContainerGlobalNATChainsV4[chain])\n\t\t\t\t\t}\n\t\t\t\t})\n\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc Test_OperationWithContainersV4Istio(t *testing.T) {\n\tConvey(\"Given an iptables controller with a memory backend for containers \", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"0.0.0.0/0\"},\n\t\t\tUDPTargetNetworks: []string{\"10.0.0.0/8\"},\n\t\t\tExcludedNetworks:  []string{\"127.0.0.1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.RemoteContainer, policy.Istio)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and sets of Istio\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tSo(len(t), ShouldEqual, 2)\n\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\n\t\t\tfor chain, rules := range t[\"mangle\"] 
{\n\t\t\t\tSo(expectedContainerGlobalMangleChainsV4Istio, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedContainerGlobalMangleChainsV4Istio[chain])\n\t\t\t}\n\n\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\tSo(expectedContainerGlobalNATChainsV4Istio, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedContainerGlobalNATChainsV4Istio[chain])\n\t\t\t}\n\n\t\t\tConvey(\"When I configure a new set of rules, the ACLs must be correct\", func() {\n\t\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s2\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s4\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tipl := 
policy.ExtendedMap{}\n\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\"Context\",\n\t\t\t\t\t\"/ns1\",\n\t\t\t\t\tpolicy.Police,\n\t\t\t\t\tappACLs,\n\t\t\t\t\tnetACLs,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tipl,\n\t\t\t\t\t0,\n\t\t\t\t\t0,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\t[]string{},\n\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t)\n\t\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\"/ns1\", common.ContainerPU)\n\t\t\t\tpuInfo.Policy = policyrules\n\t\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t})\n\n\t\t\t\tvar iprules policy.IPRuleList\n\t\t\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\t\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\t\t\ti.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\n\t\t\t\terr := i.iptv4.ConfigureRules(0,\n\t\t\t\t\t\"pu1\", puInfo)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\n\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\tSo(expectedContainerMangleAfterPUInsertV4Istio, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedContainerMangleAfterPUInsertV4Istio[chain])\n\t\t\t\t}\n\n\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\tSo(expectedContainerNATAfterPUInsertV4Istio, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedContainerNATAfterPUInsertV4Istio[chain])\n\t\t\t\t}\n\n\t\t\t\tConvey(\"When I delete the same rule, the chains must be restored in the global state of Istio\", func() {\n\t\t\t\t\terr := i.iptv4.DeleteRules(0,\n\t\t\t\t\t\t\"pu1\",\n\t\t\t\t\t\t\"0\",\n\t\t\t\t\t\t\"0\",\n\t\t\t\t\t\t\"10\",\n\t\t\t\t\t\t\"\", puInfo)\n\t\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\t\t\tif err != nil 
{\n\t\t\t\t\t\tprintTable(t)\n\t\t\t\t\t}\n\n\t\t\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\n\t\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\t\tSo(expectedContainerGlobalMangleChainsV4Istio, ShouldContainKey, chain)\n\t\t\t\t\t\tSo(rules, ShouldResemble, expectedContainerGlobalMangleChainsV4Istio[chain])\n\t\t\t\t\t}\n\n\t\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\t\tSo(expectedContainerGlobalNATChainsV4Istio, ShouldContainKey, chain)\n\t\t\t\t\t\tSo(rules, ShouldResemble, expectedContainerGlobalNATChainsV4Istio[chain])\n\t\t\t\t\t}\n\t\t\t\t})\n\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestImplDefaultLock(t *testing.T) {\n\tinstance, err := NewInstance(nil, constants.LocalServer, true, nil,\n\t\t\"\", policy.None)\n\tassert.Equal(t, instance != nil, true,\n\t\t\"instance should not be nil\")\n\tassert.Equal(t, err == nil, true,\n\t\t\"err should be nil\")\n}\n\nfunc TestImplWithLock(t *testing.T) {\n\tinstance, err := NewInstance(nil, constants.LocalServer, true, nil,\n\t\t\"/tmp/xtables.lock\", policy.None)\n\tassert.Equal(t, instance != nil, true,\n\t\t\"instance should not be nil\")\n\tassert.Equal(t, err == nil, true,\n\t\t\"err should be nil\")\n\tassert.Equal(t, os.Getenv(\"XT_LOCK_NAME\") == \"/tmp/xtables.lock\", true,\n\t\t\"env var XT_LOCK_NAME should be set\")\n}\n\nfunc printTable(t map[string]map[string][]string) {\n\tfmt.Printf(\"\\n\")\n\tfor table, chains := range t {\n\t\tfmt.Println(table)\n\t\tfor chain, rules := range chains {\n\t\t\tfmt.Println(chain)\n\t\t\tfor _, rule := range rules {\n\t\t\t\tfmt.Println(rule)\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/iptablesV6_test.go",
    "content": "// +build !windows,!rhel6\n\npackage iptablesctrl\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"testing\"\n\n\t\"github.com/aporeto-inc/go-ipset/ipset\"\n\t\"github.com/magiconair/properties/assert\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\ttacls \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/acls\"\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\nfunc testICMPAllow() string {\n\treturn \"16,48 0 0 0,84 0 0 240,21 0 12 96,48 0 0 6,21 0 10 58,48 0 0 40,21 5 0 133,21 4 0 134,21 3 0 135,21 2 0 136,21 1 0 141,21 0 3 142,48 0 0 41,21 0 1 0,6 0 0 65535,6 0 0 0\"\n}\n\nfunc TestNewInstanceV6(t *testing.T) {\n\n\tConvey(\"When I create a new iptables instance\", t, func() {\n\t\tConvey(\"If I create a remote implemenetation and iptables exists\", func() {\n\t\t\tips := ipsetmanager.NewTestIpsetProvider()\n\t\t\tiptv4 := provider.NewTestIptablesProvider()\n\t\t\tiptv6 := provider.NewTestIptablesProvider()\n\n\t\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.RemoteContainer, policy.None)\n\t\t\tConvey(\"It should succeed\", func() {\n\t\t\t\tSo(i, ShouldNotBeNil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n\n\tConvey(\"When I create a new iptables instance\", t, func() {\n\t\tConvey(\"If I create a Linux server implemenetation and iptables exists\", func() {\n\t\t\tips := ipsetmanager.NewTestIpsetProvider()\n\t\t\tiptv4 := provider.NewTestIptablesProvider()\n\t\t\tiptv6 := provider.NewTestIptablesProvider()\n\n\t\t\ti, err := 
createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\t\tConvey(\"It should succeed\", func() {\n\t\t\t\tSo(i, ShouldNotBeNil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc Test_NegativeConfigureRulesV6(t *testing.T) {\n\n\tConvey(\"Given a valid instance\", t, func() {\n\t\tips := ipsetmanager.NewTestIpsetProvider()\n\t\tiptv4 := provider.NewTestIptablesProvider()\n\t\tiptv6 := provider.NewTestIptablesProvider()\n\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\n\t\tctx, cancel := context.WithCancel(context.Background())\n\t\tdefer cancel()\n\n\t\terr = i.Run(ctx)\n\t\tSo(err, ShouldBeNil)\n\t\tcfg := &runtime.Configuration{}\n\t\ti.SetTargetNetworks(cfg) // nolint\n\n\t\tipl := policy.ExtendedMap{}\n\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\"Context\",\n\t\t\t\"/ns1\",\n\t\t\tpolicy.Police,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tipl,\n\t\t\t0,\n\t\t\t0,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\t[]string{},\n\t\t\tpolicy.EnforcerMapping,\n\t\t\tpolicy.Reject|policy.Log,\n\t\t\tpolicy.Reject|policy.Log,\n\t\t)\n\t\tcontainerinfo := policy.NewPUInfo(\"Context\",\n\t\t\t\"/ns1\", common.ContainerPU)\n\t\tcontainerinfo.Policy = policyrules\n\t\tcontainerinfo.Runtime = policy.NewPURuntimeWithDefaults()\n\t\tcontainerinfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: \"10\",\n\t\t})\n\n\t\tConvey(\"When I configure the rules with no errors, it should succeed\", func() {\n\t\t\terr := i.ConfigureRules(1,\n\t\t\t\t\"ID\", containerinfo)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I configure the rules and the proxy set fails, it should error\", func() {\n\t\t\tips.MockNewIpset(t, func(name, hash string, p *ipset.Params) (ipsetmanager.Ipset, error) {\n\t\t\t\treturn nil, fmt.Errorf(\"error\")\n\t\t\t})\n\t\t\terr := i.ConfigureRules(1,\n\t\t\t\t\"ID\", containerinfo)\n\t\t\tSo(err, 
ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I configure the rules and acls fail, it should error\", func() {\n\t\t\tiptv6.MockAppend(t, func(table, chain string, rulespec ...string) error {\n\t\t\t\treturn fmt.Errorf(\"error\")\n\t\t\t})\n\t\t\terr := i.ConfigureRules(1,\n\t\t\t\t\"ID\", containerinfo)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I configure the rules and commit fails, it should error\", func() {\n\t\t\tiptv6.MockCommit(t, func() error {\n\t\t\t\treturn fmt.Errorf(\"error\")\n\t\t\t})\n\t\t\terr := i.iptv6.ConfigureRules(1,\n\t\t\t\t\"ID\", containerinfo)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t})\n}\n\nvar (\n\texpectedGlobalMangleChainsV6 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! 
--match-set TRI-v6-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\", \"-j TRI-Pid-App\", \"-j TRI-Svc-App\", \"-j TRI-Hst-App\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v6-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-j TRI-Pid-Net\", \"-j TRI-Svc-Net\", \"-j TRI-Hst-Net\"},\n\t\t\"TRI-Pid-App\": {},\n\t\t\"TRI-Pid-Net\": {},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\t}\n\n\texpectedGlobalNATChainsV6 = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v6-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! 
--match-set TRI-v6-Excluded dst -j TRI-Redir-App\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j RETURN\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedMangleAfterPUInsertV6 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\", \"-j TRI-Pid-App\", \"-j TRI-Svc-App\", \"-j TRI-Hst-App\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v6-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-j TRI-Pid-Net\", \"-j TRI-Svc-Net\", \"-j TRI-Hst-Net\"},\n\t\t\"TRI-Pid-App\": {\n\t\t\t\"-m cgroup --cgroup 10 -m comment --comment PU-Chain -j MARK --set-mark 10\",\n\t\t\t\"-m mark --mark 10 -m comment --comment PU-Chain -j TRI-App-pu1N7uS6--0\"},\n\t\t\"TRI-Pid-Net\": {\n\t\t\t\"-p tcp -m multiport --destination-ports 9000 -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--0\",\n\t\t\t\"-p udp -m multiport --destination-ports 5000 -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--0\",\n\t\t},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\n\t\t\"TRI-Net-pu1N7uS6--0\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p UDP -m set --match-set TRI-v6-ext-6zlJIvP3B68= src -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p TCP -m set --match-set TRI-v6-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v6-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v6-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p icmpv6 -m bpf --bytecode 16,48 0 0 0,84 0 0 240,21 0 12 96,48 0 0 6,21 0 10 58,48 0 0 40,21 5 0 133,21 4 0 134,21 3 0 135,21 2 0 136,21 1 0 141,21 0 3 142,48 0 0 41,21 0 1 0,6 0 0 65535,6 0 0 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v6-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-s ::/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s ::/0 -m state ! 
--state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s ::/0 -j DROP\",\n\t\t},\n\t\t\"TRI-App-pu1N7uS6--0\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v6-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\", \"-p UDP -m set --match-set TRI-v6-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v6-TargetUDP dst --match multiport --dports 443 -j CONNMARK --set-mark 61167\", \"-p UDP -m set --match-set TRI-v6-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v6-TargetUDP dst --match multiport --dports 443 -j ACCEPT\", \"-p icmpv6 -m set --match-set TRI-v6-ext-w5frVvhsnpU= dst -j ACCEPT\", \"-p UDP -m set --match-set TRI-v6-ext-IuSLsD1R-mE= dst -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\", \"-p icmpv6 -m bpf --bytecode 16,48 0 0 0,84 0 0 240,21 0 12 96,48 0 0 6,21 0 10 58,48 0 0 40,21 5 0 133,21 4 0 134,21 3 0 135,21 2 0 136,21 1 0 141,21 0 3 142,48 0 0 41,21 0 1 0,6 0 0 65535,6 0 0 0 -j ACCEPT\", \"-m set --match-set TRI-v6-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\", \"-m set --match-set TRI-v6-TargetTCP dst -p tcp -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-p udp -m set --match-set TRI-v6-TargetUDP dst -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-m mark --mark 40 -j NFQUEUE --queue-num 0 --queue-bypass\", \"-m mark --mark 41 -j NFQUEUE --queue-num 1 --queue-bypass\", \"-m mark --mark 42 -j NFQUEUE --queue-num 2 --queue-bypass\", \"-m mark --mark 43 -j NFQUEUE --queue-num 3 --queue-bypass\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\", \"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\",\n\t\t\t\"-d ::/0 -m state --state NEW -j NFLOG --nflog-group 
10 --nflog-prefix 913787369:default:default:6\", \"-d ::/0 -m state ! --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\", \"-d ::/0 -j DROP\"},\n\t}\n\n\texpectedNATAfterPUInsertV6 = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v6-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded dst -j TRI-Redir-App\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j RETURN\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst dst,dst -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j REDIRECT --to-ports 0\",\n\t\t\t\"-d ::/0 -p udp --dport 53 -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j CONNMARK --save-mark\",\n\t\t\t\"-d ::/0 -p udp --dport 53 -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv dst -m mark ! 
--mark 0x40 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"POSTROUTING\": {\n\t\t\t\"-p udp -m addrtype --src-type LOCAL -m multiport --source-ports 5000 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedMangleAfterPUUpdateV6 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\", \"-j TRI-Pid-App\", \"-j TRI-Svc-App\", \"-j TRI-Hst-App\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v6-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\", \"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-j TRI-Pid-Net\", \"-j TRI-Svc-Net\", \"-j TRI-Hst-Net\"},\n\t\t\"TRI-Pid-App\": {\n\t\t\t\"-m cgroup --cgroup 10 -m comment --comment PU-Chain -j MARK --set-mark 10\",\n\t\t\t\"-m mark --mark 10 -m comment --comment PU-Chain -j TRI-App-pu1N7uS6--1\"},\n\t\t\"TRI-Pid-Net\": {\n\t\t\t\"-p tcp -m set --match-set TRI-v6-ProcPort-pu19gtV dst -m comment --comment PU-Chain -j TRI-Net-pu1N7uS6--1\",\n\t\t},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\n\t\t\"TRI-Net-pu1N7uS6--1\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p TCP -m set --match-set TRI-v6-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p icmpv6 -m bpf --bytecode 16,48 0 0 0,84 0 0 240,21 0 12 96,48 0 0 6,21 0 10 58,48 0 0 40,21 5 0 133,21 4 0 134,21 3 0 135,21 2 0 136,21 1 0 141,21 0 3 142,48 0 0 41,21 0 1 0,6 0 0 65535,6 0 0 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v6-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-s ::/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s ::/0 -m state ! 
--state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s ::/0 -j DROP\"},\n\t\t\"TRI-App-pu1N7uS6--1\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v6-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\", \"-p icmpv6 -m bpf --bytecode 16,48 0 0 0,84 0 0 240,21 0 12 96,48 0 0 6,21 0 10 58,48 0 0 40,21 5 0 133,21 4 0 134,21 3 0 135,21 2 0 136,21 1 0 141,21 0 3 142,48 0 0 41,21 0 1 0,6 0 0 65535,6 0 0 0 -j ACCEPT\", \"-m set --match-set TRI-v6-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\", \"-m set --match-set TRI-v6-TargetTCP dst -p tcp -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-p udp -m set --match-set TRI-v6-TargetUDP dst -j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 40 --hmark-rnd 0xdeadbeef\", \"-m mark --mark 40 -j NFQUEUE --queue-num 0 --queue-bypass\", \"-m mark --mark 41 -j NFQUEUE --queue-num 1 --queue-bypass\", \"-m mark --mark 42 -j NFQUEUE --queue-num 2 --queue-bypass\", \"-m mark --mark 43 -j NFQUEUE --queue-num 3 --queue-bypass\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\", \"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\",\n\t\t\t\"-d ::/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\", \"-d ::/0 -m state ! 
--state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\", \"-d ::/0 -j DROP\"},\n\t}\n)\n\nfunc Test_OperationWithLinuxServicesV6(t *testing.T) {\n\n\tConvey(\"Given an iptables controller with a memory backend \", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"::/0\"},\n\t\t\tUDPTargetNetworks: []string{\"1120::/64\"},\n\t\t\tExcludedNetworks:  []string{\"::1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and ipsets\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tt := i.iptv6.impl.RetrieveTable()\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tSo(len(t), ShouldEqual, 2)\n\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\n\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\tSo(expectedGlobalMangleChainsV6, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalMangleChainsV6[chain])\n\t\t\t}\n\n\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\tSo(expectedGlobalNATChainsV6, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalNATChainsV6[chain])\n\t\t\t}\n\n\t\t\tConvey(\"When I configure a new set of rules, the ACLs must be correct\", func() {\n\n\t\t\t\tappACLs := 
policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1120::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1120::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s2\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1122::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"icmpv6\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"3\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"icmp\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"3\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1122::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1122::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:   
 policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s4\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\"Context\",\n\t\t\t\t\t\"/ns1\",\n\t\t\t\t\tpolicy.Police,\n\t\t\t\t\tappACLs,\n\t\t\t\t\tnetACLs,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tipl,\n\t\t\t\t\t0,\n\t\t\t\t\t0,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\t[]string{},\n\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t)\n\t\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\"/ns1\", common.LinuxProcessPU)\n\t\t\t\tpuInfo.Policy = policyrules\n\t\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t})\n\n\t\t\t\tudpPortSpec, err := portspec.NewPortSpecFromString(\"5000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\ttcpPortSpec, err := portspec.NewPortSpecFromString(\"9000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\tpuInfo.Runtime.SetServices([]common.Service{\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    udpPortSpec,\n\t\t\t\t\t\tProtocol: 17,\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    tcpPortSpec,\n\t\t\t\t\t\tProtocol: 6,\n\t\t\t\t\t},\n\t\t\t\t})\n\n\t\t\t\tvar iprules policy.IPRuleList\n\t\t\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\t\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\t\t\ti.iptv6.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\n\t\t\t\terr = i.ConfigureRules(0,\n\t\t\t\t\t\"pu1\", puInfo)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tt := i.iptv6.impl.RetrieveTable()\n\n\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\tSo(expectedMangleAfterPUInsertV6, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedMangleAfterPUInsertV6[chain])\n\t\t\t\t}\n\n\t\t\t\tfor chain, rules := range t[\"nat\"] 
{\n\t\t\t\t\tSo(expectedNATAfterPUInsertV6, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedNATAfterPUInsertV6[chain])\n\t\t\t\t}\n\n\t\t\t\tConvey(\"When I update the policy, the update must result in correct state\", func() {\n\t\t\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\t\tAddresses: []string{\"1120::/64\"},\n\t\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}\n\t\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\t\tAddresses: []string{\"1122::/64\"},\n\t\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}\n\t\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\t\"Context\",\n\t\t\t\t\t\t\"/ns1\",\n\t\t\t\t\t\tpolicy.Police,\n\t\t\t\t\t\tappACLs,\n\t\t\t\t\t\tnetACLs,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tipl,\n\t\t\t\t\t\t0,\n\t\t\t\t\t\t0,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\t[]string{},\n\t\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\t)\n\t\t\t\t\tpuInfoUpdated := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\t\"/ns1\", common.LinuxProcessPU)\n\t\t\t\t\tpuInfoUpdated.Policy = policyrules\n\t\t\t\t\tpuInfoUpdated.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t\t})\n\n\t\t\t\t\terr := i.UpdateRules(1,\n\t\t\t\t\t\t\"pu1\", puInfoUpdated, puInfo)\n\t\t\t\t\tSo(err, 
ShouldBeNil)\n\n\t\t\t\t\tt := i.iptv6.impl.RetrieveTable()\n\t\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\t\tSo(expectedMangleAfterPUUpdateV6, ShouldContainKey, chain)\n\t\t\t\t\t\tSo(rules, ShouldResemble, expectedMangleAfterPUUpdateV6[chain])\n\t\t\t\t\t}\n\n\t\t\t\t\tConvey(\"When I delete the same rule, the chains must be restored in the global state\", func() {\n\t\t\t\t\t\terr := i.DeleteRules(1,\n\t\t\t\t\t\t\t\"pu1\",\n\t\t\t\t\t\t\t\"0\",\n\t\t\t\t\t\t\t\"5000\",\n\t\t\t\t\t\t\t\"10\",\n\t\t\t\t\t\t\t\"\", puInfoUpdated)\n\t\t\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\t\t\tt := i.iptv6.impl.RetrieveTable()\n\n\t\t\t\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\t\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\n\t\t\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\t\t\tSo(expectedGlobalMangleChainsV6, ShouldContainKey, chain)\n\t\t\t\t\t\t\tSo(rules, ShouldResemble, expectedGlobalMangleChainsV6[chain])\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\t\t\tif len(rules) > 0 {\n\t\t\t\t\t\t\t\tSo(expectedGlobalNATChainsV6, ShouldContainKey, chain)\n\t\t\t\t\t\t\t\tSo(rules, ShouldResemble, expectedGlobalNATChainsV6[chain])\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t})\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc Test_OperationNomatchIpsetsV6(t *testing.T) {\n\n\tConvey(\"Given an iptables controller with a memory backend \", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"::/0\",\n\t\t\t\t\"!2001:db8:1234::/48\"},\n\t\t\tUDPTargetNetworks: []string{\"1120::/64\"},\n\t\t\tExcludedNetworks:  []string{\"::1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips 
:= &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and ipsets\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tSo(ips.sets, ShouldContainKey,\n\t\t\t\t\"TRI-v6-TargetTCP\")\n\t\t\tSo(ips.sets[\"TRI-v6-TargetTCP\"].set, ShouldContainKey,\n\t\t\t\t\"2001:db8:1234::/48\")\n\t\t\tSo(ips.sets[\"TRI-v6-TargetTCP\"].set[\"2001:db8:1234::/48\"], ShouldBeTrue)\n\t\t\tSo(ips.sets[\"TRI-v6-TargetTCP\"].set, ShouldContainKey,\n\t\t\t\t\"::/1\")\n\t\t\tSo(ips.sets[\"TRI-v6-TargetTCP\"].set[\"::/1\"], ShouldBeFalse)\n\t\t\tSo(ips.sets[\"TRI-v6-TargetTCP\"].set, ShouldContainKey,\n\t\t\t\t\"8000::/1\")\n\t\t\tSo(ips.sets[\"TRI-v6-TargetTCP\"].set[\"8000::/1\"], ShouldBeFalse)\n\t\t})\n\t})\n}\n\nfunc Test_OperationNomatchIpsetsInExternalNetworksV6(t *testing.T) {\n\n\tConvey(\"Given an iptables controller with a memory backend \", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"::/0\",\n\t\t\t\t\"!2001:db8:1234::/48\"},\n\t\t\tUDPTargetNetworks: []string{\"1120::/64\"},\n\t\t\tExcludedNetworks:  []string{\"::1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, 
ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and ipsets\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t// Setup external networks\n\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\tpolicy.IPRule{\n\t\t\t\t\tAddresses: []string{\"::/0\",\n\t\t\t\t\t\t\"!2001:db8:1234::/48\"},\n\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\tPolicyID:  \"1234a\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t}\n\t\t\tnetACLs := policy.IPRuleList{}\n\n\t\t\tpolicyRules := policy.NewPUPolicy(\"Context\",\n\t\t\t\t\"/ns1\", policy.Police, appACLs, netACLs, nil, nil, nil, nil, nil, nil, nil, 20992, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject|policy.Log, policy.Reject|policy.Log)\n\n\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\"/ns1\", common.HostPU)\n\t\t\tpuInfo.Policy = policyRules\n\t\t\tpuInfo.Runtime = policy.NewPURuntimeWithDefaults()\n\t\t\tpuInfo.Runtime.SetPUType(common.HostPU)\n\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\tCgroupMark: \"10\",\n\t\t\t})\n\n\t\t\t// configure rules\n\t\t\tvar iprules policy.IPRuleList\n\t\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\t\terr = i.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = i.iptv6.ipsetmanager.RegisterExternalNets(\"pu1\", iprules)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\terr = i.ConfigureRules(0,\n\t\t\t\t\"pu1\", puInfo)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t// Check ipsets\n\t\t\tsetName := i.iptv6.ipsetmanager.GetACLIPsetsNames(appACLs[0:1])[0]\n\t\t\tSo(ips.sets[setName].set, 
ShouldContainKey,\n\t\t\t\t\"::/1\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"8000::/1\")\n\t\t\tSo(ips.sets[setName].set, ShouldContainKey,\n\t\t\t\t\"2001:db8:1234::/48\")\n\t\t\tSo(ips.sets[setName].set[\"::/1\"], ShouldBeFalse)\n\t\t\tSo(ips.sets[setName].set[\"8000::/1\"], ShouldBeFalse)\n\t\t\tSo(ips.sets[setName].set[\"2001:db8:1234::/48\"], ShouldBeTrue)\n\n\t\t\t// Configure and check acl cache\n\t\t\taclCache := tacls.NewACLCache()\n\t\t\terr = aclCache.AddRuleList(puInfo.Policy.ApplicationACLs())\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tdefaultFlowPolicy := &policy.FlowPolicy{Action: policy.Reject | policy.Log, PolicyID: \"default\", ServiceID: \"default\"}\n\n\t\t\treport, _, err := aclCache.GetMatchingAction(net.ParseIP(\"2001:db8:5678::\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(report.Action, ShouldEqual, policy.Accept|policy.Log)\n\n\t\t\treport, _, err = aclCache.GetMatchingAction(net.ParseIP(\"2001:db8:1234:5678::\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(report.Action, ShouldEqual, policy.Reject|policy.Log)\n\t\t})\n\t})\n}\n\nvar (\n\texpectedContainerGlobalMangleChainsV6 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE 
--queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-m mark --mark 1073741922 -j ACCEPT\", \"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m set --match-set TRI-v6-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\", \"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedContainerGlobalNATChainsV6 = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v6-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! 
--match-set TRI-v6-Excluded dst -j TRI-Redir-App\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j RETURN\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedContainerMangleAfterPUInsertV6 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\"-j HMARK --hmark-tuple dport,sport --hmark-mod 4 --hmark-offset 67 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 68 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 69 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 70 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"TRI-Nfq-OUT\": {\"-j HMARK --hmark-tuple sport,dport --hmark-mod 4 --hmark-offset 0 --hmark-rnd 0xdeadbeef\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-num 0 --queue-bypass\",\n\t\t\t\"-m mark --mark 1 -j NFQUEUE --queue-num 1 --queue-bypass\",\n\t\t\t\"-m mark --mark 2 -j NFQUEUE --queue-num 2 --queue-bypass\",\n\t\t\t\"-m mark --mark 3 -j NFQUEUE --queue-num 3 --queue-bypass\"},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup 1536 -j CONNMARK --set-mark 61167\", \"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\", \"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\", \"-m comment --comment Container-specific-chain -j TRI-App-pu1N7uS6--0\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-j TRI-Prx-Net\", \"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\", \"-p tcp -m mark --mark 66 -j ACCEPT\", \"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\", \"-m set --match-set TRI-v6-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\", \"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\", \"-m connmark --mark 61166 -p udp -j ACCEPT\", \"-m comment --comment Container-specific-chain -j TRI-Net-pu1N7uS6--0\",\n\t\t},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Net-pu1N7uS6--0\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p UDP -m set --match-set TRI-v6-ext-6zlJIvP3B68= src -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p TCP -m set --match-set TRI-v6-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v6-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v6-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p icmpv6 -m bpf --bytecode 16,48 0 0 0,84 0 0 240,21 0 12 96,48 0 0 6,21 0 10 58,48 0 0 40,21 5 0 133,21 4 0 134,21 3 0 135,21 2 0 136,21 1 0 141,21 0 3 142,48 0 0 41,21 0 1 0,6 0 0 65535,6 0 0 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v6-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-s ::/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s ::/0 -m state ! 
--state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s ::/0 -j DROP\",\n\t\t},\n\t\t\"TRI-App-pu1N7uS6--0\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v6-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\", \"-p UDP -m set --match-set TRI-v6-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v6-TargetUDP dst --match multiport --dports 443 -j CONNMARK --set-mark 61167\", \"-p UDP -m set --match-set TRI-v6-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v6-TargetUDP dst --match multiport --dports 443 -j ACCEPT\", \"-p UDP -m set --match-set TRI-v6-ext-IuSLsD1R-mE= dst -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\", \"-p icmpv6 -m bpf --bytecode 16,48 0 0 0,84 0 0 240,21 0 12 96,48 0 0 6,21 0 10 58,48 0 0 40,21 5 0 133,21 4 0 134,21 3 0 135,21 2 0 136,21 1 0 141,21 0 3 142,48 0 0 41,21 0 1 0,6 0 0 65535,6 0 0 0 -j ACCEPT\", \"-m set --match-set TRI-v6-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\", \"-m set --match-set TRI-v6-TargetTCP dst -p tcp -j TRI-Nfq-OUT\",\n\t\t\t\"-p udp -m set --match-set TRI-v6-TargetUDP dst -j TRI-Nfq-OUT\", \"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\", \"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\", \"-d ::/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-d ::/0 -m state ! --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\", \"-d ::/0 -j DROP\"},\n\t}\n\n\texpectedContainerNATAfterPUInsertV6 = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v6-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! 
--match-set TRI-v6-Excluded dst -j TRI-Redir-App\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j RETURN\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst dst,dst -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j REDIRECT --to-ports 0\",\n\t\t\t\"-d ::/0 -p udp --dport 53 -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j CONNMARK --save-mark\",\n\t\t\t\"-d ::/0 -p udp --dport 53 -m mark ! --mark 0x40 -m cgroup --cgroup 10 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv dst -m mark ! --mark 0x40 -j REDIRECT --to-ports 0\",\n\t\t},\n\t}\n)\n\nfunc Test_OperationWithContainersV6(t *testing.T) {\n\n\tConvey(\"Given an iptables controller with a memory backend for containers \", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"::/0\"},\n\t\t\tUDPTargetNetworks: []string{\"1120::/64\"},\n\t\t\tExcludedNetworks:  []string{\"::1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.RemoteContainer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and sets\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\n\t\t\tt := i.iptv6.impl.RetrieveTable()\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tSo(len(t), ShouldEqual, 
2)\n\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\n\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\tSo(expectedContainerGlobalMangleChainsV6, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedContainerGlobalMangleChainsV6[chain])\n\t\t\t}\n\n\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\tSo(expectedContainerGlobalNATChainsV6, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedContainerGlobalNATChainsV6[chain])\n\t\t\t}\n\n\t\t\tConvey(\"When I configure a new set of rules, the ACLs must be correct\", func() {\n\t\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1120::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1120::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s2\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1122::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1122::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: 
\"s4\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\"Context\",\n\t\t\t\t\t\"/ns1\",\n\t\t\t\t\tpolicy.Police,\n\t\t\t\t\tappACLs,\n\t\t\t\t\tnetACLs,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tipl,\n\t\t\t\t\t0,\n\t\t\t\t\t0,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\t[]string{},\n\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t)\n\t\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\"/ns1\", common.ContainerPU)\n\t\t\t\tpuInfo.Policy = policyrules\n\t\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t})\n\n\t\t\t\tvar iprules policy.IPRuleList\n\t\t\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\t\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\t\t\ti.iptv6.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\n\t\t\t\terr := i.ConfigureRules(0,\n\t\t\t\t\t\"pu1\", puInfo)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tt := i.iptv6.impl.RetrieveTable()\n\n\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\tSo(expectedContainerMangleAfterPUInsertV6, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedContainerMangleAfterPUInsertV6[chain])\n\t\t\t\t}\n\n\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\tSo(expectedContainerNATAfterPUInsertV6, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedContainerNATAfterPUInsertV6[chain])\n\t\t\t\t}\n\n\t\t\t\tConvey(\"When I delete the same rule, the chains must be restored in the global state\",\n\t\t\t\t\tfunc() {\n\t\t\t\t\t\terr := i.iptv6.DeleteRules(0,\n\t\t\t\t\t\t\t\"pu1\",\n\t\t\t\t\t\t\t\"0\",\n\t\t\t\t\t\t\t\"0\",\n\t\t\t\t\t\t\t\"10\",\n\t\t\t\t\t\t\t\"\", puInfo)\n\t\t\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\t\t\tt := 
i.iptv6.impl.RetrieveTable()\n\t\t\t\t\t\tif err != nil {\n\t\t\t\t\t\t\tprintTable(t)\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\t\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\n\t\t\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\t\t\tSo(expectedContainerGlobalMangleChainsV6, ShouldContainKey, chain)\n\t\t\t\t\t\t\tSo(rules, ShouldResemble, expectedContainerGlobalMangleChainsV6[chain])\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\t\t\tSo(expectedContainerGlobalNATChainsV6, ShouldContainKey, chain)\n\t\t\t\t\t\t\tSo(rules, ShouldResemble, expectedContainerGlobalNATChainsV6[chain])\n\t\t\t\t\t\t}\n\t\t\t\t\t})\n\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestIpv6Disable(t *testing.T) {\n\tipv6Instance := &ipv6{ipv6Enabled: false}\n\n\tassert.Equal(t, ipv6Instance.Append(\"\", \"\") == nil, true, \"error should be nil\")\n\tassert.Equal(t, ipv6Instance.Insert(\"\", \"\", 0) == nil, true, \"error should be nil\")\n\tassert.Equal(t, ipv6Instance.ClearChain(\"\", \"\") == nil, true, \"error should be nil\")\n\tassert.Equal(t, ipv6Instance.DeleteChain(\"\", \"\") == nil, true, \"error should be nil\")\n\tassert.Equal(t, ipv6Instance.NewChain(\"\", \"\") == nil, true, \"error should be nil\")\n\tassert.Equal(t, ipv6Instance.Commit() == nil, true, \"error should be nil\")\n\tchains, err := ipv6Instance.ListChains(\"\")\n\n\tassert.Equal(t, chains == nil, true, \"chains should be nil\")\n\tassert.Equal(t, err == nil, true, \"error should be nil\")\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/iptables_linux_test.go",
    "content": "package iptablesctrl\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/aporeto-inc/go-ipset/ipset\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// This file contains the shared implementation for Linux iptables tests.\n\nfunc createTestInstance(ips ipsetmanager.IpsetProvider, iptv4 provider.IptablesProvider, iptv6 provider.IptablesProvider, mode constants.ModeType, ServiceMeshType policy.ServiceMesh) (*Instance, error) {\n\n\tipv4Impl := &ipv4{ipt: iptv4}\n\tipv6Impl := &ipv6{ipt: iptv6, ipv6Enabled: true}\n\n\tfq := fqconfig.NewFilterQueue(4, []string{\"0.0.0.0/0\",\n\t\t\"::/0\"})\n\n\tipsetmanager.SetIpsetTestInstance(ips)\n\tipsv4 := ipsetmanager.V4test()\n\tipsv6 := ipsetmanager.V6test()\n\n\tiptInstanceV4 := createIPInstance(ipv4Impl, ipsv4, fq, mode, nil, ServiceMeshType)\n\tiptInstanceV6 := createIPInstance(ipv6Impl, ipsv6, fq, mode, nil, ServiceMeshType)\n\ticmpAllow = testICMPAllow\n\n\treturn newInstanceWithProviders(iptInstanceV4, iptInstanceV6)\n}\n\n// Fake iptables controller that always returns success.\ntype baseIpt struct{}\n\n// Append appends a rule to a chain of a table\nfunc (b *baseIpt) Append(table, chain string, rulespec ...string) error { return nil }\n\n// Insert inserts a rule into a chain of a table at the required position\nfunc (b *baseIpt) Insert(table, chain string, pos int, rulespec ...string) error { return nil }\n\n// Delete deletes a rule of a chain in the given table\nfunc (b *baseIpt) Delete(table, chain string, rulespec ...string) error { return nil }\n\n// ListChains lists all the chains associated with a table\nfunc (b *baseIpt) ListChains(table string) ([]string, error) { return nil, nil }\n\n// ClearChain clears a chain in a table\n
func (b *baseIpt) ClearChain(table, chain string) error { return nil }\n\n// DeleteChain deletes a chain in the table. There should be no references to this chain\nfunc (b *baseIpt) DeleteChain(table, chain string) error { return nil }\n\n// NewChain creates a new chain\nfunc (b *baseIpt) NewChain(table, chain string) error { return nil }\n\n// ListRules lists the rules in a table/chain\nfunc (b *baseIpt) ListRules(table, chain string) ([]string, error) { return []string{}, nil }\n\n// Fake in-memory IPset that will tell us if we are deleting or installing\n// bad things.\ntype memoryIPSet struct {\n\tset map[string]bool\n}\n\nfunc (m *memoryIPSet) Add(entry string, timeout int) error {\n\tm.set[entry] = false\n\treturn nil\n}\n\nfunc (m *memoryIPSet) AddOption(entry string, option string, timeout int) error {\n\tif option == \"nomatch\" {\n\t\tm.set[entry] = true\n\t\treturn nil\n\t}\n\treturn m.Add(entry, timeout)\n}\n\nfunc (m *memoryIPSet) Del(entry string) error {\n\tif _, ok := m.set[entry]; !ok {\n\t\treturn fmt.Errorf(\"not found\")\n\t}\n\tdelete(m.set, entry)\n\treturn nil\n}\n\nfunc (m *memoryIPSet) Destroy() error {\n\tm.set = map[string]bool{}\n\treturn nil\n}\n\nfunc (m *memoryIPSet) Flush() error {\n\tm.set = map[string]bool{}\n\treturn nil\n}\n\nfunc (m *memoryIPSet) Test(entry string) (bool, error) {\n\t_, ok := m.set[entry]\n\t// TODO nomatch\n\treturn ok, nil\n}\n\n// Fake IpSetProvider that uses memory and allows us\n// to simulate the system.\ntype memoryIPSetProvider struct {\n\tsets map[string]*memoryIPSet\n}\n\nfunc (m *memoryIPSetProvider) NewIpset(name string, hasht string, p *ipset.Params) (ipsetmanager.Ipset, error) {\n\n\tif m.sets == nil {\n\t\treturn nil, fmt.Errorf(\"error\")\n\t}\n\n\t_, ok := m.sets[name]\n\tif ok {\n\t\treturn nil, fmt.Errorf(\"set exists\")\n\t}\n\n\tnewSet := &memoryIPSet{set: map[string]bool{}}\n\tm.sets[name] = newSet\n\treturn newSet, nil\n}\n\nfunc (m *memoryIPSetProvider) GetIpset(name string) ipsetmanager.Ipset {\n
\treturn m.sets[name]\n}\n\nfunc (m *memoryIPSetProvider) DestroyAll(prefix string) error {\n\n\tfor set := range m.sets {\n\t\tif strings.HasPrefix(set, prefix) {\n\t\t\tdelete(m.sets, set)\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (m *memoryIPSetProvider) ListIPSets() ([]string, error) {\n\tallSets := []string{}\n\tfor set := range m.sets {\n\t\tallSets = append(allSets, set)\n\t}\n\treturn allSets, nil\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/iptables_rhel6_test.go",
    "content": "// +build rhel6\n\npackage iptablesctrl\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\nvar icmpAllow = testICMPAllow\n\nfunc testICMPAllow() string {\n\tpanic(\"icmp implementation for rhel6 should not call this\")\n}\n\nvar (\n\texpectedGlobalMangleChainsV4 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\n\t\t\t\"-j MARK --set-mark 67\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t},\n\t\t\"TRI-Nfq-OUT\": {\n\t\t\t\"-j MARK --set-mark 0\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\n\t\t\"TRI-App\": {\n\t\t\t\"-p udp --dport 53 -j ACCEPT\",\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-j TRI-Prx-App\",\n\t\t\t\"-m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\",\n\t\t\t\"-j TRI-Svc-App\",\n\t\t\t\"-j TRI-Hst-App\",\n\t\t},\n\t\t\"TRI-Net\": {\n\t\t\t\"-p udp --sport 53 -j ACCEPT\",\n\t\t\t\"-j TRI-Prx-Net\",\n\t\t\t\"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-j TRI-Pid-Net\",\n\t\t\t\"-j TRI-Svc-Net\",\n\t\t\t\"-j TRI-Hst-Net\"},\n\t\t\"TRI-Pid-App\": {},\n\t\t\"TRI-Pid-Net\": {},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\t}\n\n\texpectedGlobalNATChainsV4 = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v4-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! 
--match-set TRI-v4-Excluded dst -j TRI-Redir-App\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedMangleAfterPUInsertV4 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\n\t\t\t\"-j MARK --set-mark 67\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t},\n\t\t\"TRI-Nfq-OUT\": {\n\t\t\t\"-j MARK --set-mark 0\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-p udp --dport 53 -j ACCEPT\",\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-j TRI-Prx-App\",\n\t\t\t\"-m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\",\n\t\t\t\"-j TRI-Svc-App\",\n\t\t\t\"-j TRI-Hst-App\",\n\t\t},\n\t\t\"TRI-Net\": {\n\t\t\t\"-p udp --sport 53 -j ACCEPT\",\n\t\t\t\"-j TRI-Prx-Net\",\n\t\t\t\"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61167 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-j TRI-Pid-Net\",\n\t\t\t\"-j TRI-Svc-Net\",\n\t\t\t\"-j TRI-Hst-Net\",\n\t\t},\n\t\t\"TRI-Pid-App\": {},\n\t\t\"TRI-Pid-Net\": {},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {\n\t\t\t\"-p icmp -m comment --comment Server-specific-chain -j MARK --set-mark 10\",\n\t\t\t\"-p tcp -m multiport --source-ports 9000 -m comment --comment Server-specific-chain -j MARK --set-mark 10\",\n\t\t\t\"-p tcp -m multiport --source-ports 9000 -m comment --comment Server-specific-chain -j TRI-App-pu1N7uS6--0\",\n\t\t\t\"-p udp -m multiport --source-ports 5000 -m comment --comment Server-specific-chain -j MARK --set-mark 10\",\n\t\t\t\"-p udp -m mark --mark 10 -m addrtype --src-type LOCAL -m addrtype --dst-type LOCAL -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:3\",\n\t\t\t\"-m comment --comment traffic-same-pu -p 
udp -m mark --mark 10 -m addrtype --src-type LOCAL -m addrtype --dst-type LOCAL -j ACCEPT\",\n\t\t\t\"-p udp -m multiport --source-ports 5000 -m comment --comment Server-specific-chain -j TRI-App-pu1N7uS6--0\",\n\t\t},\n\t\t\"TRI-Svc-Net\": {\n\t\t\t\"-p tcp -m multiport --destination-ports 9000 -m comment --comment Container-specific-chain -j TRI-Net-pu1N7uS6--0\",\n\t\t\t\"-m comment --comment traffic-same-pu -p udp -m mark --mark 10 -m addrtype --src-type LOCAL -m addrtype --dst-type LOCAL -j ACCEPT\",\n\t\t\t\"-p udp -m multiport --destination-ports 5000 -m comment --comment Container-specific-chain -j TRI-Net-pu1N7uS6--0\",\n\t\t},\n\n\t\t\"TRI-Net-pu1N7uS6--0\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= src -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= src -m string ! 
--string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p icmp -j NFQUEUE --queue-balance 0:3\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v4-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= src -j NFLOG --nflog-group 11 --nflog-prefix 913787369:123a:a3:6\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= src -j DROP\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= src -j NFLOG --nflog-group 11 --nflog-prefix 913787369:123a:a3:3\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= src -j ACCEPT\",\n\t\t\t\"-s 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s 0.0.0.0/0 -m state ! --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s 0.0.0.0/0 -j DROP\",\n\t\t},\n\t\t\"TRI-App-pu1N7uS6--0\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! 
--match-set TRI-v4-TargetUDP dst --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p UDP -m set --match-set TRI-v4-ext-IuSLsD1R-mE= dst -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p icmp -j NFQUEUE --queue-balance 0:3\",\n\t\t\t\"-m set --match-set TRI-v4-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\",\n\t\t\t\"-m set --match-set TRI-v4-TargetTCP dst -p tcp -j MARK --set-mark 40\",\n\t\t\t\"-p udp -m set --match-set TRI-v4-TargetUDP dst -j MARK --set-mark 40\",\n\t\t\t\"-m mark --mark 40 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= dst -j NFLOG --nflog-group 10 --nflog-prefix 913787369:123a:a3:rockstars _4090221238:6\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= dst -j DROP\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= dst -j NFLOG --nflog-group 10 --nflog-prefix 913787369:123a:a3:3\",\n\t\t\t\"-p ALL -m set --match-set TRI-v4-ext-_qhcdC8NcJc= dst -j ACCEPT\",\n\t\t\t\"-d 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-d 0.0.0.0/0 -m state ! --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-d 0.0.0.0/0 -j DROP\",\n\t\t},\n\t}\n\n\texpectedNATAfterPUInsertV4 = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v4-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-Redir-App\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -m multiport --source-ports 9000 -j REDIRECT --to-ports 0\",\n\t\t\t\"-p udp --dport 53 -m mark ! --mark 0x40 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv dst -m mark ! --mark 0x40 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"POSTROUTING\": {\n\t\t\t\"-p udp -m addrtype --src-type LOCAL -m multiport --source-ports 5000 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedMangleAfterPUUpdateV4 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\n\t\t\t\"-j MARK --set-mark 67\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t},\n\t\t\"TRI-Nfq-OUT\": {\n\t\t\t\"-j MARK --set-mark 0\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v4-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-p udp --dport 53 -j ACCEPT\",\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-j TRI-Prx-App\",\n\t\t\t\"-m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\",\n\t\t\t\"-j TRI-Svc-App\",\n\t\t\t\"-j TRI-Hst-App\"},\n\t\t\"TRI-Net\": {\n\t\t\t\"-p udp --sport 53 -j ACCEPT\",\n\t\t\t\"-j TRI-Prx-Net\",\n\t\t\t\"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m set --match-set TRI-v4-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-j TRI-Pid-Net\",\n\t\t\t\"-j TRI-Svc-Net\",\n\t\t\t\"-j TRI-Hst-Net\",\n\t\t},\n\t\t\"TRI-Pid-App\": {},\n\t\t\"TRI-Pid-Net\": {},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\n\t\t\"TRI-Net-pu1N7uS6--1\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p icmp -j NFQUEUE --queue-balance 0:3\",\n\t\t\t\"-p tcp -m set --match-set TRI-v4-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v4-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-s 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s 0.0.0.0/0 -m state ! 
--state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s 0.0.0.0/0 -j DROP\",\n\t\t},\n\n\t\t\"TRI-App-pu1N7uS6--1\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v4-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p icmp -j NFQUEUE --queue-balance 0:3\",\n\t\t\t\"-m set --match-set TRI-v4-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\",\n\t\t\t\"-m set --match-set TRI-v4-TargetTCP dst -p tcp -j MARK --set-mark 40\",\n\t\t\t\"-p udp -m set --match-set TRI-v4-TargetUDP dst -j MARK --set-mark 40\",\n\t\t\t\"-m mark --mark 40 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\",\n\t\t\t\"-d 0.0.0.0/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-d 0.0.0.0/0 -m state ! 
--state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-d 0.0.0.0/0 -j DROP\",\n\t\t},\n\t}\n)\n\nfunc Test_Rhel6ConfigureRulesV4(t *testing.T) {\n\tConvey(\"Given an iptables controller with a memory backend \", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"0.0.0.0/0\"},\n\t\t\tUDPTargetNetworks: []string{\"10.0.0.0/8\"},\n\t\t\tExcludedNetworks:  []string{\"127.0.0.1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and ipsets\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tSo(len(t), ShouldEqual, 2)\n\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\tSo(expectedGlobalMangleChainsV4, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalMangleChainsV4[chain])\n\t\t\t}\n\n\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\tSo(expectedGlobalNATChainsV4, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalNATChainsV4[chain])\n\t\t\t}\n\n\t\t\tConvey(\"When I configure a new set of rules, the ACLs must be correct\", func() {\n\t\t\t\tappACLs := 
policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"60.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     nil,\n\t\t\t\t\t\tProtocols: []string{constants.AllProtoString},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s2\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"50.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{},\n\t\t\t\t\t\tProtocols: []string{\"icmp\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"3\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"60.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     nil,\n\t\t\t\t\t\tProtocols: []string{constants.AllProtoString},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t\t\tRuleName:  \"rockstars forev\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"60.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     nil,\n\t\t\t\t\t\tProtocols: 
[]string{constants.AllProtoString},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s4\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"60.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     nil,\n\t\t\t\t\t\tProtocols: []string{constants.AllProtoString},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject | policy.Log,\n\t\t\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\"Context\",\n\t\t\t\t\t\"/ns1\",\n\t\t\t\t\tpolicy.Police,\n\t\t\t\t\tappACLs,\n\t\t\t\t\tnetACLs,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tipl,\n\t\t\t\t\t0,\n\t\t\t\t\t0,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\t[]string{},\n\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t)\n\t\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t//\"/ns1\", common.HostPU)\n\t\t\t\t\t\"/ns1\", common.HostNetworkPU)\n\t\t\t\tpuInfo.Policy = 
policyrules\n\t\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t})\n\n\t\t\t\tudpPortSpec, err := portspec.NewPortSpecFromString(\"5000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\ttcpPortSpec, err := portspec.NewPortSpecFromString(\"9000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\tpuInfo.Runtime.SetServices([]common.Service{\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    udpPortSpec,\n\t\t\t\t\t\tProtocol: 17,\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    tcpPortSpec,\n\t\t\t\t\t\tProtocol: 6,\n\t\t\t\t\t},\n\t\t\t\t})\n\n\t\t\t\tvar iprules policy.IPRuleList\n\n\t\t\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\t\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\t\t\ti.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\t\t\t\terr = i.iptv4.ConfigureRules(0,\n\t\t\t\t\t\"pu1\", puInfo)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\terr = i.iptv4.ipsetmanager.AddPortToServerPortSet(\"pu1\",\n\t\t\t\t\t\"8080\")\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\n\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\tSo(expectedMangleAfterPUInsertV4, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedMangleAfterPUInsertV4[chain])\n\t\t\t\t}\n\n\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\tSo(expectedNATAfterPUInsertV4, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedNATAfterPUInsertV4[chain])\n\t\t\t\t}\n\n\t\t\t\tConvey(\"When I update the policy, the update must result in correct state\", func() {\n\t\t\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\t\tAddresses: []string{\"30.0.0.0/24\"},\n\t\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\t\tPolicyID:  
\"1\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}\n\t\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}\n\t\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\t\"Context\",\n\t\t\t\t\t\t\"/ns1\",\n\t\t\t\t\t\tpolicy.Police,\n\t\t\t\t\t\tappACLs,\n\t\t\t\t\t\tnetACLs,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tipl,\n\t\t\t\t\t\t0,\n\t\t\t\t\t\t0,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\t[]string{},\n\t\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\t)\n\t\t\t\t\tpuInfoUpdated := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\t//\"/ns1\", common.HostPU)\n\t\t\t\t\t\t\"/ns1\", common.HostNetworkPU)\n\t\t\t\t\tpuInfoUpdated.Policy = policyrules\n\t\t\t\t\tpuInfoUpdated.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t\t})\n\n\t\t\t\t\tvar iprules policy.IPRuleList\n\n\t\t\t\t\tiprules = append(iprules, puInfoUpdated.Policy.ApplicationACLs()...)\n\t\t\t\t\tiprules = append(iprules, puInfoUpdated.Policy.NetworkACLs()...)\n\n\t\t\t\t\ti.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\n\t\t\t\t\terr := i.iptv4.UpdateRules(1,\n\t\t\t\t\t\t\"pu1\", puInfoUpdated, puInfo)\n\t\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\t\ti.iptv4.ipsetmanager.DestroyUnusedIPsets()\n\n\t\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\t\tSo(expectedMangleAfterPUUpdateV4, ShouldContainKey, chain)\n\t\t\t\t\t\tSo(rules, ShouldResemble, 
expectedMangleAfterPUUpdateV4[chain])\n\t\t\t\t\t}\n\n\t\t\t\t\tConvey(\"When I delete the same rule, the chains must be restored in the global state\", func() {\n\t\t\t\t\t\terr = i.iptv4.ipsetmanager.DeletePortFromServerPortSet(\"pu1\",\n\t\t\t\t\t\t\t\"8080\")\n\t\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t\t\terr = i.iptv4.DeleteRules(1,\n\t\t\t\t\t\t\t\"pu1\",\n\t\t\t\t\t\t\t\"0\",\n\t\t\t\t\t\t\t\"5000\",\n\t\t\t\t\t\t\t\"10\",\n\t\t\t\t\t\t\t\"\", puInfoUpdated)\n\t\t\t\t\t\ti.iptv4.ipsetmanager.RemoveExternalNets(\"pu1\")\n\t\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t\t\tt := i.iptv4.impl.RetrieveTable()\n\t\t\t\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\t\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\t\t\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\t\t\tSo(expectedGlobalMangleChainsV4, ShouldContainKey, chain)\n\t\t\t\t\t\t\tSo(rules, ShouldResemble, expectedGlobalMangleChainsV4[chain])\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\t\t\tif len(rules) > 0 {\n\t\t\t\t\t\t\t\tSo(expectedGlobalNATChainsV4, ShouldContainKey, chain)\n\t\t\t\t\t\t\t\tSo(rules, ShouldResemble, expectedGlobalNATChainsV4[chain])\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t})\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nvar (\n\texpectedGlobalMangleChainsV6 = map[string][]string{\n\t\t"TRI-Nfq-IN": {\n\t\t\t"-j MARK --set-mark 67",\n\t\t\t"-m mark --mark 67 -j NFQUEUE --queue-balance 0:3 --queue-bypass",\n\t\t},\n\t\t"TRI-Nfq-OUT": {\n\t\t\t"-j MARK --set-mark 0",\n\t\t\t"-m mark --mark 0 -j NFQUEUE --queue-balance 0:3 --queue-bypass",\n\t\t},\n\t\t"INPUT": {\n\t\t\t"-m set ! --match-set TRI-v6-Excluded src -j TRI-Net",\n\t\t},\n\t\t"OUTPUT": {\n\t\t\t"-m set ! 
--match-set TRI-v6-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-p udp --dport 53 -j ACCEPT\",\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-j TRI-Prx-App\",\n\t\t\t\"-m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\",\n\t\t\t\"-j TRI-Svc-App\",\n\t\t\t\"-j TRI-Hst-App\",\n\t\t},\n\t\t\"TRI-Net\": {\n\t\t\t\"-p udp --sport 53 -j ACCEPT\",\n\t\t\t\"-j TRI-Prx-Net\",\n\t\t\t\"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61167 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m set --match-set TRI-v6-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-j TRI-Pid-Net\",\n\t\t\t\"-j TRI-Svc-Net\",\n\t\t\t\"-j TRI-Hst-Net\",\n\t\t},\n\t\t\"TRI-Pid-App\": {},\n\t\t\"TRI-Pid-Net\": {},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\t}\n\n\texpectedGlobalNATChainsV6 = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v6-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded dst -j TRI-Redir-App\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedMangleAfterPUInsertV6 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\n\t\t\t\"-j MARK --set-mark 67\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t},\n\t\t\"TRI-Nfq-OUT\": {\n\t\t\t\"-j MARK --set-mark 0\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! 
--match-set TRI-v6-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-p udp --dport 53 -j ACCEPT\",\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-j TRI-Prx-App\", \"-m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\",\n\t\t\t\"-j TRI-Svc-App\",\n\t\t\t\"-j TRI-Hst-App\",\n\t\t},\n\t\t\"TRI-Net\": {\n\t\t\t\"-p udp --sport 53 -j ACCEPT\",\n\t\t\t\"-j TRI-Prx-Net\",\n\t\t\t\"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61167 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m set --match-set TRI-v6-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-j TRI-Pid-Net\",\n\t\t\t\"-j TRI-Svc-Net\",\n\t\t\t\"-j TRI-Hst-Net\",\n\t\t},\n\t\t\"TRI-Pid-App\": {},\n\t\t\"TRI-Pid-Net\": {},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst dst,dst -m mark ! 
--mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {\n\t\t\t\"-p icmp -m comment --comment Server-specific-chain -j MARK --set-mark 10\",\n\t\t\t\"-p tcp -m multiport --source-ports 9000 -m comment --comment Server-specific-chain -j MARK --set-mark 10\",\n\t\t\t\"-p tcp -m multiport --source-ports 9000 -m comment --comment Server-specific-chain -j TRI-App-pu1N7uS6--0\",\n\t\t\t\"-p udp -m multiport --source-ports 5000 -m comment --comment Server-specific-chain -j MARK --set-mark 10\",\n\t\t\t\"-p udp -m mark --mark 10 -m addrtype --src-type LOCAL -m addrtype --dst-type LOCAL -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:3\",\n\t\t\t\"-m comment --comment traffic-same-pu -p udp -m mark --mark 10 -m addrtype --src-type LOCAL -m addrtype --dst-type LOCAL -j ACCEPT\",\n\t\t\t\"-p udp -m multiport --source-ports 5000 -m comment --comment Server-specific-chain -j TRI-App-pu1N7uS6--0\",\n\t\t},\n\t\t\"TRI-Svc-Net\": {\n\t\t\t\"-p tcp -m multiport --destination-ports 9000 -m comment --comment Container-specific-chain -j TRI-Net-pu1N7uS6--0\",\n\t\t\t\"-m comment --comment traffic-same-pu -p udp -m mark --mark 10 -m addrtype --src-type LOCAL -m addrtype --dst-type LOCAL -j ACCEPT\",\n\t\t\t\"-p udp -m multiport --destination-ports 5000 -m comment --comment Container-specific-chain -j TRI-Net-pu1N7uS6--0\",\n\t\t},\n\n\t\t\"TRI-Net-pu1N7uS6--0\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p UDP -m set --match-set TRI-v6-ext-6zlJIvP3B68= src -m state --state ESTABLISHED 
-m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p TCP -m set --match-set TRI-v6-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v6-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v6-ext-IuSLsD1R-mE= src -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p icmp -j NFQUEUE --queue-balance 0:3\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v6-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-s ::/0 -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s ::/0 -m state ! --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s ::/0 -j DROP\",\n\t\t},\n\t\t\"TRI-App-pu1N7uS6--0\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v6-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p UDP -m set --match-set TRI-v6-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! --match-set TRI-v6-TargetUDP dst --match multiport --dports 443 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p UDP -m set --match-set TRI-v6-ext-6zlJIvP3B68= dst -m string ! --string n30njxq7bmiwr6dtxq --algo bm --to 128 -m set ! 
--match-set TRI-v6-TargetUDP dst --match multiport --dports 443 -j ACCEPT\",\n\t\t\t\"-p icmpv6 -m set --match-set TRI-v6-ext-w5frVvhsnpU= dst -j ACCEPT\",\n\t\t\t\"-p UDP -m set --match-set TRI-v6-ext-IuSLsD1R-mE= dst -m state --state ESTABLISHED -m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p icmp -j NFQUEUE --queue-balance 0:3\",\n\t\t\t\"-m set --match-set TRI-v6-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\",\n\t\t\t\"-m set --match-set TRI-v6-TargetTCP dst -p tcp -j MARK --set-mark 40\",\n\t\t\t\"-p udp -m set --match-set TRI-v6-TargetUDP dst -j MARK --set-mark 40\",\n\t\t\t\"-m mark --mark 40 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\",\n\t\t\t\"-d ::/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-d ::/0 -m state ! --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-d ::/0 -j DROP\",\n\t\t},\n\t}\n\n\texpectedNATAfterPUInsertV6 = map[string][]string{\n\t\t\"PREROUTING\": {\n\t\t\t\"-p tcp -m addrtype --dst-type LOCAL -m set ! --match-set TRI-v6-Excluded src -j TRI-Redir-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded dst -j TRI-Redir-App\",\n\t\t},\n\t\t\"TRI-Redir-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst dst,dst -m mark ! --mark 0x40 -m multiport --source-ports 9000 -j REDIRECT --to-ports 0\",\n\t\t\t\"-p udp --dport 53 -m mark ! --mark 0x40 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"TRI-Redir-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv dst -m mark ! 
--mark 0x40 -j REDIRECT --to-ports 0\",\n\t\t},\n\t\t\"POSTROUTING\": {\n\t\t\t\"-p udp -m addrtype --src-type LOCAL -m multiport --source-ports 5000 -j ACCEPT\",\n\t\t},\n\t}\n\n\texpectedMangleAfterPUUpdateV6 = map[string][]string{\n\t\t\"TRI-Nfq-IN\": {\n\t\t\t\"-j MARK --set-mark 67\",\n\t\t\t\"-m mark --mark 67 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t},\n\t\t\"TRI-Nfq-OUT\": {\n\t\t\t\"-j MARK --set-mark 0\",\n\t\t\t\"-m mark --mark 0 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t},\n\t\t\"INPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded src -j TRI-Net\",\n\t\t},\n\t\t\"OUTPUT\": {\n\t\t\t\"-m set ! --match-set TRI-v6-Excluded dst -j TRI-App\",\n\t\t},\n\t\t\"TRI-App\": {\n\t\t\t\"-p udp --dport 53 -j ACCEPT\",\n\t\t\t\"-m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-j TRI-Prx-App\",\n\t\t\t\"-m connmark --mark 61167 -j ACCEPT\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-m mark --mark 1073741922 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-OUT\",\n\t\t\t\"-j TRI-Pid-App\",\n\t\t\t\"-j TRI-Svc-App\",\n\t\t\t\"-j TRI-Hst-App\",\n\t\t},\n\t\t\"TRI-Net\": {\n\t\t\t\"-p udp --sport 53 -j ACCEPT\",\n\t\t\t\"-j TRI-Prx-Net\",\n\t\t\t\"-p tcp -m mark --mark 66 -j CONNMARK --set-mark 61167\",\n\t\t\t\"-p tcp -m mark --mark 66 -j ACCEPT\",\n\t\t\t\"-m connmark --mark 61167 -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\",\n\t\t\t\"-m set --match-set TRI-v6-TargetTCP src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m string --string n30njxq7bmiwr6dtxq --algo bm --to 65535 -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m connmark --mark 61165 -m comment --comment Drop UDP ACL -j DROP\",\n\t\t\t\"-m connmark --mark 61166 -p udp -j ACCEPT\",\n\t\t\t\"-j TRI-Pid-Net\",\n\t\t\t\"-j TRI-Svc-Net\",\n\t\t\t\"-j TRI-Hst-Net\",\n\t\t},\n\t\t\"TRI-Pid-App\": {},\n\t\t\"TRI-Pid-Net\": {},\n\t\t\"TRI-Prx-App\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --sport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --sport 0 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst dst,dst -m mark ! --mark 0x40 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Prx-Net\": {\n\t\t\t\"-m mark --mark 0x40 -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-dst src,src -j ACCEPT\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-Proxy-pu19gtV-srv src -m addrtype --src-type LOCAL -j ACCEPT\",\n\t\t\t\"-p tcp -m tcp --dport 0 -j ACCEPT\",\n\t\t\t\"-p udp -m udp --dport 0 -j ACCEPT\",\n\t\t},\n\t\t\"TRI-Hst-App\": {},\n\t\t\"TRI-Hst-Net\": {},\n\t\t\"TRI-Svc-App\": {},\n\t\t\"TRI-Svc-Net\": {},\n\n\t\t\"TRI-Net-pu1N7uS6--1\": {\n\t\t\t\"-p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p TCP -m set --match-set TRI-v6-ext-w5frVvhsnpU= src -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p icmp -j NFQUEUE --queue-balance 0:3\",\n\t\t\t\"-p tcp -m set --match-set TRI-v6-TargetTCP src -m tcp --tcp-flags SYN NONE -j TRI-Nfq-IN\",\n\t\t\t\"-p udp -m set --match-set TRI-v6-TargetUDP src --match limit --limit 1000/s -j TRI-Nfq-IN\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-s ::/0 -m state 
--state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-s ::/0 -m state ! --state NEW -j NFLOG --nflog-group 11 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-s ::/0 -j DROP\",\n\t\t},\n\t\t\"TRI-App-pu1N7uS6--1\": {\n\t\t\t\"-p TCP -m set --match-set TRI-v6-ext-uNdc0vdcFZA= dst -m state --state NEW --match multiport --dports 80 -j DROP\",\n\t\t\t\"-p icmp -j NFQUEUE --queue-balance 0:3\",\n\t\t\t\"-m set --match-set TRI-v6-TargetTCP dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\",\n\t\t\t\"-m set --match-set TRI-v6-TargetTCP dst -p tcp -j MARK --set-mark 40\",\n\t\t\t\"-p udp -m set --match-set TRI-v6-TargetUDP dst -j MARK --set-mark 40\",\n\t\t\t\"-m mark --mark 40 -j NFQUEUE --queue-balance 0:3 --queue-bypass\",\n\t\t\t\"-p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\",\n\t\t\t\"-p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\",\n\t\t\t\"-d ::/0 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:6\",\n\t\t\t\"-d ::/0 -m state ! 
--state NEW -j NFLOG --nflog-group 10 --nflog-prefix 913787369:default:default:10\",\n\t\t\t\"-d ::/0 -j DROP\",\n\t\t},\n\t}\n)\n\nfunc Test_Rhel6ConfigureRulesV6(t *testing.T) {\n\n\tConvey(\"Given an iptables controller with a memory backend \", t, func() {\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"::/0\"},\n\t\t\tUDPTargetNetworks: []string{\"1120::/64\"},\n\t\t\tExcludedNetworks:  []string{\"::1\"},\n\t\t}\n\n\t\tcommitFunc := func(buf *bytes.Buffer) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tiptv4 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv4, ShouldNotBeNil)\n\n\t\tiptv6 := provider.NewCustomBatchProvider(&baseIpt{}, commitFunc, []string{\"nat\",\n\t\t\t\"mangle\"})\n\t\tSo(iptv6, ShouldNotBeNil)\n\n\t\tips := &memoryIPSetProvider{sets: map[string]*memoryIPSet{}}\n\t\ti, err := createTestInstance(ips, iptv4, iptv6, constants.LocalServer, policy.None)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(i, ShouldNotBeNil)\n\n\t\tConvey(\"When I start the controller, I should get the right global chains and ipsets\", func() {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tdefer cancel()\n\t\t\terr := i.Run(ctx)\n\t\t\ti.SetTargetNetworks(cfg) // nolint\n\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tt := i.iptv6.impl.RetrieveTable()\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tSo(len(t), ShouldEqual, 2)\n\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\n\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\tSo(expectedGlobalMangleChainsV6, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalMangleChainsV6[chain])\n\t\t\t}\n\n\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\tSo(expectedGlobalNATChainsV6, ShouldContainKey, chain)\n\t\t\t\tSo(rules, ShouldResemble, expectedGlobalNATChainsV6[chain])\n\t\t\t}\n\n\t\t\tConvey(\"When I configure a new set of rules, the ACLs must be correct\", func() {\n\n\t\t\t\tappACLs := 
policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1120::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1120::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s2\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1122::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"icmpv6\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"3\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"40.0.0.0/24\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"icmp\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"3\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1122::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\tAddresses: []string{\"1122::/64\"},\n\t\t\t\t\t\tPorts:     []string{\"443\"},\n\t\t\t\t\t\tProtocols: []string{\"UDP\"},\n\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\tAction:   
 policy.Accept,\n\t\t\t\t\t\t\tServiceID: \"s4\",\n\t\t\t\t\t\t\tPolicyID:  \"2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\"Context\",\n\t\t\t\t\t\"/ns1\",\n\t\t\t\t\tpolicy.Police,\n\t\t\t\t\tappACLs,\n\t\t\t\t\tnetACLs,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\tipl,\n\t\t\t\t\t0,\n\t\t\t\t\t0,\n\t\t\t\t\tnil,\n\t\t\t\t\tnil,\n\t\t\t\t\t[]string{},\n\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t)\n\t\t\t\tpuInfo := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\"/ns1\", common.HostNetworkPU)\n\t\t\t\tpuInfo.Policy = policyrules\n\t\t\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t})\n\n\t\t\t\tudpPortSpec, err := portspec.NewPortSpecFromString(\"5000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\ttcpPortSpec, err := portspec.NewPortSpecFromString(\"9000\", nil)\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\tpuInfo.Runtime.SetServices([]common.Service{\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    udpPortSpec,\n\t\t\t\t\t\tProtocol: 17,\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tPorts:    tcpPortSpec,\n\t\t\t\t\t\tProtocol: 6,\n\t\t\t\t\t},\n\t\t\t\t})\n\n\t\t\t\tvar iprules policy.IPRuleList\n\t\t\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\t\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\t\t\ti.iptv6.ipsetmanager.RegisterExternalNets(\"pu1\", iprules) // nolint\n\n\t\t\t\terr = i.ConfigureRules(0,\n\t\t\t\t\t\"pu1\", puInfo)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tt := i.iptv6.impl.RetrieveTable()\n\n\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\tSo(expectedMangleAfterPUInsertV6, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedMangleAfterPUInsertV6[chain])\n\t\t\t\t}\n\n\t\t\t\tfor chain, rules := range t[\"nat\"] 
{\n\t\t\t\t\tSo(expectedNATAfterPUInsertV6, ShouldContainKey, chain)\n\t\t\t\t\tSo(rules, ShouldResemble, expectedNATAfterPUInsertV6[chain])\n\t\t\t\t}\n\n\t\t\t\tConvey(\"When I update the policy, the update must result in correct state\", func() {\n\t\t\t\t\tappACLs := policy.IPRuleList{\n\t\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\t\tAddresses: []string{\"1120::/64\"},\n\t\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\t\tServiceID: \"s1\",\n\t\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}\n\t\t\t\t\tnetACLs := policy.IPRuleList{\n\t\t\t\t\t\tpolicy.IPRule{\n\t\t\t\t\t\t\tAddresses: []string{\"1122::/64\"},\n\t\t\t\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\t\t\t\tProtocols: []string{\"TCP\"},\n\t\t\t\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\t\t\t\tAction:    policy.Reject,\n\t\t\t\t\t\t\t\tServiceID: \"s3\",\n\t\t\t\t\t\t\t\tPolicyID:  \"1\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}\n\t\t\t\t\tipl := policy.ExtendedMap{}\n\t\t\t\t\tpolicyrules := policy.NewPUPolicy(\n\t\t\t\t\t\t\"Context\",\n\t\t\t\t\t\t\"/ns1\",\n\t\t\t\t\t\tpolicy.Police,\n\t\t\t\t\t\tappACLs,\n\t\t\t\t\t\tnetACLs,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tipl,\n\t\t\t\t\t\t0,\n\t\t\t\t\t\t0,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\tnil,\n\t\t\t\t\t\t[]string{},\n\t\t\t\t\t\tpolicy.EnforcerMapping,\n\t\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\t\tpolicy.Reject|policy.Log,\n\t\t\t\t\t)\n\t\t\t\t\tpuInfoUpdated := policy.NewPUInfo(\"Context\",\n\t\t\t\t\t\t\"/ns1\", common.HostNetworkPU)\n\t\t\t\t\tpuInfoUpdated.Policy = policyrules\n\t\t\t\t\tpuInfoUpdated.Runtime.SetOptions(policy.OptionsType{\n\t\t\t\t\t\tCgroupMark: \"10\",\n\t\t\t\t\t})\n\n\t\t\t\t\terr := i.UpdateRules(1,\n\t\t\t\t\t\t\"pu1\", puInfoUpdated, puInfo)\n\t\t\t\t\tSo(err, 
ShouldBeNil)\n\n\t\t\t\t\tt := i.iptv6.impl.RetrieveTable()\n\t\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\t\tSo(expectedMangleAfterPUUpdateV6, ShouldContainKey, chain)\n\t\t\t\t\t\tSo(rules, ShouldResemble, expectedMangleAfterPUUpdateV6[chain])\n\t\t\t\t\t}\n\n\t\t\t\t\tConvey(\"When I delete the same rule, the chains must be restored in the global state\", func() {\n\t\t\t\t\t\terr := i.DeleteRules(1,\n\t\t\t\t\t\t\t\"pu1\",\n\t\t\t\t\t\t\t\"0\",\n\t\t\t\t\t\t\t\"5000\",\n\t\t\t\t\t\t\t\"10\",\n\t\t\t\t\t\t\t\"\", puInfoUpdated)\n\t\t\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\t\t\tt := i.iptv6.impl.RetrieveTable()\n\n\t\t\t\t\t\tSo(t[\"mangle\"], ShouldNotBeNil)\n\t\t\t\t\t\tSo(t[\"nat\"], ShouldNotBeNil)\n\n\t\t\t\t\t\tfor chain, rules := range t[\"mangle\"] {\n\t\t\t\t\t\t\tSo(expectedGlobalMangleChainsV6, ShouldContainKey, chain)\n\t\t\t\t\t\t\tSo(rules, ShouldResemble, expectedGlobalMangleChainsV6[chain])\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tfor chain, rules := range t[\"nat\"] {\n\t\t\t\t\t\t\tif len(rules) > 0 {\n\t\t\t\t\t\t\t\tSo(expectedGlobalNATChainsV6, ShouldContainKey, chain)\n\t\t\t\t\t\t\t\tSo(rules, ShouldResemble, expectedGlobalNATChainsV6[chain])\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t})\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/iptables_windows_test.go",
    "content": "// +build windows\n\npackage iptablesctrl\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"strings\"\n\t\"sync\"\n\t\"syscall\"\n\t\"testing\"\n\t\"unsafe\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\ttacls \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/acls\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n)\n\nconst (\n\terrInvalidParameter   = syscall.Errno(0xC000000D)\n\terrInsufficientBuffer = syscall.Errno(122)\n\terrAlreadyExists      = syscall.Errno(183)\n)\n\ntype abi struct {\n\tfilters       map[string]map[string]bool\n\tipsets        map[string][]string\n\tipsetByID     map[int]string\n\tipsetsNomatch map[string][]string\n\tipsetCount    int\n\tsync.Mutex\n}\n\nfunc (a *abi) FrontmanOpenShared() (uintptr, error) {\n\treturn 1234, nil\n}\n\nfunc (a *abi) GetDestInfo(driverHandle, socket, destInfo uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) ApplyDestHandle(socket, destHandle uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) FreeDestHandle(destHandle uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) NewIpset(driverHandle, name, ipsetType, ipset uintptr) (uintptr, error) {\n\tif name == 0 || ipsetType == 0 || ipset == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tnameStr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(name))) // nolint:govet\n\tipsetPtr := (*int)(unsafe.Pointer(ipset))                                    // nolint:govet\n\ta.ipsetCount++\n\t*ipsetPtr = a.ipsetCount\n\ta.ipsets[nameStr] = make([]string, 0)\n\ta.ipsetsNomatch[nameStr] 
= make([]string, 0)\n\ta.ipsetByID[a.ipsetCount] = nameStr\n\treturn 1, nil\n}\n\nfunc (a *abi) GetIpset(driverHandle, name, ipset uintptr) (uintptr, error) {\n\tif name == 0 || ipset == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tnameStr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(name))) // nolint:govet\n\tipsetPtr := (*int)(unsafe.Pointer(ipset))                                    // nolint:govet\n\tfor k, v := range a.ipsetByID {\n\t\tif v == nameStr {\n\t\t\t*ipsetPtr = k\n\t\t\treturn 1, nil\n\t\t}\n\t}\n\treturn 0, errInvalidParameter\n}\n\nfunc (a *abi) DestroyAllIpsets(driverHandle, prefix uintptr) (uintptr, error) {\n\tif prefix == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tprefixStr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(prefix))) // nolint:govet\n\tfor k := range a.ipsets {\n\t\tif strings.HasPrefix(k, prefixStr) {\n\t\t\tfor id, name := range a.ipsetByID {\n\t\t\t\tif name == k {\n\t\t\t\t\tdelete(a.ipsetByID, id)\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tdelete(a.ipsets, k)\n\t\t}\n\t}\n\treturn 1, nil\n}\n\nfunc (a *abi) ListIpsets(driverHandle, ipsetNames, ipsetNamesSize, bytesReturned uintptr) (uintptr, error) {\n\tif bytesReturned == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tbytesReturnedPtr := (*uint32)(unsafe.Pointer(bytesReturned)) // nolint:govet\n\tfstr := \"\"\n\tfor k := range a.ipsets {\n\t\tfstr += k + \",\"\n\t}\n\tsizeNeeded := len(fstr) * 2\n\tif sizeNeeded == 0 {\n\t\t*bytesReturnedPtr = 0\n\t\treturn 1, nil\n\t}\n\tif int(ipsetNamesSize) < sizeNeeded {\n\t\t*bytesReturnedPtr = uint32(sizeNeeded)\n\t\treturn 0, errInsufficientBuffer\n\t}\n\tif ipsetNames == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\tbuf := (*[1 << 20]uint16)(unsafe.Pointer(ipsetNames))[: ipsetNamesSize/2 : ipsetNamesSize/2] // nolint:govet\n\tcopy(buf, syscall.StringToUTF16(fstr))                                       
                // nolint:staticcheck\n\tbuf[ipsetNamesSize/2-1] = 0\n\treturn 1, nil\n}\n\nfunc (a *abi) IpsetAdd(driverHandle, ipset, entry, timeout uintptr) (uintptr, error) {\n\treturn a.IpsetAddOption(driverHandle, ipset, entry, 0, timeout)\n}\n\nfunc (a *abi) IpsetAddOption(driverHandle, ipset, entry, option, timeout uintptr) (uintptr, error) {\n\tif entry == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tentryStr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(entry))) // nolint:govet\n\tid := int(ipset)\n\tname := a.ipsetByID[id]\n\tentries, ok := a.ipsets[name]\n\tif !ok {\n\t\treturn 0, errInvalidParameter\n\t}\n\tfor _, e := range entries {\n\t\tif e == entryStr {\n\t\t\treturn 0, errAlreadyExists\n\t\t}\n\t}\n\tif option != 0 {\n\t\toptionStr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(option))) // nolint:govet\n\t\tif optionStr == \"nomatch\" {\n\t\t\tentriesNomatch, ok := a.ipsetsNomatch[name]\n\t\t\tif !ok {\n\t\t\t\treturn 0, errInvalidParameter\n\t\t\t}\n\t\t\ta.ipsetsNomatch[name] = append(entriesNomatch, entryStr)\n\t\t}\n\t}\n\ta.ipsets[name] = append(entries, entryStr)\n\treturn 1, nil\n}\n\nfunc (a *abi) IpsetDelete(driverHandle, ipset, entry uintptr) (uintptr, error) {\n\tif entry == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tentryStr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(entry))) // nolint:govet\n\tid := int(ipset)\n\tname := a.ipsetByID[id]\n\tentries, ok := a.ipsets[name]\n\tif !ok {\n\t\treturn 0, errInvalidParameter\n\t}\n\tfor i, e := range entries {\n\t\tif e == entryStr {\n\t\t\ta.ipsets[name] = append(entries[:i], entries[i+1:]...)\n\t\t\tbreak\n\t\t}\n\t}\n\tif entriesNomatch, ok := a.ipsetsNomatch[name]; ok {\n\t\tfor i, e := range entriesNomatch {\n\t\t\tif e == entryStr {\n\t\t\t\ta.ipsetsNomatch[name] = append(entriesNomatch[:i], entriesNomatch[i+1:]...)\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\treturn 1, 
nil\n}\n\nfunc (a *abi) IpsetDestroy(driverHandle, ipset uintptr) (uintptr, error) {\n\ta.Lock()\n\tdefer a.Unlock()\n\tid := int(ipset)\n\tname := a.ipsetByID[id]\n\tif _, ok := a.ipsets[name]; !ok {\n\t\treturn 0, errInvalidParameter\n\t}\n\tdelete(a.ipsetByID, id)\n\tdelete(a.ipsets, name)\n\treturn 1, nil\n}\n\nfunc (a *abi) IpsetFlush(driverHandle, ipset uintptr) (uintptr, error) {\n\ta.Lock()\n\tdefer a.Unlock()\n\tid := int(ipset)\n\tname := a.ipsetByID[id]\n\tif _, ok := a.ipsets[name]; !ok {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.ipsets[name] = nil\n\treturn 1, nil\n}\n\nfunc (a *abi) IpsetTest(driverHandle, ipset, entry uintptr) (uintptr, error) {\n\tif entry == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tentryStr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(entry))) // nolint:govet\n\tid := int(ipset)\n\tname := a.ipsetByID[id]\n\tentries, ok := a.ipsets[name]\n\tif !ok {\n\t\treturn 0, errInvalidParameter\n\t}\n\tfor _, e := range entries {\n\t\t// TODO nomatch\n\t\tif e == entryStr {\n\t\t\treturn 1, nil\n\t\t}\n\t}\n\t// not found\n\treturn 0, nil\n}\n\nfunc (a *abi) PacketFilterStart(frontman, firewallName, receiveCallback, loggingCallback uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) PacketFilterClose() (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) PacketFilterForward(info, packet uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) AppendFilter(driverHandle, outbound, filterName, isGotoFilter uintptr) (uintptr, error) {\n\treturn a.InsertFilter(driverHandle, outbound, 1000, filterName, isGotoFilter)\n}\n\nfunc (a *abi) InsertFilter(driverHandle, outbound, priority, filterName, isGotoFilter uintptr) (uintptr, error) {\n\tif filterName == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tstr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(filterName))) // nolint:govet\n\ta.filters[str] = 
make(map[string]bool)\n\treturn 1, nil\n}\n\nfunc (a *abi) DestroyFilter(driverHandle, filterName uintptr) (uintptr, error) {\n\tif filterName == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tstr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(filterName))) // nolint:govet\n\tdelete(a.filters, str)\n\treturn 1, nil\n}\n\nfunc (a *abi) EmptyFilter(driverHandle, filterName uintptr) (uintptr, error) {\n\tif filterName == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tstr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(filterName))) // nolint:govet\n\ta.filters[str] = make(map[string]bool)\n\treturn 1, nil\n}\n\nfunc (a *abi) GetFilterList(driverHandle, outbound, buffer, bufferSize, bytesReturned uintptr) (uintptr, error) {\n\tif bytesReturned == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tbytesReturnedPtr := (*uint32)(unsafe.Pointer(bytesReturned)) // nolint:govet\n\tfstr := \"\"\n\tfor k := range a.filters {\n\t\tfstr += k + \",\"\n\t}\n\tsizeNeeded := len(fstr) * 2\n\tif sizeNeeded == 0 {\n\t\t*bytesReturnedPtr = 0\n\t\treturn 1, nil\n\t}\n\tif int(bufferSize) < sizeNeeded {\n\t\t*bytesReturnedPtr = uint32(sizeNeeded)\n\t\treturn 0, errInsufficientBuffer\n\t}\n\tif buffer == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\tbuf := (*[1 << 20]uint16)(unsafe.Pointer(buffer))[: bufferSize/2 : bufferSize/2] // nolint:govet\n\tcopy(buf, syscall.StringToUTF16(fstr))                                           // nolint:staticcheck\n\tbuf[bufferSize/2-1] = 0\n\treturn 1, nil\n}\n\nfunc (a *abi) AppendFilterCriteria(driverHandle, filterName, criteriaName, ruleSpec, ipsetRuleSpecs, ipsetRuleSpecCount uintptr) (uintptr, error) {\n\tif filterName == 0 || criteriaName == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tfstr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(filterName)))   // nolint:govet\n\tcstr := 
frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(criteriaName))) // nolint:govet\n\tm, ok := a.filters[fstr]\n\tif !ok {\n\t\treturn 0, errInvalidParameter\n\t}\n\tm[cstr] = true\n\treturn 1, nil\n}\n\nfunc (a *abi) DeleteFilterCriteria(driverHandle, filterName, criteriaName uintptr) (uintptr, error) {\n\tif filterName == 0 || criteriaName == 0 {\n\t\treturn 0, errInvalidParameter\n\t}\n\ta.Lock()\n\tdefer a.Unlock()\n\tfstr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(filterName)))   // nolint:govet\n\tcstr := frontman.WideCharPointerToString((*uint16)(unsafe.Pointer(criteriaName))) // nolint:govet\n\tm, ok := a.filters[fstr]\n\tif !ok {\n\t\treturn 0, errInvalidParameter\n\t}\n\tdelete(m, cstr)\n\treturn 1, nil\n}\n\nfunc (a *abi) ListIpsetsDetail(driverHandle, format, ipsetNames, ipsetNamesSize, bytesReturned uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (a *abi) GetCriteriaList(driverHandle, format, criteriaList, criteriaListSize, bytesReturned uintptr) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc Test_WindowsNomatchIpsets(t *testing.T) {\n\n\ta := &abi{\n\t\tfilters:       make(map[string]map[string]bool),\n\t\tipsets:        make(map[string][]string),\n\t\tipsetsNomatch: make(map[string][]string),\n\t\tipsetByID:     make(map[int]string),\n\t}\n\tfrontman.Driver = a\n\n\tgetEnforcerPID = func() int { return 111 }\n\tgetCnsAgentMgrPID = func() int { return 222 }\n\tgetCnsAgentBootPID = func() int { return 333 }\n\n\tConvey(\"Given a valid instance\", t, func() {\n\n\t\tfq := newFilterQueueWithDefaults()\n\t\timpl, err := NewInstance(fq, constants.LocalServer, false, nil, \"\", policy.None)\n\t\tSo(err, ShouldBeNil)\n\n\t\terr = impl.Run(context.Background())\n\t\tSo(err, ShouldBeNil)\n\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"!2001:db8:1234::/48\", \"!10.10.10.0/24\", \"::/0\", \"!10.0.0.0/8\", \"10.10.0.0/16\", \"0.0.0.0/0\"},\n\t\t\tUDPTargetNetworks: []string{\"!10.10.10.0/24\", 
\"10.0.0.0/8\"},\n\t\t\tExcludedNetworks:  []string{\"!192.168.56.15/32\", \"127.0.0.1\", \"192.168.56.0/24\"},\n\t\t}\n\t\terr = impl.SetTargetNetworks(cfg) // nolint\n\t\tSo(err, ShouldBeNil)\n\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-TargetTCP\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v4-TargetTCP\"], ShouldContain, \"10.0.0.0/8\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v4-TargetTCP\"], ShouldContain, \"10.10.10.0/24\")\n\t\tSo(a.ipsets[\"TRI-v4-TargetTCP\"], ShouldContain, \"10.10.0.0/16\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v4-TargetTCP\"], ShouldNotContain, \"10.10.0.0/16\")\n\t\tSo(a.ipsets[\"TRI-v4-TargetTCP\"], ShouldContain, \"0.0.0.0/1\")\n\t\tSo(a.ipsets[\"TRI-v4-TargetTCP\"], ShouldContain, \"128.0.0.0/1\")\n\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v6-TargetTCP\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v6-TargetTCP\"], ShouldContain, \"2001:db8:1234::/48\")\n\t\tSo(a.ipsets[\"TRI-v6-TargetTCP\"], ShouldContain, \"::/1\")\n\t\tSo(a.ipsets[\"TRI-v6-TargetTCP\"], ShouldContain, \"8000::/1\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v6-TargetTCP\"], ShouldNotContain, \"::/1\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v6-TargetTCP\"], ShouldNotContain, \"8000::/1\")\n\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-TargetUDP\")\n\t\tSo(a.ipsets[\"TRI-v4-TargetUDP\"], ShouldContain, \"10.0.0.0/8\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v4-TargetUDP\"], ShouldNotContain, \"10.0.0.0/8\")\n\t\tSo(a.ipsets[\"TRI-v4-TargetUDP\"], ShouldContain, \"10.10.10.0/24\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v4-TargetUDP\"], ShouldContain, \"10.10.10.0/24\")\n\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-Excluded\")\n\t\tSo(a.ipsets[\"TRI-v4-Excluded\"], ShouldContain, \"127.0.0.1\")\n\t\tSo(a.ipsets[\"TRI-v4-Excluded\"], ShouldContain, \"192.168.56.0/24\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v4-Excluded\"], ShouldNotContain, \"192.168.56.0/24\")\n\t\tSo(a.ipsets[\"TRI-v4-Excluded\"], ShouldContain, \"192.168.56.15/32\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v4-Excluded\"], ShouldContain, \"192.168.56.15/32\")\n\n\t\t// update target 
networks\n\t\tcfgNew := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"!10.10.0.0/16\", \"0.0.0.0/0\"},\n\t\t\tUDPTargetNetworks: []string{},\n\t\t\tExcludedNetworks:  []string{\"192.168.56.0/24\", \"!192.168.56.15/32\", \"127.0.0.1\"},\n\t\t}\n\t\terr = impl.SetTargetNetworks(cfgNew)\n\t\tSo(err, ShouldBeNil)\n\n\t\tSo(a.ipsets[\"TRI-v4-TargetTCP\"], ShouldNotContain, \"10.0.0.0/8\")\n\t\tSo(a.ipsets[\"TRI-v4-TargetTCP\"], ShouldContain, \"10.10.0.0/16\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v4-TargetTCP\"], ShouldContain, \"10.10.0.0/16\")\n\t\tSo(a.ipsets[\"TRI-v4-TargetTCP\"], ShouldContain, \"0.0.0.0/1\")\n\t\tSo(a.ipsets[\"TRI-v4-TargetTCP\"], ShouldContain, \"128.0.0.0/1\")\n\n\t\tSo(a.ipsets[\"TRI-v4-TargetUDP\"], ShouldBeEmpty)\n\n\t\tSo(a.ipsets[\"TRI-v4-Excluded\"], ShouldContain, \"127.0.0.1\")\n\t\tSo(a.ipsets[\"TRI-v4-Excluded\"], ShouldContain, \"192.168.56.0/24\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v4-Excluded\"], ShouldNotContain, \"192.168.56.0/24\")\n\t\tSo(a.ipsets[\"TRI-v4-Excluded\"], ShouldContain, \"192.168.56.15/32\")\n\t\tSo(a.ipsetsNomatch[\"TRI-v4-Excluded\"], ShouldContain, \"192.168.56.15/32\")\n\t})\n}\n\nfunc Test_WindowsNomatchIpsetsInExternalNetworks(t *testing.T) {\n\n\ta := &abi{\n\t\tfilters:       make(map[string]map[string]bool),\n\t\tipsets:        make(map[string][]string),\n\t\tipsetsNomatch: make(map[string][]string),\n\t\tipsetByID:     make(map[int]string),\n\t}\n\tfrontman.Driver = a\n\n\tgetEnforcerPID = func() int { return 111 }\n\tgetCnsAgentMgrPID = func() int { return 222 }\n\tgetCnsAgentBootPID = func() int { return 333 }\n\n\tConvey(\"Given a valid instance\", t, func() {\n\n\t\tfq := newFilterQueueWithDefaults()\n\t\timpl, err := NewInstance(fq, constants.LocalServer, false, nil, \"\", policy.None)\n\t\tSo(err, ShouldBeNil)\n\n\t\terr = impl.Run(context.Background())\n\t\tSo(err, ShouldBeNil)\n\n\t\tcfg := &runtime.Configuration{\n\t\t\tTCPTargetNetworks: []string{\"!2001:db8:1234::/48\", \"!10.10.10.0/24\", 
\"::/0\", \"!10.0.0.0/8\", \"10.10.0.0/16\", \"0.0.0.0/0\"},\n\t\t\tUDPTargetNetworks: []string{\"!10.10.10.0/24\", \"10.0.0.0/8\"},\n\t\t\tExcludedNetworks:  []string{\"!192.168.56.15/32\", \"127.0.0.1\", \"192.168.56.0/24\"},\n\t\t}\n\t\terr = impl.SetTargetNetworks(cfg) // nolint\n\t\tSo(err, ShouldBeNil)\n\n\t\t// Setup external networks\n\t\tappACLs := policy.IPRuleList{\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"10.0.0.0/8\", \"!10.0.0.0/16\", \"!10.0.2.0/24\", \"10.0.2.7\"},\n\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\tServiceID: \"a1\",\n\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t},\n\t\t\t},\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"::/0\", \"!2001:db8:1234::/48\"},\n\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\tServiceID: \"a3\",\n\t\t\t\t\tPolicyID:  \"1234a\",\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t\tnetACLs := policy.IPRuleList{\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"0.0.0.0/0\", \"!10.0.0.0/8\", \"10.0.0.0/16\", \"!10.0.2.8\"},\n\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\tServiceID: \"a2\",\n\t\t\t\t\tPolicyID:  \"123b\",\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\n\t\tpolicyRules := policy.NewPUPolicy(\"Context\", \"/ns1\", policy.Police, appACLs, netACLs, nil, nil, nil, nil, nil, nil, nil, 20992, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject|policy.Log, policy.Reject|policy.Log)\n\n\t\tpuInfo := policy.NewPUInfo(\"Context\", \"/ns1\", common.HostPU)\n\t\tpuInfo.Policy = policyRules\n\t\tpuInfo.Runtime = 
policy.NewPURuntimeWithDefaults()\n\t\tpuInfo.Runtime.SetPUType(common.HostPU)\n\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: \"10\",\n\t\t})\n\n\t\t// configure rules\n\t\tvar iprules policy.IPRuleList\n\t\tiprules = append(iprules, puInfo.Policy.ApplicationACLs()...)\n\t\tiprules = append(iprules, puInfo.Policy.NetworkACLs()...)\n\t\terr = impl.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules)\n\t\tSo(err, ShouldBeNil)\n\t\terr = impl.iptv6.ipsetmanager.RegisterExternalNets(\"pu1\", iprules)\n\t\tSo(err, ShouldBeNil)\n\n\t\terr = impl.ConfigureRules(0, \"pu1\", puInfo)\n\t\tSo(err, ShouldBeNil)\n\n\t\t// Check ipsets\n\t\tsetName := impl.iptv4.ipsetmanager.GetACLIPsetsNames(appACLs[0:1])[0]\n\t\tSo(a.ipsets[setName], ShouldContain, \"10.0.0.0/8\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"10.0.0.0/16\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"10.0.2.0/24\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"10.0.2.7\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldNotContain, \"10.0.0.0/8\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldContain, \"10.0.0.0/16\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldContain, \"10.0.2.0/24\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldNotContain, \"10.0.2.7\")\n\n\t\tsetName = impl.iptv6.ipsetmanager.GetACLIPsetsNames(appACLs[1:2])[0]\n\t\tSo(a.ipsets[setName], ShouldContain, \"::/1\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"8000::/1\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"2001:db8:1234::/48\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldNotContain, \"::/1\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldNotContain, \"8000::/1\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldContain, \"2001:db8:1234::/48\")\n\n\t\tsetName = impl.iptv4.ipsetmanager.GetACLIPsetsNames(netACLs[0:1])[0]\n\t\tSo(a.ipsets[setName], ShouldContain, \"0.0.0.0/1\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"128.0.0.0/1\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"10.0.0.0/8\")\n\t\tSo(a.ipsets[setName], ShouldContain, 
\"10.0.0.0/16\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"10.0.2.8\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldNotContain, \"0.0.0.0/1\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldNotContain, \"128.0.0.0/1\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldContain, \"10.0.0.0/8\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldNotContain, \"10.0.0.0/16\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldContain, \"10.0.2.8\")\n\n\t\t// Reconfigure external networks\n\t\tappACLs = policy.IPRuleList{\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"10.0.0.0/8\", \"!10.0.0.0/16\", \"10.0.2.0/24\", \"!10.0.2.7\"},\n\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\tServiceID: \"a1\",\n\t\t\t\t\tPolicyID:  \"123a\",\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t\tnetACLs = policy.IPRuleList{\n\t\t\tpolicy.IPRule{\n\t\t\t\tAddresses: []string{\"0.0.0.0/0\", \"10.0.0.0/8\", \"!10.0.2.0/24\"},\n\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t\tProtocols: []string{constants.TCPProtoNum},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tAction:    policy.Accept | policy.Log,\n\t\t\t\t\tServiceID: \"a2\",\n\t\t\t\t\tPolicyID:  \"123b\",\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\n\t\tpolicyRules = policy.NewPUPolicy(\"Context\", \"/ns1\", policy.Police, appACLs, netACLs, nil, nil, nil, nil, nil, nil, nil, 20992, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject|policy.Log, policy.Reject|policy.Log)\n\n\t\tpuInfoUpdated := policy.NewPUInfo(\"Context\", \"/ns1\", common.HostPU)\n\t\tpuInfoUpdated.Policy = policyRules\n\t\tpuInfoUpdated.Runtime = policy.NewPURuntimeWithDefaults()\n\t\tpuInfoUpdated.Runtime.SetPUType(common.HostPU)\n\t\tpuInfoUpdated.Runtime.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: \"10\",\n\t\t})\n\n\t\t// Reconfigure rules\n\t\tiprules = nil\n\t\tiprules = append(iprules, puInfoUpdated.Policy.ApplicationACLs()...)\n\t\tiprules = append(iprules, 
puInfoUpdated.Policy.NetworkACLs()...)\n\t\terr = impl.iptv4.ipsetmanager.RegisterExternalNets(\"pu1\", iprules)\n\t\tSo(err, ShouldBeNil)\n\n\t\terr = impl.UpdateRules(1, \"pu1\", puInfoUpdated, puInfo)\n\t\tSo(err, ShouldBeNil)\n\n\t\timpl.iptv4.ipsetmanager.DestroyUnusedIPsets()\n\n\t\t// Check ipsets again\n\t\tsetName = impl.iptv4.ipsetmanager.GetACLIPsetsNames(appACLs[0:1])[0]\n\t\tSo(a.ipsets[setName], ShouldContain, \"10.0.0.0/8\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"10.0.0.0/16\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"10.0.2.0/24\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"10.0.2.7\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldNotContain, \"10.0.0.0/8\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldContain, \"10.0.0.0/16\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldNotContain, \"10.0.2.0/24\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldContain, \"10.0.2.7\")\n\n\t\tsetName = impl.iptv4.ipsetmanager.GetACLIPsetsNames(netACLs[0:1])[0]\n\t\tSo(a.ipsets[setName], ShouldContain, \"0.0.0.0/1\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"128.0.0.0/1\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"10.0.0.0/8\")\n\t\tSo(a.ipsets[setName], ShouldContain, \"10.0.2.0/24\")\n\t\tSo(a.ipsets[setName], ShouldNotContain, \"10.0.2.8\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldNotContain, \"0.0.0.0/1\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldNotContain, \"128.0.0.0/1\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldNotContain, \"10.0.0.0/8\")\n\t\tSo(a.ipsetsNomatch[setName], ShouldContain, \"10.0.2.0/24\")\n\n\t\t// Configure and check acl cache\n\t\taclCache := tacls.NewACLCache()\n\t\terr = aclCache.AddRuleList(puInfoUpdated.Policy.ApplicationACLs())\n\t\tSo(err, ShouldBeNil)\n\n\t\tdefaultFlowPolicy := &policy.FlowPolicy{Action: policy.Reject | policy.Log, PolicyID: \"default\", ServiceID: \"default\"}\n\n\t\treport, _, err := aclCache.GetMatchingAction(net.ParseIP(\"10.0.2.7\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\t\tSo(err, 
ShouldNotBeNil)\n\t\tSo(report.Action, ShouldEqual, policy.Reject|policy.Log)\n\n\t\treport, _, err = aclCache.GetMatchingAction(net.ParseIP(\"10.0.2.8\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(report.Action, ShouldEqual, policy.Accept|policy.Log)\n\n\t\treport, _, err = aclCache.GetMatchingAction(net.ParseIP(\"10.0.3.1\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\t\tSo(err, ShouldNotBeNil)\n\t\tSo(report.Action, ShouldEqual, policy.Reject|policy.Log)\n\n\t\treport, _, err = aclCache.GetMatchingAction(net.ParseIP(\"10.1.3.1\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(report.Action, ShouldEqual, policy.Accept|policy.Log)\n\n\t\treport, _, err = aclCache.GetMatchingAction(net.ParseIP(\"11.1.3.1\"), 80, packet.IPProtocolTCP, defaultFlowPolicy)\n\t\tSo(err, ShouldNotBeNil)\n\t\tSo(report.Action, ShouldEqual, policy.Reject|policy.Log)\n\t})\n}\n\nfunc newFilterQueueWithDefaults() fqconfig.FilterQueue {\n\treturn fqconfig.NewFilterQueue(0, []string{\"0.0.0.0/0\", \"::/0\"})\n}\n\nfunc Test_WindowsConfigureRulesV4(t *testing.T) {\n\n\ta := &abi{\n\t\tfilters:       make(map[string]map[string]bool),\n\t\tipsets:        make(map[string][]string),\n\t\tipsetsNomatch: make(map[string][]string),\n\t\tipsetByID:     make(map[int]string),\n\t}\n\tfrontman.Driver = a\n\n\tgetEnforcerPID = func() int { return 111 }\n\tgetCnsAgentMgrPID = func() int { return 222 }\n\tgetCnsAgentBootPID = func() int { return 333 }\n\n\tConvey(\"Given a valid instance\", t, func() {\n\n\t\tfq := newFilterQueueWithDefaults()\n\t\timpl, err := NewInstance(fq, constants.LocalServer, false, nil, \"\", policy.None)\n\t\tSo(err, ShouldBeNil)\n\n\t\terr = impl.Run(context.Background())\n\t\tSo(err, ShouldBeNil)\n\n\t\t// check filters\n\t\tSo(a.filters, ShouldHaveLength, 8)\n\t\tSo(a.filters, ShouldContainKey, \"GlobalRules-OUTPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"GlobalRules-INPUT-v4\")\n\t\tSo(a.filters, 
ShouldContainKey, \"ProcessRules-OUTPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"ProcessRules-INPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"HostSvcRules-OUTPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"HostSvcRules-INPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"HostPU-OUTPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"HostPU-INPUT-v4\")\n\n\t\t// check ipsets\n\t\tSo(a.ipsets, ShouldHaveLength, 9)\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-TargetTCP\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-TargetUDP\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-Excluded\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v6-TargetTCP\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v6-TargetUDP\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v6-Excluded\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-WindowsAllIPs\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v6-WindowsAllIPs\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-WindowsDNSServer\")\n\t\tSo(a.ipsets[\"TRI-v4-WindowsAllIPs\"], ShouldContain, \"0.0.0.0/0\")\n\t\tSo(a.ipsets[\"TRI-v4-WindowsDNSServer\"], ShouldContain, \"0.0.0.0/0\")\n\t\tSo(a.ipsets[\"TRI-v6-WindowsAllIPs\"], ShouldContain, \"::/0\")\n\n\t\tcfg := &runtime.Configuration{}\n\t\timpl.SetTargetNetworks(cfg) // nolint\n\t\tSo(err, ShouldBeNil)\n\n\t\tpolicyRules := policy.NewPUPolicy(\"Context\", \"/ns1\", policy.Police, nil, nil, nil, nil, nil, nil, nil, nil, nil, 20992, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject|policy.Log, policy.Reject|policy.Log)\n\n\t\tpuInfo := policy.NewPUInfo(\"Context\", \"/ns1\", common.HostPU)\n\t\tpuInfo.Policy = policyRules\n\t\tpuInfo.Runtime = policy.NewPURuntimeWithDefaults()\n\t\tpuInfo.Runtime.SetPUType(common.HostPU)\n\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: \"10\",\n\t\t})\n\n\t\t// configure rules\n\t\terr = impl.ConfigureRules(1, \"ID\", puInfo)\n\t\tSo(err, ShouldBeNil)\n\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v4\"], ShouldContainKey, \"-m set --match-set 
TRI-v4-Excluded dstIP -j ACCEPT_ONCE\")\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v4\"], ShouldContainKey, fmt.Sprintf(\"-m owner --pid-owner %d -j ACCEPT\", getEnforcerPID()))\n\t\tSo(a.filters[\"GlobalRules-INPUT-v4\"], ShouldContainKey, \"-m set --match-set TRI-v4-Excluded srcIP -j ACCEPT_ONCE\")\n\t\tSo(a.filters[\"GlobalRules-INPUT-v4\"], ShouldContainKey, fmt.Sprintf(\"-m owner --pid-owner %d -j ACCEPT\", getEnforcerPID()))\n\t\tSo(a.filters[\"GlobalRules-INPUT-v4\"], ShouldContainKey, \"-p udp --sports 53 -m set --match-set TRI-v4-WindowsDNSServer srcIP -j NFQUEUE_FORCE -j MARK 83\")\n\t\tSo(a.filters[\"ProcessRules-OUTPUT-v4\"], ShouldBeEmpty)\n\t\tSo(a.filters[\"ProcessRules-INPUT-v4\"], ShouldBeEmpty)\n\t\tSo(a.filters[\"HostSvcRules-OUTPUT-v4\"], ShouldBeEmpty)\n\t\tSo(a.filters[\"HostSvcRules-INPUT-v4\"], ShouldBeEmpty)\n\t\tSo(a.filters[\"HostPU-INPUT-v4\"], ShouldContainKey, \"-p tcp -m set --match-set TRI-v4-Proxy-IDeCFL-srv dstPort -j REDIRECT --to-ports 20992\")\n\t\tSo(a.filters[\"TRI-App-IDtxit7H-1-v4\"], ShouldContainKey, \"-p tcp --tcp-flags 1,1 -m set --match-set TRI-v4-TargetTCP dstIP -j ACCEPT\")\n\t\tSo(a.filters[\"TRI-App-IDtxit7H-1-v4\"], ShouldContainKey, \"-p udp -m set --match-set TRI-v4-TargetUDP dstIP -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-App-IDtxit7H-1-v4\"], ShouldContainKey, \"-p tcp -m set --match-set TRI-v4-TargetTCP dstIP -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-App-IDtxit7H-1-v4\"], ShouldContainKey, \"-p tcp --tcp-flags 18,18 -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-App-IDtxit7H-1-v4\"], ShouldContainKey, \"-m set --match-set TRI-v4-WindowsAllIPs dstIP -j NFLOG --nflog-group 10 --nflog-prefix 1215753766:default:default:10\")\n\t\tSo(a.filters[\"TRI-App-IDtxit7H-1-v4\"], ShouldContainKey, \"-m set --match-set TRI-v4-WindowsAllIPs dstIP -j DROP -j NFLOG --nflog-group 10 --nflog-prefix 1215753766:default:default:6\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v4\"], ShouldContainKey, \"-p tcp --tcp-flags 45,0 
--tcp-option 34 -j NFQUEUE MARK 10\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v4\"], ShouldContainKey, \"-p udp -m string --string n30njxq7bmiwr6dtxq --offset 4 -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v4\"], ShouldContainKey, \"-p udp -m string --string n30njxq7bmiwr6dtxq --offset 6 -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v4\"], ShouldContainKey, \"-p tcp --tcp-flags 2,0 -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v4\"], ShouldContainKey, \"-p tcp --tcp-flags 18,18 -m set --match-set TRI-v4-TargetTCP srcIP -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v4\"], ShouldContainKey, \"-m set --match-set TRI-v4-WindowsAllIPs srcIP -j NFLOG --nflog-group 11 --nflog-prefix 1215753766:default:default:10\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v4\"], ShouldContainKey, \"-m set --match-set TRI-v4-WindowsAllIPs srcIP -j DROP -j NFLOG --nflog-group 11 --nflog-prefix 1215753766:default:default:6\")\n\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-Proxy-IDeCFL-srv\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-Proxy-IDeCFL-dst\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-ProcPort-IDeCFL\")\n\n\t\t// configure rules for process wrap\n\t\tpuInfo = policy.NewPUInfo(\"Context\", \"/ns1\", common.WindowsProcessPU)\n\t\tpuInfo.Policy = policyRules\n\t\tpuInfo.Runtime = policy.NewPURuntimeWithDefaults()\n\t\tpuInfo.Runtime.SetPUType(common.WindowsProcessPU)\n\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: \"10\",\n\t\t})\n\n\t\terr = impl.ConfigureRules(1, \"1234\", puInfo)\n\t\tSo(err, ShouldBeNil)\n\n\t\tconst pidFilterSubstring = \"-m owner --pid-owner 1234 --pid-childrenonly\"\n\t\tSo(len(a.filters[\"ProcessRules-OUTPUT-v4\"]), ShouldNotBeZeroValue)\n\t\tSo(len(a.filters[\"ProcessRules-INPUT-v4\"]), ShouldNotBeZeroValue)\n\n\t\tfor filter := range a.filters {\n\n\t\t\tif strings.HasPrefix(filter, \"ProcessRules\") {\n\t\t\t\tfor k := range a.filters[\"ProcessRules-OUTPUT-v4\"] 
{\n\t\t\t\t\tSo(k, ShouldContainSubstring, pidFilterSubstring)\n\t\t\t\t}\n\t\t\t\tfor k := range a.filters[\"ProcessRules-INPUT-v4\"] {\n\t\t\t\t\tSo(k, ShouldContainSubstring, pidFilterSubstring)\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tfor k := range a.filters[filter] {\n\t\t\t\t\tSo(k, ShouldNotContainSubstring, pidFilterSubstring)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t})\n\n}\n\nfunc Test_WindowsConfigureRulesV6(t *testing.T) {\n\n\ta := &abi{\n\t\tfilters:       make(map[string]map[string]bool),\n\t\tipsets:        make(map[string][]string),\n\t\tipsetsNomatch: make(map[string][]string),\n\t\tipsetByID:     make(map[int]string),\n\t}\n\tfrontman.Driver = a\n\n\tgetEnforcerPID = func() int { return 111 }\n\tgetCnsAgentMgrPID = func() int { return 222 }\n\tgetCnsAgentBootPID = func() int { return 333 }\n\n\tConvey(\"Given a valid instance with ipv6 enabled\", t, func() {\n\n\t\tfq := newFilterQueueWithDefaults()\n\t\timpl, err := NewInstance(fq, constants.LocalServer, true, nil, \"\", policy.None)\n\t\tSo(err, ShouldBeNil)\n\n\t\terr = impl.Run(context.Background())\n\t\tSo(err, ShouldBeNil)\n\n\t\t// check filters\n\t\tSo(a.filters, ShouldHaveLength, 16)\n\t\tSo(a.filters, ShouldContainKey, \"GlobalRules-OUTPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"GlobalRules-INPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"ProcessRules-OUTPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"ProcessRules-INPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"HostSvcRules-OUTPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"HostSvcRules-INPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"HostPU-OUTPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"HostPU-INPUT-v4\")\n\t\tSo(a.filters, ShouldContainKey, \"GlobalRules-OUTPUT-v6\")\n\t\tSo(a.filters, ShouldContainKey, \"GlobalRules-INPUT-v6\")\n\t\tSo(a.filters, ShouldContainKey, \"ProcessRules-OUTPUT-v6\")\n\t\tSo(a.filters, 
ShouldContainKey, \"ProcessRules-INPUT-v6\")\n\t\tSo(a.filters, ShouldContainKey, \"HostSvcRules-OUTPUT-v6\")\n\t\tSo(a.filters, ShouldContainKey, \"HostSvcRules-INPUT-v6\")\n\t\tSo(a.filters, ShouldContainKey, \"HostPU-OUTPUT-v6\")\n\t\tSo(a.filters, ShouldContainKey, \"HostPU-INPUT-v6\")\n\n\t\t// check ipsets\n\t\tSo(a.ipsets, ShouldHaveLength, 9)\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-TargetTCP\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-TargetUDP\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-Excluded\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v6-TargetTCP\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v6-TargetUDP\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v6-Excluded\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-WindowsAllIPs\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v6-WindowsAllIPs\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v4-WindowsDNSServer\")\n\t\tSo(a.ipsets[\"TRI-v4-WindowsAllIPs\"], ShouldContain, \"0.0.0.0/0\")\n\t\tSo(a.ipsets[\"TRI-v4-WindowsDNSServer\"], ShouldContain, \"0.0.0.0/0\")\n\t\tSo(a.ipsets[\"TRI-v6-WindowsAllIPs\"], ShouldContain, \"::/0\")\n\n\t\tcfg := &runtime.Configuration{}\n\t\timpl.SetTargetNetworks(cfg) // nolint\n\t\tSo(err, ShouldBeNil)\n\n\t\tpolicyRules := policy.NewPUPolicy(\"Context\", \"/ns1\", policy.Police, nil, nil, nil, nil, nil, nil, nil, nil, nil, 20992, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject|policy.Log, policy.Reject|policy.Log)\n\n\t\tpuInfo := policy.NewPUInfo(\"Context\", \"/ns1\", common.HostPU)\n\t\tpuInfo.Policy = policyRules\n\t\tpuInfo.Runtime = policy.NewPURuntimeWithDefaults()\n\t\tpuInfo.Runtime.SetPUType(common.HostPU)\n\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: \"10\",\n\t\t})\n\n\t\t// configure rules\n\t\terr = impl.ConfigureRules(1, \"ID\", puInfo)\n\t\tSo(err, ShouldBeNil)\n\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v6\"], ShouldContainKey, \"-m set --match-set TRI-v6-Excluded dstIP -j 
ACCEPT_ONCE\")\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v6\"], ShouldContainKey, \"-p icmpv6 --icmp-type 133/0 -j ACCEPT\")\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v6\"], ShouldContainKey, \"-p icmpv6 --icmp-type 134/0 -j ACCEPT\")\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v6\"], ShouldContainKey, \"-p icmpv6 --icmp-type 135/0 -j ACCEPT\")\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v6\"], ShouldContainKey, \"-p icmpv6 --icmp-type 136/0 -j ACCEPT\")\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v6\"], ShouldContainKey, \"-p icmpv6 --icmp-type 141/0 -j ACCEPT\")\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v6\"], ShouldContainKey, \"-p icmpv6 --icmp-type 142/0 -j ACCEPT\")\n\n\t\tSo(a.filters[\"GlobalRules-INPUT-v6\"], ShouldContainKey, \"-m set --match-set TRI-v6-Excluded srcIP -j ACCEPT_ONCE\")\n\t\tSo(a.filters[\"GlobalRules-INPUT-v6\"], ShouldContainKey, \"-p icmpv6 --icmp-type 133/0 -j ACCEPT\")\n\t\tSo(a.filters[\"GlobalRules-INPUT-v6\"], ShouldContainKey, \"-p icmpv6 --icmp-type 134/0 -j ACCEPT\")\n\t\tSo(a.filters[\"GlobalRules-INPUT-v6\"], ShouldContainKey, \"-p icmpv6 --icmp-type 135/0 -j ACCEPT\")\n\t\tSo(a.filters[\"GlobalRules-INPUT-v6\"], ShouldContainKey, \"-p icmpv6 --icmp-type 136/0 -j ACCEPT\")\n\t\tSo(a.filters[\"GlobalRules-INPUT-v6\"], ShouldContainKey, \"-p icmpv6 --icmp-type 141/0 -j ACCEPT\")\n\t\tSo(a.filters[\"GlobalRules-INPUT-v6\"], ShouldContainKey, \"-p icmpv6 --icmp-type 142/0 -j ACCEPT\")\n\t\tSo(a.filters[\"ProcessRules-OUTPUT-v6\"], ShouldBeEmpty)\n\t\tSo(a.filters[\"ProcessRules-INPUT-v6\"], ShouldBeEmpty)\n\t\tSo(a.filters[\"HostSvcRules-OUTPUT-v6\"], ShouldBeEmpty)\n\t\tSo(a.filters[\"HostSvcRules-INPUT-v6\"], ShouldBeEmpty)\n\t\tSo(a.filters[\"TRI-App-IDtxit7H-1-v6\"], ShouldContainKey, \"-p tcp --tcp-flags 1,1 -m set --match-set TRI-v6-TargetTCP dstIP -j ACCEPT\")\n\t\tSo(a.filters[\"TRI-App-IDtxit7H-1-v6\"], ShouldContainKey, \"-p udp -m set --match-set TRI-v6-TargetUDP dstIP -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-App-IDtxit7H-1-v6\"], 
ShouldContainKey, \"-p tcp -m set --match-set TRI-v6-TargetTCP dstIP -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-App-IDtxit7H-1-v6\"], ShouldContainKey, \"-p tcp --tcp-flags 18,18 -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-App-IDtxit7H-1-v6\"], ShouldContainKey, \"-m set --match-set TRI-v6-WindowsAllIPs dstIP -j NFLOG --nflog-group 10 --nflog-prefix 1215753766:default:default:10\")\n\t\tSo(a.filters[\"TRI-App-IDtxit7H-1-v6\"], ShouldContainKey, \"-m set --match-set TRI-v6-WindowsAllIPs dstIP -j DROP -j NFLOG --nflog-group 10 --nflog-prefix 1215753766:default:default:6\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v6\"], ShouldContainKey, \"-p udp -m string --string n30njxq7bmiwr6dtxq --offset 4 -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v6\"], ShouldContainKey, \"-p udp -m string --string n30njxq7bmiwr6dtxq --offset 6 -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v6\"], ShouldContainKey, \"-p tcp --tcp-flags 45,0 --tcp-option 34 -j NFQUEUE MARK 10\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v6\"], ShouldContainKey, \"-p tcp --tcp-flags 2,0 -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v6\"], ShouldContainKey, \"-p tcp --tcp-flags 18,18 -m set --match-set TRI-v6-TargetTCP srcIP -j NFQUEUE -j MARK 10\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v6\"], ShouldContainKey, \"-m set --match-set TRI-v6-WindowsAllIPs srcIP -j NFLOG --nflog-group 11 --nflog-prefix 1215753766:default:default:10\")\n\t\tSo(a.filters[\"TRI-Net-IDtxit7H-1-v6\"], ShouldContainKey, \"-m set --match-set TRI-v6-WindowsAllIPs srcIP -j DROP -j NFLOG --nflog-group 11 --nflog-prefix 1215753766:default:default:6\")\n\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v6-Proxy-IDeCFL-srv\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v6-Proxy-IDeCFL-dst\")\n\t\tSo(a.ipsets, ShouldContainKey, \"TRI-v6-ProcPort-IDeCFL\")\n\t})\n\n}\n\nfunc Test_WindowsConfigureRulesManagedByCns(t *testing.T) {\n\n\ta := &abi{\n\t\tfilters:       
make(map[string]map[string]bool),\n\t\tipsets:        make(map[string][]string),\n\t\tipsetsNomatch: make(map[string][]string),\n\t\tipsetByID:     make(map[int]string),\n\t}\n\tfrontman.Driver = a\n\n\tgetEnforcerPID = func() int { return 111 }\n\n\tConvey(\"Given a valid instance where managed by CNS\", t, func() {\n\n\t\tgetCnsAgentMgrPID = func() int { return 222 }\n\t\tgetCnsAgentBootPID = func() int { return 333 }\n\n\t\tfq := newFilterQueueWithDefaults()\n\t\timpl, err := NewInstance(fq, constants.LocalServer, false, nil, \"\", policy.None)\n\t\tSo(err, ShouldBeNil)\n\n\t\terr = impl.Run(context.Background())\n\t\tSo(err, ShouldBeNil)\n\n\t\tcfg := &runtime.Configuration{}\n\t\timpl.SetTargetNetworks(cfg) // nolint\n\t\tSo(err, ShouldBeNil)\n\n\t\tpolicyRules := policy.NewPUPolicy(\"Context\", \"/ns1\", policy.Police, nil, nil, nil, nil, nil, nil, nil, nil, nil, 20992, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject|policy.Log, policy.Reject|policy.Log)\n\n\t\tpuInfo := policy.NewPUInfo(\"Context\", \"/ns1\", common.HostPU)\n\t\tpuInfo.Policy = policyRules\n\t\tpuInfo.Runtime = policy.NewPURuntimeWithDefaults()\n\t\tpuInfo.Runtime.SetPUType(common.HostPU)\n\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: \"10\",\n\t\t})\n\n\t\t// configure rules\n\t\terr = impl.ConfigureRules(1, \"ID\", puInfo)\n\t\tSo(err, ShouldBeNil)\n\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v4\"], ShouldContainKey, fmt.Sprintf(\"-m owner --pid-owner %d -j ACCEPT\", getEnforcerPID()))\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v4\"], ShouldContainKey, fmt.Sprintf(\"-m owner --pid-owner %d -j ACCEPT\", getCnsAgentMgrPID()))\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v4\"], ShouldContainKey, fmt.Sprintf(\"-m owner --pid-owner %d --pid-children -j ACCEPT\", getCnsAgentBootPID()))\n\t\tSo(a.filters[\"GlobalRules-INPUT-v4\"], ShouldContainKey, fmt.Sprintf(\"-m owner --pid-owner %d -j ACCEPT\", getEnforcerPID()))\n\t\tSo(a.filters[\"GlobalRules-INPUT-v4\"], 
ShouldContainKey, fmt.Sprintf(\"-m owner --pid-owner %d -j ACCEPT\", getCnsAgentMgrPID()))\n\t\tSo(a.filters[\"GlobalRules-INPUT-v4\"], ShouldContainKey, fmt.Sprintf(\"-m owner --pid-owner %d --pid-children -j ACCEPT\", getCnsAgentBootPID()))\n\t})\n\n\tConvey(\"Given a valid instance where not managed by CNS\", t, func() {\n\n\t\tgetCnsAgentMgrPID = func() int { return -1 }\n\t\tgetCnsAgentBootPID = func() int { return -1 }\n\n\t\tfq := newFilterQueueWithDefaults()\n\t\timpl, err := NewInstance(fq, constants.LocalServer, false, nil, \"\", policy.None)\n\t\tSo(err, ShouldBeNil)\n\n\t\terr = impl.Run(context.Background())\n\t\tSo(err, ShouldBeNil)\n\n\t\tcfg := &runtime.Configuration{}\n\t\timpl.SetTargetNetworks(cfg) // nolint\n\t\tSo(err, ShouldBeNil)\n\n\t\tpolicyRules := policy.NewPUPolicy(\"Context\", \"/ns1\", policy.Police, nil, nil, nil, nil, nil, nil, nil, nil, nil, 20992, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject|policy.Log, policy.Reject|policy.Log)\n\n\t\tpuInfo := policy.NewPUInfo(\"Context\", \"/ns1\", common.HostPU)\n\t\tpuInfo.Policy = policyRules\n\t\tpuInfo.Runtime = policy.NewPURuntimeWithDefaults()\n\t\tpuInfo.Runtime.SetPUType(common.HostPU)\n\t\tpuInfo.Runtime.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: \"10\",\n\t\t})\n\n\t\t// configure rules\n\t\terr = impl.ConfigureRules(1, \"ID\", puInfo)\n\t\tSo(err, ShouldBeNil)\n\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v4\"], ShouldContainKey, fmt.Sprintf(\"-m owner --pid-owner %d -j ACCEPT\", getEnforcerPID()))\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v4\"], ShouldNotContainKey, fmt.Sprintf(\"-m owner --pid-owner %d -j ACCEPT\", getCnsAgentMgrPID()))\n\t\tSo(a.filters[\"GlobalRules-OUTPUT-v4\"], ShouldNotContainKey, fmt.Sprintf(\"-m owner --pid-owner %d --pid-children -j ACCEPT\", getCnsAgentBootPID()))\n\t\tSo(a.filters[\"GlobalRules-INPUT-v4\"], ShouldContainKey, fmt.Sprintf(\"-m owner --pid-owner %d -j ACCEPT\", getEnforcerPID()))\n\t\tSo(a.filters[\"GlobalRules-INPUT-v4\"], 
ShouldNotContainKey, fmt.Sprintf(\"-m owner --pid-owner %d -j ACCEPT\", getCnsAgentMgrPID()))\n\t\tSo(a.filters[\"GlobalRules-INPUT-v4\"], ShouldNotContainKey, fmt.Sprintf(\"-m owner --pid-owner %d --pid-children -j ACCEPT\", getCnsAgentBootPID()))\n\t})\n\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/ipv4.go",
    "content": "package iptablesctrl\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"strings\"\n\n\t\"github.com/aporeto-inc/go-ipset/ipset\"\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/gaia/protocols\"\n)\n\nconst (\n\t// IPv4DefaultIP is the default ip address of ipv4 subnets\n\tIPv4DefaultIP = \"0.0.0.0/0\"\n)\n\nvar ipsetV4Param *ipset.Params\n\ntype ipv4 struct {\n\tipt provider.IptablesProvider\n}\n\nfunc init() {\n\tipsetV4Param = &ipset.Params{}\n}\n\n// GetIPv4Impl creates the instance of ipv4 struct which implements the interface\n// ipImpl\nfunc GetIPv4Impl() (IPImpl, error) {\n\tipt, err := provider.NewGoIPTablesProviderV4([]string{\"mangle\"}, CustomQOSChain)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to initialize iptables provider: %s\", err)\n\t}\n\n\treturn &ipv4{ipt: ipt}, nil\n}\n\nfunc (i *ipv4) IPVersion() int {\n\treturn IPV4\n}\n\nfunc (i *ipv4) IPFilter() func(net.IP) bool {\n\tipv4Filter := func(ip net.IP) bool {\n\t\treturn (ip.To4() != nil)\n\t}\n\n\treturn ipv4Filter\n}\n\nfunc (i *ipv4) GetDefaultIP() string {\n\treturn IPv4DefaultIP\n}\n\nfunc (i *ipv4) NeedICMP() bool {\n\treturn false\n}\n\nfunc (i *ipv4) ProtocolAllowed(proto string) bool {\n\n\treturn !(strings.ToUpper(proto) == protocols.L4ProtocolICMP6)\n}\n\nfunc (i *ipv4) Append(table, chain string, rulespec ...string) error {\n\treturn i.ipt.Append(table, chain, rulespec...)\n}\n\nfunc (i *ipv4) Insert(table, chain string, pos int, rulespec ...string) error {\n\treturn i.ipt.Insert(table, chain, pos, rulespec...)\n}\n\nfunc (i *ipv4) ListChains(table string) ([]string, error) {\n\treturn i.ipt.ListChains(table)\n}\n\nfunc (i *ipv4) ClearChain(table, chain string) error {\n\treturn i.ipt.ClearChain(table, chain)\n}\n\nfunc (i *ipv4) DeleteChain(table, chain string) error {\n\treturn i.ipt.DeleteChain(table, chain)\n}\n\nfunc (i *ipv4) NewChain(table, chain string) error {\n\treturn i.ipt.NewChain(table, chain)\n}\n\nfunc 
(i *ipv4) Commit() error {\n\treturn i.ipt.Commit()\n}\n\nfunc (i *ipv4) Delete(table, chain string, rulespec ...string) error {\n\treturn i.ipt.Delete(table, chain, rulespec...)\n}\n\nfunc (i *ipv4) RetrieveTable() map[string]map[string][]string {\n\treturn i.ipt.RetrieveTable()\n}\n\nfunc (i *ipv4) ResetRules(subs string) error {\n\treturn i.ipt.ResetRules(subs)\n}\n\nfunc (i *ipv4) ListRules(table, chain string) ([]string, error) {\n\treturn i.ipt.ListRules(table, chain)\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/ipv6.go",
    "content": "package iptablesctrl\n\nimport (\n\t\"net\"\n\t\"strings\"\n\n\t\"github.com/aporeto-inc/go-ipset/ipset\"\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/gaia/protocols\"\n)\n\nconst (\n\t// IPv6DefaultIP is the default IP subnet of ipv6\n\tIPv6DefaultIP = \"::/0\"\n)\n\ntype ipv6 struct {\n\tipt         provider.IptablesProvider\n\tipv6Enabled bool\n}\n\nvar ipsetV6Param *ipset.Params\n\nfunc init() {\n\tipsetV6Param = &ipset.Params{HashFamily: \"inet6\"}\n}\n\nfunc (i *ipv6) IPVersion() int {\n\treturn IPV6\n}\n\nfunc (i *ipv6) IPFilter() func(net.IP) bool {\n\tipv6Filter := func(ip net.IP) bool {\n\t\treturn (ip.To4() == nil)\n\t}\n\n\treturn ipv6Filter\n}\n\nfunc (i *ipv6) GetDefaultIP() string {\n\treturn IPv6DefaultIP\n}\n\nfunc (i *ipv6) NeedICMP() bool {\n\treturn true\n}\n\nfunc (i *ipv6) ProtocolAllowed(proto string) bool {\n\treturn !(strings.ToUpper(proto) == protocols.L4ProtocolICMP)\n}\n\nfunc (i *ipv6) Append(table, chain string, rulespec ...string) error {\n\tif !i.ipv6Enabled || i.ipt == nil {\n\t\treturn nil\n\t}\n\n\treturn i.ipt.Append(table, chain, rulespec...)\n}\n\nfunc (i *ipv6) Insert(table, chain string, pos int, rulespec ...string) error {\n\tif !i.ipv6Enabled || i.ipt == nil {\n\t\treturn nil\n\t}\n\n\treturn i.ipt.Insert(table, chain, pos, rulespec...)\n}\n\nfunc (i *ipv6) ListChains(table string) ([]string, error) {\n\tif !i.ipv6Enabled || i.ipt == nil {\n\t\treturn nil, nil\n\t}\n\n\treturn i.ipt.ListChains(table)\n}\n\nfunc (i *ipv6) ClearChain(table, chain string) error {\n\tif !i.ipv6Enabled || i.ipt == nil {\n\t\treturn nil\n\t}\n\n\treturn i.ipt.ClearChain(table, chain)\n}\n\nfunc (i *ipv6) DeleteChain(table, chain string) error {\n\tif !i.ipv6Enabled || i.ipt == nil {\n\t\treturn nil\n\t}\n\n\treturn i.ipt.DeleteChain(table, chain)\n}\n\nfunc (i *ipv6) NewChain(table, chain string) error {\n\tif !i.ipv6Enabled || i.ipt == nil {\n\t\treturn nil\n\t}\n\n\treturn 
i.ipt.NewChain(table, chain)\n}\n\nfunc (i *ipv6) Commit() error {\n\tif !i.ipv6Enabled || i.ipt == nil {\n\t\treturn nil\n\t}\n\n\treturn i.ipt.Commit()\n}\n\nfunc (i *ipv6) Delete(table, chain string, rulespec ...string) error {\n\tif !i.ipv6Enabled || i.ipt == nil {\n\t\treturn nil\n\t}\n\n\treturn i.ipt.Delete(table, chain, rulespec...)\n}\n\nfunc (i *ipv6) RetrieveTable() map[string]map[string][]string {\n\tif !i.ipv6Enabled || i.ipt == nil {\n\t\treturn nil\n\t}\n\n\treturn i.ipt.RetrieveTable()\n}\n\nfunc (i *ipv6) ResetRules(subs string) error {\n\tif !i.ipv6Enabled || i.ipt == nil {\n\t\treturn nil\n\t}\n\n\treturn i.ipt.ResetRules(subs)\n}\n\nfunc (i *ipv6) ListRules(table, chain string) ([]string, error) {\n\tif !i.ipv6Enabled || i.ipt == nil {\n\t\treturn nil, nil\n\t}\n\n\treturn i.ipt.ListRules(table, chain)\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/ipv6_nonwindows.go",
    "content": "// +build !windows\n\npackage iptablesctrl\n\nimport (\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n)\n\n// GetIPv6Impl creates the instance of ipv6 struct which implements\n// the interface ipImpl\nfunc GetIPv6Impl(ipv6Enabled bool) (IPImpl, error) {\n\tipt, err := provider.NewGoIPTablesProviderV6([]string{\"mangle\"}, CustomQOSChain)\n\tif err == nil {\n\t\t// test if the system supports ip6tables\n\t\tif _, err = ipt.ListChains(\"mangle\"); err == nil {\n\t\t\treturn &ipv6{ipt: ipt, ipv6Enabled: ipv6Enabled}, nil\n\t\t}\n\t}\n\n\treturn &ipv6{ipt: nil, ipv6Enabled: false}, nil\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/ipv6_windows.go",
    "content": "// +build windows\n\npackage iptablesctrl\n\nimport (\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n)\n\n// GetIPv6Impl creates the instance of ipv6 struct which implements\n// the interface ipImpl\nfunc GetIPv6Impl(ipv6Enabled bool) (IPImpl, error) {\n\tif ipt, err := provider.NewGoIPTablesProviderV6(nil, \"\"); err == nil {\n\t\treturn &ipv6{ipt: ipt, ipv6Enabled: ipv6Enabled}, nil\n\t}\n\treturn &ipv6{ipt: nil, ipv6Enabled: false}, nil\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/legacyacls.go",
    "content": "package iptablesctrl\n\n// legacyProxyRules creates all the proxy specific rules.\nimport (\n\t\"text/template\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\n// This refers to the pu chain rules for pus in older distros like RH 6.9/Ubuntu 14.04. The rules\n// consider source ports to identify packets from the process.\nfunc (i *iptables) legacyPuChainRules(contextID, appChain string, netChain string, mark string, tcpPorts, udpPorts string, proxyPort string, proxyPortSetName string,\n\tappSection, netSection string, puType common.PUType, dnsProxyPort string, dnsServerIP string) [][]string {\n\n\tiptableCgroupSection := appSection\n\tiptableNetSection := netSection\n\trules := [][]string{}\n\n\tif tcpPorts != \"0\" {\n\t\trules = append(rules, [][]string{\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-p\", tcpProto,\n\t\t\t\t\"-m\", \"multiport\",\n\t\t\t\t\"--source-ports\", tcpPorts,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"Server-specific-chain\",\n\t\t\t\t\"-j\", \"MARK\", \"--set-mark\", mark,\n\t\t\t},\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-p\", tcpProto,\n\t\t\t\t\"-m\", \"multiport\",\n\t\t\t\t\"--source-ports\", tcpPorts,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"Server-specific-chain\",\n\t\t\t\t\"-j\", appChain,\n\t\t\t},\n\t\t\t{\n\t\t\t\tnetPacketIPTableContext,\n\t\t\t\tiptableNetSection,\n\t\t\t\t\"-p\", tcpProto,\n\t\t\t\t\"-m\", \"multiport\",\n\t\t\t\t\"--destination-ports\", tcpPorts,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"Container-specific-chain\",\n\t\t\t\t\"-j\", netChain,\n\t\t\t}}...)\n\t}\n\n\tif udpPorts != \"0\" {\n\t\trules = append(rules, [][]string{\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-p\", udpProto,\n\t\t\t\t\"-m\", \"multiport\",\n\t\t\t\t\"--source-ports\", udpPorts,\n\t\t\t\t\"-m\", \"comment\", 
\"--comment\", \"Server-specific-chain\",\n\t\t\t\t\"-j\", \"MARK\", \"--set-mark\", mark,\n\t\t\t},\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-p\", udpProto, \"-m\", \"mark\", \"--mark\", mark,\n\t\t\t\t\"-m\", \"addrtype\", \"--src-type\", \"LOCAL\",\n\t\t\t\t\"-m\", \"addrtype\", \"--dst-type\", \"LOCAL\",\n\t\t\t\t\"-m\", \"state\", \"--state\", \"NEW\",\n\t\t\t\t\"-j\", \"NFLOG\", \"--nflog-group\", \"10\",\n\t\t\t\t\"--nflog-prefix\", policy.DefaultAcceptLogPrefix(contextID),\n\t\t\t},\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"traffic-same-pu\",\n\t\t\t\t\"-p\", udpProto, \"-m\", \"mark\", \"--mark\", mark,\n\t\t\t\t\"-m\", \"addrtype\", \"--src-type\", \"LOCAL\",\n\t\t\t\t\"-m\", \"addrtype\", \"--dst-type\", \"LOCAL\",\n\t\t\t\t\"-j\", \"ACCEPT\",\n\t\t\t},\n\t\t\t{\n\t\t\t\tappPacketIPTableContext,\n\t\t\t\tiptableCgroupSection,\n\t\t\t\t\"-p\", udpProto,\n\t\t\t\t\"-m\", \"multiport\",\n\t\t\t\t\"--source-ports\", udpPorts,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"Server-specific-chain\",\n\t\t\t\t\"-j\", appChain,\n\t\t\t},\n\t\t\t{\n\t\t\t\tnetPacketIPTableContext,\n\t\t\t\tiptableNetSection,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"traffic-same-pu\",\n\t\t\t\t\"-p\", udpProto, \"-m\", \"mark\", \"--mark\", mark,\n\t\t\t\t\"-m\", \"addrtype\", \"--src-type\", \"LOCAL\",\n\t\t\t\t\"-m\", \"addrtype\", \"--dst-type\", \"LOCAL\",\n\t\t\t\t\"-j\", \"ACCEPT\",\n\t\t\t},\n\t\t\t{\n\t\t\t\tnetPacketIPTableContext,\n\t\t\t\tiptableNetSection,\n\t\t\t\t\"-p\", udpProto,\n\t\t\t\t\"-m\", \"multiport\",\n\t\t\t\t\"--destination-ports\", udpPorts,\n\t\t\t\t\"-m\", \"comment\", \"--comment\", \"Container-specific-chain\",\n\t\t\t\t\"-j\", netChain,\n\t\t\t}}...)\n\t}\n\n\tif puType == common.HostPU {\n\t\t// Add a capture all traffic rule for host pu. 
This traps all traffic going out\n\t\t// of the box.\n\n\t\trules = append(rules, []string{\n\t\t\tappPacketIPTableContext,\n\t\t\tiptableCgroupSection,\n\t\t\t\"-m\", \"comment\", \"--comment\", \"capture all outgoing traffic\",\n\t\t\t\"-j\", appChain,\n\t\t})\n\t}\n\n\treturn append(rules, i.legacyProxyRules(tcpPorts, proxyPort, proxyPortSetName, mark, dnsProxyPort, dnsServerIP)...)\n}\n\nfunc (i *iptables) legacyProxyRules(tcpPorts string, proxyPort string, proxyPortSetName string, cgroupMark string, dnsProxyPort string, dnsServerIP string) [][]string {\n\tdestSetName, srvSetName := i.getSetNames(proxyPortSetName)\n\n\taclInfo := ACLInfo{\n\t\tMangleTable:         appPacketIPTableContext,\n\t\tNatTable:            appProxyIPTableContext,\n\t\tMangleProxyAppChain: proxyOutputChain,\n\t\tMangleProxyNetChain: proxyInputChain,\n\t\tNatProxyNetChain:    natProxyInputChain,\n\t\tNatProxyAppChain:    natProxyOutputChain,\n\t\tCgroupMark:          cgroupMark,\n\t\tDestIPSet:           destSetName,\n\t\tSrvIPSet:            srvSetName,\n\t\tProxyPort:           proxyPort,\n\t\tProxyMark:           proxyMark,\n\t\tTCPPorts:            tcpPorts,\n\t\tDNSProxyPort:        dnsProxyPort,\n\t\tDNSServerIP:         dnsServerIP,\n\t}\n\n\ttmpl := template.Must(template.New(legacyProxyRules).Funcs(template.FuncMap{\n\t\t\"isCgroupSet\": func() bool {\n\t\t\treturn cgroupMark != \"\"\n\t\t},\n\t\t\"enableDNSProxy\": func() bool {\n\t\t\treturn dnsServerIP != \"\"\n\t\t},\n\t}).Parse(legacyProxyRules))\n\n\trules, err := extractRulesFromTemplate(tmpl, aclInfo)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to extract rules\", zap.Error(err))\n\t}\n\treturn rules\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/portset.go",
    "content": "package iptablesctrl\n\nimport (\n\t\"fmt\"\n\n\t\"go.aporeto.io/trireme-lib/controller/constants\"\n\t\"go.uber.org/zap\"\n)\n\nfunc (i *iptables) getPortSet(contextID string) string {\n\tportset, err := i.contextIDToPortSetMap.Get(contextID)\n\tif err != nil {\n\t\treturn \"\"\n\t}\n\n\treturn portset.(string)\n}\n\n// createPortSets creates either UID or process port sets. This is only\n// needed for Linux PUs and it returns immediately for container PUs.\nfunc (i *iptables) createPortSet(contextID string, username string) error {\n\n\tif i.mode == constants.RemoteContainer {\n\t\treturn nil\n\t}\n\n\tipsetPrefix := i.impl.GetIPSetPrefix()\n\tprefix := \"\"\n\tif username != \"\" {\n\t\tprefix = ipsetPrefix + uidPortSetPrefix\n\t} else {\n\t\tprefix = ipsetPrefix + processPortSetPrefix\n\t}\n\tportSetName := puPortSetName(contextID, prefix)\n\n\t_, err := i.ipset.NewIpset(portSetName, portSetIpsetType, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\ti.contextIDToPortSetMap.AddOrUpdate(contextID, portSetName)\n\treturn nil\n}\n\n// deletePortSet delets the ports set that was created for a Linux PU.\n// It returns without errors for container PUs.\nfunc (i *iptables) deletePortSet(contextID string) error {\n\n\tif i.mode == constants.RemoteContainer {\n\t\treturn nil\n\t}\n\n\tportSetName := i.getPortSet(contextID)\n\tif portSetName == \"\" {\n\t\treturn fmt.Errorf(\"Failed to find port set\")\n\t}\n\n\tips := i.ipset.GetIpset(portSetName)\n\tif err := ips.Destroy(); err != nil {\n\t\treturn fmt.Errorf(\"Failed to delete pu port set \"+portSetName, zap.Error(err))\n\t}\n\n\tif err := i.contextIDToPortSetMap.Remove(contextID); err != nil {\n\t\tzap.L().Debug(\"portset not found for the contextID\", zap.String(\"contextID\", contextID))\n\t}\n\n\treturn nil\n}\n\n// DeletePortFromPortSet deletes ports from port sets\nfunc (i *iptables) DeletePortFromPortSet(contextID string, port string) error {\n\n\tportSetName := i.getPortSet(contextID)\n\tif 
portSetName == \"\" {\n\t\treturn fmt.Errorf(\"unable to get portset for contextID %s\", contextID)\n\t}\n\n\tips := i.ipset.GetIpset(portSetName)\n\tif err := ips.Del(port); err != nil {\n\t\treturn fmt.Errorf(\"unable to delete port from portset: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// DeletePortFromPortSet deletes ports from port sets\nfunc (i *Instance) DeletePortFromPortSet(contextID string, port string) error {\n\n\tif err := i.iptv4.DeletePortFromPortSet(contextID, port); err != nil {\n\t\tzap.L().Warn(\"Failed to delete port from ipv4 portset \", zap.String(\"contextID\", contextID), zap.String(\"port\", port), zap.Error(err))\n\t}\n\n\tif err := i.iptv6.DeletePortFromPortSet(contextID, port); err != nil {\n\t\tzap.L().Warn(\"Failed to delete port from ipv6 portset \", zap.String(\"port\", port), zap.Error(err))\n\t}\n\n\treturn nil\n}\n\n// AddPortToPortSet adds ports to the portsets\nfunc (i *iptables) AddPortToPortSet(contextID string, port string) error {\n\n\tportSetName := i.getPortSet(contextID)\n\tif portSetName == \"\" {\n\t\treturn fmt.Errorf(\"unable to get portset for contextID %s\", contextID)\n\t}\n\tips := i.ipset.GetIpset(portSetName)\n\tif err := ips.Add(port, 0); err != nil {\n\t\treturn fmt.Errorf(\"unable to add port to portset: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// AddPortToPortSet adds ports to the portsets\nfunc (i *Instance) AddPortToPortSet(contextID string, port string) error {\n\n\tif err := i.iptv4.AddPortToPortSet(contextID, port); err != nil {\n\t\tzap.L().Warn(\"Failed to add port to ipv4 portset\", zap.String(\"contextID\", contextID), zap.String(\"port\", port), zap.Error(err))\n\t}\n\n\tif err := i.iptv6.AddPortToPortSet(contextID, port); err != nil {\n\t\tzap.L().Warn(\"Failed to add port to ipv6 portset\", zap.String(\"contextID\", contextID), zap.String(\"port\", port), zap.Error(err))\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/rules.go",
    "content": "// +build !windows,!rhel6\n\npackage iptablesctrl\n\nimport (\n\t\"strconv\"\n\n\tmarkconstants \"go.aporeto.io/enforcerd/trireme-lib/utils/constants\"\n)\n\nvar enforcerCgroupMark = strconv.Itoa(markconstants.EnforcerCgroupMark)\n\nvar triremChains = `\n{{if isLocalServer}}\n-t {{.MangleTable}} -N {{.HostInput}}\n-t {{.MangleTable}} -N {{.HostOutput}}\n-t {{.MangleTable}} -N {{.NetworkSvcInput}}\n-t {{.MangleTable}} -N {{.NetworkSvcOutput}}\n-t {{.MangleTable}} -N {{.TriremeInput}}\n-t {{.MangleTable}} -N {{.TriremeOutput}}\n{{end}}\n-t {{.MangleTable}} -N {{.NfqueueOutput}}\n-t {{.MangleTable}} -N {{.NfqueueInput}}\n-t {{.MangleTable}} -N {{.MangleProxyAppChain}}\n-t {{.MangleTable}} -N {{.MainAppChain}}\n-t {{.MangleTable}} -N {{.MainNetChain}}\n-t {{.MangleTable}} -N {{.MangleProxyNetChain}}\n-t {{.NatTable}} -N {{.NatProxyAppChain}}\n-t {{.NatTable}} -N {{.NatProxyNetChain}}\n{{if isIstioEnabled}}\n-t {{.MangleTable}} -N {{.IstioChain}}\n{{end}}\n`\n\nvar globalRules = `\n\n{{.MangleTable}} {{.NfqueueInput}} -j HMARK --hmark-tuple dport,sport --hmark-mod {{.NumNFQueues}} --hmark-offset {{.DefaultInputMark}} --hmark-rnd 0xdeadbeef\n\n{{range $index,$queuenum := .NFQueues}}\n{{$.MangleTable}} {{$.NfqueueInput}} -m mark --mark {{getInputMark}} -j NFQUEUE --queue-num {{$queuenum}} --queue-bypass\n{{end}}\n\n{{.MangleTable}} {{.NfqueueOutput}} -j HMARK --hmark-tuple sport,dport --hmark-mod {{.NumNFQueues}} --hmark-offset 0 --hmark-rnd 0xdeadbeef\n\n{{range $index,$queuenum := .NFQueues}}\n{{$.MangleTable}} {{$.NfqueueOutput}} -m mark --mark {{getOutputMark}} -j NFQUEUE --queue-num {{$queuenum}} --queue-bypass\n{{end}}\n\n{{.MangleTable}} INPUT -m set ! 
--match-set {{.ExclusionsSet}} src -j {{.MainNetChain}}\n{{.MangleTable}} {{.MainNetChain}} -j {{ .MangleProxyNetChain }}\n\n{{/* tcp rules */}}\n\n{{.MangleTable}} {{.MainNetChain}} -p tcp -m mark --mark {{.PacketMarkToSetConnmark}} -j CONNMARK --set-mark {{.DefaultExternalConnmark}}\n{{.MangleTable}} {{.MainNetChain}} -p tcp -m mark --mark {{.PacketMarkToSetConnmark}} -j ACCEPT\n{{.MangleTable}} {{.MainNetChain}} -m connmark --mark {{.DefaultExternalConnmark}} -p tcp ! --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\n{{.MangleTable}} {{$.MainNetChain}} -m set --match-set {{$.TargetTCPNetSet}} src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j {{.NfqueueInput}}\n\n{{/* tcp rules ends */}}\n\n{{/* udp rules */}}\n\n{{.MangleTable}} {{$.MainNetChain}} -p udp -m string --string {{$.UDPSignature}} --algo bm --to 65535 -j {{.NfqueueInput}}\n{{.MangleTable}} {{.MainNetChain}} -p udp -m connmark --mark {{.DefaultDropConnmark}} -m comment --comment \"Drop UDP ACL\" -j DROP \n{{.MangleTable}} {{.MainNetChain}} -m connmark --mark {{.DefaultConnmark}} -p udp -j ACCEPT\n\n{{/* udp rules ends */}}\n\n{{if isLocalServer}}\n{{.MangleTable}} {{.MainNetChain}} -j {{.TriremeInput}}\n{{.MangleTable}} {{.MainNetChain}} -j {{.NetworkSvcInput}}\n{{.MangleTable}} {{.MainNetChain}} -j {{.HostInput}}\n{{end}}\n\n{{if isIstioEnabled}}\n{{.MangleTable}} OUTPUT -j {{.IstioChain}}\n{{.MangleTable}} {{.MainNetChain}} -p tcp --dport {{IstioRedirPort}} -m addrtype --dst-type LOCAL -m addrtype --src-type LOCAL -j ACCEPT\n{{end}}\n{{.MangleTable}} OUTPUT -m set ! 
--match-set {{.ExclusionsSet}} dst -j {{.MainAppChain}}\n\n{{.MangleTable}} {{.MainAppChain}} -m mark --mark {{.PacketMarkToSetConnmark}} -j CONNMARK --set-mark {{.DefaultExternalConnmark}}\n{{.MangleTable}} {{.MainAppChain}} -p tcp -m mark --mark {{.PacketMarkToSetConnmark}} -j ACCEPT\n\n{{/* enforcer rules */}}\n{{.MangleTable}} {{.MainAppChain}}  -p udp --dport 53 -m mark --mark 0x40 -m cgroup --cgroup ` + enforcerCgroupMark + ` -j CONNMARK --set-mark {{.DefaultExternalConnmark}}\n{{.MangleTable}} {{.MainAppChain}}  -p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark {{.DefaultExternalConnmark}}\n{{/* enforcer rules ends */}}\n\n\n{{.MangleTable}} {{.MainAppChain}} -j {{.MangleProxyAppChain}}\n{{.MangleTable}} {{.MainAppChain}} -m connmark --mark {{.DefaultExternalConnmark}} -j ACCEPT\n{{.MangleTable}} {{.MainAppChain}} -p udp -m connmark --mark {{.DefaultDropConnmark}} -m comment --comment \"Drop UDP ACL\" -j DROP\n{{.MangleTable}} {{.MainAppChain}} -m connmark --mark {{.DefaultConnmark}} -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK  -j ACCEPT\n{{.MangleTable}} {{.MainAppChain}} -m connmark --mark {{.DefaultConnmark}} -p udp -j ACCEPT\n{{.MangleTable}} {{.MainAppChain}} -m mark --mark {{.RawSocketMark}} -j ACCEPT\n{{$.MangleTable}} {{$.MainAppChain}} -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j {{.NfqueueOutput}}\n\n{{if isLocalServer}}\n{{.MangleTable}} {{.MainAppChain}} -j {{.TriremeOutput}}\n{{.MangleTable}} {{.MainAppChain}} -j {{.NetworkSvcOutput}}\n{{.MangleTable}} {{.MainAppChain}} -j {{.HostOutput}}\n{{end}}\n\n{{.MangleTable}} {{.MangleProxyAppChain}} -m mark --mark {{.ProxyMark}} -j ACCEPT\n{{.MangleTable}} {{.MangleProxyNetChain}} -m mark --mark {{.ProxyMark}} -j ACCEPT\n\n{{/* Using RETURN instead of ACCEPT because ACCEPT skips k8s DNS NAT rules */}}\n{{.NatTable}} {{.NatProxyAppChain}} -m mark --mark {{.ProxyMark}} -j RETURN\n{{.NatTable}} {{.NatProxyNetChain}} -m mark --mark {{.ProxyMark}} -j ACCEPT\n`\n\n// cgroupCaptureTemplate are the list of iptables commands that will hook traffic and send it to a PU specific\n// chain. The hook method depends on the type of PU.\nvar cgroupCaptureTemplate = `\n\n{{if isTCPPorts}}\n{{.MangleTable}} {{.NetSection}} -p tcp -m multiport --destination-ports {{.TCPPorts}} -m comment --comment PU-Chain -j {{.NetChain}}\n{{else}}\n{{.MangleTable}} {{.NetSection}} -p tcp -m set --match-set {{.TCPPortSet}} dst -m comment --comment PU-Chain -j {{.NetChain}}\n{{end}}\n\n{{if isHostPU}}\n{{/* UDP response traffic needs to be accepted */}}\n{{.MangleTable}} {{.NetSection}} -p udp -m udp -m state --state ESTABLISHED -m connmark ! 
--mark {{.DefaultHandShakeMark}} -j ACCEPT\n{{/* Traffic to systemd resolver/dnsmasq gets accepted */}}\n{{.MangleTable}} {{.NetSection}} -p udp -m udp --dport 53 -j ACCEPT\n{{.MangleTable}} {{.NetSection}} -m comment --comment PU-Chain -j {{.NetChain}}\n{{end}}\n\n{{if isUDPPorts}}\n{{.MangleTable}} {{.NetSection}} -p udp -m multiport --destination-ports {{.UDPPorts}} -m comment --comment PU-Chain -j {{.NetChain}}\n{{end}}\n\n{{if isHostPU}}\n{{.MangleTable}} {{.AppSection}} -m cgroup ! --cgroup ` + enforcerCgroupMark + ` -m comment --comment PU-Chain -j MARK --set-mark {{.Mark}}\n{{.MangleTable}} {{.AppSection}} -m mark --mark {{.Mark}} -m comment --comment PU-Chain -j {{.AppChain}}\n{{else}}\n{{.MangleTable}} {{.AppSection}} -m cgroup --cgroup {{.Mark}} -m comment --comment PU-Chain -j MARK --set-mark {{.Mark}}\n{{.MangleTable}} {{.AppSection}} -m mark --mark {{.Mark}} -m comment --comment PU-Chain -j {{.AppChain}}\n{{end}}\n\n{{if isHostPU}}\n{{if isIPV6Enabled}}\n{{.MangleTable}} {{.AppSection}} -p icmpv6 -j {{.AppChain}}\n{{else}}\n{{.MangleTable}} {{.AppSection}} -p icmp -j {{.AppChain}}\n{{end}}\n{{end}}\n`\n\n// containerChainTemplate will hook traffic towards the container specific chains.\nvar containerChainTemplate = `\n{{.MangleTable}} {{.AppSection}} -m comment --comment Container-specific-chain -j {{.AppChain}}\n{{.MangleTable}} {{.NetSection}} -m comment --comment Container-specific-chain -j {{.NetChain}}`\n\nvar istioChainTemplate = `\n{{.MangleTable}} {{.IstioChain}} -p tcp -m owner ! 
--uid-owner {{IstioUID}} -j ACCEPT\n{{.MangleTable}} {{.IstioChain}} -p tcp -m owner --uid-owner {{IstioUID}} -m addrtype --dst-type LOCAL -m addrtype --src-type LOCAL -j CONNMARK --set-mark {{.DefaultExternalConnmark}}\n{{.MangleTable}} {{.IstioChain}} -p tcp -m owner --uid-owner {{IstioUID}} -m addrtype --dst-type LOCAL -j ACCEPT`\n\nvar acls = `\n{{range .RejectObserveContinue}}\n{{joinRule .}}\n{{end}}\n\n{{range .RejectNotObserved}}\n{{joinRule .}}\n{{end}}\n\n{{range .RejectObserveApply}}\n{{joinRule .}}\n{{end}}\n\n{{range .AcceptObserveContinue}}\n{{joinRule .}}\n{{end}}\n\n{{range .AcceptNotObserved}}\n{{joinRule .}}\n{{end}}\n\n{{range .AcceptObserveApply}}\n{{joinRule .}}\n{{end}}\n\n{{range .ReverseRules}}\n{{joinRule .}}\n{{end}}\n`\n\nvar preNetworkACLRuleTemplate = `\n{{/* matches syn and ack packets */}}\n{{$.MangleTable}} {{$.NetChain}} -p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j {{.NfqueueInput}}\n`\n\n// packetCaptureTemplate are the rules that trap traffic towards the user space.\nvar packetCaptureTemplate = `\n{{if needICMP}}\n{{.MangleTable}} {{.AppChain}} -p icmpv6 -m bpf --bytecode \"{{.ICMPv6Allow}}\" -j ACCEPT\n{{end}}\n\n{{if isNotContainerPU}}\n\n{{$.MangleTable}} {{$.AppChain}} -m set --match-set {{$.TargetTCPNetSet}} dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\n{{$.MangleTable}} {{$.AppChain}} -m set --match-set {{$.TargetTCPNetSet}} dst -p tcp -j HMARK --hmark-tuple sport,dport --hmark-mod {{.NumNFQueues}} --hmark-offset {{packetMark}} --hmark-rnd 0xdeadbeef\n{{$.MangleTable}} {{$.AppChain}} -p udp -m set --match-set {{$.TargetUDPNetSet}} dst -j HMARK --hmark-tuple sport,dport --hmark-mod {{.NumNFQueues}} --hmark-offset {{packetMark}} --hmark-rnd 0xdeadbeef\n\n{{range $index,$queuenum := .NFQueues}}\n{{$.MangleTable}} {{$.AppChain}} -m mark --mark {{getOutputMark}} -j NFQUEUE --queue-num {{$queuenum}} --queue-bypass\n{{end}}\n\n{{else}}\n{{$.MangleTable}} {{$.AppChain}} -m set --match-set 
{{$.TargetTCPNetSet}} dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\n{{$.MangleTable}} {{$.AppChain}} -m set --match-set {{$.TargetTCPNetSet}} dst -p tcp -j {{.NfqueueOutput}}\n{{$.MangleTable}} {{$.AppChain}} -p udp -m set --match-set {{$.TargetUDPNetSet}} dst -j {{.NfqueueOutput}}\n{{end}}\n\n{{.MangleTable}} {{.AppChain}} -p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\n\n{{.MangleTable}} {{.AppChain}} -p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\n\n{{range appAnyRules}}\n{{joinRule .}}\n{{end}}\n{{.MangleTable}} {{.AppChain}} -d {{.DefaultIP}} -m state --state NEW -j NFLOG  --nflog-group 10 --nflog-prefix {{.AppNFLOGPrefix}}\n{{if isAppDrop}}\n{{.MangleTable}} {{.AppChain}} -d {{.DefaultIP}} -m state ! --state NEW -j NFLOG --nflog-group 10 --nflog-prefix {{.AppNFLOGDropPacketLogPrefix}}\n{{end}}\n{{.MangleTable}} {{.AppChain}} -d {{.DefaultIP}} -j {{.AppDefaultAction}}\n\n{{if needICMP}}\n{{.MangleTable}} {{.NetChain}} -p icmpv6 -m bpf --bytecode \"{{.ICMPv6Allow}}\" -j ACCEPT\n{{end}}\n\n\n{{.MangleTable}} {{.NetChain}} -p tcp -m set --match-set {{$.TargetTCPNetSet}} src -m tcp --tcp-flags SYN NONE -j {{.NfqueueInput}}\n{{.MangleTable}} {{.NetChain}} -p udp -m set --match-set {{.TargetUDPNetSet}} src --match limit --limit 1000/s -j {{.NfqueueInput}}\n\n{{.MangleTable}} {{.NetChain}} -p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\n\n{{range netAnyRules}}\n{{joinRule .}}\n{{end}}\n\n{{.MangleTable}} {{.NetChain}} -s {{.DefaultIP}} -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix {{.NetNFLOGPrefix}}\n{{if isNetDrop}}\n{{.MangleTable}} {{.NetChain}} -s {{.DefaultIP}} -m state ! 
--state NEW -j NFLOG --nflog-group 11 --nflog-prefix {{.NetNFLOGDropPacketLogPrefix}}\n{{end}}\n{{.MangleTable}} {{.NetChain}} -s {{.DefaultIP}} -j {{.NetDefaultAction}}\n`\n\nvar proxyDNSChainTemplate = `\n{{if enableDNSProxy}}\n{{.MangleTable}} {{.MangleProxyAppChain}} -p udp -m udp --sport {{.DNSProxyPort}} -j ACCEPT\n{{.MangleTable}} {{.MangleProxyNetChain}} -p udp -m udp --dport {{.DNSProxyPort}} -j ACCEPT\n{{if isCgroupSet}}\n{{.NatTable}} {{.NatProxyAppChain}} -d {{.DNSServerIP}} -p udp --dport 53 -m mark ! --mark {{.ProxyMark}} -m cgroup --cgroup {{.CgroupMark}} -j CONNMARK --save-mark\n{{.NatTable}} {{.NatProxyAppChain}} -d {{.DNSServerIP}} -p udp --dport 53 -m mark ! --mark {{.ProxyMark}} -m cgroup --cgroup {{.CgroupMark}} -j REDIRECT --to-ports {{.DNSProxyPort}}\n{{else}}\n{{.NatTable}} {{.NatProxyAppChain}} -d {{.DNSServerIP}} -p udp --dport 53 -m mark ! --mark {{.ProxyMark}} -j REDIRECT --to-ports {{.DNSProxyPort}}\n{{end}}\n{{end}}\n`\nvar proxyChainTemplate = `\n{{.MangleTable}} {{.MangleProxyAppChain}} -p tcp -m tcp --sport {{.ProxyPort}} -j ACCEPT\n{{.MangleTable}} {{.MangleProxyAppChain}} -p tcp -m set --match-set {{.SrvIPSet}} src -j ACCEPT\n{{.MangleTable}} {{.MangleProxyAppChain}} -p tcp -m set --match-set {{.DestIPSet}} dst,dst -m mark ! --mark {{.ProxyMark}} -j ACCEPT\n\n{{.MangleTable}} {{.MangleProxyNetChain}} -p tcp -m set --match-set {{.DestIPSet}} src,src -j ACCEPT\n{{.MangleTable}} {{.MangleProxyNetChain}} -p tcp -m set --match-set {{.SrvIPSet}} src -m addrtype --src-type LOCAL -j ACCEPT\n{{.MangleTable}} {{.MangleProxyNetChain}} -p tcp -m tcp --dport {{.ProxyPort}} -j ACCEPT\n\n{{if isCgroupSet}}\n{{.NatTable}} {{.NatProxyAppChain}} -p tcp -m set --match-set {{.DestIPSet}} dst,dst -m mark ! --mark {{.ProxyMark}} -m cgroup --cgroup {{.CgroupMark}} -j REDIRECT --to-ports {{.ProxyPort}}\n{{else}}\n{{.NatTable}} {{.NatProxyAppChain}} -p tcp -m set --match-set {{.DestIPSet}} dst,dst -m mark ! 
--mark {{.ProxyMark}} -j REDIRECT --to-ports {{.ProxyPort}}\n{{end}}\n{{.NatTable}} {{.NatProxyNetChain}} -p tcp -m set --match-set {{.SrvIPSet}} dst -m mark ! --mark {{.ProxyMark}} -j REDIRECT --to-ports {{.ProxyPort}}`\n\nvar globalHooks = `\n{{.MangleTable}} INPUT -m set ! --match-set {{.ExclusionsSet}} src -j {{.MainNetChain}}\n{{.MangleTable}} OUTPUT -m set ! --match-set {{.ExclusionsSet}} dst -j {{.MainAppChain}}\n{{.NatTable}} PREROUTING -p tcp -m addrtype --dst-type LOCAL -m set ! --match-set {{.ExclusionsSet}} src -j {{.NatProxyNetChain}}\n{{.NatTable}} OUTPUT -m set ! --match-set {{.ExclusionsSet}} dst -j {{.NatProxyAppChain}}\n`\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/rules_rhel6.go",
    "content": "// +build rhel6\n\npackage iptablesctrl\n\nvar triremChains = `\n{{if isLocalServer}}\n-t {{.MangleTable}} -N {{.HostInput}}\n-t {{.MangleTable}} -N {{.HostOutput}}\n-t {{.MangleTable}} -N {{.NetworkSvcInput}}\n-t {{.MangleTable}} -N {{.NetworkSvcOutput}}\n-t {{.MangleTable}} -N {{.TriremeInput}}\n-t {{.MangleTable}} -N {{.TriremeOutput}}\n{{end}}\n-t {{.MangleTable}} -N {{.NfqueueOutput}}\n-t {{.MangleTable}} -N {{.NfqueueInput}}\n-t {{.MangleTable}} -N {{.MangleProxyAppChain}}\n-t {{.MangleTable}} -N {{.MainAppChain}}\n-t {{.MangleTable}} -N {{.MainNetChain}}\n-t {{.MangleTable}} -N {{.MangleProxyNetChain}}\n-t {{.NatTable}} -N {{.NatProxyAppChain}}\n-t {{.NatTable}} -N {{.NatProxyNetChain}}\n`\n\nvar globalRules = `\n\n{{$.MangleTable}} {{$.NfqueueInput}} -j MARK --set-mark {{.DefaultInputMark}}\n{{$.MangleTable}} {{$.NfqueueInput}} -m mark --mark {{.DefaultInputMark}} -j NFQUEUE --queue-balance {{queueBalance}} --queue-bypass\n\n{{$.MangleTable}} {{$.NfqueueOutput}} -j MARK --set-mark 0\n{{$.MangleTable}} {{$.NfqueueOutput}} -m mark --mark 0 -j NFQUEUE --queue-balance {{queueBalance}} --queue-bypass\n\n{{.MangleTable}} INPUT -m set ! --match-set {{.ExclusionsSet}} src -j {{.MainNetChain}}\n{{.MangleTable}} {{.MainNetChain}} -p udp --sport 53 -j ACCEPT\n{{.MangleTable}} {{.MainNetChain}} -j {{ .MangleProxyNetChain }}\n\n{{/* tcp rules */}}\n\n{{.MangleTable}} {{.MainNetChain}} -p tcp -m mark --mark {{.PacketMarkToSetConnmark}} -j CONNMARK --set-mark {{.DefaultExternalConnmark}}\n{{.MangleTable}} {{.MainNetChain}} -p tcp -m mark --mark {{.PacketMarkToSetConnmark}} -j ACCEPT\n{{.MangleTable}} {{.MainNetChain}} -m connmark --mark {{.DefaultExternalConnmark}} -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j ACCEPT\n{{.MangleTable}} {{$.MainNetChain}} -m set --match-set {{$.TargetTCPNetSet}} src -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j {{.NfqueueInput}}\n\n{{/* tcp rules ends */}}\n\n{{/* udp rules */}}\n\n{{.MangleTable}} {{$.MainNetChain}} -p udp -m string --string {{$.UDPSignature}} --algo bm --to 65535 -j {{.NfqueueInput}}\n{{.MangleTable}} {{.MainNetChain}} -p udp -m connmark --mark {{.DefaultDropConnmark}} -m comment --comment \"Drop UDP ACL\" -j DROP \n{{.MangleTable}} {{.MainNetChain}} -m connmark --mark {{.DefaultConnmark}} -p udp -j ACCEPT\n\n{{/* udp rules ends */}}\n\n{{if isLocalServer}}\n{{.MangleTable}} {{.MainNetChain}} -j {{.TriremeInput}}\n{{.MangleTable}} {{.MainNetChain}} -j {{.NetworkSvcInput}}\n{{.MangleTable}} {{.MainNetChain}} -j {{.HostInput}}\n{{end}}\n\n{{.MangleTable}} OUTPUT -m set ! --match-set {{.ExclusionsSet}} dst -j {{.MainAppChain}}\n{{.MangleTable}} {{.MainAppChain}} -p udp --dport 53 -j ACCEPT\n\n{{.MangleTable}} {{.MainAppChain}} -m mark --mark {{.PacketMarkToSetConnmark}} -j CONNMARK --set-mark {{.DefaultExternalConnmark}}\n{{.MangleTable}} {{.MainAppChain}} -p tcp -m mark --mark {{.PacketMarkToSetConnmark}} -j ACCEPT\n\n{{.MangleTable}} {{.MainAppChain}}  -p udp --dport 53 -m mark --mark 0x40 -j CONNMARK --set-mark {{.DefaultExternalConnmark}}\n\n{{.MangleTable}} {{.MainAppChain}} -j {{.MangleProxyAppChain}}\n{{.MangleTable}} {{.MainAppChain}} -m connmark --mark {{.DefaultExternalConnmark}} -j ACCEPT\n{{.MangleTable}} {{.MainAppChain}} -p udp -m connmark --mark {{.DefaultDropConnmark}} -m comment --comment \"Drop UDP ACL\" -j DROP\n{{.MangleTable}} {{.MainAppChain}} -m connmark --mark {{.DefaultConnmark}} -p tcp ! 
--tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK  -j ACCEPT\n{{.MangleTable}} {{.MainAppChain}} -m connmark --mark {{.DefaultConnmark}} -p udp -j ACCEPT\n{{.MangleTable}} {{.MainAppChain}} -m mark --mark {{.RawSocketMark}} -j ACCEPT\n{{$.MangleTable}} {{$.MainAppChain}} -p tcp -m tcp --tcp-flags FIN,RST,URG,PSH,SYN,ACK SYN,ACK -j {{.NfqueueOutput}}\n\n{{if isLocalServer}}\n{{.MangleTable}} {{.MainAppChain}} -j {{.TriremeOutput}}\n{{.MangleTable}} {{.MainAppChain}} -j {{.NetworkSvcOutput}}\n{{.MangleTable}} {{.MainAppChain}} -j {{.HostOutput}}\n{{end}}\n\n{{.MangleTable}} {{.MangleProxyAppChain}} -m mark --mark {{.ProxyMark}} -j ACCEPT\n{{.MangleTable}} {{.MangleProxyNetChain}} -m mark --mark {{.ProxyMark}} -j ACCEPT\n\n{{.NatTable}} {{.NatProxyAppChain}} -m mark --mark {{.ProxyMark}} -j ACCEPT\n{{.NatTable}} {{.NatProxyNetChain}} -m mark --mark {{.ProxyMark}} -j ACCEPT\n`\n\n// cgroupCaptureTemplate is not used for rhel6\nvar cgroupCaptureTemplate = ``\n\n// containerChainTemplate is not used for rhel6\nvar containerChainTemplate = ``\n\n// istioChainTemplate is not used for rhel6\nvar istioChainTemplate = ``\n\nvar acls = `\n{{range .RejectObserveContinue}}\n{{joinRule .}}\n{{end}}\n\n{{range .RejectNotObserved}}\n{{joinRule .}}\n{{end}}\n\n{{range .RejectObserveApply}}\n{{joinRule .}}\n{{end}}\n\n{{range .AcceptObserveContinue}}\n{{joinRule .}}\n{{end}}\n\n{{range .AcceptNotObserved}}\n{{joinRule .}}\n{{end}}\n\n{{range .AcceptObserveApply}}\n{{joinRule .}}\n{{end}}\n\n{{range .ReverseRules}}\n{{joinRule .}}\n{{end}}\n`\n\nvar preNetworkACLRuleTemplate = `\n{{/* matches syn and ack packets */}}\n{{$.MangleTable}} {{$.NetChain}} -p tcp -m tcp --tcp-option 34 -m tcp --tcp-flags FIN,RST,URG,PSH NONE -j {{.NfqueueInput}}\n`\n\n// packetCaptureTemplate are the rules that trap traffic towards the user space.\nvar packetCaptureTemplate = `\n\n{{.MangleTable}} {{.AppChain}} -p icmp -j NFQUEUE --queue-balance {{queueBalance}}\n{{.MangleTable}} {{.NetChain}} -p icmp -j NFQUEUE 
--queue-balance {{queueBalance}}\n\n{{$.MangleTable}} {{$.AppChain}} -m set --match-set {{$.TargetTCPNetSet}} dst -p tcp -m tcp --tcp-flags FIN FIN -j ACCEPT\n{{$.MangleTable}} {{$.AppChain}} -m set --match-set {{$.TargetTCPNetSet}} dst -p tcp -j MARK --set-mark {{packetMark}}\n{{$.MangleTable}} {{$.AppChain}} -p udp -m set --match-set {{$.TargetUDPNetSet}} dst -j MARK --set-mark {{packetMark}}\n{{$.MangleTable}} {{$.AppChain}} -m mark --mark {{packetMark}} -j NFQUEUE --queue-balance {{queueBalance}} --queue-bypass\n\n{{.MangleTable}} {{.AppChain}} -p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\n\n{{.MangleTable}} {{.AppChain}} -p udp -m state --state ESTABLISHED -m comment --comment UDP-Established-Connections -j ACCEPT\n\n{{range appAnyRules}}\n{{joinRule .}}\n{{end}}\n\n{{.MangleTable}} {{.AppChain}} -d {{.DefaultIP}} -m state --state NEW -j NFLOG  --nflog-group 10 --nflog-prefix {{.AppNFLOGPrefix}}\n{{if isAppDrop}}\n{{.MangleTable}} {{.AppChain}} -d {{.DefaultIP}} -m state ! --state NEW -j NFLOG --nflog-group 10 --nflog-prefix {{.AppNFLOGDropPacketLogPrefix}}\n{{end}}\n{{.MangleTable}} {{.AppChain}} -d {{.DefaultIP}} -j {{.AppDefaultAction}}\n\n{{.MangleTable}} {{.NetChain}} -p tcp -m set --match-set {{$.TargetTCPNetSet}} src -m tcp --tcp-flags SYN NONE -j {{.NfqueueInput}}\n{{.MangleTable}} {{.NetChain}} -p udp -m set --match-set {{.TargetUDPNetSet}} src --match limit --limit 1000/s -j {{.NfqueueInput}}\n\n{{.MangleTable}} {{.NetChain}} -p tcp -m state --state ESTABLISHED -m comment --comment TCP-Established-Connections -j ACCEPT\n\n{{range netAnyRules}}\n{{joinRule .}}\n{{end}}\n\n{{.MangleTable}} {{.NetChain}} -s {{.DefaultIP}} -m state --state NEW -j NFLOG --nflog-group 11 --nflog-prefix {{.NetNFLOGPrefix}}\n{{if isNetDrop}}\n{{.MangleTable}} {{.NetChain}} -s {{.DefaultIP}} -m state ! 
--state NEW -j NFLOG --nflog-group 11 --nflog-prefix {{.NetNFLOGDropPacketLogPrefix}}\n{{end}}\n{{.MangleTable}} {{.NetChain}} -s {{.DefaultIP}} -j {{.NetDefaultAction}}\n`\n\n// proxyDNSChainTemplate is not used for rhel6\nvar proxyDNSChainTemplate = ``\n\n// proxyChainTemplate is not used for rhel6\nvar proxyChainTemplate = ``\n\nvar globalHooks = `\n{{.MangleTable}} INPUT -m set ! --match-set {{.ExclusionsSet}} src -j {{.MainNetChain}}\n{{.MangleTable}} OUTPUT -m set ! --match-set {{.ExclusionsSet}} dst -j {{.MainAppChain}}\n{{.NatTable}} PREROUTING -p tcp -m addrtype --dst-type LOCAL -m set ! --match-set {{.ExclusionsSet}} src -j {{.NatProxyNetChain}}\n{{.NatTable}} OUTPUT -m set ! --match-set {{.ExclusionsSet}} dst -j {{.NatProxyAppChain}}\n`\n\nvar legacyProxyRules = `\n{{.MangleTable}} {{.MangleProxyAppChain}} -p tcp -m tcp --sport {{.ProxyPort}} -j ACCEPT\n{{if enableDNSProxy}}\n{{.MangleTable}} {{.MangleProxyAppChain}} -p udp -m udp --sport {{.DNSProxyPort}} -j ACCEPT\n{{end}}\n{{.MangleTable}} {{.MangleProxyAppChain}} -p tcp -m set --match-set {{.SrvIPSet}} src -j ACCEPT\n{{.MangleTable}} {{.MangleProxyAppChain}} -p tcp -m set --match-set {{.DestIPSet}} dst,dst -m mark ! --mark {{.ProxyMark}} -j ACCEPT\n\n{{.MangleTable}} {{.MangleProxyNetChain}} -p tcp -m set --match-set {{.DestIPSet}} src,src -j ACCEPT\n{{.MangleTable}} {{.MangleProxyNetChain}} -p tcp -m set --match-set {{.SrvIPSet}} src -m addrtype --src-type LOCAL -j ACCEPT\n{{.MangleTable}} {{.MangleProxyNetChain}} -p tcp -m tcp --dport {{.ProxyPort}} -j ACCEPT\n{{if enableDNSProxy}}\n{{.MangleTable}} {{.MangleProxyNetChain}} -p udp -m udp --dport {{.DNSProxyPort}} -j ACCEPT\n{{end}}\n\n{{if isCgroupSet}}\n{{.NatTable}} {{.NatProxyAppChain}} -p tcp -m set --match-set {{.DestIPSet}} dst,dst -m mark ! 
--mark {{.ProxyMark}} -m multiport --source-ports {{.TCPPorts}} -j REDIRECT --to-ports {{.ProxyPort}}\n{{else}}\n{{.NatTable}} {{.NatProxyAppChain}} -p tcp -m set --match-set {{.DestIPSet}} dst,dst -m mark ! --mark {{.ProxyMark}} -j REDIRECT --to-ports {{.ProxyPort}}\n{{end}}\n\n{{if enableDNSProxy}}\n{{.NatTable}} {{.NatProxyAppChain}} -p udp --dport 53 -m mark ! --mark {{.ProxyMark}} -j REDIRECT --to-ports {{.DNSProxyPort}}\n{{end}}\n\n{{.NatTable}} {{.NatProxyNetChain}} -p tcp -m set --match-set {{.SrvIPSet}} dst -m mark ! --mark {{.ProxyMark}} -j REDIRECT --to-ports {{.ProxyPort}}`\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/rules_windows.go",
    "content": "// +build windows\n\npackage iptablesctrl\n\nvar triremChains = `\n-t OUTPUT  -N GlobalRules-OUTPUT\n-t INPUT   -N GlobalRules-INPUT\n-t OUTPUT  -N ProcessRules-OUTPUT\n-t INPUT   -N ProcessRules-INPUT\n-t OUTPUT  -N HostSvcRules-OUTPUT\n-t INPUT   -N HostSvcRules-INPUT\n-t OUTPUT  -N HostPU-OUTPUT\n-t INPUT   -N HostPU-INPUT\n`\n\n// When enforcerd is managed by cns-agent, its parent is mgr and its grandparent is boot\n// -cns-agent-boot\n//   |----cns-agent-mgr\n//         |----enforcerd\n// However, when mgr is updated, it will be respawned with a new pid and enforcerd will no longer\n// have a parent\n// -cns-agent-boot\n//   |----cns-agent-mgr\n// -enforcerd\n// We need to allow this new mgr to communicate with the API server too, so we can allow\n// cns-agent-boot and its children, in order to satisfy this.\n// Note also that any currently active mgr pid needs to be explicitly added as its own rule here.\n\n// globalRules are the rules not tied to a PU chain.\nvar globalRules = `\nINPUT  GlobalRules-INPUT  -m set --match-set {{.ExclusionsSet}} srcIP -j ACCEPT_ONCE\nOUTPUT GlobalRules-OUTPUT -m set --match-set {{.ExclusionsSet}} dstIP -j ACCEPT_ONCE\n{{if isIPv4}}\nINPUT  GlobalRules-INPUT  -m owner --pid-owner {{EnforcerPID}} -j ACCEPT\nOUTPUT GlobalRules-OUTPUT -m owner --pid-owner {{EnforcerPID}} -j ACCEPT\n{{if isManagedByCnsAgentManager}}\nINPUT  GlobalRules-INPUT  -m owner --pid-owner {{CnsAgentBootPID}} --pid-children -j ACCEPT\nOUTPUT GlobalRules-OUTPUT -m owner --pid-owner {{CnsAgentBootPID}} --pid-children -j ACCEPT\nINPUT  GlobalRules-INPUT  -m owner --pid-owner {{CnsAgentMgrPID}} -j ACCEPT\nOUTPUT GlobalRules-OUTPUT -m owner --pid-owner {{CnsAgentMgrPID}} -j ACCEPT\n{{end}}\n{{if enableDNSProxy}}\nINPUT  GlobalRules-INPUT  -p udp --sports 53 -m set --match-set {{windowsDNSServerName}} srcIP -j NFQUEUE_FORCE -j MARK 83\n{{end}}\n{{end}}\n{{if needICMP}}\nOUTPUT GlobalRules-OUTPUT -p icmpv6 --icmp-type 133/0 -j ACCEPT\nOUTPUT 
GlobalRules-OUTPUT -p icmpv6 --icmp-type 134/0 -j ACCEPT\nOUTPUT GlobalRules-OUTPUT -p icmpv6 --icmp-type 135/0 -j ACCEPT\nOUTPUT GlobalRules-OUTPUT -p icmpv6 --icmp-type 136/0 -j ACCEPT\nOUTPUT GlobalRules-OUTPUT -p icmpv6 --icmp-type 141/0 -j ACCEPT\nOUTPUT GlobalRules-OUTPUT -p icmpv6 --icmp-type 142/0 -j ACCEPT\nINPUT  GlobalRules-INPUT -p icmpv6 --icmp-type 133/0 -j ACCEPT\nINPUT  GlobalRules-INPUT -p icmpv6 --icmp-type 134/0 -j ACCEPT\nINPUT  GlobalRules-INPUT -p icmpv6 --icmp-type 135/0 -j ACCEPT\nINPUT  GlobalRules-INPUT -p icmpv6 --icmp-type 136/0 -j ACCEPT\nINPUT  GlobalRules-INPUT -p icmpv6 --icmp-type 141/0 -j ACCEPT\nINPUT  GlobalRules-INPUT -p icmpv6 --icmp-type 142/0 -j ACCEPT\n{{end}}\n\n`\nvar istioChainTemplate = ``\nvar proxyDNSChainTemplate = ``\n\n// cgroupCaptureTemplate are the list of iptables commands that will hook traffic and send it to a PU specific\n// chain. The hook method depends on the type of PU.\nvar cgroupCaptureTemplate = `\n\nINPUT  {{.NetChain}} -p udp -m string --string {{.UDPSignature}} --offset 4 -j NFQUEUE -j MARK {{.PacketMark}}\nINPUT  {{.NetChain}} -p udp -m string --string {{.UDPSignature}} --offset 6 -j NFQUEUE -j MARK {{.PacketMark}}\nOUTPUT {{.AppChain}} -p tcp --tcp-flags 18,18 -j NFQUEUE -j MARK {{.PacketMark}}\nINPUT  {{.NetChain}} -p tcp --tcp-flags 18,18 -m set --match-set {{.TargetTCPNetSet}} srcIP -j NFQUEUE -j MARK {{.PacketMark}}\n{{if isHostPU}}\nOUTPUT HostPU-OUTPUT -p tcp -m set --match-set {{.TargetTCPNetSet}} dstIP -m set --match-set {{.DestIPSet}} dstIP,dstPort -j REDIRECT  --to-ports {{.ProxyPort}}\nINPUT  HostPU-INPUT  -p tcp -m set --match-set {{.SrvIPSet}} dstPort -j REDIRECT --to-ports {{.ProxyPort}}\nOUTPUT HostPU-OUTPUT -j {{.AppChain}}\nINPUT  HostPU-INPUT  -j {{.NetChain}}\n{{else}}\n{{if isProcessPU}}\nOUTPUT ProcessRules-OUTPUT -j {{.AppChain}} -m owner --pid-owner {{.ContextID}} --pid-childrenonly\nINPUT  ProcessRules-INPUT  -j {{.NetChain}} -m owner --pid-owner {{.ContextID}} 
--pid-childrenonly\n{{else}}\n{{if isTCPPorts}}\nOUTPUT HostSvcRules-OUTPUT -p tcp --dports {{.TCPPorts}} -j {{.AppChain}}\nINPUT  HostSvcRules-INPUT  -p tcp --sports {{.TCPPorts}} -j {{.NetChain}}\n{{end}}\n{{if isUDPPorts}}\nOUTPUT HostSvcRules-OUTPUT -p udp --dports {{.UDPPorts}} -j {{.AppChain}}\nINPUT  HostSvcRules-INPUT  -p udp --sports {{.UDPPorts}} -j {{.NetChain}}\n{{end}}\n{{end}}\n{{end}}\n`\n\n// containerChainTemplate will hook traffic towards the container specific chains.\nvar containerChainTemplate = ``\n\nvar acls = `\n{{range .RejectObserveContinue}}\n{{joinRule .}}\n{{end}}\n\n{{range .RejectNotObserved}}\n{{joinRule .}}\n{{end}}\n\n{{range .RejectObserveApply}}\n{{joinRule .}}\n{{end}}\n\n{{range .AcceptObserveContinue}}\n{{joinRule .}}\n{{end}}\n\n{{range .AcceptNotObserved}}\n{{joinRule .}}\n{{end}}\n\n{{range .AcceptObserveApply}}\n{{joinRule .}}\n{{end}}\n\n{{range .ReverseRules}}\n{{joinRule .}}\n{{end}}\n`\n\nvar preNetworkACLRuleTemplate = `\n{{/* matches syn and ack packets FIN,RST,URG,PSH NONE */}}\nINPUT  {{.NetChain}} -p tcp --tcp-flags 45,0 --tcp-option 34 -j NFQUEUE -j MARK {{.PacketMark}}\n`\n\n// packetCaptureTemplate are the rules that trap traffic towards the user space.\n// windows uses it as a final deny-all.\nvar packetCaptureTemplate = `\nOUTPUT {{.AppChain}} -p tcp --tcp-flags 1,1 -m set --match-set {{.TargetTCPNetSet}} dstIP -j ACCEPT\nOUTPUT {{.AppChain}} -p tcp -m set --match-set {{.TargetTCPNetSet}} dstIP -j NFQUEUE -j MARK {{.PacketMark}}\nOUTPUT {{.AppChain}} -p udp -m set --match-set {{.TargetUDPNetSet}} dstIP -j NFQUEUE -j MARK {{.PacketMark}}\nINPUT  {{.NetChain}} -p tcp --tcp-flags 2,0 -j NFQUEUE -j MARK {{.PacketMark}}\n{{range appAnyRules}}\n{{joinRule .}}\n{{end}}\n{{range netAnyRules}}\n{{joinRule .}}\n{{end}}\n{{if isAppDrop}}\nOUTPUT {{.AppChain}} -m set --match-set {{windowsAllIpsetName}} dstIP -j NFLOG --nflog-group 10 --nflog-prefix {{.AppNFLOGDropPacketLogPrefix}}\n{{end}}\nOUTPUT {{.AppChain}} -m set --match-set {{windowsAllIpsetName}} dstIP -j {{.AppDefaultAction}} -j NFLOG --nflog-group 10 --nflog-prefix {{.AppNFLOGPrefix}}\n{{if isNetDrop}}\nINPUT  {{.NetChain}} -m set --match-set {{windowsAllIpsetName}} srcIP -j NFLOG --nflog-group 11 --nflog-prefix {{.NetNFLOGDropPacketLogPrefix}}\n{{end}}\nINPUT  {{.NetChain}} -m set --match-set {{windowsAllIpsetName}} srcIP -j {{.NetDefaultAction}} -j NFLOG --nflog-group 11 --nflog-prefix {{.NetNFLOGPrefix}}\n`\n\nvar proxyChainTemplate = ``\n\nvar globalHooks = ``\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/templates.go",
    "content": "package iptablesctrl\n\nimport (\n\t\"bytes\"\n\t\"crypto/md5\"\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"strconv\"\n\t\"strings\"\n\t\"text/template\"\n\n\t\"github.com/kballard/go-shellquote\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\tcconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/afinetrawsocket\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/constants\"\n)\n\nfunc extractRulesFromTemplate(tmpl *template.Template, data interface{}) ([][]string, error) {\n\n\tbuffer := bytes.NewBuffer([]byte{})\n\tif err := tmpl.Execute(buffer, data); err != nil {\n\t\treturn [][]string{}, fmt.Errorf(\"unable to execute template:%s\", err)\n\t}\n\n\trules := [][]string{}\n\n\tfor _, m := range strings.Split(buffer.String(), \"\\n\") {\n\n\t\trule, err := shellquote.Split(m)\n\t\tif err != nil {\n\t\t\treturn [][]string{}, err\n\t\t}\n\n\t\t// ignore empty lines in the buffer\n\t\tif len(rule) <= 1 {\n\t\t\tcontinue\n\t\t}\n\n\t\trules = append(rules, rule)\n\t}\n\treturn rules, nil\n}\n\n// ACLInfo keeps track of all information to create ACLs\ntype ACLInfo struct {\n\tContextID string\n\tPUType    common.PUType\n\n\t// Tables\n\tMangleTable string\n\tNatTable    string\n\n\t// Chains\n\tMainAppChain        string\n\tMainNetChain        string\n\tBPFPath             string\n\tHostInput           string\n\tHostOutput          string\n\tNfqueueOutput       string\n\tNfqueueInput        string\n\tNetworkSvcInput     string\n\tNetworkSvcOutput    string\n\tTriremeInput        string\n\tTriremeOutput       string\n\tNatProxyNetChain    string\n\tNatProxyAppChain    string\n\tMangleProxyNetChain string\n\tMangleProxyAppChain string\n\tPreRouting          string\n\n\tAppChain   string\n\tNetChain   string\n\tAppSection 
string\n\tNetSection string\n\n\t// serviceMesh chains\n\tIstioChain string\n\n\t// common info\n\tDefaultConnmark         string\n\tDefaultDropConnmark     string\n\tDefaultExternalConnmark string\n\tPacketMarkToSetConnmark string\n\tDefaultInputMark        string\n\tDefaultHandShakeMark    string\n\n\tRawSocketMark   string\n\tTargetTCPNetSet string\n\tTargetUDPNetSet string\n\tExclusionsSet   string\n\tIpsetPrefix     string\n\t// IPv4 IPv6\n\tDefaultIP     string\n\tneedICMPRules bool\n\n\t// UDP rules\n\tNumpackets   string\n\tInitialCount string\n\tUDPSignature string\n\n\t// Linux PUs\n\tTCPPorts   string\n\tUDPPorts   string\n\tTCPPortSet string\n\n\t// ProxyRules\n\tDestIPSet     string\n\tSrvIPSet      string\n\tProxyPort     string\n\tDNSProxyPort  string\n\tDNSServerIP   string\n\tCgroupMark    string\n\tProxyMark     string\n\tAuthPhaseMark string\n\n\tPacketMark string\n\tMark       string\n\tPortSet    string\n\n\tAppNFLOGPrefix              string\n\tAppNFLOGDropPacketLogPrefix string\n\tAppDefaultAction            string\n\n\tNetNFLOGPrefix              string\n\tNetNFLOGDropPacketLogPrefix string\n\tNetDefaultAction            string\n\n\tNFQueues    []int\n\tNumNFQueues int\n\t// icmpv6 allow bytecode\n\tICMPv6Allow string\n\n\t// Istio Iptable rules\n\tIstioEnabled bool\n}\n\nfunc chainName(contextID string, version int) (app, net string, err error) {\n\thash := md5.New()\n\n\tif _, err := io.WriteString(hash, contextID); err != nil {\n\t\treturn \"\", \"\", err\n\t}\n\toutput := base64.URLEncoding.EncodeToString(hash.Sum(nil))\n\tif len(contextID) > 4 {\n\t\tcontextID = contextID[:4] + output[:6]\n\t} else {\n\t\tcontextID = contextID + output[:6]\n\t}\n\n\tapp = appChainPrefix + contextID + \"-\" + strconv.Itoa(version)\n\tnet = netChainPrefix + contextID + \"-\" + strconv.Itoa(version)\n\n\treturn app, net, nil\n}\n\nfunc (i *iptables) newACLInfo(version int, contextID string, p *policy.PUInfo, puType common.PUType) (*ACLInfo, error) {\n\tvar 
appChain, netChain string\n\tvar err error\n\tvar nfqueues []int\n\n\tnumQueues := i.fqc.GetNumQueues()\n\n\tipFilter := i.impl.IPFilter()\n\n\tif contextID != \"\" {\n\t\tappChain, netChain, err = chainName(contextID, version)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tparseDNSServerIP := func() string {\n\t\tfor _, ipString := range i.fqc.GetDNSServerAddresses() {\n\t\t\tif ip := net.ParseIP(ipString); ip != nil {\n\t\t\t\tif ipFilter(ip) {\n\t\t\t\t\treturn ipString\n\t\t\t\t}\n\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\t// parseCIDR\n\t\t\tif ip, _, err := net.ParseCIDR(ipString); err == nil {\n\t\t\t\tif ipFilter(ip) {\n\t\t\t\t\treturn ipString\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn \"\"\n\t}\n\n\tappDefaultAction := policy.Reject | policy.Log\n\tnetDefaultAction := policy.Reject | policy.Log\n\n\tvar tcpPorts, udpPorts string\n\tvar servicePort, mark, dnsProxyPort, packetMark string\n\tif p != nil {\n\t\ttcpPorts, udpPorts = common.ConvertServicesToProtocolPortList(p.Runtime.Options().Services)\n\t\tpuType = p.Runtime.PUType()\n\t\tservicePort = p.Policy.ServicesListeningPort()\n\t\tdnsProxyPort = p.Policy.DNSProxyPort()\n\t\tmark = p.Runtime.Options().CgroupMark\n\t\tpacketMark = mark\n\t\tappDefaultAction = p.Policy.AppDefaultPolicyAction()\n\t\tnetDefaultAction = p.Policy.NetDefaultPolicyAction()\n\t}\n\n\tdestSetName, srvSetName := i.ipsetmanager.GetProxySetNames(contextID)\n\n\ttcpTargetSetName, udpTargetSetName, excludedNetworkSetName := i.ipsetmanager.GetIPsetNamesForTargetAndExcludedNetworks()\n\n\tappSection := \"\"\n\tnetSection := \"\"\n\tswitch puType {\n\tcase common.LinuxProcessPU, common.WindowsProcessPU:\n\t\tappSection = TriremeOutput\n\t\tnetSection = TriremeInput\n\tcase common.HostNetworkPU:\n\t\tappSection = NetworkSvcOutput\n\t\tnetSection = NetworkSvcInput\n\tcase common.HostPU:\n\t\tappSection = HostModeOutput\n\t\tnetSection = HostModeInput\n\tdefault:\n\t\tappSection = mainAppChain\n\t\tnetSection = 
mainNetChain\n\t}\n\n\tportSetName := i.ipsetmanager.GetServerPortSetName(contextID)\n\n\tfor i := 0; i < numQueues; i++ {\n\t\tnfqueues = append(nfqueues, i)\n\t}\n\n\tcfg := &ACLInfo{\n\t\tContextID: contextID,\n\t\tPUType:    puType,\n\t\t// Chains\n\t\tMangleTable:         \"mangle\",\n\t\tNatTable:            \"nat\",\n\t\tMainAppChain:        mainAppChain,\n\t\tMainNetChain:        mainNetChain,\n\t\tHostInput:           HostModeInput,\n\t\tHostOutput:          HostModeOutput,\n\t\tNfqueueOutput:       NfqueueOutput,\n\t\tNfqueueInput:        NfqueueInput,\n\t\tNFQueues:            nfqueues,\n\t\tNumNFQueues:         numQueues,\n\t\tNetworkSvcInput:     NetworkSvcInput,\n\t\tNetworkSvcOutput:    NetworkSvcOutput,\n\t\tTriremeInput:        TriremeInput,\n\t\tTriremeOutput:       TriremeOutput,\n\t\tNatProxyNetChain:    natProxyInputChain,\n\t\tNatProxyAppChain:    natProxyOutputChain,\n\t\tMangleProxyNetChain: proxyInputChain,\n\t\tMangleProxyAppChain: proxyOutputChain,\n\t\tPreRouting:          ipTableSectionPreRouting,\n\n\t\tAppChain:   appChain,\n\t\tNetChain:   netChain,\n\t\tAppSection: appSection,\n\t\tNetSection: netSection,\n\t\tIstioChain: istioChain,\n\n\t\t// common info\n\t\tDefaultConnmark:         strconv.Itoa(int(constants.DefaultConnMark)),\n\t\tDefaultDropConnmark:     strconv.Itoa(int(constants.DropConnmark)),\n\t\tDefaultExternalConnmark: strconv.Itoa(int(constants.DefaultExternalConnMark)),\n\t\tPacketMarkToSetConnmark: strconv.Itoa(int(constants.PacketMarkToSetConnmark)),\n\t\tDefaultInputMark:        strconv.Itoa(int(constants.DefaultInputMark)),\n\t\tRawSocketMark:           strconv.Itoa(afinetrawsocket.ApplicationRawSocketMark),\n\t\tDefaultHandShakeMark:    strconv.Itoa(int(constants.HandshakeConnmark)),\n\t\tCgroupMark:              mark,\n\t\tTargetTCPNetSet:         tcpTargetSetName,\n\t\tTargetUDPNetSet:         udpTargetSetName,\n\t\tExclusionsSet:           excludedNetworkSetName,\n\t\t// IPv4 vs IPv6\n\t\tDefaultIP:     
i.impl.GetDefaultIP(),\n\tneedICMPRules: i.impl.NeedICMP(),\n\n\t// UDP rules\n\tNumpackets:   numPackets,\n\tInitialCount: initialCount,\n\tUDPSignature: packet.UDPAuthMarker,\n\n\t// Linux PUs\n\tTCPPorts:   tcpPorts,\n\tUDPPorts:   udpPorts,\n\tTCPPortSet: portSetName,\n\n\t// ProxyRules\n\tDestIPSet:    destSetName,\n\tSrvIPSet:     srvSetName,\n\tProxyPort:    servicePort,\n\tDNSProxyPort: dnsProxyPort,\n\tDNSServerIP:  parseDNSServerIP(),\n\tProxyMark:    cconstants.ProxyMark,\n\n\t// PUs\n\tPacketMark: packetMark,\n\tMark:       mark,\n\tPortSet:    portSetName,\n\n\tAppNFLOGPrefix:              policy.DefaultLogPrefix(contextID, appDefaultAction),\n\tAppNFLOGDropPacketLogPrefix: policy.DefaultDropPacketLogPrefix(contextID),\n\tAppDefaultAction:            policy.DefaultAction(appDefaultAction),\n\n\tNetNFLOGPrefix:              policy.DefaultLogPrefix(contextID, netDefaultAction),\n\tNetNFLOGDropPacketLogPrefix: policy.DefaultDropPacketLogPrefix(contextID),\n\tNetDefaultAction:            policy.DefaultAction(netDefaultAction),\n\t}\n\n\tallowICMPv6(cfg)\n\tif i.bpf != nil {\n\t\tcfg.BPFPath = i.bpf.GetBPFPath()\n\t}\n\n\treturn cfg, nil\n}\n"
  },
  {
    "path": "controller/internal/supervisor/iptablesctrl/templates_test.go",
    "content": "// +build !windows\n\npackage iptablesctrl\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestChainName(t *testing.T) {\n\tConvey(\"When I test the creation of the name of the chain\", t, func() {\n\n\t\tConvey(\"With a contextID of Context and version of 1\", func() {\n\t\t\tapp, net, err := chainName(\"Context\", 1)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tConvey(\"I should get the right names\", func() {\n\t\t\t\t//app, net := i.chainName(\"Context\", 1)\n\n\t\t\t\tSo(app, ShouldContainSubstring, \"TRI-App\")\n\t\t\t\tSo(net, ShouldContainSubstring, \"TRI-Net\")\n\t\t\t})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/internal/supervisor/mocksupervisor/mocksupervisor.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/internal/supervisor/interfaces.go\n\n// Package mocksupervisor is a generated GoMock package.\npackage mocksupervisor\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\ttime \"time\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\truntime \"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\tpolicy \"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// MockSupervisor is a mock of Supervisor interface\n// nolint\ntype MockSupervisor struct {\n\tctrl     *gomock.Controller\n\trecorder *MockSupervisorMockRecorder\n}\n\n// MockSupervisorMockRecorder is the mock recorder for MockSupervisor\n// nolint\ntype MockSupervisorMockRecorder struct {\n\tmock *MockSupervisor\n}\n\n// NewMockSupervisor creates a new mock instance\n// nolint\nfunc NewMockSupervisor(ctrl *gomock.Controller) *MockSupervisor {\n\tmock := &MockSupervisor{ctrl: ctrl}\n\tmock.recorder = &MockSupervisorMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockSupervisor) EXPECT() *MockSupervisorMockRecorder {\n\treturn m.recorder\n}\n\n// Supervise mocks base method\n// nolint\nfunc (m *MockSupervisor) Supervise(contextID string, puInfo *policy.PUInfo) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Supervise\", contextID, puInfo)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Supervise indicates an expected call of Supervise\n// nolint\nfunc (mr *MockSupervisorMockRecorder) Supervise(contextID, puInfo interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Supervise\", reflect.TypeOf((*MockSupervisor)(nil).Supervise), contextID, puInfo)\n}\n\n// Unsupervise mocks base method\n// nolint\nfunc (m *MockSupervisor) Unsupervise(contextID string) error 
{\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Unsupervise\", contextID)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Unsupervise indicates an expected call of Unsupervise\n// nolint\nfunc (mr *MockSupervisorMockRecorder) Unsupervise(contextID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Unsupervise\", reflect.TypeOf((*MockSupervisor)(nil).Unsupervise), contextID)\n}\n\n// Run mocks base method\n// nolint\nfunc (m *MockSupervisor) Run(ctx context.Context) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Run\", ctx)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Run indicates an expected call of Run\n// nolint\nfunc (mr *MockSupervisorMockRecorder) Run(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Run\", reflect.TypeOf((*MockSupervisor)(nil).Run), ctx)\n}\n\n// SetTargetNetworks mocks base method\n// nolint\nfunc (m *MockSupervisor) SetTargetNetworks(cfg *runtime.Configuration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SetTargetNetworks\", cfg)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SetTargetNetworks indicates an expected call of SetTargetNetworks\n// nolint\nfunc (mr *MockSupervisorMockRecorder) SetTargetNetworks(cfg interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SetTargetNetworks\", reflect.TypeOf((*MockSupervisor)(nil).SetTargetNetworks), cfg)\n}\n\n// CleanUp mocks base method\n// nolint\nfunc (m *MockSupervisor) CleanUp() error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CleanUp\")\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// CleanUp indicates an expected call of CleanUp\n// nolint\nfunc (mr *MockSupervisorMockRecorder) CleanUp() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CleanUp\", 
reflect.TypeOf((*MockSupervisor)(nil).CleanUp))\n}\n\n// EnableIPTablesPacketTracing mocks base method\n// nolint\nfunc (m *MockSupervisor) EnableIPTablesPacketTracing(ctx context.Context, contextID string, interval time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"EnableIPTablesPacketTracing\", ctx, contextID, interval)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// EnableIPTablesPacketTracing indicates an expected call of EnableIPTablesPacketTracing\n// nolint\nfunc (mr *MockSupervisorMockRecorder) EnableIPTablesPacketTracing(ctx, contextID, interval interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"EnableIPTablesPacketTracing\", reflect.TypeOf((*MockSupervisor)(nil).EnableIPTablesPacketTracing), ctx, contextID, interval)\n}\n\n// MockImplementor is a mock of Implementor interface\n// nolint\ntype MockImplementor struct {\n\tctrl     *gomock.Controller\n\trecorder *MockImplementorMockRecorder\n}\n\n// MockImplementorMockRecorder is the mock recorder for MockImplementor\n// nolint\ntype MockImplementorMockRecorder struct {\n\tmock *MockImplementor\n}\n\n// NewMockImplementor creates a new mock instance\n// nolint\nfunc NewMockImplementor(ctrl *gomock.Controller) *MockImplementor {\n\tmock := &MockImplementor{ctrl: ctrl}\n\tmock.recorder = &MockImplementorMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockImplementor) EXPECT() *MockImplementorMockRecorder {\n\treturn m.recorder\n}\n\n// ConfigureRules mocks base method\n// nolint\nfunc (m *MockImplementor) ConfigureRules(version int, contextID string, containerInfo *policy.PUInfo) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigureRules\", version, contextID, containerInfo)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ConfigureRules indicates an expected call of ConfigureRules\n// nolint\nfunc (mr 
*MockImplementorMockRecorder) ConfigureRules(version, contextID, containerInfo interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigureRules\", reflect.TypeOf((*MockImplementor)(nil).ConfigureRules), version, contextID, containerInfo)\n}\n\n// UpdateRules mocks base method\n// nolint\nfunc (m *MockImplementor) UpdateRules(version int, contextID string, containerInfo, oldContainerInfo *policy.PUInfo) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UpdateRules\", version, contextID, containerInfo, oldContainerInfo)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UpdateRules indicates an expected call of UpdateRules\n// nolint\nfunc (mr *MockImplementorMockRecorder) UpdateRules(version, contextID, containerInfo, oldContainerInfo interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateRules\", reflect.TypeOf((*MockImplementor)(nil).UpdateRules), version, contextID, containerInfo, oldContainerInfo)\n}\n\n// DeleteRules mocks base method\n// nolint\nfunc (m *MockImplementor) DeleteRules(version int, context, tcpPorts, udpPorts, mark, uid string, containerInfo *policy.PUInfo) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DeleteRules\", version, context, tcpPorts, udpPorts, mark, uid, containerInfo)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// DeleteRules indicates an expected call of DeleteRules\n// nolint\nfunc (mr *MockImplementorMockRecorder) DeleteRules(version, context, tcpPorts, udpPorts, mark, uid, containerInfo interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DeleteRules\", reflect.TypeOf((*MockImplementor)(nil).DeleteRules), version, context, tcpPorts, udpPorts, mark, uid, containerInfo)\n}\n\n// SetTargetNetworks mocks base method\n// nolint\nfunc (m *MockImplementor) SetTargetNetworks(cfg *runtime.Configuration) error 
{\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SetTargetNetworks\", cfg)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SetTargetNetworks indicates an expected call of SetTargetNetworks\n// nolint\nfunc (mr *MockImplementorMockRecorder) SetTargetNetworks(cfg interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SetTargetNetworks\", reflect.TypeOf((*MockImplementor)(nil).SetTargetNetworks), cfg)\n}\n\n// Run mocks base method\n// nolint\nfunc (m *MockImplementor) Run(ctx context.Context) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Run\", ctx)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Run indicates an expected call of Run\n// nolint\nfunc (mr *MockImplementorMockRecorder) Run(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Run\", reflect.TypeOf((*MockImplementor)(nil).Run), ctx)\n}\n\n// CleanUp mocks base method\n// nolint\nfunc (m *MockImplementor) CleanUp() error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CleanUp\")\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// CleanUp indicates an expected call of CleanUp\n// nolint\nfunc (mr *MockImplementorMockRecorder) CleanUp() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CleanUp\", reflect.TypeOf((*MockImplementor)(nil).CleanUp))\n}\n\n// ACLProvider mocks base method\n// nolint\nfunc (m *MockImplementor) ACLProvider() []provider.IptablesProvider {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ACLProvider\")\n\tret0, _ := ret[0].([]provider.IptablesProvider)\n\treturn ret0\n}\n\n// ACLProvider indicates an expected call of ACLProvider\n// nolint\nfunc (mr *MockImplementorMockRecorder) ACLProvider() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ACLProvider\", reflect.TypeOf((*MockImplementor)(nil).ACLProvider))\n}\n\n// CreateCustomRulesChain 
mocks base method\n// nolint\nfunc (m *MockImplementor) CreateCustomRulesChain() error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CreateCustomRulesChain\")\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// CreateCustomRulesChain indicates an expected call of CreateCustomRulesChain\n// nolint\nfunc (mr *MockImplementorMockRecorder) CreateCustomRulesChain() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateCustomRulesChain\", reflect.TypeOf((*MockImplementor)(nil).CreateCustomRulesChain))\n}\n"
  },
  {
    "path": "controller/internal/supervisor/noop/supervisornoop.go",
    "content": "// Package supervisornoop implements the supervisor interface with no operations. This is currently being used in two places:\n// 1. for the supervisor proxy that actually does not need to take any action as a complement to the enforcer proxy\n// 2. for enforcer implementations that do not need a supervisor to program the networks (e.g. the remote envoy authorizer enforcer)\npackage supervisornoop\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// NoopSupervisor is a struct to hold the implementation of the Supervisor interface\ntype NoopSupervisor struct{}\n\n// Supervise is a no-op in this implementation.\nfunc (s *NoopSupervisor) Supervise(contextID string, puInfo *policy.PUInfo) error {\n\n\treturn nil\n}\n\n// Unsupervise is a no-op in this implementation.\nfunc (s *NoopSupervisor) Unsupervise(contextID string) error {\n\n\treturn nil\n}\n\n// SetTargetNetworks sets the target networks in case of an update\nfunc (s *NoopSupervisor) SetTargetNetworks(cfg *runtime.Configuration) error {\n\treturn nil\n}\n\n// CleanUp implements the cleanup interface, but it doesn't need to do anything.\nfunc (s *NoopSupervisor) CleanUp() error {\n\treturn nil\n}\n\n// Run runs the proxy supervisor and initializes the cleaners.\nfunc (s *NoopSupervisor) Run(ctx context.Context) error {\n\treturn nil\n}\n\n// EnableIPTablesPacketTracing enables iptables tracing\nfunc (s *NoopSupervisor) EnableIPTablesPacketTracing(ctx context.Context, contextID string, interval time.Duration) error {\n\treturn nil\n}\n\n// NewNoopSupervisor creates a new noop supervisor\nfunc NewNoopSupervisor() *NoopSupervisor {\n\n\treturn &NoopSupervisor{}\n}\n"
  },
  {
    "path": "controller/internal/supervisor/supervisor.go",
    "content": "package supervisor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor/iptablesctrl\"\n\tsupervisornoop \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor/noop\"\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n)\n\ntype cacheData struct {\n\tversion       int\n\tips           policy.ExtendedMap\n\tmark          string\n\ttcpPorts      string\n\tudpPorts      string\n\tusername      string\n\tcontainerInfo *policy.PUInfo\n}\n\n// Config is the structure holding all information about the supervisor\ntype Config struct {\n\t// mode is LocalServer or RemoteContainer\n\tmode constants.ModeType\n\t// versionTracker tracks the current version of the ACLs\n\tversionTracker cache.DataStore\n\t// impl is the packet filter implementation\n\timpl Implementor\n\t// collector is the stats collector implementation\n\tcollector collector.EventCollector\n\t// filterQueue is the filterqueue parameters\n\tfilterQueue fqconfig.FilterQueue\n\t// cfg is the mutable configuration\n\tcfg *runtime.Configuration\n\tsync.Mutex\n}\n\n// NewSupervisor will create a new connection supervisor that uses IPTables\n// to redirect specific packets to userspace. 
It instantiates multiple data stores\n// to maintain efficient mappings between contextID, policy and IP addresses. This\n// simplifies the lookup operations at the expense of memory.\nfunc NewSupervisor(\n\tcollector collector.EventCollector,\n\tenforcerInstance enforcer.Enforcer,\n\tmode constants.ModeType,\n\tcfg *runtime.Configuration,\n\tipv6Enabled bool,\n\tiptablesLockfile string,\n) (Supervisor, error) {\n\n\t// for certain modes we do not want to launch a supervisor at all, so we are going to launch a noop supervisor\n\t// like we do for the supervisor proxy\n\tif mode == constants.RemoteContainerEnvoyAuthorizer {\n\t\treturn supervisornoop.NewNoopSupervisor(), nil\n\t}\n\n\tif collector == nil || enforcerInstance == nil {\n\t\treturn nil, errors.New(\"Invalid parameters\")\n\t}\n\n\tfilterQueue := enforcerInstance.GetFilterQueue()\n\tif filterQueue == nil {\n\t\treturn nil, errors.New(\"enforcer filter queues cannot be nil\")\n\t}\n\n\tbpf := enforcerInstance.GetBPFObject()\n\tserviceMeshType := enforcerInstance.GetServiceMeshType()\n\timpl, err := iptablesctrl.NewInstance(filterQueue, mode, ipv6Enabled, bpf, iptablesLockfile, serviceMeshType)\n\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to initialize supervisor controllers: %s\", err)\n\t}\n\n\treturn &Config{\n\t\tmode:           mode,\n\t\timpl:           impl,\n\t\tversionTracker: cache.NewCache(\"SupVersionTracker\"),\n\t\tcollector:      collector,\n\t\tfilterQueue:    filterQueue,\n\t\tcfg:            cfg,\n\t}, nil\n}\n\n// Run starts the supervisor\nfunc (s *Config) Run(ctx context.Context) error {\n\n\ts.Lock()\n\tdefer s.Unlock()\n\n\tif err := s.impl.Run(ctx); err != nil {\n\t\treturn fmt.Errorf(\"unable to start the implementer: %s\", err)\n\t}\n\n\tif err := s.impl.SetTargetNetworks(s.cfg); err != nil {\n\t\treturn err\n\t}\n\n\tif err := s.impl.CreateCustomRulesChain(); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\n// Supervise creates a mapping between an IP address 
and the corresponding labels.\n// It invokes the various handlers that process the parameter policy.\nfunc (s *Config) Supervise(contextID string, pu *policy.PUInfo) error {\n\n\tif pu == nil || pu.Policy == nil || pu.Runtime == nil {\n\t\treturn errors.New(\"Invalid PU or policy info\")\n\t}\n\n\tif _, err := s.versionTracker.Get(contextID); err != nil {\n\t\t// ContextID is not found in Cache, New PU: Do create.\n\t\treturn s.doCreatePU(contextID, pu)\n\t}\n\n\t// Context already in the cache. Just run update\n\treturn s.doUpdatePU(contextID, pu)\n}\n\n// Unsupervise removes the mapping from cache and cleans up the iptables rules. All\n// remove operations log errors but do not return them; we want to force\n// as much cleanup as possible to avoid stale state.\nfunc (s *Config) Unsupervise(contextID string) error {\n\ts.Lock()\n\tdefer s.Unlock()\n\n\tdata, err := s.versionTracker.Get(contextID)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"cannot find policy version: %s\", err)\n\t}\n\n\tcfg := data.(*cacheData)\n\n\t// TODO (varks): Similar to configureRules and UpdateRules, DeleteRules should take\n\t// only contextID and *policy.PUInfo as function parameters.\n\tif err := s.impl.DeleteRules(cfg.version, contextID, cfg.tcpPorts, cfg.udpPorts, cfg.mark, cfg.username, cfg.containerInfo); err != nil {\n\t\tzap.L().Warn(\"Some rules were not deleted during unsupervise\", zap.Error(err))\n\t}\n\n\tif err := s.versionTracker.Remove(contextID); err != nil {\n\t\tzap.L().Warn(\"Failed to clean the rule version cache\", zap.Error(err))\n\t}\n\n\tipsetmanager.V4().RemoveExternalNets(contextID)\n\tipsetmanager.V6().RemoveExternalNets(contextID)\n\n\treturn nil\n}\n\n// CleanUp implements the cleanup interface\nfunc (s *Config) CleanUp() error {\n\ts.Lock()\n\tdefer s.Unlock()\n\n\treturn s.impl.CleanUp()\n}\n\n// SetTargetNetworks sets the target networks of the supervisor\nfunc (s *Config) SetTargetNetworks(cfg *runtime.Configuration) error 
{\n\n\ts.Lock()\n\tdefer s.Unlock()\n\n\ts.cfg = cfg.DeepCopy()\n\treturn s.impl.SetTargetNetworks(cfg)\n}\n\n// ACLProvider returns the ACL provider used by the supervisor that can be\n// shared with other entities.\nfunc (s *Config) ACLProvider() []provider.IptablesProvider {\n\treturn s.impl.ACLProvider()\n}\n\nfunc (s *Config) doCreatePU(contextID string, pu *policy.PUInfo) error {\n\n\ts.Lock()\n\n\ttcpPorts, udpPorts := common.ConvertServicesToProtocolPortList(pu.Runtime.Options().Services)\n\tc := &cacheData{\n\t\tversion:       0,\n\t\tips:           pu.Policy.IPAddresses(),\n\t\tmark:          pu.Runtime.Options().CgroupMark,\n\t\ttcpPorts:      tcpPorts,\n\t\tudpPorts:      udpPorts,\n\t\tusername:      pu.Runtime.Options().UserID,\n\t\tcontainerInfo: pu,\n\t}\n\n\t// Version the policy so that we can do hitless policy changes\n\ts.versionTracker.AddOrUpdate(contextID, c)\n\n\tvar iprules policy.IPRuleList\n\n\tiprules = append(iprules, pu.Policy.ApplicationACLs()...)\n\tiprules = append(iprules, pu.Policy.NetworkACLs()...)\n\n\tif err := ipsetmanager.V4().RegisterExternalNets(contextID, iprules); err != nil {\n\t\ts.Unlock()\n\t\tzap.L().Error(\"Error creating ipsets for external networks\", zap.Error(err))\n\t\treturn err\n\t}\n\n\tif err := ipsetmanager.V6().RegisterExternalNets(contextID, iprules); err != nil {\n\t\ts.Unlock()\n\t\tzap.L().Error(\"Error creating ipsets for external networks\", zap.Error(err))\n\t\treturn err\n\t}\n\n\t// Configure the rules\n\tif err := s.impl.ConfigureRules(c.version, contextID, pu); err != nil {\n\t\t// Revert what you can since we have an error - it will fail most likely\n\t\tzap.L().Error(\"ConfigureRules Failed with error\", zap.Error(err))\n\t\ts.Unlock()\n\t\ts.Unsupervise(contextID) // nolint\n\t\treturn err\n\t}\n\n\ts.Unlock()\n\n\treturn nil\n}\n\n// doUpdatePU creates a mapping between an IP address and the corresponding labels\n// and then invokes the various handlers that process all policies.\nfunc (s 
*Config) doUpdatePU(contextID string, pu *policy.PUInfo) error {\n\n\ts.Lock()\n\n\tdata, err := s.versionTracker.Get(contextID)\n\tif err != nil {\n\t\ts.Unlock()\n\t\treturn fmt.Errorf(\"unable to find pu %s in cache: %s\", contextID, err)\n\t}\n\n\tvar iprules policy.IPRuleList\n\n\tiprules = append(iprules, pu.Policy.ApplicationACLs()...)\n\tiprules = append(iprules, pu.Policy.NetworkACLs()...)\n\n\tif err := ipsetmanager.V4().RegisterExternalNets(contextID, iprules); err != nil {\n\t\ts.Unlock()\n\t\tzap.L().Error(\"Error creating ipsets for external networks\", zap.Error(err))\n\t\treturn err\n\t}\n\n\tif err := ipsetmanager.V6().RegisterExternalNets(contextID, iprules); err != nil {\n\t\ts.Unlock()\n\t\tzap.L().Error(\"Error creating ipsets for external networks\", zap.Error(err))\n\t\treturn err\n\t}\n\n\tc := data.(*cacheData)\n\n\tif err := s.impl.UpdateRules(c.version^1, contextID, pu, c.containerInfo); err != nil {\n\t\tzap.L().Error(\"Update rules failed with error...Reconfiguring the system\", zap.Error(err))\n\t\tcounters.IncrementCounter(counters.ErrIPTablesReset)\n\t\ts.Unlock()\n\n\t\ts.Unsupervise(contextID)    //nolint\n\t\ts.CleanUp()                 //nolint\n\t\ts.Run(context.Background()) //nolint\n\t\treturn s.Supervise(contextID, pu)\n\t}\n\n\tc.version ^= 1\n\n\t// Updated the policy in the cached processing unit.\n\tc.containerInfo.Policy = pu.Policy\n\n\tipsetmanager.V4().DestroyUnusedIPsets()\n\tipsetmanager.V6().DestroyUnusedIPsets()\n\n\ts.Unlock()\n\treturn nil\n}\n\n// EnableIPTablesPacketTracing enables ip tables packet tracing\nfunc (s *Config) EnableIPTablesPacketTracing(ctx context.Context, contextID string, interval time.Duration) error {\n\n\tdata, err := s.versionTracker.Get(contextID)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"cannot find policy version: %s\", err)\n\t}\n\n\tcfg := data.(*cacheData)\n\tiptablesRules := debugRules(cfg, s.mode)\n\tipts := s.impl.ACLProvider()\n\n\tfor _, ipt := range ipts {\n\t\tfor _, rule 
:= range iptablesRules {\n\t\t\tif err := ipt.Insert(rule[0], rule[1], 1, rule[2:]...); err != nil {\n\t\t\t\tzap.L().Error(\"Unable to install rule\", zap.Error(err))\n\t\t\t}\n\t\t}\n\n\t\t// anonymous go func to flush debug iptables after interval\n\t\tgo func(ipt provider.IptablesProvider) {\n\t\t\tfor {\n\t\t\t\tselect {\n\t\t\t\tcase <-ctx.Done():\n\t\t\t\t\treturn\n\t\t\t\tcase <-time.After(interval):\n\t\t\t\t\tfor _, rule := range iptablesRules {\n\t\t\t\t\t\tif err := ipt.Delete(rule[0], rule[1], rule[2:]...); err != nil {\n\t\t\t\t\t\t\tzap.L().Debug(\"Unable to delete trace rules\", zap.Error(err))\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}(ipt)\n\t}\n\treturn nil\n}\n\nfunc debugRules(data *cacheData, mode constants.ModeType) [][]string {\n\tiptables := [][]string{}\n\tif mode == constants.RemoteContainer {\n\t\tiptables = append(iptables, [][]string{\n\t\t\t{\n\t\t\t\t\"raw\",\n\t\t\t\t\"PREROUTING\",\n\t\t\t\t\"-j\", \"TRACE\",\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"raw\",\n\t\t\t\t\"OUTPUT\",\n\t\t\t\t\"-j\", \"TRACE\",\n\t\t\t},\n\t\t}...)\n\t} else {\n\t\tif data.tcpPorts != \"0\" {\n\t\t\tiptables = append(iptables,\n\t\t\t\t[][]string{\n\t\t\t\t\t{\n\t\t\t\t\t\t\"raw\",\n\t\t\t\t\t\t\"PREROUTING\",\n\t\t\t\t\t\t\"-p\", \"tcp\",\n\t\t\t\t\t\t\"--match\", \"multiport\",\n\t\t\t\t\t\t\"--destination-ports\", data.tcpPorts,\n\t\t\t\t\t\t\"-j\", \"TRACE\",\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"raw\",\n\t\t\t\t\t\t\"OUTPUT\",\n\t\t\t\t\t\t\"--match\", \"multiport\",\n\t\t\t\t\t\t\"--source-ports\", data.tcpPorts,\n\t\t\t\t\t\t\"-j\", \"TRACE\",\n\t\t\t\t\t},\n\t\t\t\t}...,\n\t\t\t)\n\n\t\t} else {\n\t\t\tiptables = append(iptables, [][]string{\n\t\t\t\t{\n\t\t\t\t\t\"raw\",\n\t\t\t\t\t\"PREROUTING\",\n\t\t\t\t\t\"-p\", \"tcp\",\n\t\t\t\t\t\"-j\", \"TRACE\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"raw\",\n\t\t\t\t\t\"OUTPUT\",\n\t\t\t\t\t\"-m\", \"cgroup\",\n\t\t\t\t\t\"--cgroup\", data.mark,\n\t\t\t\t\t\"-j\", 
\"TRACE\",\n\t\t\t\t},\n\t\t\t}...)\n\t\t}\n\t\tif data.udpPorts != \"0\" {\n\t\t\tiptables = append(iptables, [][]string{\n\t\t\t\t{\n\t\t\t\t\t\"raw\",\n\t\t\t\t\t\"PREROUTING\",\n\t\t\t\t\t\"-p\", \"udp\",\n\t\t\t\t\t\"--match\", \"multiport\",\n\t\t\t\t\t\"--destination-ports\", data.udpPorts,\n\t\t\t\t\t\"-j\", \"TRACE\",\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"raw\",\n\t\t\t\t\t\"OUTPUT\",\n\t\t\t\t\t\"--match\", \"multiport\",\n\t\t\t\t\t\"--source-ports\", data.udpPorts,\n\t\t\t\t\t\"-j\", \"TRACE\",\n\t\t\t\t},\n\t\t\t}...)\n\t\t} else {\n\t\t\tiptables = append(iptables,\n\t\t\t\t[][]string{\n\t\t\t\t\t{\n\t\t\t\t\t\t\"raw\",\n\t\t\t\t\t\t\"PREROUTING\",\n\t\t\t\t\t\t\"-p\", \"tcp\",\n\t\t\t\t\t\t\"-j\", \"TRACE\",\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"raw\",\n\t\t\t\t\t\t\"OUTPUT\",\n\t\t\t\t\t\t\"-m\", \"cgroup\",\n\t\t\t\t\t\t\"--cgroup\", data.mark,\n\t\t\t\t\t\t\"-j\", \"TRACE\",\n\t\t\t\t\t},\n\t\t\t\t}...)\n\t\t}\n\t}\n\treturn iptables\n}\n"
  },
  {
    "path": "controller/internal/supervisor/supervisor_test.go",
    "content": "package supervisor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/blang/semver\"\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer\"\n\tenforcerconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/afinetrawsocket\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/tokenaccessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor/mocksupervisor\"\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ipsetmanager\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packetprocessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/testhelper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n)\n\nfunc newSupervisor(\n\tcollector collector.EventCollector,\n\tenforcerInstance enforcer.Enforcer,\n\tmode constants.ModeType,\n\tcfg *runtime.Configuration,\n) (*Config, error) {\n\n\ts, err := NewSupervisor(collector, enforcerInstance, mode, cfg, false, \"\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn s.(*Config), nil\n}\n\nfunc createPUInfo() *policy.PUInfo {\n\n\trules := policy.IPRuleList{\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"192.30.253.0/24\"},\n\t\t\tPorts:    
 []string{\"80\"},\n\t\t\tProtocols: []string{\"TCP\"},\n\t\t\tPolicy:    &policy.FlowPolicy{Action: policy.Reject},\n\t\t},\n\n\t\tpolicy.IPRule{\n\t\t\tAddresses: []string{\"192.30.253.0/24\"},\n\t\t\tPorts:     []string{\"443\"},\n\t\t\tProtocols: []string{\"TCP\"},\n\t\t\tPolicy:    &policy.FlowPolicy{Action: policy.Accept},\n\t\t},\n\t}\n\n\tips := policy.ExtendedMap{\n\t\tpolicy.DefaultNamespace: \"172.17.0.1\",\n\t}\n\n\truntime := policy.NewPURuntimeWithDefaults()\n\truntime.SetIPAddresses(ips)\n\tplc := policy.NewPUPolicy(\n\t\t\"context\",\n\t\t\"/ns1\",\n\t\tpolicy.Police,\n\t\trules,\n\t\trules,\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\tnil,\n\t\tips,\n\t\t0,\n\t\t0,\n\t\tnil,\n\t\tnil,\n\t\t[]string{},\n\t\tpolicy.EnforcerMapping,\n\t\tpolicy.Reject|policy.Log,\n\t\tpolicy.Reject|policy.Log,\n\t)\n\n\treturn policy.PUInfoFromPolicyAndRuntime(\"context\", plc, runtime)\n\n}\n\nfunc newFilterQueueWithDefaults() fqconfig.FilterQueue {\n\treturn fqconfig.NewFilterQueue(0, nil)\n}\n\n// newWithDefaults creates a new datapath with defaults for most options\nfunc newWithDefaults(\n\tserverID string, // nolint: unparam\n\tcollector collector.EventCollector,\n\tservice packetprocessor.PacketProcessor, // nolint: unparam\n\tsecrets secrets.Secrets,\n\tmode constants.ModeType,\n\tprocMountPoint string, // nolint: unparam\n\ttargetNetworks []string,\n) enforcer.Enforcer {\n\n\tdefaultMutualAuthorization := false\n\tdefaultValidity := constants.SynTokenRefreshTime\n\tdefaultExternalIPCacheTimeout, err := time.ParseDuration(enforcerconstants.DefaultExternalIPTimeout)\n\tif err != nil {\n\t\tdefaultExternalIPCacheTimeout = time.Second\n\t}\n\tdefaultPacketLogs := false\n\n\ttokenAccessor, _ := tokenaccessor.New(serverID, defaultValidity, secrets)\n\tpuFromContextID := cache.NewCache(\"puFromContextID\")\n\tfq := newFilterQueueWithDefaults()\n\te := 
nfqdatapath.New(\n\t\tdefaultMutualAuthorization,\n\t\tfq,\n\t\tcollector,\n\t\tserverID,\n\t\tdefaultValidity,\n\t\tsecrets,\n\t\tmode,\n\t\tprocMountPoint,\n\t\tdefaultExternalIPCacheTimeout,\n\t\tdefaultPacketLogs,\n\t\ttokenAccessor,\n\t\tpuFromContextID,\n\t\t&runtime.Configuration{TCPTargetNetworks: targetNetworks},\n\t\tfalse,\n\t\tsemver.Version{},\n\t\tpolicy.None,\n\t)\n\n\treturn e\n}\n\nfunc TestNewSupervisor(t *testing.T) {\n\tConvey(\"When I try to instantiate a new supervisor \", t, func() {\n\n\t\tc := &collector.DefaultCollector{}\n\t\t_, secrets, _ := testhelper.NewTestCompactPKISecrets()\n\n\t\tprevRawSocket := nfqdatapath.GetUDPRawSocket\n\t\tdefer func() {\n\t\t\tnfqdatapath.GetUDPRawSocket = prevRawSocket\n\t\t}()\n\t\tnfqdatapath.GetUDPRawSocket = func(mark int, device string) (afinetrawsocket.SocketWriter, error) {\n\t\t\treturn nil, nil\n\t\t}\n\n\t\te := newWithDefaults(\"serverID\", c, nil, secrets, constants.LocalServer, \"/proc\", []string{\"0.0.0.0/0\"})\n\t\tmode := constants.LocalServer\n\n\t\tConvey(\"When I provide correct parameters\", func() {\n\t\t\ts, err := newSupervisor(c, e, mode, &runtime.Configuration{})\n\t\t\tConvey(\"I should not get an error \", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(s, ShouldNotBeNil)\n\t\t\t\tSo(s.collector, ShouldEqual, c)\n\t\t\t})\n\t\t})\n\t\tConvey(\"When I provide a nil collector\", func() {\n\t\t\ts, err := newSupervisor(nil, e, mode, &runtime.Configuration{})\n\t\t\tConvey(\"I should get an error \", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\tSo(s, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I provide a nil enforcer\", func() {\n\t\t\ts, err := newSupervisor(c, nil, mode, &runtime.Configuration{})\n\t\t\tConvey(\"I should get an error \", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\tSo(s, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestSupervise(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a valid 
supervisor\", t, func() {\n\t\tc := &collector.DefaultCollector{}\n\t\t_, scrts, _ := testhelper.NewTestCompactPKISecrets()\n\n\t\tprevRawSocket := nfqdatapath.GetUDPRawSocket\n\t\tdefer func() {\n\t\t\tnfqdatapath.GetUDPRawSocket = prevRawSocket\n\t\t}()\n\t\tnfqdatapath.GetUDPRawSocket = func(mark int, device string) (afinetrawsocket.SocketWriter, error) {\n\t\t\treturn nil, nil\n\t\t}\n\t\te := newWithDefaults(\"serverID\", c, nil, scrts, constants.RemoteContainer, \"/proc\", []string{\"0.0.0.0/0\"})\n\n\t\tips := ipsetmanager.NewTestIpsetProvider()\n\t\tipsetmanager.SetIpsetTestInstance(ips)\n\n\t\ts, _ := newSupervisor(c, e, constants.RemoteContainer, &runtime.Configuration{})\n\t\tSo(s, ShouldNotBeNil)\n\n\t\timpl := mocksupervisor.NewMockImplementor(ctrl)\n\t\ts.impl = impl\n\n\t\tConvey(\"When I supervise a new PU with invalid policy\", func() {\n\t\t\terr := s.Supervise(\"contextID\", nil)\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tpuInfo := createPUInfo()\n\n\t\tConvey(\"When I supervise a new PU with valid policy\", func() {\n\t\t\timpl.EXPECT().ConfigureRules(0, \"contextID\", puInfo).Return(nil)\n\t\t\terr := s.Supervise(\"contextID\", puInfo)\n\t\t\tConvey(\"I should not get an error\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I supervise a new PU with valid policy, but there is an error\", func() {\n\t\t\timpl.EXPECT().ConfigureRules(0, \"errorPU\", puInfo).Return(errors.New(\"error\"))\n\t\t\timpl.EXPECT().DeleteRules(0, \"errorPU\", gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\t\t\terr := s.Supervise(\"errorPU\", puInfo)\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I send supervise command for a second time, it should do an update\", func() {\n\t\t\timpl.EXPECT().ConfigureRules(0, \"contextID\", 
puInfo).Return(nil)\n\t\t\timpl.EXPECT().UpdateRules(1, \"contextID\", gomock.Any(), gomock.Any()).Return(nil)\n\t\t\tnoerr := s.Supervise(\"contextID\", puInfo)\n\t\t\tSo(noerr, ShouldBeNil)\n\t\t\terr := s.Supervise(\"contextID\", puInfo)\n\t\t\tConvey(\"I should not get an error\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I send supervise command for a second time, and the update fails\", func() {\n\t\t\timpl.EXPECT().ConfigureRules(0, \"contextID\", puInfo).Times(2).Return(nil)\n\t\t\timpl.EXPECT().UpdateRules(1, \"contextID\", gomock.Any(), gomock.Any()).Return(errors.New(\"error\"))\n\t\t\timpl.EXPECT().DeleteRules(0, \"contextID\", gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\t\t\timpl.EXPECT().CleanUp()\n\t\t\timpl.EXPECT().Run(gomock.Any()).Return(nil)\n\t\t\timpl.EXPECT().SetTargetNetworks(gomock.Any()).Return(nil)\n\t\t\timpl.EXPECT().CreateCustomRulesChain().Return(nil)\n\t\t\tserr := s.Supervise(\"contextID\", puInfo)\n\t\t\tSo(serr, ShouldBeNil)\n\t\t\terr := s.Supervise(\"contextID\", puInfo)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t})\n}\n\nfunc TestUnsupervise(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a properly configured supervisor\", t, func() {\n\t\tc := &collector.DefaultCollector{}\n\t\t_, scrts, _ := testhelper.NewTestCompactPKISecrets()\n\n\t\tprevRawSocket := nfqdatapath.GetUDPRawSocket\n\t\tdefer func() {\n\t\t\tnfqdatapath.GetUDPRawSocket = prevRawSocket\n\t\t}()\n\t\tnfqdatapath.GetUDPRawSocket = func(mark int, device string) (afinetrawsocket.SocketWriter, error) {\n\t\t\treturn nil, nil\n\t\t}\n\n\t\te := newWithDefaults(\"serverID\", c, nil, scrts, constants.RemoteContainer, \"/proc\", []string{\"0.0.0.0/0\"})\n\n\t\tips := ipsetmanager.NewTestIpsetProvider()\n\t\tipsetmanager.SetIpsetTestInstance(ips)\n\n\t\ts, _ := newSupervisor(c, e, constants.RemoteContainer, &runtime.Configuration{TCPTargetNetworks: 
[]string{\"172.17.0.0/16\"}})\n\t\tSo(s, ShouldNotBeNil)\n\n\t\timpl := mocksupervisor.NewMockImplementor(ctrl)\n\t\ts.impl = impl\n\n\t\tConvey(\"When I try to unsupervise a PU that was not seen before\", func() {\n\t\t\terr := s.Unsupervise(\"badContext\")\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tpuInfo := createPUInfo()\n\n\t\tConvey(\"When I try to unsupervise a valid PU\", func() {\n\t\t\timpl.EXPECT().ConfigureRules(0, \"contextID\", puInfo).Return(nil)\n\t\t\timpl.EXPECT().DeleteRules(0, \"contextID\", gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\t\t\tserr := s.Supervise(\"contextID\", puInfo)\n\t\t\tSo(serr, ShouldBeNil)\n\t\t\terr := s.Unsupervise(\"contextID\")\n\t\t\tConvey(\"I should get no errors\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestStart(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a properly configured supervisor\", t, func() {\n\t\tc := &collector.DefaultCollector{}\n\t\t_, scrts, _ := testhelper.NewTestCompactPKISecrets()\n\n\t\tprevRawSocket := nfqdatapath.GetUDPRawSocket\n\t\tdefer func() {\n\t\t\tnfqdatapath.GetUDPRawSocket = prevRawSocket\n\t\t}()\n\t\tnfqdatapath.GetUDPRawSocket = func(mark int, device string) (afinetrawsocket.SocketWriter, error) {\n\t\t\treturn nil, nil\n\t\t}\n\n\t\te := newWithDefaults(\"serverID\", c, nil, scrts, constants.RemoteContainer, \"/proc\", []string{\"0.0.0.0/0\"})\n\n\t\ts, _ := newSupervisor(c, e,\n\t\t\tconstants.RemoteContainer,\n\t\t\t&runtime.Configuration{TCPTargetNetworks: []string{\"172.17.0.0/16\"}})\n\t\tSo(s, ShouldNotBeNil)\n\n\t\timpl := mocksupervisor.NewMockImplementor(ctrl)\n\t\ts.impl = impl\n\n\t\tConvey(\"When I try to start it and the implementor works\", func() {\n\t\t\timpl.EXPECT().Run(gomock.Any()).Return(nil)\n\t\t\timpl.EXPECT().SetTargetNetworks(&runtime.Configuration{TCPTargetNetworks: 
[]string{\"172.17.0.0/16\"}}).Return(nil)\n\t\t\timpl.EXPECT().CreateCustomRulesChain().Return(nil)\n\t\t\terr := s.Run(context.Background())\n\t\t\tConvey(\"I should get no errors\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I try to start it and the implementor returns an error\", func() {\n\t\t\timpl.EXPECT().Run(gomock.Any()).Return(errors.New(\"error\"))\n\t\t\terr := s.Run(context.Background())\n\t\t\tConvey(\"I should get an error \", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestStop(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a properly configured supervisor\", t, func() {\n\t\tc := &collector.DefaultCollector{}\n\t\t_, scrts, _ := testhelper.NewTestCompactPKISecrets()\n\n\t\tprevRawSocket := nfqdatapath.GetUDPRawSocket\n\t\tdefer func() {\n\t\t\tnfqdatapath.GetUDPRawSocket = prevRawSocket\n\t\t}()\n\t\tnfqdatapath.GetUDPRawSocket = func(mark int, device string) (afinetrawsocket.SocketWriter, error) {\n\t\t\treturn nil, nil\n\t\t}\n\n\t\te := newWithDefaults(\"serverID\", c, nil, scrts, constants.RemoteContainer, \"/proc\", []string{\"0.0.0.0/0\"})\n\n\t\ts, _ := newSupervisor(c, e, constants.RemoteContainer, &runtime.Configuration{TCPTargetNetworks: []string{\"172.17.0.0/16\"}})\n\t\tSo(s, ShouldNotBeNil)\n\n\t\timpl := mocksupervisor.NewMockImplementor(ctrl)\n\t\ts.impl = impl\n\n\t\tConvey(\"When I try to start it and the implementor works\", func() {\n\t\t\timpl.EXPECT().Run(gomock.Any()).Return(nil)\n\t\t\timpl.EXPECT().SetTargetNetworks(&runtime.Configuration{TCPTargetNetworks: []string{\"172.17.0.0/16\"}}).Return(nil)\n\t\t\timpl.EXPECT().CreateCustomRulesChain().Return(nil)\n\t\t\terr := s.Run(context.Background())\n\t\t\tConvey(\"I should get no errors\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestEnableIPTablesPacketTracing(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer 
ctrl.Finish()\n\n\tConvey(\"Given a properly configured supervisor\", t, func() {\n\t\tc := &collector.DefaultCollector{}\n\t\t_, scrts, _ := testhelper.NewTestCompactPKISecrets()\n\n\t\tprevRawSocket := nfqdatapath.GetUDPRawSocket\n\t\tdefer func() {\n\t\t\tnfqdatapath.GetUDPRawSocket = prevRawSocket\n\t\t}()\n\t\tnfqdatapath.GetUDPRawSocket = func(mark int, device string) (afinetrawsocket.SocketWriter, error) {\n\t\t\treturn nil, nil\n\t\t}\n\n\t\te := newWithDefaults(\"serverID\", c, nil, scrts, constants.RemoteContainer, \"/proc\", []string{\"0.0.0.0/0\"})\n\n\t\ts, _ := newSupervisor(c, e, constants.RemoteContainer, &runtime.Configuration{TCPTargetNetworks: []string{\"172.17.0.0/16\"}})\n\t\tSo(s, ShouldNotBeNil)\n\n\t\timpl := mocksupervisor.NewMockImplementor(ctrl)\n\t\ts.impl = impl\n\n\t\tConvey(\"When I try to start it and the implementor works\", func() {\n\t\t\timpl.EXPECT().Run(gomock.Any()).Return(nil)\n\t\t\timpl.EXPECT().SetTargetNetworks(&runtime.Configuration{TCPTargetNetworks: []string{\"172.17.0.0/16\"}}).Return(nil)\n\t\t\timpl.EXPECT().CreateCustomRulesChain().Return(nil)\n\t\t\terr := s.Run(context.Background())\n\t\t\tConvey(\"I should get no errors\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\n\t\t})\n\t\tConvey(\"I set up EnableIPTablesPacketTracing on an invalid contextID\", func() {\n\t\t\terr := s.EnableIPTablesPacketTracing(context.Background(), \"serverID\", 10*time.Second)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t\tConvey(\"I set up EnableIPTablesPacketTracing on a valid contextID\", func() {\n\t\t\tpuInfo := createPUInfo()\n\t\t\timpl.EXPECT().ConfigureRules(0, \"contextID\", puInfo).Return(nil)\n\n\t\t\tserr := s.Supervise(\"contextID\", puInfo)\n\t\t\tSo(serr, ShouldBeNil)\n\t\t\tiptProvider := provider.NewTestIptablesProvider()\n\t\t\tiptProviders := []provider.IptablesProvider{iptProvider, iptProvider}\n\t\t\timpl.EXPECT().ACLProvider().Times(1).Return(iptProviders)\n\t\t\terr := s.EnableIPTablesPacketTracing(context.Background(), 
\"contextID\", 10*time.Second)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n}\n\nfunc TestDebugRules(t *testing.T) {\n\tConvey(\"Given I get debug rules\", t, func() {\n\t\tConvey(\"Debug Rules for container\", func() {\n\t\t\trules := debugRules(nil, constants.RemoteContainer)\n\t\t\tSo(len(rules), ShouldEqual, 2)\n\t\t\tfor _, rule := range rules {\n\t\t\t\tfound := strings.Contains(strings.Join(rule, \",\"), \"multiport\")\n\t\t\t\tSo(found, ShouldBeFalse)\n\n\t\t\t}\n\t\t})\n\t\tConvey(\"Debug Rules for Linux process with valid tcp port\", func() {\n\t\t\tdata := &cacheData{\n\t\t\t\ttcpPorts: \"80\",\n\t\t\t}\n\t\t\trules := debugRules(data, constants.LocalServer)\n\t\t\tSo(len(rules), ShouldEqual, 4)\n\t\t\tfor _, rule := range rules {\n\t\t\t\tfound := strings.Contains(strings.Join(rule, \",\"), \"udp\") && strings.Contains(strings.Join(rule, \",\"), \"cgroup\")\n\t\t\t\tSo(found, ShouldBeFalse)\n\n\t\t\t}\n\t\t})\n\t\tConvey(\"Debug Rules for Linux process with valid udp port\", func() {\n\t\t\tdata := &cacheData{\n\t\t\t\tudpPorts: \"80\",\n\t\t\t}\n\t\t\trules := debugRules(data, constants.LocalServer)\n\t\t\tSo(len(rules), ShouldEqual, 4)\n\t\t\tfor _, rule := range rules {\n\t\t\t\tfound := strings.Contains(strings.Join(rule, \",\"), \"tcp\") && strings.Contains(strings.Join(rule, \",\"), \"cgroup\")\n\t\t\t\tSo(found, ShouldBeFalse)\n\n\t\t\t}\n\t\t})\n\t\tConvey(\"Debug Rules for Linux process with valid mark\", func() {\n\t\t\tdata := &cacheData{\n\t\t\t\tudpPorts: \"80\",\n\t\t\t}\n\t\t\trules := debugRules(data, constants.LocalServer)\n\t\t\tSo(len(rules), ShouldEqual, 4)\n\t\t\tfor _, rule := range rules {\n\t\t\t\tfound := strings.Contains(strings.Join(rule, \",\"), \"cgroup\") && strings.Contains(strings.Join(rule, \",\"), \"multiport\")\n\t\t\t\tSo(found, ShouldBeFalse)\n\n\t\t\t}\n\t\t})\n\n\t})\n}\n"
  },
  {
    "path": "controller/internal/windows/rulespec_windows.go",
    "content": "// +build windows\n\npackage windows\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"math\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/DavidGamba/go-getoptions\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n\t\"golang.org/x/net/ipv6\"\n)\n\n// WindowsRuleRange represents a range of values for a rule\ntype WindowsRuleRange struct { // nolint:golint // ignore type name stutters\n\tStart int\n\tEnd   int\n}\n\n// WindowsRuleIcmpMatch represents parameters for an ICMP match\ntype WindowsRuleIcmpMatch struct { // nolint:golint // ignore type name stutters\n\tIcmpType      int\n\tIcmpCodeRange *WindowsRuleRange\n\tNomatch       bool\n}\n\n// WindowsRuleMatchSet represents result of parsed --match-set\ntype WindowsRuleMatchSet struct { // nolint:golint // ignore type name stutters\n\tMatchSetName    string\n\tMatchSetNegate  bool\n\tMatchSetDstIP   bool\n\tMatchSetDstPort bool\n\tMatchSetSrcIP   bool\n\tMatchSetSrcPort bool\n}\n\n// WindowsRuleSpec represents result of parsed iptables rule\ntype WindowsRuleSpec struct { // nolint:golint // ignore type name stutters\n\tProtocol                   int\n\tAction                     int // FilterAction (allow, drop, nfq, proxy)\n\tProxyPort                  int\n\tMark                       int\n\tLog                        bool\n\tLogPrefix                  string\n\tGroupID                    int\n\tProcessID                  int\n\tProcessIncludeChildren     bool\n\tProcessIncludeChildrenOnly bool\n\tMatchSrcPort               []*WindowsRuleRange\n\tMatchDstPort               []*WindowsRuleRange\n\tMatchBytesNoMatch          bool\n\tMatchBytes                 []byte\n\tMatchBytesOffset           int\n\tMatchSet                   []*WindowsRuleMatchSet\n\tIcmpMatch                  []*WindowsRuleIcmpMatch\n\tTCPFlags                   uint8\n\tTCPFlagsMask               uint8\n\tTCPFlagsSpecified          
bool\n\tTCPOption                  uint8\n\tTCPOptionSpecified         bool\n\tGotoFilterName             string\n\tFlowMarkNoMatch            bool\n\tFlowMark                   int\n}\n\n// MakeRuleSpecText converts a WindowsRuleSpec back into a string for an iptables rule\nfunc MakeRuleSpecText(winRuleSpec *WindowsRuleSpec, validate bool) (string, error) {\n\n\trulespec := \"\"\n\n\tif winRuleSpec.Protocol > 0 && winRuleSpec.Protocol < math.MaxUint8 {\n\t\trulespec += fmt.Sprintf(\"-p %d \", winRuleSpec.Protocol)\n\t}\n\n\tif len(winRuleSpec.MatchBytes) > 0 {\n\t\tif winRuleSpec.MatchBytesNoMatch {\n\t\t\trulespec += fmt.Sprintf(\"-m string --string ! %s --offset %d \", string(winRuleSpec.MatchBytes), winRuleSpec.MatchBytesOffset)\n\t\t} else {\n\t\t\trulespec += fmt.Sprintf(\"-m string --string %s --offset %d \", string(winRuleSpec.MatchBytes), winRuleSpec.MatchBytesOffset)\n\t\t}\n\t}\n\n\tif len(winRuleSpec.MatchSrcPort) > 0 {\n\t\trulespec += \"--sports \"\n\t\tfor i, pr := range winRuleSpec.MatchSrcPort {\n\t\t\trulespec += strconv.Itoa(pr.Start)\n\t\t\tif pr.Start != pr.End {\n\t\t\t\trulespec += fmt.Sprintf(\":%d\", pr.End)\n\t\t\t}\n\t\t\tif i+1 < len(winRuleSpec.MatchSrcPort) {\n\t\t\t\trulespec += \",\"\n\t\t\t}\n\t\t}\n\t\trulespec += \" \"\n\t}\n\n\tif len(winRuleSpec.MatchDstPort) > 0 {\n\t\trulespec += \"--dports \"\n\t\tfor i, pr := range winRuleSpec.MatchDstPort {\n\t\t\trulespec += strconv.Itoa(pr.Start)\n\t\t\tif pr.Start != pr.End {\n\t\t\t\trulespec += fmt.Sprintf(\":%d\", pr.End)\n\t\t\t}\n\t\t\tif i+1 < len(winRuleSpec.MatchDstPort) {\n\t\t\t\trulespec += \",\"\n\t\t\t}\n\t\t}\n\t\trulespec += \" \"\n\t}\n\n\tif len(winRuleSpec.MatchSet) > 0 {\n\t\tfor _, ms := range winRuleSpec.MatchSet {\n\t\t\trulespec += \"-m set \"\n\t\t\tif ms.MatchSetNegate {\n\t\t\t\trulespec += \"! 
\"\n\t\t\t}\n\t\t\trulespec += fmt.Sprintf(\"--match-set %s \", ms.MatchSetName)\n\t\t\tif ms.MatchSetSrcIP {\n\t\t\t\trulespec += \"srcIP\"\n\t\t\t\tif ms.MatchSetSrcPort || ms.MatchSetDstPort {\n\t\t\t\t\trulespec += \",\"\n\t\t\t\t}\n\t\t\t} else if ms.MatchSetDstIP {\n\t\t\t\trulespec += \"dstIP\"\n\t\t\t\tif ms.MatchSetSrcPort || ms.MatchSetDstPort {\n\t\t\t\t\trulespec += \",\"\n\t\t\t\t}\n\t\t\t}\n\t\t\tif ms.MatchSetSrcPort {\n\t\t\t\trulespec += \"srcPort\"\n\t\t\t} else if ms.MatchSetDstPort {\n\t\t\t\trulespec += \"dstPort\"\n\t\t\t}\n\t\t\trulespec += \" \"\n\t\t}\n\t}\n\n\tif len(winRuleSpec.IcmpMatch) > 0 {\n\t\tfor _, im := range winRuleSpec.IcmpMatch {\n\t\t\tif im.Nomatch {\n\t\t\t\trulespec += \"--icmp-type nomatch\"\n\t\t\t} else {\n\t\t\t\trulespec += fmt.Sprintf(\"--icmp-type %d\", im.IcmpType)\n\t\t\t\tif im.IcmpCodeRange != nil {\n\t\t\t\t\trulespec += fmt.Sprintf(\"/%d\", im.IcmpCodeRange.Start)\n\t\t\t\t\tif im.IcmpCodeRange.Start != im.IcmpCodeRange.End {\n\t\t\t\t\t\trulespec += fmt.Sprintf(\":%d\", im.IcmpCodeRange.End)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\trulespec += \" \"\n\t\t}\n\t}\n\n\tif winRuleSpec.TCPFlagsSpecified {\n\t\trulespec += fmt.Sprintf(\"--tcp-flags %d,%d \", winRuleSpec.TCPFlagsMask, winRuleSpec.TCPFlags)\n\t}\n\n\tif winRuleSpec.TCPOptionSpecified {\n\t\trulespec += fmt.Sprintf(\"--tcp-option %d \", winRuleSpec.TCPOption)\n\t}\n\n\tif winRuleSpec.FlowMark != 0 {\n\t\tif winRuleSpec.FlowMarkNoMatch {\n\t\t\trulespec += fmt.Sprintf(\"-m connmark --mark ! 
%d \", winRuleSpec.FlowMark)\n\t\t} else {\n\t\t\trulespec += fmt.Sprintf(\"-m connmark --mark %d \", winRuleSpec.FlowMark)\n\t\t}\n\t}\n\n\tswitch winRuleSpec.Action {\n\tcase frontman.FilterActionAllow:\n\t\trulespec += \"-j ACCEPT \"\n\tcase frontman.FilterActionAllowOnce:\n\t\trulespec += \"-j ACCEPT_ONCE \"\n\tcase frontman.FilterActionBlock:\n\t\trulespec += \"-j DROP \"\n\tcase frontman.FilterActionProxy:\n\t\trulespec += fmt.Sprintf(\"-j REDIRECT --to-ports %d \", winRuleSpec.ProxyPort)\n\tcase frontman.FilterActionNfq:\n\t\trulespec += fmt.Sprintf(\"-j NFQUEUE -j MARK %d \", winRuleSpec.Mark)\n\tcase frontman.FilterActionForceNfq:\n\t\trulespec += fmt.Sprintf(\"-j NFQUEUE_FORCE -j MARK %d \", winRuleSpec.Mark)\n\tcase frontman.FilterActionGotoFilter:\n\t\trulespec += fmt.Sprintf(\"-j %s \", winRuleSpec.GotoFilterName)\n\tcase frontman.FilterActionSetMark:\n\t\trulespec += fmt.Sprintf(\"-j CONNMARK --set-mark %d \", winRuleSpec.Mark)\n\t}\n\tif winRuleSpec.Log {\n\t\trulespec += fmt.Sprintf(\"-j NFLOG --nflog-group %d --nflog-prefix \\\"%s\\\" \", winRuleSpec.GroupID, winRuleSpec.LogPrefix)\n\t}\n\n\tif winRuleSpec.ProcessID > 0 {\n\t\trulespec += fmt.Sprintf(\"-m owner --pid-owner %d \", winRuleSpec.ProcessID)\n\t\tif winRuleSpec.ProcessIncludeChildrenOnly {\n\t\t\trulespec += \"--pid-childrenonly \"\n\t\t} else if winRuleSpec.ProcessIncludeChildren {\n\t\t\trulespec += \"--pid-children \"\n\t\t}\n\t}\n\n\trulespec = strings.TrimSpace(rulespec)\n\tif validate {\n\t\tif _, err := ParseRuleSpec(strings.Split(rulespec, \" \")...); err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t}\n\treturn rulespec, nil\n}\n\n// ParsePortString parses a comma-separated list of ports or port ranges\nfunc ParsePortString(portString string) ([]*WindowsRuleRange, error) {\n\tvar result []*WindowsRuleRange\n\tif portString != \"\" {\n\t\tportList := strings.Split(portString, \",\")\n\t\tfor _, portListItem := range portList {\n\t\t\tportEnd := 0\n\t\t\tportStart, err := 
strconv.Atoi(portListItem)\n\t\t\tif err != nil {\n\t\t\t\tportRange := strings.SplitN(portListItem, \":\", 2)\n\t\t\t\tif len(portRange) != 2 {\n\t\t\t\t\treturn nil, errors.New(\"invalid port string\")\n\t\t\t\t}\n\t\t\t\tportStart, err = strconv.Atoi(portRange[0])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, errors.New(\"invalid port string\")\n\t\t\t\t}\n\t\t\t\tportEnd, err = strconv.Atoi(portRange[1])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, errors.New(\"invalid port string\")\n\t\t\t\t}\n\t\t\t}\n\t\t\tif portEnd == 0 {\n\t\t\t\tportEnd = portStart\n\t\t\t}\n\t\t\tresult = append(result, &WindowsRuleRange{portStart, portEnd})\n\t\t}\n\t}\n\treturn result, nil\n}\n\n// ReduceIcmpProtoString will look at policyRestrictions and return a rulespec substring for matching.\n// represents the logic: \"icmpProtoTypeCode and (policyRestrictions[0] or policyRestrictions[1] or...)\"\n// can return empty list if there is a proto match with no restriction.\n// will return error if there is no intersection.\nfunc ReduceIcmpProtoString(icmpProtoTypeCode string, policyRestrictions []string) ([]string, error) {\n\n\tif len(policyRestrictions) == 0 {\n\t\treturn TransformIcmpProtoString(icmpProtoTypeCode), nil\n\t}\n\n\tsplitIt := func(p string) (string, []*WindowsRuleIcmpMatch, error) {\n\t\tvar c []*WindowsRuleIcmpMatch\n\t\tvar err error\n\t\tparts := strings.SplitN(p, \"/\", 2)\n\t\tswitch len(parts) {\n\t\tcase 2:\n\t\t\tc, err = ParseIcmpTypeCode(parts[1])\n\t\t\tif err != nil {\n\t\t\t\treturn \"\", nil, err\n\t\t\t}\n\t\t\tfallthrough\n\t\tcase 1:\n\t\t\treturn parts[0], c, nil\n\t\tdefault:\n\t\t\treturn \"\", nil, fmt.Errorf(\"invalid icmpProtoTypeCode: %s\", icmpProtoTypeCode)\n\t\t}\n\t}\n\n\tnormalizeProto := func(p string) string {\n\t\tswitch strings.ToLower(p) {\n\t\tcase \"1\":\n\t\t\treturn \"icmp\"\n\t\tcase \"58\", \"icmp6\":\n\t\t\treturn \"icmpv6\"\n\t\t}\n\t\treturn p\n\t}\n\n\tproto, criteria, err := splitIt(icmpProtoTypeCode)\n\tif err != nil 
{\n\t\treturn nil, err\n\t}\n\n\tvar positiveMatch bool\n\tresult := make([]string, 0, len(policyRestrictions))\n\tfor _, restriction := range policyRestrictions {\n\t\tprotoR, criteriaR, err := splitIt(restriction)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\t// proto should match\n\t\tif proto != protoR && normalizeProto(proto) != normalizeProto(protoR) {\n\t\t\tcontinue\n\t\t}\n\t\tif len(criteriaR) == 0 {\n\t\t\t// no restriction\n\t\t\tresult = append(result, TransformIcmpProtoString(icmpProtoTypeCode)...)\n\t\t\tpositiveMatch = true\n\t\t\tcontinue\n\t\t}\n\t\tif len(criteria) == 0 {\n\t\t\t// restriction takes effect\n\t\t\tresult = append(result, TransformIcmpProtoString(restriction)...)\n\t\t\tpositiveMatch = true\n\t\t\tcontinue\n\t\t}\n\n\t\tif criteria[0].IcmpType != criteriaR[0].IcmpType {\n\t\t\t// types don't match\n\t\t\tcontinue\n\t\t}\n\n\t\tvar ranges, rangesR []WindowsRuleRange\n\t\tfor _, c := range criteria {\n\t\t\tif c.IcmpCodeRange != nil {\n\t\t\t\tranges = append(ranges, *c.IcmpCodeRange)\n\t\t\t}\n\t\t}\n\t\tfor _, c := range criteriaR {\n\t\t\tif c.IcmpCodeRange != nil {\n\t\t\t\trangesR = append(rangesR, *c.IcmpCodeRange)\n\t\t\t}\n\t\t}\n\n\t\tif len(rangesR) == 0 {\n\t\t\t// no code restriction\n\t\t\tresult = append(result, TransformIcmpProtoString(icmpProtoTypeCode)...)\n\t\t\tpositiveMatch = true\n\t\t\tcontinue\n\t\t}\n\t\tif len(ranges) == 0 {\n\t\t\t// use restriction\n\t\t\tresult = append(result, TransformIcmpProtoString(restriction)...)\n\t\t\tpositiveMatch = true\n\t\t\tcontinue\n\t\t}\n\n\t\t// intersect the code restrictions\n\t\tcombined := make([]*WindowsRuleRange, 0, len(ranges)+len(rangesR))\n\t\tsort.Slice(ranges, func(i, j int) bool {\n\t\t\treturn ranges[i].Start < ranges[j].Start\n\t\t})\n\t\tsort.Slice(rangesR, func(i, j int) bool {\n\t\t\treturn rangesR[i].Start < rangesR[j].Start\n\t\t})\n\t\tfor i, j := 0, 0; i < len(ranges) && j < len(rangesR); {\n\t\t\ta, b := ranges[i], rangesR[j]\n\t\t\t// find 
max of the mins\n\t\t\tmaxOfMins := a.Start\n\t\t\tif b.Start > maxOfMins {\n\t\t\t\tmaxOfMins = b.Start\n\t\t\t}\n\t\t\t// find smaller max, and check if it's less than the other min.\n\t\t\t// if not then the intersection is [max(min1,min2),smallermax]\n\t\t\tif a.End < b.End {\n\t\t\t\tif a.End >= b.Start {\n\t\t\t\t\tcombined = append(combined, &WindowsRuleRange{Start: maxOfMins, End: a.End})\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif b.End >= a.Start {\n\t\t\t\t\tcombined = append(combined, &WindowsRuleRange{Start: maxOfMins, End: b.End})\n\t\t\t\t}\n\t\t\t}\n\t\t\t// advance\n\t\t\tif a.End <= b.End {\n\t\t\t\ti++\n\t\t\t}\n\t\t\tif b.End <= a.End {\n\t\t\t\tj++\n\t\t\t}\n\t\t}\n\n\t\tif len(combined) == 0 {\n\t\t\t// no intersection\n\t\t\tcontinue\n\t\t}\n\n\t\tcodeString := \"\"\n\t\tfor i, c := range combined {\n\t\t\tif i > 0 {\n\t\t\t\tcodeString += \",\"\n\t\t\t}\n\t\t\tcodeString += fmt.Sprintf(\"%d:%d\", c.Start, c.End)\n\t\t}\n\t\tcombinedString := fmt.Sprintf(\"%s/%d/%s\", proto, criteria[0].IcmpType, codeString)\n\t\tresult = append(result, TransformIcmpProtoString(combinedString)...)\n\t\tpositiveMatch = true\n\t}\n\n\tif !positiveMatch {\n\t\treturn nil, errors.New(\"policy restrictions do not match\")\n\t}\n\treturn result, nil\n}\n\n// TransformIcmpProtoString parses icmp/type/code string coming from ACL rule\n// and returns a rulespec subsection\nfunc TransformIcmpProtoString(icmpProtoTypeCode string) []string {\n\tparts := strings.SplitN(icmpProtoTypeCode, \"/\", 2)\n\tif len(parts) != 2 {\n\t\treturn nil\n\t}\n\ttypeCodeString := strings.TrimSpace(parts[1])\n\tif typeCodeString == \"\" {\n\t\treturn nil\n\t}\n\treturn []string{\"--icmp-type\", typeCodeString}\n}\n\n// GetIcmpNoMatch returns a rulespec subsection to indicate that there should be no match\nfunc GetIcmpNoMatch() []string {\n\treturn []string{\"--icmp-type\", \"nomatch\"}\n}\n\n// ParseIcmpTypeCode parses --icmp-type option\n// string is of the form 
type/code:code,code,code:code\nfunc ParseIcmpTypeCode(icmpTypeCode string) ([]*WindowsRuleIcmpMatch, error) {\n\n\tif icmpTypeCode == \"\" {\n\t\treturn nil, nil\n\t}\n\n\tif strings.EqualFold(icmpTypeCode, \"nomatch\") {\n\t\treturn []*WindowsRuleIcmpMatch{{Nomatch: true}}, nil\n\t}\n\n\tvar result []*WindowsRuleIcmpMatch\n\n\tparts := strings.SplitN(icmpTypeCode, \"/\", 2)\n\tif len(parts) == 0 {\n\t\treturn nil, nil\n\t}\n\ticmpType, err := strconv.Atoi(parts[0])\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif icmpType < 0 || icmpType > math.MaxUint8 {\n\t\treturn nil, errors.New(\"ICMP type out of range\")\n\t}\n\tif len(parts) > 1 {\n\t\t// parse codes, comma-separated\n\t\tfor _, code := range strings.Split(parts[1], \",\") {\n\t\t\t// parse code range\n\t\t\tcodeLower, codeUpper := -1, -1\n\t\t\tcodeRange := strings.SplitN(code, \":\", 2)\n\t\t\tif len(codeRange) > 0 {\n\t\t\t\tcodeLower, err = strconv.Atoi(codeRange[0])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\tcodeUpper = codeLower\n\t\t\t}\n\t\t\tif len(codeRange) > 1 {\n\t\t\t\tcodeUpper, err = strconv.Atoi(codeRange[1])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t}\n\t\t\tif codeLower < 0 || codeLower > math.MaxUint8 {\n\t\t\t\treturn nil, errors.New(\"ICMP code out of range\")\n\t\t\t}\n\t\t\tif codeUpper < 0 || codeUpper > math.MaxUint8 || codeUpper < codeLower {\n\t\t\t\treturn nil, errors.New(\"ICMP code out of range\")\n\t\t\t}\n\t\t\tresult = append(result, &WindowsRuleIcmpMatch{\n\t\t\t\tIcmpType:      icmpType,\n\t\t\t\tIcmpCodeRange: &WindowsRuleRange{codeLower, codeUpper},\n\t\t\t})\n\t\t}\n\t}\n\tif len(result) == 0 {\n\t\tresult = append(result, &WindowsRuleIcmpMatch{IcmpType: icmpType})\n\t}\n\n\treturn result, nil\n}\n\n// ParseRuleSpec parses a windows iptable rule\nfunc ParseRuleSpec(rulespec ...string) (*WindowsRuleSpec, error) {\n\n\topt := getoptions.New()\n\n\tprotocolOpt := opt.String(\"p\", \"\")\n\tsPortOpt := 
opt.String(\"sports\", \"\")\n\tdPortOpt := opt.String(\"dports\", \"\")\n\tactionOpt := opt.StringSlice(\"j\", 1, 10, opt.Required())\n\tmodeOpt := opt.StringSlice(\"m\", 1, 10)\n\tmatchSetOpt := opt.StringSlice(\"match-set\", 2, 10)\n\tmatchStringOpt := opt.StringSlice(\"string\", 1, 2)\n\tmatchStringOffsetOpt := opt.Int(\"offset\", 0)\n\tmatchOwnerPidOpt := opt.Int(\"pid-owner\", 0)\n\tmatchOwnerPidChildrenOpt := opt.Bool(\"pid-children\", false)\n\tmatchOwnerPidChildrenOnlyOpt := opt.Bool(\"pid-childrenonly\", false)\n\tredirectPortOpt := opt.Int(\"to-ports\", 0)\n\tstateOpt := opt.String(\"state\", \"\")\n\topt.String(\"match\", \"\") // \"--match multiport\" ignored\n\tgroupIDOpt := opt.Int(\"nflog-group\", 0)\n\tlogPrefixOpt := opt.String(\"nflog-prefix\", \"\")\n\ticmpTypeOpt := opt.StringSlice(\"icmp-type\", 1, 20)\n\ttcpFlags := opt.StringSlice(\"tcp-flags\", 1, 1)\n\ttcpOption := opt.StringSlice(\"tcp-option\", 1, 1)\n\tmarkOpt := opt.StringSlice(\"mark\", 1, 2)\n\tsetMarkOpt := opt.Int(\"set-mark\", 0)\n\n\t_, err := opt.Parse(rulespec)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tresult := &WindowsRuleSpec{}\n\n\t// protocol\n\tisProtoAnyRule := false\n\tswitch strings.ToLower(*protocolOpt) {\n\tcase \"tcp\":\n\t\tresult.Protocol = packet.IPProtocolTCP\n\tcase \"udp\":\n\t\tresult.Protocol = packet.IPProtocolUDP\n\tcase \"icmp\":\n\t\tresult.Protocol = 1\n\tcase \"icmpv6\":\n\t\tresult.Protocol = ipv6.ICMPType(0).Protocol()\n\tcase \"\": // not specified = all\n\t\tfallthrough\n\tcase \"all\":\n\t\tresult.Protocol = -1\n\t\tisProtoAnyRule = true\n\tdefault:\n\t\tresult.Protocol, err = strconv.Atoi(*protocolOpt)\n\t\tif err != nil {\n\t\t\treturn nil, errors.New(\"rulespec not valid: invalid protocol\")\n\t\t}\n\t\tif result.Protocol < 0 || result.Protocol > math.MaxUint8 {\n\t\t\treturn nil, errors.New(\"rulespec not valid: invalid protocol\")\n\t\t}\n\t\t// iptables man page says protocol zero is equivalent to 'all' (sorry, IPv6 Hop-by-Hop 
Option)\n\t\tif result.Protocol == 0 {\n\t\t\tresult.Protocol = -1\n\t\t}\n\t}\n\n\tif len(*tcpFlags) == 1 {\n\t\tparts := strings.SplitN((*tcpFlags)[0], \",\", 2)\n\t\tif len(parts) == 0 {\n\t\t\treturn nil, errors.New(\"rulespec not valid: invalid tcp-flags\")\n\t\t}\n\t\tmask, err := strconv.Atoi(parts[0])\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif mask < 0 || mask > math.MaxUint8 {\n\t\t\treturn nil, errors.New(\"TCP mask out of range\")\n\t\t}\n\t\tflags, err := strconv.Atoi(parts[1])\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif flags < 0 || flags > math.MaxUint8 {\n\t\t\treturn nil, errors.New(\"TCP flags out of range\")\n\t\t}\n\n\t\tresult.TCPFlags = uint8(flags)\n\t\tresult.TCPFlagsMask = uint8(mask)\n\t\tresult.TCPFlagsSpecified = true\n\t}\n\n\tif len(*tcpOption) == 1 {\n\t\toption, err := strconv.Atoi((*tcpOption)[0])\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif option < 0 || option > math.MaxUint8 {\n\t\t\treturn nil, errors.New(\"TCP option out of range\")\n\t\t}\n\t\tresult.TCPOption = uint8(option)\n\t\tresult.TCPOptionSpecified = true\n\t}\n\n\tfor i := 0; i < len(*icmpTypeOpt); i++ {\n\t\tim, err := ParseIcmpTypeCode((*icmpTypeOpt)[i])\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"rulespec not valid: %s\", err.Error())\n\t\t}\n\t\tresult.IcmpMatch = append(result.IcmpMatch, im...)\n\t}\n\n\t// src/dest port: either port or port range or list of such\n\tresult.MatchSrcPort, err = ParsePortString(*sPortOpt)\n\tif err != nil {\n\t\treturn nil, errors.New(\"rulespec not valid: invalid match port\")\n\t}\n\tresult.MatchDstPort, err = ParsePortString(*dPortOpt)\n\tif err != nil {\n\t\treturn nil, errors.New(\"rulespec not valid: invalid match port\")\n\t}\n\n\t// -m options\n\tfor i, modeOptSetNum := 0, 0; i < len(*modeOpt); i++ {\n\t\tswitch (*modeOpt)[i] {\n\t\tcase \"set\":\n\t\t\tmatchSet := &WindowsRuleMatchSet{}\n\t\t\t// see if negate of --match-set occurred\n\t\t\tif i+1 < len(*modeOpt) && 
(*modeOpt)[i+1] == \"!\" {\n\t\t\t\tmatchSet.MatchSetNegate = true\n\t\t\t\ti++\n\t\t\t}\n\t\t\t// now check corresponding match-set by index\n\t\t\tmatchSetIndex := 2 * modeOptSetNum\n\t\t\tmodeOptSetNum++\n\t\t\tif matchSetIndex+1 >= len(*matchSetOpt) {\n\t\t\t\treturn nil, errors.New(\"rulespec not valid: --match-set not found for -m set\")\n\t\t\t}\n\t\t\t// first part is the ipset name\n\t\t\tmatchSet.MatchSetName = (*matchSetOpt)[matchSetIndex]\n\t\t\t// second part is the dst/src match specifier\n\t\t\tipPortSpecLower := strings.ToLower((*matchSetOpt)[matchSetIndex+1])\n\t\t\tif strings.HasPrefix(ipPortSpecLower, \"dstip\") {\n\t\t\t\tmatchSet.MatchSetDstIP = true\n\t\t\t} else if strings.HasPrefix(ipPortSpecLower, \"srcip\") {\n\t\t\t\tmatchSet.MatchSetSrcIP = true\n\t\t\t}\n\t\t\tif strings.HasSuffix(ipPortSpecLower, \"dstport\") {\n\t\t\t\tmatchSet.MatchSetDstPort = true\n\t\t\t\tif result.Protocol < 1 {\n\t\t\t\t\treturn nil, errors.New(\"rulespec not valid: ipset match on port requires protocol be set\")\n\t\t\t\t}\n\t\t\t} else if strings.HasSuffix(ipPortSpecLower, \"srcport\") {\n\t\t\t\tmatchSet.MatchSetSrcPort = true\n\t\t\t\tif result.Protocol < 1 {\n\t\t\t\t\treturn nil, errors.New(\"rulespec not valid: ipset match on port requires protocol be set\")\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !matchSet.MatchSetDstIP && !matchSet.MatchSetDstPort && !matchSet.MatchSetSrcIP && !matchSet.MatchSetSrcPort {\n\t\t\t\t// look for acl-created iptables-conforming match on 'dst' or 'src'.\n\t\t\t\t// a dst or src by itself we take to mean match both. 
otherwise, we take it as ip-match,port-match.\n\t\t\t\tif strings.HasPrefix(ipPortSpecLower, \"dst\") {\n\t\t\t\t\tmatchSet.MatchSetDstIP = true\n\t\t\t\t} else if strings.HasPrefix(ipPortSpecLower, \"src\") {\n\t\t\t\t\tmatchSet.MatchSetSrcIP = true\n\t\t\t\t}\n\t\t\t\tif strings.HasSuffix(ipPortSpecLower, \"dst\") && !isProtoAnyRule {\n\t\t\t\t\tmatchSet.MatchSetDstPort = true\n\t\t\t\t\tif result.Protocol < 1 {\n\t\t\t\t\t\treturn nil, errors.New(\"rulespec not valid: ipset match on port requires protocol be set\")\n\t\t\t\t\t}\n\t\t\t\t} else if strings.HasSuffix(ipPortSpecLower, \"src\") && !isProtoAnyRule {\n\t\t\t\t\tmatchSet.MatchSetSrcPort = true\n\t\t\t\t\tif result.Protocol < 1 {\n\t\t\t\t\t\treturn nil, errors.New(\"rulespec not valid: ipset match on port requires protocol be set\")\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !matchSet.MatchSetDstIP && !matchSet.MatchSetDstPort && !matchSet.MatchSetSrcIP && !matchSet.MatchSetSrcPort {\n\t\t\t\treturn nil, errors.New(\"rulespec not valid: ipset match needs ip/port specifier\")\n\t\t\t}\n\t\t\tresult.MatchSet = append(result.MatchSet, matchSet)\n\n\t\tcase \"string\":\n\t\t\tnomatch := false\n\t\t\tmatchString := \"\"\n\t\t\tswitch len(*matchStringOpt) {\n\t\t\tcase 1:\n\t\t\t\tmatchString = (*matchStringOpt)[0]\n\t\t\tcase 2:\n\t\t\t\tif (*matchStringOpt)[0] != \"!\" {\n\t\t\t\t\treturn nil, errors.New(\"rulespec not valid: match string ! 
option invalid\")\n\t\t\t\t}\n\t\t\t\tnomatch = true\n\t\t\t\tmatchString = (*matchStringOpt)[1]\n\t\t\t}\n\t\t\tif len(matchString) == 0 {\n\t\t\t\treturn nil, errors.New(\"rulespec not valid: no match string given\")\n\t\t\t}\n\t\t\tresult.MatchBytesNoMatch = nomatch\n\t\t\tresult.MatchBytes = []byte(matchString)\n\t\t\tresult.MatchBytesOffset = *matchStringOffsetOpt\n\n\t\tcase \"owner\":\n\t\t\tif *matchOwnerPidOpt <= 0 {\n\t\t\t\treturn nil, errors.New(\"rulespec not valid: valid pid-owner needed\")\n\t\t\t}\n\t\t\tresult.ProcessID = *matchOwnerPidOpt\n\t\t\tif *matchOwnerPidChildrenOnlyOpt {\n\t\t\t\tresult.ProcessIncludeChildrenOnly = true\n\t\t\t}\n\t\t\tif *matchOwnerPidChildrenOpt {\n\t\t\t\tif *matchOwnerPidChildrenOnlyOpt {\n\t\t\t\t\treturn nil, errors.New(\"rulespec not valid: cannot have both --pid-childrenonly and --pid-children\")\n\t\t\t\t}\n\t\t\t\tresult.ProcessIncludeChildren = true\n\t\t\t}\n\n\t\tcase \"state\":\n\t\t\tif result.Protocol == packet.IPProtocolTCP && *stateOpt == \"NEW\" {\n\t\t\t\tresult.TCPFlags = 2\n\t\t\t\tresult.TCPFlagsMask = 18\n\t\t\t\tresult.TCPFlagsSpecified = true\n\t\t\t}\n\n\t\tcase \"connmark\":\n\t\t\tnomatch := false\n\t\t\tflowMarkString := \"\"\n\t\t\tswitch len(*markOpt) {\n\t\t\tcase 1:\n\t\t\t\tflowMarkString = (*markOpt)[0]\n\t\t\tcase 2:\n\t\t\t\tif (*markOpt)[0] != \"!\" {\n\t\t\t\t\treturn nil, errors.New(\"rulespec not valid: flowmark ! 
option invalid\")\n\t\t\t\t}\n\t\t\t\tnomatch = true\n\t\t\t\tflowMarkString = (*markOpt)[1]\n\t\t\t}\n\t\t\tvalue, err := strconv.Atoi(flowMarkString)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, errors.New(\"rulespec not valid: flowmark should be int\")\n\t\t\t}\n\t\t\tresult.FlowMarkNoMatch = nomatch\n\t\t\tresult.FlowMark = value\n\n\t\tdefault:\n\t\t\treturn nil, errors.New(\"rulespec not valid: unknown -m option\")\n\t\t}\n\t}\n\n\t// action: either NFQUEUE, REDIRECT, MARK, ACCEPT, DROP, NFLOG\n\tfor i := 0; i < len(*actionOpt); i++ {\n\t\t// CRITICAL: If you add a case here, you need to update fixRuleSpec in iptablesprovider_windows.go\n\t\tswitch (*actionOpt)[i] {\n\t\tcase \"NFQUEUE\":\n\t\t\tresult.Action = frontman.FilterActionNfq\n\t\tcase \"NFQUEUE_FORCE\":\n\t\t\tresult.Action = frontman.FilterActionForceNfq\n\t\tcase \"REDIRECT\":\n\t\t\tresult.Action = frontman.FilterActionProxy\n\t\tcase \"ACCEPT\":\n\t\t\tresult.Action = frontman.FilterActionAllow\n\t\tcase \"ACCEPT_ONCE\":\n\t\t\tresult.Action = frontman.FilterActionAllowOnce\n\t\tcase \"DROP\":\n\t\t\tresult.Action = frontman.FilterActionBlock\n\t\tcase \"CONNMARK\":\n\t\t\tresult.Action = frontman.FilterActionSetMark\n\t\t\tresult.Mark = *setMarkOpt\n\t\tcase \"MARK\":\n\t\t\ti++\n\t\t\tif i >= len(*actionOpt) {\n\t\t\t\treturn nil, errors.New(\"rulespec not valid: no mark given\")\n\t\t\t}\n\t\t\tresult.Mark, err = strconv.Atoi((*actionOpt)[i])\n\t\t\tif err != nil {\n\t\t\t\treturn nil, errors.New(\"rulespec not valid: mark should be int32\")\n\t\t\t}\n\t\tcase \"NFLOG\":\n\t\t\t// if no other action specified, it will default to 'continue' action (zero)\n\t\t\tresult.Log = true\n\t\t\tresult.LogPrefix = *logPrefixOpt\n\t\t\tresult.GroupID = *groupIDOpt\n\t\tdefault:\n\t\t\tif i >= len(*actionOpt) {\n\t\t\t\treturn nil, errors.New(\"rulespec not valid: no goto filter given\")\n\t\t\t}\n\t\t\tresult.Action = frontman.FilterActionGotoFilter\n\t\t\tresult.GotoFilterName = 
(*actionOpt)[i]\n\t\t}\n\t}\n\n\tif result.Mark == 0 && (result.Action == frontman.FilterActionNfq || result.Action == frontman.FilterActionForceNfq) {\n\t\treturn nil, errors.New(\"rulespec not valid: nfq action needs to set mark\")\n\t}\n\n\tif result.Mark == 0 && (result.Action == frontman.FilterActionSetMark) {\n\t\treturn nil, errors.New(\"rulespec not valid: setmark action needs to set mark\")\n\t}\n\n\t// redirect port\n\tresult.ProxyPort = *redirectPortOpt\n\tif result.Action == frontman.FilterActionProxy && result.ProxyPort == 0 {\n\t\treturn nil, errors.New(\"rulespec not valid: no redirect port given\")\n\t}\n\n\treturn result, nil\n}\n\n// Equal compares a WindowsRuleMatchSet to another for equality\nfunc (w *WindowsRuleMatchSet) Equal(other *WindowsRuleMatchSet) bool {\n\tif other == nil {\n\t\treturn false\n\t}\n\treturn w.MatchSetName == other.MatchSetName &&\n\t\tw.MatchSetNegate == other.MatchSetNegate &&\n\t\tw.MatchSetDstIP == other.MatchSetDstIP &&\n\t\tw.MatchSetDstPort == other.MatchSetDstPort &&\n\t\tw.MatchSetSrcIP == other.MatchSetSrcIP &&\n\t\tw.MatchSetSrcPort == other.MatchSetSrcPort\n}\n\n// Equal compares a WindowsRuleRange to another for equality\nfunc (w *WindowsRuleRange) Equal(other *WindowsRuleRange) bool {\n\tif other == nil {\n\t\treturn false\n\t}\n\treturn w.Start == other.Start && w.End == other.End\n}\n\n// Equal compares a WindowsRuleIcmpMatch to another for equality\nfunc (w *WindowsRuleIcmpMatch) Equal(other *WindowsRuleIcmpMatch) bool {\n\tif other == nil {\n\t\treturn false\n\t}\n\tif w.Nomatch != other.Nomatch {\n\t\treturn false\n\t}\n\tif w.IcmpType != other.IcmpType {\n\t\treturn false\n\t}\n\tif w.IcmpCodeRange != nil {\n\t\tif !w.IcmpCodeRange.Equal(other.IcmpCodeRange) {\n\t\t\treturn false\n\t\t}\n\t} else if other.IcmpCodeRange != nil {\n\t\treturn false\n\t}\n\treturn true\n}\n\n// Equal compares a WindowsRuleSpec to another for equality\nfunc (w *WindowsRuleSpec) Equal(other *WindowsRuleSpec) bool {\n\tif 
other == nil {\n\t\treturn false\n\t}\n\tequalSoFar := w.Protocol == other.Protocol &&\n\t\tw.Action == other.Action &&\n\t\tw.ProxyPort == other.ProxyPort &&\n\t\tw.Mark == other.Mark &&\n\t\tw.Log == other.Log &&\n\t\tw.LogPrefix == other.LogPrefix &&\n\t\tw.GroupID == other.GroupID &&\n\t\tw.ProcessID == other.ProcessID &&\n\t\tw.ProcessIncludeChildren == other.ProcessIncludeChildren &&\n\t\tw.ProcessIncludeChildrenOnly == other.ProcessIncludeChildrenOnly &&\n\t\tw.MatchBytesNoMatch == other.MatchBytesNoMatch &&\n\t\tw.MatchBytesOffset == other.MatchBytesOffset &&\n\t\tbytes.Equal(w.MatchBytes, other.MatchBytes) &&\n\t\tlen(w.IcmpMatch) == len(other.IcmpMatch) &&\n\t\tlen(w.MatchSrcPort) == len(other.MatchSrcPort) &&\n\t\tlen(w.MatchDstPort) == len(other.MatchDstPort) &&\n\t\tlen(w.MatchSet) == len(other.MatchSet) &&\n\t\tw.TCPFlags == other.TCPFlags &&\n\t\tw.TCPFlagsMask == other.TCPFlagsMask &&\n\t\tw.TCPFlagsSpecified == other.TCPFlagsSpecified &&\n\t\tw.TCPOption == other.TCPOption &&\n\t\tw.TCPOptionSpecified == other.TCPOptionSpecified &&\n\t\tw.FlowMark == other.FlowMark &&\n\t\tw.FlowMarkNoMatch == other.FlowMarkNoMatch\n\tif !equalSoFar {\n\t\treturn false\n\t}\n\t// we checked lengths above, but now continue to compare slices for equality.\n\t// note: we assume equal slices have elements in the same order.\n\tfor i := 0; i < len(w.MatchSrcPort); i++ {\n\t\tif w.MatchSrcPort[i] == nil {\n\t\t\tif other.MatchSrcPort[i] != nil {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tif !w.MatchSrcPort[i].Equal(other.MatchSrcPort[i]) {\n\t\t\treturn false\n\t\t}\n\t}\n\tfor i := 0; i < len(w.MatchDstPort); i++ {\n\t\tif w.MatchDstPort[i] == nil {\n\t\t\tif other.MatchDstPort[i] != nil {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tif !w.MatchDstPort[i].Equal(other.MatchDstPort[i]) {\n\t\t\treturn false\n\t\t}\n\t}\n\tfor i := 0; i < len(w.MatchSet); i++ {\n\t\tif w.MatchSet[i] == nil {\n\t\t\tif other.MatchSet[i] != nil {\n\t\t\t\treturn 
false\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tif !w.MatchSet[i].Equal(other.MatchSet[i]) {\n\t\t\treturn false\n\t\t}\n\t}\n\tfor i := 0; i < len(w.IcmpMatch); i++ {\n\t\tif w.IcmpMatch[i] == nil {\n\t\t\tif other.IcmpMatch[i] != nil {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\t\tif !w.IcmpMatch[i].Equal(other.IcmpMatch[i]) {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n"
  },
  {
    "path": "controller/internal/windows/rulespec_windows_test.go",
    "content": "// +build windows\n\npackage windows\n\nimport (\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/kballard/go-shellquote\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n)\n\nfunc TestParseRuleSpecMatchSet(t *testing.T) {\n\n\tConvey(\"When I parse a rule with an ipset match\", t, func() {\n\t\trsOrig := \"-m set --match-set TRI-ipset-1 srcIP -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given ipset\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with more than one ipset match\", t, func() {\n\t\trsOrig := \"-m set --match-set TRI-ipset-1 srcIP -j ACCEPT -m set --match-set TRI-ipset-2 dstIP\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given ipsets\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 2)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, 
ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetName, ShouldEqual, \"TRI-ipset-2\")\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an ipset match without ip or port specifier\", t, func() {\n\t\trsOrig := \"-m set --match-set TRI-ipset-1 -j ACCEPT\"\n\t\t_, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tConvey(\"I should get an error\", func() {\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), ShouldContainSubstring, \"Missing argument for option 'match-set'\")\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an ipset match with invalid ip or port specifier\", t, func() {\n\t\trsOrig := \"-m set --match-set TRI-ipset-1 both -j ACCEPT\"\n\t\t_, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tConvey(\"I should get an error\", func() {\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), ShouldContainSubstring, \"ipset match needs ip/port specifier\")\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an ipset match on port without protocol\", t, func() {\n\t\trsOrig := \"-m set --match-set TRI-ipset-1 srcPort -j ACCEPT\"\n\t\t_, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tConvey(\"I should get an error\", func() {\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), ShouldContainSubstring, \"ipset match on 
port requires protocol be set\")\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an ipset match on src port\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 srcPort -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given ipset's port\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeTrue)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an ipset match on dst port\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 dstPort -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given ipset's port\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\t\tConvey(\"I should be able to 
convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an ipset match on ip and port\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 dstIP,dstPort -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given ipset's ip and port\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an ipset match on ip and port mixed\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 srcIP,dstPort -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given ipset's ip and port\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, 
ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an ipset match on ip and port with one specifier (src)\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 src -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given ipset's ip and port\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeTrue)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an ipset match on ip and port with one specifier (dst)\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 dst -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given ipset's ip and port\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, 
ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an ipset match on ip and port with specifier order important\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 src,dst -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given ipset's ip and port\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with no ipset match\", t, func() {\n\t\trsOrig := \"-j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldBeNil)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, 
ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a negative ipset match\", t, func() {\n\t\trsOrig := \"-p tcp -m set ! --match-set TRI-ipset-1 dstPort -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept except for the given ipset\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n}\n\nfunc TestParseRuleSpecProtocol(t *testing.T) {\n\n\tConvey(\"When I parse a rule with a protocol match on tcp\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 srcPort -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given ipset/protocol\", func() {\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, packet.IPProtocolTCP)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a protocol match on udp\", t, func() {\n\t\trsOrig := \"-p udp -m set --match-set TRI-ipset-1 srcPort -j 
ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given ipset/protocol\", func() {\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, packet.IPProtocolUDP)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a protocol match on icmp\", t, func() {\n\t\trsOrig := \"-p icmp -j DROP\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should reject for the given protocol\", func() {\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, 1)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionBlock)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a protocol match on tcp by number\", t, func() {\n\t\trsOrig := \"-p 6 -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given protocol\", func() {\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, packet.IPProtocolTCP)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a protocol match on udp by number\", t, func() {\n\t\trsOrig := \"-p 17 -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given protocol\", func() 
{\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, packet.IPProtocolUDP)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a protocol match on ICMP for IPv6 by number\", t, func() {\n\t\trsOrig := \"-p 58 -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given protocol\", func() {\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, 58)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an invalid protocol match\", t, func() {\n\t\trsOrig := \"-p http -m set --match-set TRI-ipset-1 srcPort -j ACCEPT\"\n\t\t_, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tConvey(\"I should get an error\", func() {\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), ShouldContainSubstring, \"invalid protocol\")\n\t\t})\n\t})\n\n}\n\nfunc TestParseRuleSpecAction(t *testing.T) {\n\n\tConvey(\"When I parse a rule with an accept action\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 dstPort -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a drop action\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 dstPort -j DROP\"\n\t\truleSpec, err 
:= ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should block\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionBlock)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an nfq action and a given mark\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 srcIP,srcPort -j NFQUEUE -j MARK 100\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should route to nfq and set the mark\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionNfq)\n\t\t\tSo(ruleSpec.Mark, ShouldEqual, 100)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an nfq action without a mark\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 srcIP -j NFQUEUE\"\n\t\t_, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tConvey(\"I should get an error\", func() {\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), ShouldContainSubstring, \"nfq action needs to set mark\")\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a force nfq action\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 srcIP,srcPort -j NFQUEUE_FORCE -j MARK 100\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should route to nfq (without honoring ignore-flow) and set the mark\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionForceNfq)\n\t\t\tSo(ruleSpec.Mark, ShouldEqual, 100)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, 
ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a proxy action\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 dstPort -j REDIRECT --to-ports 20992\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should redirect to given port\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionProxy)\n\t\t\tSo(ruleSpec.ProxyPort, ShouldEqual, 20992)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a proxy action without a redirect port\", t, func() {\n\t\trsOrig := \"-p tcp -m set --match-set TRI-ipset-1 srcIP -j REDIRECT\"\n\t\t_, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tConvey(\"I should get an error\", func() {\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), ShouldContainSubstring, \"no redirect port given\")\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a log and drop action\", t, func() {\n\t\trsOrig := \"-p tcp -j DROP -j NFLOG --nflog-group 10 --nflog-prefix 531138568:5d6044b9e99572000149d650:5d60448a884e46000145cf67:6\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should drop and log\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionBlock)\n\t\t\tSo(ruleSpec.Log, ShouldBeTrue)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a log and accept action\", t, func() {\n\t\trsOrig := \"-p tcp -j ACCEPT -j NFLOG --nflog-group 10 --nflog-prefix 531138568:5d6044b9e99572000149d650:5d60448a884e46000145cf67:6\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should 
accept and log\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.Log, ShouldBeTrue)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n}\n\nfunc TestParseRuleSpecSPortDPort(t *testing.T) {\n\n\tConvey(\"When I parse a rule with a match on source port\", t, func() {\n\t\trsOrig := \"-p tcp --sport 12345 -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given source port\", func() {\n\t\t\tSo(ruleSpec.MatchDstPort, ShouldHaveLength, 0)\n\t\t\tSo(ruleSpec.MatchSrcPort, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSrcPort[0].Start, ShouldEqual, 12345)\n\t\t\tSo(ruleSpec.MatchSrcPort[0].End, ShouldEqual, 12345)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a match on source port range\", t, func() {\n\t\trsOrig := \"-p tcp --sport 80,1024:65535 -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given source port range\", func() {\n\t\t\tSo(ruleSpec.MatchSrcPort, ShouldHaveLength, 2)\n\t\t\tSo(ruleSpec.MatchSrcPort[0].Start, ShouldEqual, 80)\n\t\t\tSo(ruleSpec.MatchSrcPort[0].End, ShouldEqual, 80)\n\t\t\tSo(ruleSpec.MatchSrcPort[1].Start, ShouldEqual, 1024)\n\t\t\tSo(ruleSpec.MatchSrcPort[1].End, ShouldEqual, 65535)\n\t\t\tSo(ruleSpec.MatchDstPort, ShouldHaveLength, 0)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, 
ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a match on dest port\", t, func() {\n\t\trsOrig := \"-p tcp --dport 80 -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given dest port\", func() {\n\t\t\tSo(ruleSpec.MatchSrcPort, ShouldHaveLength, 0)\n\t\t\tSo(ruleSpec.MatchDstPort, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchDstPort[0].Start, ShouldEqual, 80)\n\t\t\tSo(ruleSpec.MatchDstPort[0].End, ShouldEqual, 80)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a match on dest port range\", t, func() {\n\t\trsOrig := \"-p tcp --dport 1234,8080:8443,65000:65005,65530 -j ACCEPT\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should accept for the given dest port range\", func() {\n\t\t\tSo(ruleSpec.MatchDstPort, ShouldHaveLength, 4)\n\t\t\tSo(ruleSpec.MatchDstPort[0].Start, ShouldEqual, 1234)\n\t\t\tSo(ruleSpec.MatchDstPort[0].End, ShouldEqual, 1234)\n\t\t\tSo(ruleSpec.MatchDstPort[1].Start, ShouldEqual, 8080)\n\t\t\tSo(ruleSpec.MatchDstPort[1].End, ShouldEqual, 8443)\n\t\t\tSo(ruleSpec.MatchDstPort[2].Start, ShouldEqual, 65000)\n\t\t\tSo(ruleSpec.MatchDstPort[2].End, ShouldEqual, 65005)\n\t\t\tSo(ruleSpec.MatchDstPort[3].Start, ShouldEqual, 65530)\n\t\t\tSo(ruleSpec.MatchDstPort[3].End, ShouldEqual, 65530)\n\t\t\tSo(ruleSpec.MatchSrcPort, ShouldHaveLength, 0)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n}\n\nfunc TestParseRuleSpecMatchString(t *testing.T) 
{\n\n\tConvey(\"When I parse a rule with a string match\", t, func() {\n\t\trsOrig := \"-p udp -m set --match-set TRI-ipset-udp-1 srcIP -m string --string n30njxq7bmiwr6dtxq --offset 2 -j NFQUEUE -j MARK 1234 -m set --match-set TRI-ipset-udp-2 srcIP\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should forward to nfq if I see the given string at the given offset\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionNfq)\n\t\t\tSo(ruleSpec.Mark, ShouldEqual, 1234)\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, packet.IPProtocolUDP)\n\t\t\tSo(ruleSpec.MatchBytesOffset, ShouldEqual, 2)\n\t\t\tSo(ruleSpec.MatchBytes, ShouldResemble, []byte(\"n30njxq7bmiwr6dtxq\"))\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 2)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-udp-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetName, ShouldEqual, \"TRI-ipset-udp-2\")\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetSrcIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n}\n\nfunc TestParseRuleSpecNoMatchString(t *testing.T) {\n\n\tConvey(\"When I parse a rule with a string ! 
match\", t, func() {\n\t\trsOrig := \"-p udp -m set --match-set TRI-ipset-udp-1 srcIP -m string --string ! n30njxq7bmiwr6dtxq --offset 2 -j NFQUEUE -j MARK 1234 -m set --match-set TRI-ipset-udp-2 srcIP\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should forward to nfq if I do not see the given string at the given offset\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionNfq)\n\t\t\tSo(ruleSpec.Mark, ShouldEqual, 1234)\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, packet.IPProtocolUDP)\n\t\t\tSo(ruleSpec.MatchBytesNoMatch, ShouldEqual, true)\n\t\t\tSo(ruleSpec.MatchBytesOffset, ShouldEqual, 2)\n\t\t\tSo(ruleSpec.MatchBytes, ShouldResemble, []byte(\"n30njxq7bmiwr6dtxq\"))\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 2)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-udp-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetName, ShouldEqual, \"TRI-ipset-udp-2\")\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetDstIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetSrcIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n}\n\nfunc TestParseRuleSpecMatchPid(t *testing.T) {\n\n\tConvey(\"When I parse a rule with a process ID match\", t, func() {\n\t\truleSpec, err := ParseRuleSpec(strings.Split(\"-p tcp -m set 
--match-set TRI-ipset-tcp-1 dstIP -j NFQUEUE -j MARK 103 -m owner --pid-owner 2438\", \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should forward to nfq for all packets from or to the given process\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionNfq)\n\t\t\tSo(ruleSpec.Mark, ShouldEqual, 103)\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, packet.IPProtocolTCP)\n\t\t\tSo(ruleSpec.ProcessID, ShouldEqual, 2438)\n\t\t\tSo(ruleSpec.ProcessIncludeChildren, ShouldBeFalse)\n\t\t\tSo(ruleSpec.ProcessIncludeChildrenOnly, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-tcp-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a process ID match with include children\", t, func() {\n\t\truleSpec, err := ParseRuleSpec(strings.Split(\"-p tcp -m set --match-set TRI-ipset-tcp-1 dstIP -j NFQUEUE -j MARK 104 -m owner --pid-owner 2439 --pid-children\", \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should forward to nfq for all packets from or to the given process and its children\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionNfq)\n\t\t\tSo(ruleSpec.Mark, ShouldEqual, 104)\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, packet.IPProtocolTCP)\n\t\t\tSo(ruleSpec.ProcessID, ShouldEqual, 2439)\n\t\t\tSo(ruleSpec.ProcessIncludeChildren, ShouldBeTrue)\n\t\t\tSo(ruleSpec.ProcessIncludeChildrenOnly, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet, 
ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-tcp-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with a process ID match with include children only\", t, func() {\n\t\truleSpec, err := ParseRuleSpec(strings.Split(\"-p tcp -m set --match-set TRI-ipset-tcp-1 dstIP -j NFQUEUE -j MARK 105 -m owner --pid-owner 2440 --pid-childrenonly\", \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should forward to nfq for all packets from or to the given process' children\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionNfq)\n\t\t\tSo(ruleSpec.Mark, ShouldEqual, 105)\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, packet.IPProtocolTCP)\n\t\t\tSo(ruleSpec.ProcessID, ShouldEqual, 2440)\n\t\t\tSo(ruleSpec.ProcessIncludeChildren, ShouldBeFalse)\n\t\t\tSo(ruleSpec.ProcessIncludeChildrenOnly, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-ipset-tcp-1\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, 
true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse a rule with an invalid process ID match\", t, func() {\n\t\t// invalid flags\n\t\t_, err := ParseRuleSpec(strings.Split(\"-p tcp -m set --match-set TRI-ipset-tcp-1 dstIP -j NFQUEUE -j MARK 105 -m owner --pid-owner 2440 --pid-childrenonly --pid-children\", \" \")...)\n\t\tSo(err, ShouldNotBeNil)\n\t\t// invalid pid\n\t\t_, err = ParseRuleSpec(strings.Split(\"-p tcp -m set --match-set TRI-ipset-tcp-1 dstIP -j NFQUEUE -j MARK 105 -m owner --pid-owner foobar\", \" \")...)\n\t\tSo(err, ShouldNotBeNil)\n\t})\n}\n\nfunc TestParseRuleSpecMatchIcmp(t *testing.T) {\n\n\tConvey(\"When I parse a rule with an icmp type match\", t, func() {\n\t\truleSpec, err := ParseRuleSpec(strings.Split(\"-p 1 --icmp-type 8 -j ACCEPT\", \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should recognize the icmp type\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, 1)\n\t\t\tSo(ruleSpec.IcmpMatch, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.IcmpMatch[0].Nomatch, ShouldBeFalse)\n\t\t\tSo(ruleSpec.IcmpMatch[0].IcmpType, ShouldEqual, 8)\n\t\t\tSo(ruleSpec.IcmpMatch[0].IcmpCodeRange, ShouldBeNil)\n\t\t})\n\n\t\trulePart := strings.Join(TransformIcmpProtoString(\"icmp/8\"), \" \")\n\t\tSo(rulePart, ShouldEqual, \"--icmp-type 8\")\n\n\t\t_, err = MakeRuleSpecText(ruleSpec, true)\n\t\tSo(err, ShouldBeNil)\n\t})\n\n\tConvey(\"When I parse a rule with an icmp type and code match\", t, func() {\n\t\truleSpec, err := ParseRuleSpec(strings.Split(\"-p icmp --icmp-type 3/5 -j DROP\", \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should recognize the icmp type and code\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionBlock)\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, 1)\n\t\t\tSo(ruleSpec.IcmpMatch, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.IcmpMatch[0].Nomatch, ShouldBeFalse)\n\t\t\tSo(ruleSpec.IcmpMatch[0].IcmpType, ShouldEqual, 
3)\n\t\t\tSo(ruleSpec.IcmpMatch[0].IcmpCodeRange.Start, ShouldEqual, 5)\n\t\t\tSo(ruleSpec.IcmpMatch[0].IcmpCodeRange.End, ShouldEqual, 5)\n\t\t})\n\n\t\trulePart := strings.Join(TransformIcmpProtoString(\"icmp/3/5\"), \" \")\n\t\tSo(rulePart, ShouldEqual, \"--icmp-type 3/5\")\n\n\t\t_, err = MakeRuleSpecText(ruleSpec, true)\n\t\tSo(err, ShouldBeNil)\n\t})\n\n\tConvey(\"When I parse a rule with an icmp type and multiple codes match\", t, func() {\n\t\truleSpec, err := ParseRuleSpec(strings.Split(\"-p 1 --icmp-type 3/0:4,15,6:7,14 -j ACCEPT\", \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should recognize the icmp type and code ranges\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, 1)\n\t\t\tSo(ruleSpec.IcmpMatch, ShouldHaveLength, 4)\n\t\t\tSo(ruleSpec.IcmpMatch[0].Nomatch, ShouldBeFalse)\n\t\t\tSo(ruleSpec.IcmpMatch[0].IcmpType, ShouldEqual, 3)\n\t\t\tSo(ruleSpec.IcmpMatch[0].IcmpCodeRange.Start, ShouldEqual, 0)\n\t\t\tSo(ruleSpec.IcmpMatch[0].IcmpCodeRange.End, ShouldEqual, 4)\n\t\t\tSo(ruleSpec.IcmpMatch[1].IcmpType, ShouldEqual, 3)\n\t\t\tSo(ruleSpec.IcmpMatch[1].IcmpCodeRange.Start, ShouldEqual, 15)\n\t\t\tSo(ruleSpec.IcmpMatch[1].IcmpCodeRange.End, ShouldEqual, 15)\n\t\t\tSo(ruleSpec.IcmpMatch[2].IcmpType, ShouldEqual, 3)\n\t\t\tSo(ruleSpec.IcmpMatch[2].IcmpCodeRange.Start, ShouldEqual, 6)\n\t\t\tSo(ruleSpec.IcmpMatch[2].IcmpCodeRange.End, ShouldEqual, 7)\n\t\t\tSo(ruleSpec.IcmpMatch[3].IcmpType, ShouldEqual, 3)\n\t\t\tSo(ruleSpec.IcmpMatch[3].IcmpCodeRange.Start, ShouldEqual, 14)\n\t\t\tSo(ruleSpec.IcmpMatch[3].IcmpCodeRange.End, ShouldEqual, 14)\n\t\t})\n\n\t\trulePart := strings.Join(TransformIcmpProtoString(\"icmp/3/0:4,15,6:7,14\"), \" \")\n\t\tSo(rulePart, ShouldEqual, \"--icmp-type 3/0:4,15,6:7,14\")\n\n\t\t_, err = MakeRuleSpecText(ruleSpec, true)\n\t\tSo(err, ShouldBeNil)\n\t})\n\n\tConvey(\"When I parse a rule with an icmp v6 type and code match\", t, func() 
{\n\t\truleSpec, err := ParseRuleSpec(strings.Split(\"-p 58 --icmp-type 140/1,2 -j ACCEPT\", \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should recognize the icmp v6 type and code\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, 58)\n\t\t\tSo(ruleSpec.IcmpMatch, ShouldHaveLength, 2)\n\t\t\tSo(ruleSpec.IcmpMatch[0].Nomatch, ShouldBeFalse)\n\t\t\tSo(ruleSpec.IcmpMatch[0].IcmpType, ShouldEqual, 140)\n\t\t\tSo(ruleSpec.IcmpMatch[0].IcmpCodeRange.Start, ShouldEqual, 1)\n\t\t\tSo(ruleSpec.IcmpMatch[0].IcmpCodeRange.End, ShouldEqual, 1)\n\t\t\tSo(ruleSpec.IcmpMatch[1].IcmpCodeRange.Start, ShouldEqual, 2)\n\t\t\tSo(ruleSpec.IcmpMatch[1].IcmpCodeRange.End, ShouldEqual, 2)\n\t\t})\n\n\t\trulePart := strings.Join(TransformIcmpProtoString(\"icmpv6/140/1,2\"), \" \")\n\t\tSo(rulePart, ShouldEqual, \"--icmp-type 140/1,2\")\n\n\t\t_, err = MakeRuleSpecText(ruleSpec, true)\n\t\tSo(err, ShouldBeNil)\n\t})\n\n\tConvey(\"When I parse a rule with an icmp nomatch\", t, func() {\n\t\truleSpec, err := ParseRuleSpec(strings.Split(\"-p icmp --icmp-type nomatch -j ACCEPT\", \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should recognize that it should never match\", func() {\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllow)\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, 1)\n\t\t\tSo(ruleSpec.IcmpMatch, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.IcmpMatch[0].Nomatch, ShouldBeTrue)\n\t\t})\n\n\t\t_, err = MakeRuleSpecText(ruleSpec, true)\n\t\tSo(err, ShouldBeNil)\n\t})\n\n\tConvey(\"When I parse a rule with an invalid icmp type/code\", t, func() {\n\t\t// invalid range separator\n\t\t_, err := ParseRuleSpec(strings.Split(\"-p 1 --icmp-type 3/0:4,15,6-7,14 -j ACCEPT\", \" \")...)\n\t\tSo(err, ShouldNotBeNil)\n\t\t// code out of range\n\t\t_, err = ParseRuleSpec(strings.Split(\"-p 1 --icmp-type 3/0:4,15,17:256 -j ACCEPT\", \" \")...)\n\t\tSo(err, ShouldNotBeNil)\n\t\t// code given but no 
type\n\t\t_, err = ParseRuleSpec(strings.Split(\"-p 1 --icmp-type /2 -j ACCEPT\", \" \")...)\n\t\tSo(err, ShouldNotBeNil)\n\t\t// type out of range\n\t\t_, err = ParseRuleSpec(strings.Split(\"-p 1 --icmp-type 2555/1,4 -j ACCEPT\", \" \")...)\n\t\tSo(err, ShouldNotBeNil)\n\t})\n\n\tConvey(\"When I handle a rule with policy restrictions\", t, func() {\n\n\t\trulePart, err := ReduceIcmpProtoString(\"icmp\", []string{\"icmp/1/1\", \"icmp/2/3:4\"})\n\t\tSo(err, ShouldBeNil)\n\t\tSo(rulePart, ShouldHaveLength, 4)\n\t\tSo(rulePart[0], ShouldEqual, \"--icmp-type\")\n\t\tSo(rulePart[1], ShouldEqual, \"1/1\")\n\t\tSo(rulePart[3], ShouldEqual, \"2/3:4\")\n\n\t\trulePart, err = ReduceIcmpProtoString(\"icmp/3\", []string{\"icmp/2\", \"icmp\", \"icmp/3/0\"})\n\t\tSo(err, ShouldBeNil)\n\t\tSo(rulePart, ShouldHaveLength, 4)\n\t\tSo(rulePart[1], ShouldEqual, \"3\")\n\t\tSo(rulePart[3], ShouldEqual, \"3/0\")\n\n\t\trulePart, err = ReduceIcmpProtoString(\"icmp/3/0:2,3:3,5:7,10:18\", []string{\"icmp/3/2:4,6:8,11\", \"icmp/3/1,2,4,6:7,9,14,16,18\", \"icmp/3/0,10,20:22\"})\n\t\tSo(err, ShouldBeNil)\n\t\tSo(rulePart, ShouldHaveLength, 6)\n\t\tSo(rulePart[1], ShouldEqual, \"3/2:2,3:3,6:7,11:11\")\n\t\tSo(rulePart[3], ShouldEqual, \"3/1:1,2:2,6:7,14:14,16:16,18:18\")\n\t\tSo(rulePart[5], ShouldEqual, \"3/0:0,10:10\")\n\n\t\trulePart, err = ReduceIcmpProtoString(\"icmp/8/10,1:3,7\", []string{\"icmp/2/10\", \"icmp/8\", \"icmp/8/0,4:6,11:20,9,8\"})\n\t\tSo(err, ShouldBeNil)\n\t\tSo(rulePart, ShouldHaveLength, 2)\n\t\tSo(rulePart[1], ShouldEqual, \"8/10,1:3,7\")\n\n\t\trulePart, err = ReduceIcmpProtoString(\"icmp\", []string{\"icmp/1/1\"})\n\t\tSo(err, ShouldBeNil)\n\t\tSo(rulePart, ShouldHaveLength, 2)\n\t\tSo(rulePart[1], ShouldEqual, \"1/1\")\n\n\t\trulePart, err = ReduceIcmpProtoString(\"icmp6\", []string{\"icmp6/1/0:255\"})\n\t\tSo(err, ShouldBeNil)\n\t\tSo(rulePart, ShouldHaveLength, 2)\n\t\tSo(rulePart[1], ShouldEqual, \"1/0:255\")\n\n\t\trulePart, err = 
ReduceIcmpProtoString(\"icmp/1\", []string{\"icmp\", \"icmp6\"})\n\t\tSo(err, ShouldBeNil)\n\t\tSo(rulePart, ShouldHaveLength, 2)\n\t\tSo(rulePart[1], ShouldEqual, \"1\")\n\n\t\t// proto match without type/code should return empty\n\t\trulePart, err = ReduceIcmpProtoString(\"icmp\", []string{\"icmp\"})\n\t\tSo(err, ShouldBeNil)\n\t\tSo(rulePart, ShouldHaveLength, 0)\n\n\t\t// proto match with type/code conflict should return error\n\t\t_, err = ReduceIcmpProtoString(\"icmp/0\", []string{\"icmp/1\"})\n\t\tSo(err, ShouldNotBeNil)\n\t})\n}\n\n// test generated acl rule\nfunc TestParseRuleSpecACLRule(t *testing.T) {\n\n\tConvey(\"When I parse an acl rule for nflog\", t, func() {\n\t\trsOrig := \"-p 6 -m set --match-set TRI-v4-ext-cUDEx1114Z2xd dst -m state --state NEW -m set ! --match-set TRI-v4-TargetTCP dst --match multiport --dports 1:65535 -m state --state NEW -j NFLOG --nflog-group 10 --nflog-prefix \\\"3617624947:5d6044b9e99572000149d650:5d60448a884e46000145cf67:incoming n_3484738895:6\\\"\"\n\n\t\trule, err := shellquote.Split(rsOrig)\n\t\tSo(err, ShouldBeNil)\n\n\t\truleSpec, err := ParseRuleSpec(rule...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should be able to interpret it as a windows rule\", func() {\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, packet.IPProtocolTCP)\n\t\t\tSo(ruleSpec.MatchSrcPort, ShouldHaveLength, 0)\n\t\t\tSo(ruleSpec.MatchDstPort, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchDstPort[0].Start, ShouldEqual, 1)\n\t\t\tSo(ruleSpec.MatchDstPort[0].End, ShouldEqual, 65535)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, 0) // degenerate log-only rule\n\t\t\tSo(ruleSpec.Log, ShouldBeTrue)\n\t\t\tSo(ruleSpec.LogPrefix, ShouldEqual, \"3617624947:5d6044b9e99572000149d650:5d60448a884e46000145cf67:incoming n_3484738895:6\")\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 2)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-v4-ext-cUDEx1114Z2xd\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, 
ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetName, ShouldEqual, \"TRI-v4-TargetTCP\")\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetNegate, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetDstPort, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse an acl rule for drop or accept\", t, func() {\n\t\trsOrig := \"-p 6 -m set --match-set TRI-v4-ext-cUDEx1114Z2xd dst -m state --state NEW -m set ! --match-set TRI-v4-TargetTCP dst --match multiport --dports 1:65535 -j DROP\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should be able to interpret it as a windows rule\", func() {\n\t\t\tSo(ruleSpec.Protocol, ShouldEqual, packet.IPProtocolTCP)\n\t\t\tSo(ruleSpec.MatchSrcPort, ShouldHaveLength, 0)\n\t\t\tSo(ruleSpec.MatchDstPort, ShouldHaveLength, 1)\n\t\t\tSo(ruleSpec.MatchDstPort[0].Start, ShouldEqual, 1)\n\t\t\tSo(ruleSpec.MatchDstPort[0].End, ShouldEqual, 65535)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionBlock)\n\t\t\tSo(ruleSpec.Log, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet, ShouldNotBeNil)\n\t\t\tSo(ruleSpec.MatchSet, ShouldHaveLength, 2)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetName, ShouldEqual, \"TRI-v4-ext-cUDEx1114Z2xd\")\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetNegate, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetDstPort, 
ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[0].MatchSetSrcPort, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetName, ShouldEqual, \"TRI-v4-TargetTCP\")\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetNegate, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetDstIP, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetDstPort, ShouldBeTrue)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetSrcIP, ShouldBeFalse)\n\t\t\tSo(ruleSpec.MatchSet[1].MatchSetSrcPort, ShouldBeFalse)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n}\n\n// test TCP flags and options\nfunc TestParseRuleSpecTCPFlagsAndOptions(t *testing.T) {\n\n\tConvey(\"When I parse tcp flags\", t, func() {\n\t\trsOrig := \"--tcp-flags 18,10 -j ACCEPT_ONCE\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should be able to interpret it as a windows rule\", func() {\n\t\t\tSo(ruleSpec.TCPFlags, ShouldEqual, 10)\n\t\t\tSo(ruleSpec.TCPFlagsMask, ShouldEqual, 18)\n\t\t\tSo(ruleSpec.TCPFlagsSpecified, ShouldEqual, true)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllowOnce)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse tcp options\", t, func() {\n\t\trsOrig := \"--tcp-option 34 -j ACCEPT_ONCE\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should be able to interpret it as a windows rule\", func() {\n\t\t\tSo(ruleSpec.TCPOption, ShouldEqual, 34)\n\t\t\tSo(ruleSpec.TCPOptionSpecified, ShouldEqual, true)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllowOnce)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() 
{\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n}\n\nfunc TestParseRuleSpecConnmark(t *testing.T) {\n\n\tConvey(\"When I parse connmark\", t, func() {\n\t\trsOrig := \"-m set --match-set TRI-ipset-1 srcIP -m connmark --mark 18 -j ACCEPT_ONCE\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should be able to interpret it as a windows rule\", func() {\n\t\t\tSo(ruleSpec.FlowMark, ShouldEqual, 18)\n\t\t\tSo(ruleSpec.FlowMarkNoMatch, ShouldEqual, false)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllowOnce)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse ! connmark\", t, func() {\n\t\trsOrig := \"-m connmark --mark ! 18 -j ACCEPT_ONCE\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should be able to interpret it as a windows rule\", func() {\n\t\t\tSo(ruleSpec.FlowMark, ShouldEqual, 18)\n\t\t\tSo(ruleSpec.FlowMarkNoMatch, ShouldEqual, true)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionAllowOnce)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse connmark with invalid !\", t, func() {\n\t\trsOrig := \"-m connmark --mark x 18 -j ACCEPT_ONCE\"\n\t\t_, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldNotBeNil)\n\t})\n\n\tConvey(\"When I parse connmark with invalid mark\", t, func() {\n\t\trsOrig := \"-m connmark --mark abc -j ACCEPT_ONCE\"\n\t\t_, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldNotBeNil)\n\t})\n}\n\nfunc TestParseRuleSpecSetMark(t *testing.T) {\n\n\tConvey(\"When I parse setmark\", t, func() {\n\t\trsOrig := 
\"-m connmark --mark 18 -j CONNMARK --set-mark 15\"\n\t\truleSpec, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"I should be able to interpret it as a windows rule\", func() {\n\t\t\tSo(ruleSpec.FlowMark, ShouldEqual, 18)\n\t\t\tSo(ruleSpec.FlowMarkNoMatch, ShouldEqual, false)\n\t\t\tSo(ruleSpec.Action, ShouldEqual, frontman.FilterActionSetMark)\n\t\t\tSo(ruleSpec.Mark, ShouldEqual, 15)\n\t\t})\n\t\tConvey(\"I should be able to convert back to a string\", func() {\n\t\t\t_, err := MakeRuleSpecText(ruleSpec, true)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I parse setmark without a mark\", t, func() {\n\t\trsOrig := \"-m connmark --mark 18 -j CONNMARK\"\n\t\t_, err := ParseRuleSpec(strings.Split(rsOrig, \" \")...)\n\t\tSo(err, ShouldNotBeNil)\n\t})\n}\n"
  },
  {
    "path": "controller/internal/windows/utils.go",
    "content": "// +build windows\n\npackage windows\n\nimport (\n\t\"syscall\"\n\t\"unsafe\"\n)\n\n// WideCharPointerToString converts a pointer to a zero-terminated wide character string to a golang string\nfunc WideCharPointerToString(pszWide *uint16) string {\n\n\tptr := uintptr(unsafe.Pointer(pszWide)) //nolint:govet\n\tbuf := make([]uint16, 0, 256)\n\tfor {\n\t\tch := *((*uint16)(unsafe.Pointer(ptr))) //nolint:govet\n\t\tbuf = append(buf, ch)\n\t\tif ch == 0 {\n\t\t\tbreak\n\t\t}\n\n\t\tptr += 2\n\t}\n\treturn syscall.UTF16ToString(buf)\n}\n"
  },
  {
    "path": "controller/mockcontroller/mocktrireme.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/interfaces.go\n\n// Package mockcontroller is a generated GoMock package.\npackage mockcontroller\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\ttime \"time\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tpackettracing \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packettracing\"\n\tsecrets \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\truntime \"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\tpolicy \"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// MockTriremeController is a mock of TriremeController interface\n// nolint\ntype MockTriremeController struct {\n\tctrl     *gomock.Controller\n\trecorder *MockTriremeControllerMockRecorder\n}\n\n// MockTriremeControllerMockRecorder is the mock recorder for MockTriremeController\n// nolint\ntype MockTriremeControllerMockRecorder struct {\n\tmock *MockTriremeController\n}\n\n// NewMockTriremeController creates a new mock instance\n// nolint\nfunc NewMockTriremeController(ctrl *gomock.Controller) *MockTriremeController {\n\tmock := &MockTriremeController{ctrl: ctrl}\n\tmock.recorder = &MockTriremeControllerMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockTriremeController) EXPECT() *MockTriremeControllerMockRecorder {\n\treturn m.recorder\n}\n\n// Run mocks base method\n// nolint\nfunc (m *MockTriremeController) Run(ctx context.Context) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Run\", ctx)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Run indicates an expected call of Run\n// nolint\nfunc (mr *MockTriremeControllerMockRecorder) Run(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Run\", reflect.TypeOf((*MockTriremeController)(nil).Run), ctx)\n}\n\n// CleanUp mocks base method\n// nolint\nfunc (m 
*MockTriremeController) CleanUp() error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CleanUp\")\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// CleanUp indicates an expected call of CleanUp\n// nolint\nfunc (mr *MockTriremeControllerMockRecorder) CleanUp() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CleanUp\", reflect.TypeOf((*MockTriremeController)(nil).CleanUp))\n}\n\n// Enforce mocks base method\n// nolint\nfunc (m *MockTriremeController) Enforce(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Enforce\", ctx, puID, policy, runtime)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Enforce indicates an expected call of Enforce\n// nolint\nfunc (mr *MockTriremeControllerMockRecorder) Enforce(ctx, puID, policy, runtime interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Enforce\", reflect.TypeOf((*MockTriremeController)(nil).Enforce), ctx, puID, policy, runtime)\n}\n\n// UnEnforce mocks base method\n// nolint\nfunc (m *MockTriremeController) UnEnforce(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UnEnforce\", ctx, puID, policy, runtime)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UnEnforce indicates an expected call of UnEnforce\n// nolint\nfunc (mr *MockTriremeControllerMockRecorder) UnEnforce(ctx, puID, policy, runtime interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UnEnforce\", reflect.TypeOf((*MockTriremeController)(nil).UnEnforce), ctx, puID, policy, runtime)\n}\n\n// UpdatePolicy mocks base method\n// nolint\nfunc (m *MockTriremeController) UpdatePolicy(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime) error 
{\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UpdatePolicy\", ctx, puID, policy, runtime)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UpdatePolicy indicates an expected call of UpdatePolicy\n// nolint\nfunc (mr *MockTriremeControllerMockRecorder) UpdatePolicy(ctx, puID, policy, runtime interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdatePolicy\", reflect.TypeOf((*MockTriremeController)(nil).UpdatePolicy), ctx, puID, policy, runtime)\n}\n\n// UpdateSecrets mocks base method\n// nolint\nfunc (m *MockTriremeController) UpdateSecrets(secrets secrets.Secrets) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UpdateSecrets\", secrets)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UpdateSecrets indicates an expected call of UpdateSecrets\n// nolint\nfunc (mr *MockTriremeControllerMockRecorder) UpdateSecrets(secrets interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateSecrets\", reflect.TypeOf((*MockTriremeController)(nil).UpdateSecrets), secrets)\n}\n\n// UpdateConfiguration mocks base method\n// nolint\nfunc (m *MockTriremeController) UpdateConfiguration(cfg *runtime.Configuration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UpdateConfiguration\", cfg)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UpdateConfiguration indicates an expected call of UpdateConfiguration\n// nolint\nfunc (mr *MockTriremeControllerMockRecorder) UpdateConfiguration(cfg interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateConfiguration\", reflect.TypeOf((*MockTriremeController)(nil).UpdateConfiguration), cfg)\n}\n\n// EnableDatapathPacketTracing mocks base method\n// nolint\nfunc (m *MockTriremeController) EnableDatapathPacketTracing(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, direction 
packettracing.TracingDirection, interval time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"EnableDatapathPacketTracing\", ctx, puID, policy, runtime, direction, interval)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// EnableDatapathPacketTracing indicates an expected call of EnableDatapathPacketTracing\n// nolint\nfunc (mr *MockTriremeControllerMockRecorder) EnableDatapathPacketTracing(ctx, puID, policy, runtime, direction, interval interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"EnableDatapathPacketTracing\", reflect.TypeOf((*MockTriremeController)(nil).EnableDatapathPacketTracing), ctx, puID, policy, runtime, direction, interval)\n}\n\n// EnableIPTablesPacketTracing mocks base method\n// nolint\nfunc (m *MockTriremeController) EnableIPTablesPacketTracing(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, interval time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"EnableIPTablesPacketTracing\", ctx, puID, policy, runtime, interval)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// EnableIPTablesPacketTracing indicates an expected call of EnableIPTablesPacketTracing\n// nolint\nfunc (mr *MockTriremeControllerMockRecorder) EnableIPTablesPacketTracing(ctx, puID, policy, runtime, interval interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"EnableIPTablesPacketTracing\", reflect.TypeOf((*MockTriremeController)(nil).EnableIPTablesPacketTracing), ctx, puID, policy, runtime, interval)\n}\n\n// Ping mocks base method\n// nolint\nfunc (m *MockTriremeController) Ping(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, pingConfig *policy.PingConfig) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Ping\", ctx, puID, policy, runtime, pingConfig)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Ping indicates an expected 
call of Ping\n// nolint\nfunc (mr *MockTriremeControllerMockRecorder) Ping(ctx, puID, policy, runtime, pingConfig interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Ping\", reflect.TypeOf((*MockTriremeController)(nil).Ping), ctx, puID, policy, runtime, pingConfig)\n}\n\n// DebugCollect mocks base method\n// nolint\nfunc (m *MockTriremeController) DebugCollect(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, debugConfig *policy.DebugConfig) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DebugCollect\", ctx, puID, policy, runtime, debugConfig)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// DebugCollect indicates an expected call of DebugCollect\n// nolint\nfunc (mr *MockTriremeControllerMockRecorder) DebugCollect(ctx, puID, policy, runtime, debugConfig interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DebugCollect\", reflect.TypeOf((*MockTriremeController)(nil).DebugCollect), ctx, puID, policy, runtime, debugConfig)\n}\n\n// MockDebugInfo is a mock of DebugInfo interface\n// nolint\ntype MockDebugInfo struct {\n\tctrl     *gomock.Controller\n\trecorder *MockDebugInfoMockRecorder\n}\n\n// MockDebugInfoMockRecorder is the mock recorder for MockDebugInfo\n// nolint\ntype MockDebugInfoMockRecorder struct {\n\tmock *MockDebugInfo\n}\n\n// NewMockDebugInfo creates a new mock instance\n// nolint\nfunc NewMockDebugInfo(ctrl *gomock.Controller) *MockDebugInfo {\n\tmock := &MockDebugInfo{ctrl: ctrl}\n\tmock.recorder = &MockDebugInfoMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockDebugInfo) EXPECT() *MockDebugInfoMockRecorder {\n\treturn m.recorder\n}\n\n// EnableDatapathPacketTracing mocks base method\n// nolint\nfunc (m *MockDebugInfo) EnableDatapathPacketTracing(ctx context.Context, puID string, policy 
*policy.PUPolicy, runtime *policy.PURuntime, direction packettracing.TracingDirection, interval time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"EnableDatapathPacketTracing\", ctx, puID, policy, runtime, direction, interval)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// EnableDatapathPacketTracing indicates an expected call of EnableDatapathPacketTracing\n// nolint\nfunc (mr *MockDebugInfoMockRecorder) EnableDatapathPacketTracing(ctx, puID, policy, runtime, direction, interval interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"EnableDatapathPacketTracing\", reflect.TypeOf((*MockDebugInfo)(nil).EnableDatapathPacketTracing), ctx, puID, policy, runtime, direction, interval)\n}\n\n// EnableIPTablesPacketTracing mocks base method\n// nolint\nfunc (m *MockDebugInfo) EnableIPTablesPacketTracing(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, interval time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"EnableIPTablesPacketTracing\", ctx, puID, policy, runtime, interval)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// EnableIPTablesPacketTracing indicates an expected call of EnableIPTablesPacketTracing\n// nolint\nfunc (mr *MockDebugInfoMockRecorder) EnableIPTablesPacketTracing(ctx, puID, policy, runtime, interval interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"EnableIPTablesPacketTracing\", reflect.TypeOf((*MockDebugInfo)(nil).EnableIPTablesPacketTracing), ctx, puID, policy, runtime, interval)\n}\n\n// Ping mocks base method\n// nolint\nfunc (m *MockDebugInfo) Ping(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, pingConfig *policy.PingConfig) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Ping\", ctx, puID, policy, runtime, pingConfig)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Ping indicates an 
expected call of Ping\n// nolint\nfunc (mr *MockDebugInfoMockRecorder) Ping(ctx, puID, policy, runtime, pingConfig interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Ping\", reflect.TypeOf((*MockDebugInfo)(nil).Ping), ctx, puID, policy, runtime, pingConfig)\n}\n\n// DebugCollect mocks base method\n// nolint\nfunc (m *MockDebugInfo) DebugCollect(ctx context.Context, puID string, policy *policy.PUPolicy, runtime *policy.PURuntime, debugConfig *policy.DebugConfig) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DebugCollect\", ctx, puID, policy, runtime, debugConfig)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// DebugCollect indicates an expected call of DebugCollect\n// nolint\nfunc (mr *MockDebugInfoMockRecorder) DebugCollect(ctx, puID, policy, runtime, debugConfig interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DebugCollect\", reflect.TypeOf((*MockDebugInfo)(nil).DebugCollect), ctx, puID, policy, runtime, debugConfig)\n}\n"
  },
  {
    "path": "controller/pkg/aclprovider/ipsetprovider.go",
    "content": "// +build linux darwin\n\npackage provider\n\nimport (\n\t\"fmt\"\n\t\"os/exec\"\n\t\"strings\"\n\n\t\"github.com/aporeto-inc/go-ipset/ipset\"\n\t\"go.uber.org/zap\"\n)\n\n// IpsetProvider returns a fabric for Ipset.\ntype IpsetProvider interface {\n\tNewIpset(name string, ipsetType string, p *ipset.Params) (Ipset, error)\n\tGetIpset(name string) Ipset\n\tDestroyAll(prefix string) error\n\tListIPSets() ([]string, error)\n}\n\n// Ipset is an abstraction of all the methods an implementation of userspace\n// ipsets need to provide.\ntype Ipset interface {\n\tAdd(entry string, timeout int) error\n\tAddOption(entry string, option string, timeout int) error\n\tDel(entry string) error\n\tDestroy() error\n\tFlush() error\n\tTest(entry string) (bool, error)\n}\n\ntype goIpsetProvider struct{}\n\nfunc ipsetCreateBitmapPort(setname string) error {\n\t//Bitmap type is not supported by the ipset library\n\tpath, _ := exec.LookPath(\"ipset\")\n\tout, err := exec.Command(path, \"create\", setname, \"bitmap:port\", \"range\", \"0-65535\", \"timeout\", \"0\").CombinedOutput()\n\tif err != nil {\n\t\tif strings.Contains(string(out), \"set with the same name already exists\") {\n\t\t\tzap.L().Warn(\"Set already exists - cleaning up\", zap.String(\"set name\", setname))\n\t\t\t// Clean up the existing set\n\t\t\tif _, cerr := exec.Command(path, \"-F\", setname).CombinedOutput(); cerr != nil {\n\t\t\t\treturn fmt.Errorf(\"Failed to clean up existing ipset: %s\", err)\n\t\t\t}\n\t\t\treturn nil\n\t\t}\n\t\tzap.L().Error(\"Unable to create set\", zap.String(\"set name\", setname), zap.String(\"ipset-output\", string(out)))\n\t}\n\treturn err\n}\n\n// NewIpset returns an IpsetProvider interface based on the go-ipset\n// external package.\nfunc (i *goIpsetProvider) NewIpset(name string, ipsetType string, p *ipset.Params) (Ipset, error) {\n\t// Check if hashtype is a type of hash\n\tif strings.HasPrefix(ipsetType, \"hash:\") {\n\t\treturn ipset.New(name, ipsetType, 
p)\n\t}\n\n\tif err := ipsetCreateBitmapPort(name); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &ipset.IPSet{Name: name}, nil\n}\n\n// GetIpset gets the ipset object from the name.\nfunc (i *goIpsetProvider) GetIpset(name string) Ipset {\n\treturn &ipset.IPSet{\n\t\tName: name,\n\t}\n}\n\n// DestroyAll destroys all the ipsets - it will fail if there are existing references\nfunc (i *goIpsetProvider) DestroyAll(prefix string) error {\n\n\treturn ipset.DestroyAll()\n}\n\nfunc (i *goIpsetProvider) ListIPSets() ([]string, error) {\n\n\tpath, err := exec.LookPath(\"ipset\")\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"ipset command not found: %s\", err)\n\t}\n\n\tout, err := exec.Command(path, \"-L\", \"-name\").CombinedOutput()\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to list ipsets: %s\", err)\n\t}\n\n\treturn strings.Split(string(out), \"\\n\"), nil\n}\n\n// NewGoIPsetProvider returns a Go IPSet Provider\nfunc NewGoIPsetProvider() IpsetProvider {\n\treturn &goIpsetProvider{}\n}\n"
  },
  {
    "path": "controller/pkg/aclprovider/ipsetprovider_windows.go",
    "content": "// +build windows\n\npackage provider\n\nimport (\n\t\"fmt\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/aporeto-inc/go-ipset/ipset\"\n\t\"go.aporeto.io/trireme-lib/utils/frontman\"\n)\n\n// IpsetProvider returns a fabric for Ipset.\ntype IpsetProvider interface {\n\tNewIpset(name string, ipsetType string, p *ipset.Params) (Ipset, error)\n\tGetIpset(name string) Ipset\n\tDestroyAll(prefix string) error\n\tListIPSets() ([]string, error)\n}\n\n// Ipset is an abstraction of all the methods an implementation of userspace\n// ipsets need to provide.\ntype Ipset interface {\n\tAdd(entry string, timeout int) error\n\tAddOption(entry string, option string, timeout int) error\n\tDel(entry string) error\n\tDestroy() error\n\tFlush() error\n\tTest(entry string) (bool, error)\n}\n\ntype ipsetProvider struct{}\n\ntype winIPSet struct {\n\thandle uintptr\n\tname   string // for debugging\n}\n\n// NewIpset returns an IpsetProvider interface based on the go-ipset\n// external package.\nfunc (i *ipsetProvider) NewIpset(name string, ipsetType string, p *ipset.Params) (Ipset, error) {\n\tipsetHandle, err := frontman.Wrapper.NewIpset(name, ipsetType)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn &winIPSet{ipsetHandle, name}, nil\n}\n\n// GetIpset gets the ipset object from the name.\n// Note that the interface can't return error here, but since it's possible to fail in Windows,\n// we log error and return incomplete object, and expect a failure from Frontman on a later call.\nfunc (i *ipsetProvider) GetIpset(name string) Ipset {\n\tipsetHandle, err := frontman.Wrapper.GetIpset(name)\n\tif err != nil {\n\t\tzap.L().Error(fmt.Sprintf(\"failed to get ipset %s\", name), zap.Error(err))\n\t\treturn &winIPSet{0, name}\n\t}\n\treturn &winIPSet{ipsetHandle, name}\n}\n\n// DestroyAll destroys all the ipsets - it will fail if there are existing references\nfunc (i *ipsetProvider) DestroyAll(prefix string) error {\n\treturn 
frontman.Wrapper.DestroyAllIpsets(prefix)\n}\n\nfunc (i *ipsetProvider) ListIPSets() ([]string, error) {\n\treturn frontman.Wrapper.ListIpsets()\n}\n\n// NewGoIPsetProvider returns a Go IPSet Provider\nfunc NewGoIPsetProvider() IpsetProvider {\n\treturn &ipsetProvider{}\n}\n\nfunc (w *winIPSet) Add(entry string, timeout int) error {\n\treturn frontman.Wrapper.IpsetAdd(w.handle, entry, timeout)\n}\n\nfunc (w *winIPSet) AddOption(entry string, option string, timeout int) error {\n\treturn frontman.Wrapper.IpsetAddOption(w.handle, entry, option, timeout)\n}\n\nfunc (w *winIPSet) Del(entry string) error {\n\treturn frontman.Wrapper.IpsetDelete(w.handle, entry)\n}\n\nfunc (w *winIPSet) Destroy() error {\n\treturn frontman.Wrapper.IpsetDestroy(w.handle)\n}\n\nfunc (w *winIPSet) Flush() error {\n\treturn frontman.Wrapper.IpsetFlush(w.handle)\n}\n\nfunc (w *winIPSet) Test(entry string) (bool, error) {\n\treturn frontman.Wrapper.IpsetTest(w.handle, entry)\n}\n"
  },
  {
    "path": "controller/pkg/aclprovider/ipsetprovidermock.go",
    "content": "package provider\n\nimport (\n\t\"sync\"\n\t\"testing\"\n\n\t\"github.com/aporeto-inc/go-ipset/ipset\"\n)\n\ntype ipsetProviderMockedMethods struct {\n\tnewMockIPset   func(name string, hasht string, p *ipset.Params) (Ipset, error)\n\tgetMockIPset   func(name string) Ipset\n\tdestroyAllMock func(prefix string) error\n\tlistIPSetsMock func() ([]string, error)\n}\n\n// TestIpsetProvider is a test implementation for IpsetProvider\ntype TestIpsetProvider interface {\n\tIpsetProvider\n\tMockNewIpset(t *testing.T, impl func(name string, hasht string, p *ipset.Params) (Ipset, error))\n\tMockGetIpset(t *testing.T, impl func(name string) Ipset)\n\tMockDestroyAll(t *testing.T, impl func(string) error)\n\tMockListIPSets(t *testing.T, impl func() ([]string, error))\n}\n\ntype testIpsetProvider struct {\n\tmocks       map[*testing.T]*ipsetProviderMockedMethods\n\tlock        *sync.Mutex\n\tcurrentTest *testing.T\n}\n\n// NewTestIpsetProvider returns a new TestManipulator.\nfunc NewTestIpsetProvider() TestIpsetProvider {\n\treturn &testIpsetProvider{\n\t\tlock:  &sync.Mutex{},\n\t\tmocks: map[*testing.T]*ipsetProviderMockedMethods{},\n\t}\n}\n\nfunc (m *testIpsetProvider) MockNewIpset(t *testing.T, impl func(name string, hasht string, p *ipset.Params) (Ipset, error)) {\n\n\tm.currentMocks(t).newMockIPset = impl\n}\n\nfunc (m *testIpsetProvider) MockGetIpset(t *testing.T, impl func(name string) Ipset) {\n\tm.currentMocks(t).getMockIPset = impl\n}\n\nfunc (m *testIpsetProvider) MockDestroyAll(t *testing.T, impl func(string) error) {\n\n\tm.currentMocks(t).destroyAllMock = impl\n}\n\nfunc (m *testIpsetProvider) MockListIPSets(t *testing.T, impl func() ([]string, error)) {\n\n\tm.currentMocks(t).listIPSetsMock = impl\n}\n\nfunc (m *testIpsetProvider) NewIpset(name string, hasht string, p *ipset.Params) (Ipset, error) {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.newMockIPset != nil {\n\t\treturn mock.newMockIPset(name, hasht, 
p)\n\t}\n\n\treturn NewTestIpset(), nil\n}\n\nfunc (m *testIpsetProvider) GetIpset(name string) Ipset {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.newMockIPset != nil {\n\t\treturn mock.getMockIPset(name)\n\t}\n\n\treturn NewTestIpset()\n}\n\nfunc (m *testIpsetProvider) DestroyAll(prefix string) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.destroyAllMock != nil {\n\t\treturn mock.destroyAllMock(prefix)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIpsetProvider) ListIPSets() ([]string, error) {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.listIPSetsMock != nil {\n\t\treturn mock.listIPSetsMock()\n\t}\n\n\treturn nil, nil\n}\n\nfunc (m *testIpsetProvider) currentMocks(t *testing.T) *ipsetProviderMockedMethods {\n\tm.lock.Lock()\n\tdefer m.lock.Unlock()\n\n\tmocks := m.mocks[t]\n\n\tif mocks == nil {\n\t\tmocks = &ipsetProviderMockedMethods{}\n\t\tm.mocks[t] = mocks\n\t}\n\n\tm.currentTest = t\n\treturn mocks\n}\n\ntype ipsetMockedMethods struct {\n\taddMock       func(entry string, timeout int) error\n\taddOptionMock func(entry string, option string, timeout int) error\n\tdelMock       func(entry string) error\n\tdestroyMock   func() error\n\tflushMock     func() error\n\ttestMock      func(entry string) (bool, error)\n}\n\n// TestIpset is a test implementation for Ipset\ntype TestIpset interface {\n\tIpset\n\tMockAdd(t *testing.T, impl func(entry string, timeout int) error)\n\tMockAddOption(t *testing.T, impl func(entry string, option string, timeout int) error)\n\tMockDel(t *testing.T, impl func(entry string) error)\n\tMockDestroy(t *testing.T, impl func() error)\n\tMockFlush(t *testing.T, impl func() error)\n\tMockTest(t *testing.T, impl func(entry string) (bool, error))\n}\n\ntype testIpset struct {\n\tmocks       map[*testing.T]*ipsetMockedMethods\n\tlock        *sync.Mutex\n\tcurrentTest *testing.T\n}\n\n// NewTestIpset returns a new TestManipulator.\nfunc NewTestIpset() TestIpset 
{\n\treturn &testIpset{\n\t\tlock:  &sync.Mutex{},\n\t\tmocks: map[*testing.T]*ipsetMockedMethods{},\n\t}\n}\n\nfunc (m *testIpset) MockAdd(t *testing.T, impl func(entry string, timeout int) error) {\n\n\tm.currentMocks(t).addMock = impl\n}\n\nfunc (m *testIpset) MockAddOption(t *testing.T, impl func(entry string, option string, timeout int) error) {\n\n\tm.currentMocks(t).addOptionMock = impl\n}\n\nfunc (m *testIpset) MockDel(t *testing.T, impl func(entry string) error) {\n\n\tm.currentMocks(t).delMock = impl\n}\n\nfunc (m *testIpset) MockDestroy(t *testing.T, impl func() error) {\n\n\tm.currentMocks(t).destroyMock = impl\n}\n\nfunc (m *testIpset) MockFlush(t *testing.T, impl func() error) {\n\n\tm.currentMocks(t).flushMock = impl\n}\n\nfunc (m *testIpset) MockTest(t *testing.T, impl func(entry string) (bool, error)) {\n\n\tm.currentMocks(t).testMock = impl\n}\n\nfunc (m *testIpset) Add(entry string, timeout int) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.addMock != nil {\n\t\treturn mock.addMock(entry, timeout)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIpset) AddOption(entry string, option string, timeout int) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.addOptionMock != nil {\n\t\treturn mock.addOptionMock(entry, option, timeout)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIpset) Del(entry string) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.delMock != nil {\n\t\treturn mock.delMock(entry)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIpset) Destroy() error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.destroyMock != nil {\n\t\treturn mock.destroyMock()\n\t}\n\treturn nil\n\n}\n\nfunc (m *testIpset) Flush() error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.flushMock != nil {\n\t\treturn mock.flushMock()\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIpset) Test(entry string) (bool, error) {\n\n\tif mock := m.currentMocks(m.currentTest); mock 
!= nil && mock.testMock != nil {\n\t\treturn mock.testMock(entry)\n\t}\n\n\treturn false, nil\n}\n\nfunc (m *testIpset) currentMocks(t *testing.T) *ipsetMockedMethods {\n\tm.lock.Lock()\n\tdefer m.lock.Unlock()\n\n\tmocks := m.mocks[t]\n\n\tif mocks == nil {\n\t\tmocks = &ipsetMockedMethods{}\n\t\tm.mocks[t] = mocks\n\t}\n\n\tm.currentTest = t\n\treturn mocks\n}\n"
  },
  {
    "path": "controller/pkg/aclprovider/iptablesprovider.go",
    "content": "// +build linux darwin\n\npackage provider\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os/exec\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"go.uber.org/zap\"\n)\n\n// IptablesProvider is an abstraction of all the methods an implementation of userspace\n// iptables need to provide.\ntype IptablesProvider interface {\n\tBaseIPTables\n\t// Commit will commit changes if it is a batch provider.\n\tCommit() error\n\t// RetrieveTable allows a caller to retrieve the final table.\n\tRetrieveTable() map[string]map[string][]string\n\t// ResetRules resets the rules to a state where rules with the substring subs are removed\n\tResetRules(subs string) error\n}\n\n// BaseIPTables is the base interface of iptables functions.\ntype BaseIPTables interface {\n\t// Append apends a rule to chain of table\n\tAppend(table, chain string, rulespec ...string) error\n\t// Insert inserts a rule to a chain of table at the required pos\n\tInsert(table, chain string, pos int, rulespec ...string) error\n\t// Delete deletes a rule of a chain in the given table\n\tDelete(table, chain string, rulespec ...string) error\n\t// ListChains lists all the chains associated with a table\n\tListChains(table string) ([]string, error)\n\t// ClearChain clears a chain in a table\n\tClearChain(table, chain string) error\n\t// DeleteChain deletes a chain in the table. 
There should be no references to this chain\n\tDeleteChain(table, chain string) error\n\t// NewChain creates a new chain\n\tNewChain(table, chain string) error\n\t// ListRules lists the rules in the table/chain passed to it\n\tListRules(table, chain string) ([]string, error)\n}\n\n// BatchProvider uses iptables-restore to program ACLs\ntype BatchProvider struct {\n\tipt BaseIPTables\n\n\t//        TABLE      CHAIN    RULES\n\trules       map[string]map[string][]string\n\tbatchTables map[string]bool\n\n\t// Allowing for custom commit functions for testing\n\tcommitFunc  func(buf *bytes.Buffer) error\n\tcustomChain string\n\tsync.Mutex\n\tcmd        string\n\trestoreCmd string\n\tsaveCmd    string\n\tquote      bool\n}\n\nconst (\n\tcmdV4        = \"iptables --wait\"\n\tcmdV6        = \"ip6tables --wait\"\n\trestoreCmdV4 = \"iptables-restore\"\n\trestoreCmdV6 = \"ip6tables-restore\"\n\tsaveCmdV4    = \"iptables-save\"\n\tsaveCmdV6    = \"ip6tables-save\"\n)\n\n// TestIptablesPinned returns error if the kernel doesn't support bpf pinning in iptables\nfunc TestIptablesPinned(bpf string) error {\n\tcmd := exec.Command(\"aporeto-iptables\", strings.Fields(\"iptables --wait -t mangle -I OUTPUT -m bpf --object-pinned \"+bpf+\" -j LOG\")...)\n\tif _, err := cmd.CombinedOutput(); err != nil {\n\t\treturn err\n\t}\n\n\tcmd = exec.Command(\"aporeto-iptables\", strings.Fields(\"iptables --wait -t mangle -D OUTPUT -m bpf --object-pinned \"+bpf+\" -j LOG\")...)\n\tif _, err := cmd.CombinedOutput(); err != nil {\n\t\tzap.L().Error(\"Error removing rule\", zap.Error(err))\n\t}\n\n\treturn nil\n}\n\n// NewGoIPTablesProviderV4 returns an IptablesProvider interface based on the go-iptables\n// external package.\nfunc NewGoIPTablesProviderV4(batchTables []string, customChain string) (IptablesProvider, error) {\n\n\tbatchTablesMap := map[string]bool{}\n\tfor _, t := range batchTables {\n\t\tbatchTablesMap[t] = true\n\t}\n\n\tb := &BatchProvider{\n\t\tcmd:         cmdV4,\n\t\trules:      
 map[string]map[string][]string{},\n\t\tbatchTables: batchTablesMap,\n\t\trestoreCmd:  restoreCmdV4,\n\t\tsaveCmd:     saveCmdV4,\n\t\tcustomChain: customChain,\n\t\tquote:       true,\n\t}\n\n\tb.commitFunc = b.restore\n\n\treturn b, nil\n}\n\n// NewGoIPTablesProviderV6 returns an IptablesProvider interface based on the go-iptables\n// external package.\nfunc NewGoIPTablesProviderV6(batchTables []string, customChain string) (IptablesProvider, error) {\n\n\tbatchTablesMap := map[string]bool{}\n\tfor _, t := range batchTables {\n\t\tbatchTablesMap[t] = true\n\t}\n\n\tb := &BatchProvider{\n\t\tcmd:         cmdV6,\n\t\trules:       map[string]map[string][]string{},\n\t\tbatchTables: batchTablesMap,\n\t\tcustomChain: customChain,\n\t\trestoreCmd:  restoreCmdV6,\n\t\tsaveCmd:     saveCmdV6,\n\t\tquote:       true,\n\t}\n\n\tb.commitFunc = b.restore\n\n\treturn b, nil\n}\n\n// NewCustomBatchProvider is a custom batch provider where the downstream\n// iptables utility is provided by the caller. 
Very useful for testing\n// the ACL functions with a mock.\nfunc NewCustomBatchProvider(ipt BaseIPTables, commit func(buf *bytes.Buffer) error, batchTables []string) *BatchProvider {\n\n\tbatchTablesMap := map[string]bool{}\n\n\tfor _, t := range batchTables {\n\t\tbatchTablesMap[t] = true\n\t}\n\n\treturn &BatchProvider{\n\t\tipt:         ipt,\n\t\trules:       map[string]map[string][]string{},\n\t\tbatchTables: batchTablesMap,\n\t\tcommitFunc:  commit,\n\t}\n}\n\nfunc createIPtablesCommand(iptablesCmd, table, chain, action string, rulespec ...string) []string {\n\tcmd := strings.Fields(iptablesCmd)\n\tcmd = append(cmd, \"-t\")\n\tcmd = append(cmd, table)\n\tcmd = append(cmd, action)\n\tcmd = append(cmd, chain)\n\tcmd = append(cmd, rulespec...)\n\treturn cmd\n}\n\n// Append will append the provided rule to the local cache or call\n// directly the iptables command depending on the table.\nfunc (b *BatchProvider) Append(table, chain string, rulespec ...string) error {\n\tb.Lock()\n\tdefer b.Unlock()\n\n\tif len(rulespec) == 0 {\n\t\treturn nil\n\t}\n\n\tif _, ok := b.batchTables[table]; !ok {\n\t\tcmd := createIPtablesCommand(b.cmd, table, chain, \"-A\", rulespec...)\n\t\texecCmd := exec.Command(\"aporeto-iptables\", cmd...)\n\t\ts, err := execCmd.CombinedOutput()\n\t\tif err != nil {\n\t\t\treturn errors.New(string(s))\n\t\t}\n\n\t\treturn nil\n\t}\n\n\tif _, ok := b.rules[table]; !ok {\n\t\tb.rules[table] = map[string][]string{}\n\t}\n\n\tif _, ok := b.rules[table][chain]; !ok {\n\t\tb.rules[table][chain] = []string{}\n\t}\n\n\tb.quoteRulesSpec(rulespec)\n\n\trule := strings.Join(rulespec, \" \")\n\tb.rules[table][chain] = append(b.rules[table][chain], rule)\n\n\treturn nil\n}\n\n// Insert will insert the rule in the corresponding position in the local\n// cache or call the corresponding iptables command, depending on the table.\nfunc (b *BatchProvider) Insert(table, chain string, pos int, rulespec ...string) error {\n\n\tb.Lock()\n\tdefer b.Unlock()\n\n\tif _, ok 
:= b.batchTables[table]; !ok {\n\t\tcmd := createIPtablesCommand(b.cmd, table, chain, \"-I\", rulespec...)\n\t\texecCmd := exec.Command(\"aporeto-iptables\", cmd...)\n\t\ts, err := execCmd.CombinedOutput()\n\t\tif err != nil {\n\t\t\treturn errors.New(string(s))\n\t\t}\n\t\treturn nil\n\t}\n\n\tif _, ok := b.rules[table]; !ok {\n\t\tb.rules[table] = map[string][]string{}\n\t}\n\n\tif _, ok := b.rules[table][chain]; !ok {\n\t\tb.rules[table][chain] = []string{}\n\t}\n\n\tb.quoteRulesSpec(rulespec)\n\n\trule := strings.Join(rulespec, \" \")\n\n\tif pos == 1 {\n\t\tb.rules[table][chain] = append([]string{rule}, b.rules[table][chain]...)\n\t} else if pos > len(b.rules[table][chain]) {\n\t\tb.rules[table][chain] = append(b.rules[table][chain], rule)\n\t} else {\n\t\tb.rules[table][chain] = append(b.rules[table][chain], \"newvalue\")\n\t\tcopy(b.rules[table][chain][pos-1:], b.rules[table][chain][pos-2:])\n\t\tb.rules[table][chain][pos-1] = rule\n\t}\n\n\treturn nil\n}\n\n// Delete will delete the rule from the local cache or the system.\nfunc (b *BatchProvider) Delete(table, chain string, rulespec ...string) error {\n\tb.Lock()\n\tdefer b.Unlock()\n\n\tif _, ok := b.batchTables[table]; !ok {\n\t\tcmd := createIPtablesCommand(b.cmd, table, chain, \"-D\", rulespec...)\n\t\texecCmd := exec.Command(\"aporeto-iptables\", cmd...)\n\t\ts, err := execCmd.CombinedOutput()\n\t\tif err != nil {\n\t\t\treturn errors.New(string(s))\n\t\t}\n\t\treturn nil\n\t}\n\n\tif _, ok := b.rules[table]; !ok {\n\t\treturn nil\n\t}\n\n\tif _, ok := b.rules[table][chain]; !ok {\n\t\treturn nil\n\t}\n\n\tb.quoteRulesSpec(rulespec)\n\n\trule := strings.Join(rulespec, \" \")\n\tfor index, r := range b.rules[table][chain] {\n\t\tif rule == r {\n\t\t\tswitch index {\n\t\t\tcase 0:\n\t\t\t\tif len(b.rules[table][chain]) == 1 {\n\t\t\t\t\tb.rules[table][chain] = []string{}\n\t\t\t\t} else {\n\t\t\t\t\tb.rules[table][chain] = b.rules[table][chain][1:]\n\t\t\t\t}\n\t\t\tcase len(b.rules[table][chain]) - 
1:\n\t\t\t\tb.rules[table][chain] = b.rules[table][chain][:index]\n\t\t\tdefault:\n\t\t\t\tb.rules[table][chain] = append(b.rules[table][chain][:index], b.rules[table][chain][index+1:]...)\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// ListChains returns a slice containing the name of each chain in the specified table.\nfunc listChains(iptablesCmd, table string) ([]string, error) {\n\tcmd := strings.Fields(iptablesCmd)\n\tcmd = append(cmd, []string{\"-t\", table, \"-S\"}...)\n\n\texecCmd := exec.Command(\"aporeto-iptables\", cmd...)\n\tout, err := execCmd.CombinedOutput()\n\tif err != nil {\n\t\treturn nil, errors.New(string(out))\n\t}\n\n\tresult := strings.Split(string(out), \"\\n\")\n\n\t// Iterate over rules to find all default (-P) and user-specified (-N) chains.\n\t// Chains definition always come before rules.\n\t// Format is the following:\n\t// -P OUTPUT ACCEPT\n\t// -N Custom\n\tvar chains []string\n\tfor _, val := range result {\n\t\tif strings.HasPrefix(val, \"-P\") || strings.HasPrefix(val, \"-N\") {\n\t\t\tchains = append(chains, strings.Fields(val)[1])\n\t\t} else {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn chains, nil\n}\n\n// ListChains will provide a list of the current chains.\nfunc (b *BatchProvider) ListChains(table string) ([]string, error) {\n\tb.Lock()\n\tdefer b.Unlock()\n\n\tchains, err := listChains(b.cmd, table)\n\tif err != nil {\n\t\treturn []string{}, err\n\t}\n\n\tif _, ok := b.batchTables[table]; !ok || b.rules[table] == nil {\n\t\treturn chains, nil\n\t}\n\n\tfor _, chain := range chains {\n\t\tif _, ok := b.rules[table][chain]; !ok {\n\t\t\tb.rules[table][chain] = []string{}\n\t\t}\n\t}\n\n\tallChains := make([]string, len(b.rules[table]))\n\ti := 0\n\tfor chain := range b.rules[table] {\n\t\tallChains[i] = chain\n\t\ti++\n\t}\n\n\treturn allChains, nil\n}\n\n// ClearChain will clear the chains.\nfunc (b *BatchProvider) ClearChain(table, chain string) error {\n\n\tb.Lock()\n\tdefer b.Unlock()\n\n\tif _, ok := 
b.batchTables[table]; !ok {\n\t\tcmd := strings.Fields(b.cmd)\n\t\tcmd = append(cmd, []string{\"-t\", table, \"-F\", chain}...)\n\t\texecCmd := exec.Command(\"aporeto-iptables\", cmd...)\n\t\ts, err := execCmd.CombinedOutput()\n\t\tif err != nil {\n\t\t\treturn errors.New(string(s))\n\t\t}\n\t\treturn nil\n\t}\n\n\tif _, ok := b.rules[table]; !ok {\n\t\treturn nil\n\t}\n\tif _, ok := b.rules[table][chain]; !ok {\n\t\treturn nil\n\t}\n\n\tb.rules[table][chain] = []string{}\n\treturn nil\n}\n\n// DeleteChain will delete the chains.\nfunc (b *BatchProvider) DeleteChain(table, chain string) error {\n\tb.Lock()\n\tdefer b.Unlock()\n\n\tif _, ok := b.batchTables[table]; !ok {\n\t\tcmd := strings.Fields(b.cmd)\n\t\tcmd = append(cmd, []string{\"-t\", table, \"-X\", chain}...)\n\t\texecCmd := exec.Command(\"aporeto-iptables\", cmd...)\n\t\ts, err := execCmd.CombinedOutput()\n\t\tif err != nil {\n\t\t\treturn errors.New(string(s))\n\t\t}\n\t\treturn nil\n\t}\n\n\tif _, ok := b.rules[table]; !ok {\n\t\treturn nil\n\t}\n\n\tdelete(b.rules[table], chain)\n\treturn nil\n}\n\n// NewChain creates a new chain.\nfunc (b *BatchProvider) NewChain(table, chain string) error {\n\tb.Lock()\n\tdefer b.Unlock()\n\n\tif _, ok := b.batchTables[table]; !ok {\n\t\tcmd := strings.Fields(b.cmd)\n\t\tcmd = append(cmd, []string{\"-t\", table, \"-N\", chain}...)\n\t\texecCmd := exec.Command(\"aporeto-iptables\", cmd...)\n\t\ts, err := execCmd.CombinedOutput()\n\t\tif err != nil {\n\t\t\treturn errors.New(string(s))\n\t\t}\n\t\treturn nil\n\t}\n\n\tif _, ok := b.rules[table]; !ok {\n\t\tb.rules[table] = map[string][]string{}\n\t}\n\n\tb.rules[table][chain] = []string{}\n\treturn nil\n}\n\n// Commit commits the rules to the system\nfunc (b *BatchProvider) Commit() error {\n\tb.Lock()\n\tdefer b.Unlock()\n\n\t// We don't commit if we don't have any tables. 
This is old\n\t// kernel compatibility mode.\n\tif len(b.batchTables) == 0 {\n\t\treturn nil\n\t}\n\n\tbuf, err := b.createDataBuffer()\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Failed to create buffer %s\", err)\n\t}\n\n\treturn b.commitFunc(buf)\n}\n\n// RetrieveTable allows a caller to retrieve the final table. Mostly\n// needed for debugging and unit tests.\nfunc (b *BatchProvider) RetrieveTable() map[string]map[string][]string {\n\tb.Lock()\n\tdefer b.Unlock()\n\n\treturn b.rules\n}\n\n// createDataBuffer serializes the cached rules into iptables-restore format:\n//   *<table>\n//   :<chain> - [0:0]\n//   -A <chain> <rule>\n//   COMMIT\nfunc (b *BatchProvider) createDataBuffer() (*bytes.Buffer, error) {\n\n\tbuf := bytes.NewBuffer([]byte{})\n\n\tfor table := range b.rules {\n\t\tif _, err := fmt.Fprintf(buf, \"*%s\\n\", table); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tfor chain := range b.rules[table] {\n\t\t\tif _, err := fmt.Fprintf(buf, \":%s - [0:0]\\n\", chain); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t\tfor chain := range b.rules[table] {\n\t\t\tfor _, rule := range b.rules[table][chain] {\n\t\t\t\tif _, err := fmt.Fprintf(buf, \"-A %s %s\\n\", chain, rule); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tcustomChainRules, _ := b.saveCustomChainRules()\n\t\tfmt.Fprintf(buf, \"%s\\n\", customChainRules.String())\n\t\tif _, err := fmt.Fprintf(buf, \"COMMIT\\n\"); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn buf, nil\n}\n\n// restore will save the current DB to iptables.\nfunc (b *BatchProvider) restore(buf *bytes.Buffer) error {\n\n\tcmd := exec.Command(\"aporeto-iptables\", b.restoreCmd, \"--wait\")\n\tcmd.Stdin = buf\n\tout, err := cmd.CombinedOutput()\n\tif err != nil {\n\t\tagain, _ := b.createDataBuffer()\n\t\tzap.L().Error(\"Failed to execute command\", zap.Error(err),\n\t\t\tzap.ByteString(\"Output\", out),\n\t\t\tzap.String(\"Buffer\", again.String()),\n\t\t)\n\t\treturn fmt.Errorf(\"Failed to execute iptables-restore: %s\", err)\n\t}\n\treturn nil\n}\n\nfunc (b *BatchProvider) quoteRulesSpec(rulesspec []string) {\n\n\tif !b.quote 
{\n\t\treturn\n\t}\n\n\tfor i, rule := range rulesspec {\n\t\tif len(rulesspec[i]) > 0 && rulesspec[i][0] == '\"' {\n\t\t\tcontinue\n\t\t}\n\n\t\trulesspec[i] = fmt.Sprintf(\"\\\"%s\\\"\", rule)\n\t}\n}\n\n// ResetRules resets the rules to the original form.\n// It is implemented as \"iptables-save | grep \"-v\" subs | iptables-restore\"\nfunc (b *BatchProvider) ResetRules(subs string) error {\n\n\tvar out []byte\n\tvar err error\n\n\tcmd := exec.Command(\"aporeto-iptables\", b.saveCmd)\n\tif out, err = cmd.CombinedOutput(); err != nil {\n\t\tzap.L().Error(\"Failed to get iptables-save command\", zap.Error(err),\n\t\t\tzap.String(\"Output\", string(out)))\n\t\treturn err\n\t}\n\n\ts := string(out)\n\trules := strings.Split(s, \"\\n\")\n\n\tvar filterRules []string\n\n\tfor _, rule := range rules {\n\t\tif !strings.Contains(rule, subs) {\n\t\t\tfilterRules = append(filterRules, rule)\n\t\t}\n\t}\n\n\tcombineRules := strings.Join(filterRules, \"\\n\")\n\tbuf := bytes.NewBufferString(combineRules)\n\n\treturn b.commitFunc(buf)\n}\n\nfunc (b *BatchProvider) saveCustomChainRules() (*bytes.Buffer, error) {\n\tvar out []byte\n\tvar err error\n\n\tcmd := exec.Command(\"aporeto-iptables\", b.saveCmd)\n\tif out, err = cmd.CombinedOutput(); err != nil {\n\t\tzap.L().Error(\"Failed to get iptables-save command\", zap.Error(err),\n\t\t\tzap.String(\"Output\", string(out)))\n\t\treturn nil, err\n\t}\n\n\ts := string(out)\n\trules := strings.Split(s, \"\\n\")\n\n\tvar filterRules []string\n\n\tfor _, rule := range rules {\n\t\tif strings.Contains(rule, b.customChain) {\n\t\t\tfilterRules = append(filterRules, rule)\n\t\t}\n\t}\n\n\tcombineRules := strings.Join(filterRules, \"\\n\")\n\treturn bytes.NewBufferString(combineRules), nil\n\n}\n\n// ListRules lists the rules in the table/chain passed to it\nfunc (b *BatchProvider) ListRules(table, chain string) ([]string, error) {\n\tvar cmd *exec.Cmd\n\n\tif chain != \"\" {\n\t\tcmd = exec.Command(\"aporeto-iptables\", \"iptables\", 
\"--wait\", \"-t\", table, \"-L\", chain)\n\t} else {\n\t\tcmd = exec.Command(\"aporeto-iptables\", \"iptables\", \"--wait\", \"-t\", table, \"-L\")\n\t}\n\tout, err := cmd.CombinedOutput()\n\tif err != nil {\n\t\tzap.L().Error(\"Failed to get rules\", zap.Error(err), zap.String(\"table\", table), zap.String(\"chain\", chain))\n\t\treturn []string{}, err\n\t}\n\trules := strings.Split(string(out), \"\\n\")\n\treturn rules, nil\n\n}\n"
  },
  {
    "path": "controller/pkg/aclprovider/iptablesprovider_test.go",
    "content": "// +build !windows\n\npackage provider\n\nimport (\n\t\"testing\"\n\n\t\"github.com/magiconair/properties/assert\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nconst (\n\tmangle      = \"mangle\"\n\tinputChain  = \"INPUT\"\n\toutputChain = \"OUTPUT\"\n)\n\nfunc NewTestProvider(batchTables []string, quote bool) *BatchProvider {\n\tbatchTablesMap := map[string]bool{}\n\tfor _, t := range batchTables {\n\t\tbatchTablesMap[t] = true\n\t}\n\treturn &BatchProvider{\n\t\trules:       map[string]map[string][]string{},\n\t\tbatchTables: batchTablesMap,\n\t\tquote:       quote,\n\t}\n}\n\nfunc TestAppend(t *testing.T) {\n\tConvey(\"Given a valid batch provider\", t, func() {\n\t\tp := NewTestProvider([]string{mangle}, true)\n\t\tConvey(\"When I append a first rule, it should create the table\", func() {\n\t\t\terr := p.Append(mangle, inputChain, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 1)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t})\n\n\t\tConvey(\"When I append two rules in the array, the values should be ordered \", func() {\n\t\t\terr := p.Append(mangle, inputChain, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Append(mangle, inputChain, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 2)\n\t\t\trules := p.rules[mangle][inputChain]\n\t\t\tSo(rules[0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\tSo(rules[1], ShouldResemble, \"\\\"val2\\\"\")\n\t\t})\n\n\t\tConvey(\"When I append two rules in different chains, there should be two chains\", func() {\n\t\t\terr := p.Append(mangle, inputChain, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Append(mangle, outputChain, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 2)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 
1)\n\t\t\tSo(len(p.rules[mangle][outputChain]), ShouldEqual, 1)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\tSo(p.rules[mangle][outputChain][0], ShouldResemble, \"\\\"val2\\\"\")\n\t\t})\n\t})\n}\n\nfunc TestInsert(t *testing.T) {\n\tConvey(\"Given a valid batch provider\", t, func() {\n\t\tp := NewTestProvider([]string{mangle}, true)\n\t\tConvey(\"When I insert a first rule, it should create the table\", func() {\n\t\t\terr := p.Insert(mangle, inputChain, 1, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 1)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t})\n\n\t\tConvey(\"When I insert two rules in the first position of the array, the values should be reverse ordered \", func() {\n\t\t\terr := p.Insert(mangle, inputChain, 1, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 1, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 2)\n\t\t\trules := p.rules[mangle][inputChain]\n\t\t\tSo(rules[1], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\tSo(rules[0], ShouldResemble, \"\\\"val2\\\"\")\n\t\t})\n\n\t\tConvey(\"When I insert two rules in the first and last position of the array \", func() {\n\t\t\terr := p.Insert(mangle, inputChain, 1, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 2, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 2)\n\t\t\trules := p.rules[mangle][inputChain]\n\t\t\tSo(rules[0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\tSo(rules[1], ShouldResemble, \"\\\"val2\\\"\")\n\t\t})\n\n\t\tConvey(\"When I insert two rules in the first and a bad position in the array, the last one should be last \", func() {\n\t\t\terr := p.Insert(mangle, inputChain, 1, 
\"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 6, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 2)\n\t\t\trules := p.rules[mangle][inputChain]\n\t\t\tSo(rules[0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\tSo(rules[1], ShouldResemble, \"\\\"val2\\\"\")\n\t\t})\n\n\t\tConvey(\"When I insert a rule in the middle of the array, it should go in the right place \", func() {\n\t\t\terr := p.Insert(mangle, inputChain, 1, \"val3\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 1, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 1, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 2, \"val1-2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 4, \"val2-3\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 5)\n\t\t\trules := p.rules[mangle][inputChain]\n\t\t\tSo(rules[0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\tSo(rules[1], ShouldResemble, \"\\\"val1-2\\\"\")\n\t\t\tSo(rules[2], ShouldResemble, \"\\\"val2\\\"\")\n\t\t\tSo(rules[3], ShouldResemble, \"\\\"val2-3\\\"\")\n\t\t\tSo(rules[4], ShouldResemble, \"\\\"val3\\\"\")\n\t\t})\n\n\t\tConvey(\"When I Insert two rules in different chains, there should be two chains\", func() {\n\t\t\terr := p.Insert(mangle, inputChain, 1, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, outputChain, 1, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 2)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][outputChain]), ShouldEqual, 1)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\tSo(p.rules[mangle][outputChain][0], ShouldResemble, \"\\\"val2\\\"\")\n\t\t})\n\t})\n}\n\nfunc TestInsertWithQuoteFalse(t 
*testing.T) {\n\tConvey(\"Given a valid batch provider\", t, func() {\n\t\tp := NewTestProvider([]string{mangle}, false)\n\t\tConvey(\"When I insert a first rule, it should create the table\", func() {\n\t\t\terr := p.Insert(mangle, inputChain, 1, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 1)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"val1\")\n\t\t})\n\n\t\tConvey(\"When I insert two rules in the first position of the array, the values should be reverse ordered \", func() {\n\t\t\terr := p.Insert(mangle, inputChain, 1, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 1, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 2)\n\t\t\trules := p.rules[mangle][inputChain]\n\t\t\tSo(rules[1], ShouldResemble, \"val1\")\n\t\t\tSo(rules[0], ShouldResemble, \"val2\")\n\t\t})\n\n\t\tConvey(\"When I insert two rules in the first and last position of the array \", func() {\n\t\t\terr := p.Insert(mangle, inputChain, 1, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 2, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 2)\n\t\t\trules := p.rules[mangle][inputChain]\n\t\t\tSo(rules[0], ShouldResemble, \"val1\")\n\t\t\tSo(rules[1], ShouldResemble, \"val2\")\n\t\t})\n\n\t\tConvey(\"When I insert two rules in the first and a bad position in the array, the last one should be last \", func() {\n\t\t\terr := p.Insert(mangle, inputChain, 1, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 6, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 2)\n\t\t\trules := p.rules[mangle][inputChain]\n\t\t\tSo(rules[0], 
ShouldResemble, \"val1\")\n\t\t\tSo(rules[1], ShouldResemble, \"val2\")\n\t\t})\n\n\t\tConvey(\"When I insert a rule in the middle of the array, it should go in the right place \", func() {\n\t\t\terr := p.Insert(mangle, inputChain, 1, \"val3\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 1, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 1, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 2, \"val1-2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, inputChain, 4, \"val2-3\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 5)\n\t\t\trules := p.rules[mangle][inputChain]\n\t\t\tSo(rules[0], ShouldResemble, \"val1\")\n\t\t\tSo(rules[1], ShouldResemble, \"val1-2\")\n\t\t\tSo(rules[2], ShouldResemble, \"val2\")\n\t\t\tSo(rules[3], ShouldResemble, \"val2-3\")\n\t\t\tSo(rules[4], ShouldResemble, \"val3\")\n\t\t})\n\n\t\tConvey(\"When I Insert two rules in different chains, there should be two chains\", func() {\n\t\t\terr := p.Insert(mangle, inputChain, 1, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Insert(mangle, outputChain, 1, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 2)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][outputChain]), ShouldEqual, 1)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"val1\")\n\t\t\tSo(p.rules[mangle][outputChain][0], ShouldResemble, \"val2\")\n\t\t})\n\t})\n}\n\nfunc TestDelete(t *testing.T) {\n\tConvey(\"Given a valid batch provider\", t, func() {\n\t\tp := NewTestProvider([]string{mangle}, true)\n\t\tConvey(\"When I have one rule, I should be able to delete it\", func() {\n\t\t\terr := p.Append(mangle, inputChain, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 
1)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\terr = p.Delete(mangle, inputChain, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 0)\n\t\t})\n\n\t\tConvey(\"When I have two rules, I should be able to delete the second one\", func() {\n\t\t\terr := p.Append(mangle, inputChain, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Append(mangle, inputChain, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 2)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\tSo(p.rules[mangle][inputChain][1], ShouldResemble, \"\\\"val2\\\"\")\n\t\t\terr = p.Delete(mangle, inputChain, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 1)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t})\n\n\t\tConvey(\"When I have two rules, I should be able to delete the first one\", func() {\n\t\t\terr := p.Append(mangle, inputChain, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Append(mangle, inputChain, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 2)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\tSo(p.rules[mangle][inputChain][1], ShouldResemble, \"\\\"val2\\\"\")\n\t\t\terr = p.Delete(mangle, inputChain, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 1)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"\\\"val2\\\"\")\n\t\t})\n\n\t\tConvey(\"When I have three rules, I should be able to delete the middle one\", func() {\n\t\t\terr := p.Append(mangle, inputChain, 
\"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Append(mangle, inputChain, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = p.Append(mangle, inputChain, \"val3\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 3)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\tSo(p.rules[mangle][inputChain][1], ShouldResemble, \"\\\"val2\\\"\")\n\t\t\tSo(p.rules[mangle][inputChain][2], ShouldResemble, \"\\\"val3\\\"\")\n\t\t\terr = p.Delete(mangle, inputChain, \"val2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 2)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\tSo(p.rules[mangle][inputChain][1], ShouldResemble, \"\\\"val3\\\"\")\n\t\t})\n\t})\n}\n\nfunc TestClearChain(t *testing.T) {\n\tConvey(\"Given a valid batch provider\", t, func() {\n\t\tp := NewTestProvider([]string{mangle}, true)\n\t\tConvey(\"If I clear an empty chain, I should get no error\", func() {\n\t\t\terr := p.ClearChain(mangle, inputChain)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 0)\n\t\t})\n\t\tConvey(\"After I append a rule, I should be able to clear the chain\", func() {\n\t\t\terr := p.Append(mangle, inputChain, \"val1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 1)\n\t\t\tSo(p.rules[mangle][inputChain][0], ShouldResemble, \"\\\"val1\\\"\")\n\t\t\terr = p.ClearChain(mangle, inputChain)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 0)\n\t\t})\n\t})\n}\n\nfunc TestDeleteChain(t *testing.T) {\n\tConvey(\"Given a valid batch provider\", t, func() {\n\t\tp := NewTestProvider([]string{mangle}, true)\n\t\tConvey(\"If I delete an empty chain, I should get no 
error\", func() {\n\t\t\terr := p.ClearChain(mangle, inputChain)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 0)\n\t\t})\n\t\tConvey(\"After I create a chain, I should be able to delete it\", func() {\n\t\t\terr := p.NewChain(mangle, inputChain)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 1)\n\t\t\tSo(len(p.rules[mangle][inputChain]), ShouldEqual, 0)\n\n\t\t\terr = p.DeleteChain(mangle, inputChain)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(p.rules[mangle]), ShouldEqual, 0)\n\t\t})\n\t})\n}\n\nfunc TestProvider(t *testing.T) {\n\tb, err := NewGoIPTablesProviderV4([]string{}, \"\")\n\tassert.Equal(t, b != nil, true, \"go iptables should not be nil\")\n\tassert.Equal(t, err == nil, true, \"error should be nil\")\n\tb, err = NewGoIPTablesProviderV6([]string{}, \"\")\n\tassert.Equal(t, b != nil, true, \"go iptables should not be nil\")\n\tassert.Equal(t, err == nil, true, \"error should be nil\")\n}\n"
  },
  {
    "path": "controller/pkg/aclprovider/iptablesprovider_windows.go",
    "content": "// +build windows\n\npackage provider\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"math\"\n\t\"strings\"\n\t\"syscall\"\n\t\"unsafe\"\n\n\twinipt \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/windows\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n\t\"go.uber.org/zap\"\n)\n\n// IptablesProvider is an abstraction of all the methods an implementation of userspace\n// iptables needs to provide.\ntype IptablesProvider interface {\n\tBaseIPTables\n\t// Commit will commit changes if it is a batch provider.\n\tCommit() error\n\t// RetrieveTable allows a caller to retrieve the final table.\n\tRetrieveTable() map[string]map[string][]string\n\t// ResetRules resets the rules to a state where rules with the substring subs are removed\n\tResetRules(subs string) error\n}\n\n// BaseIPTables is the base interface of iptables functions.\ntype BaseIPTables interface {\n\t// Append appends a rule to a chain of a table\n\tAppend(table, chain string, rulespec ...string) error\n\t// Insert inserts a rule into a chain of a table at the required pos\n\tInsert(table, chain string, pos int, rulespec ...string) error\n\t// Delete deletes a rule of a chain in the given table\n\tDelete(table, chain string, rulespec ...string) error\n\t// ListChains lists all the chains associated with a table\n\tListChains(table string) ([]string, error)\n\t// ClearChain clears a chain in a table\n\tClearChain(table, chain string) error\n\t// DeleteChain deletes a chain in the table. 
There should be no references to this chain\n\tDeleteChain(table, chain string) error\n\t// NewChain creates a new chain\n\tNewChain(table, chain string) error\n\t// ListRules lists the rules in the table/chain passed to it\n\tListRules(table, chain string) ([]string, error)\n}\n\n// BatchProvider uses iptables-restore to program ACLs\ntype BatchProvider struct {\n\tipv4 bool\n}\n\n// NewGoIPTablesProviderV4 returns an IptablesProvider interface based on the go-iptables\n// external package.\nfunc NewGoIPTablesProviderV4(batchTables []string, customChain string) (IptablesProvider, error) {\n\treturn &BatchProvider{ipv4: true}, nil\n}\n\n// NewGoIPTablesProviderV6 returns an IptablesProvider interface based on the go-iptables\n// external package.\nfunc NewGoIPTablesProviderV6(batchTables []string, customChain string) (IptablesProvider, error) {\n\treturn &BatchProvider{ipv4: false}, nil\n}\n\n// NewCustomBatchProvider is a custom batch provider where the downstream\n// iptables utility is provided by the caller. 
Very useful for testing\n// the ACL functions with a mock.\nfunc NewCustomBatchProvider(ipt BaseIPTables, commit func(buf *bytes.Buffer) error, batchTables []string) *BatchProvider {\n\treturn &BatchProvider{ipv4: true}\n}\n\n// helper function for passing args to frontman api\nfunc boolToUint8(b bool) uint8 {\n\tif b {\n\t\treturn 1\n\t}\n\treturn 0\n}\n\nconst ipv4ChainSuffix = \"-v4\"\nconst ipv6ChainSuffix = \"-v6\"\n\nfunc (b *BatchProvider) fixChainName(chain string) string {\n\tif strings.HasSuffix(chain, ipv4ChainSuffix) || strings.HasSuffix(chain, ipv6ChainSuffix) {\n\t\treturn chain\n\t}\n\tif b.ipv4 {\n\t\tchain += ipv4ChainSuffix\n\t} else {\n\t\tchain += ipv6ChainSuffix\n\t}\n\treturn chain\n}\n\nfunc (b *BatchProvider) fixRuleSpec(rulespec []string) []string {\n\t// When a rule is referencing another chain, then we need to add the IP version number.\n\tlength := len(rulespec)\n\tfor index, part := range rulespec {\n\t\tnextIndex := index + 1\n\t\tif part == \"-j\" && nextIndex < length {\n\t\t\tswitch rulespec[nextIndex] {\n\t\t\tcase \"NFQUEUE\":\n\t\t\tcase \"NFQUEUE_FORCE\":\n\t\t\tcase \"REDIRECT\":\n\t\t\tcase \"ACCEPT\":\n\t\t\tcase \"ACCEPT_ONCE\":\n\t\t\tcase \"DROP\":\n\t\t\tcase \"MARK\":\n\t\t\tcase \"CONNMARK\":\n\t\t\tcase \"NFLOG\":\n\t\t\tdefault:\n\t\t\t\trulespec[nextIndex] = b.fixChainName(rulespec[nextIndex])\n\t\t\t}\n\t\t}\n\t}\n\treturn rulespec\n}\n\n// Append will append the provided rule to the local cache or call\n// directly the iptables command depending on the table.\nfunc (b *BatchProvider) Append(table, chain string, rulespec ...string) error {\n\n\tchain = b.fixChainName(chain)\n\trulespec = b.fixRuleSpec(rulespec)\n\n\tzap.L().Debug(fmt.Sprintf(\"add rule %s to table/chain %s/%s\", strings.Join(rulespec, \" \"), table, chain))\n\n\twinRuleSpec, err := winipt.ParseRuleSpec(rulespec...)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tcriteriaID := strings.Join(rulespec, \" \")\n\n\targRuleSpec := 
frontman.RuleSpec{\n\t\tAction:    uint8(winRuleSpec.Action),\n\t\tLog:       boolToUint8(winRuleSpec.Log),\n\t\tGroupID:   uint32(winRuleSpec.GroupID),\n\t\tProxyPort: uint16(winRuleSpec.ProxyPort),\n\t\tMark:      uint32(winRuleSpec.Mark),\n\t\tProcessID: uint64(winRuleSpec.ProcessID),\n\t}\n\n\tif b.ipv4 {\n\t\targRuleSpec.IPVersionMatch = frontman.IPVersion4\n\t} else {\n\t\targRuleSpec.IPVersionMatch = frontman.IPVersion6\n\t}\n\n\tif winRuleSpec.FlowMark != 0 {\n\t\targRuleSpec.FlowMark = uint32(winRuleSpec.FlowMark)\n\t\tif winRuleSpec.FlowMarkNoMatch {\n\t\t\targRuleSpec.FlowMarkMatchType = frontman.MatchTypeNoMatch\n\t\t} else {\n\t\t\targRuleSpec.FlowMarkMatchType = frontman.MatchTypeMatch\n\t\t}\n\t}\n\n\tif winRuleSpec.TCPFlagsSpecified {\n\t\targRuleSpec.TCPFlagsSpecified = 1\n\t\targRuleSpec.TCPFlags = winRuleSpec.TCPFlags\n\t\targRuleSpec.TCPFlagsMask = winRuleSpec.TCPFlagsMask\n\t}\n\n\tif winRuleSpec.TCPOptionSpecified {\n\t\targRuleSpec.TCPOptionSpecified = 1\n\t\targRuleSpec.TCPOption = winRuleSpec.TCPOption\n\t}\n\n\tif winRuleSpec.Protocol > 0 && winRuleSpec.Protocol < math.MaxUint8 {\n\t\targRuleSpec.ProtocolSpecified = 1\n\t\targRuleSpec.Protocol = uint8(winRuleSpec.Protocol)\n\t}\n\tif len(winRuleSpec.MatchSrcPort) > 0 {\n\t\targRuleSpec.SrcPortCount = int32(len(winRuleSpec.MatchSrcPort))\n\t\tsrcPorts := make([]frontman.PortRange, argRuleSpec.SrcPortCount)\n\t\tfor i, portRange := range winRuleSpec.MatchSrcPort {\n\t\t\tsrcPorts[i] = frontman.PortRange{PortStart: uint16(portRange.Start), PortEnd: uint16(portRange.End)}\n\t\t}\n\t\targRuleSpec.SrcPorts = &srcPorts[0]\n\t}\n\tif len(winRuleSpec.MatchDstPort) > 0 {\n\t\targRuleSpec.DstPortCount = int32(len(winRuleSpec.MatchDstPort))\n\t\tdstPorts := make([]frontman.PortRange, argRuleSpec.DstPortCount)\n\t\tfor i, portRange := range winRuleSpec.MatchDstPort {\n\t\t\tdstPorts[i] = frontman.PortRange{PortStart: uint16(portRange.Start), PortEnd: 
uint16(portRange.End)}\n\t\t}\n\t\targRuleSpec.DstPorts = &dstPorts[0]\n\t}\n\tif len(winRuleSpec.MatchBytes) > 0 {\n\t\tif winRuleSpec.MatchBytesNoMatch {\n\t\t\targRuleSpec.BytesNoMatch = 1\n\t\t}\n\t\targRuleSpec.BytesMatchStart = frontman.BytesMatchStartPayload\n\t\targRuleSpec.BytesMatchOffset = int32(winRuleSpec.MatchBytesOffset)\n\t\targRuleSpec.BytesMatchSize = int32(len(winRuleSpec.MatchBytes))\n\t\targRuleSpec.BytesMatch = &winRuleSpec.MatchBytes[0]\n\t}\n\tif winRuleSpec.LogPrefix != \"\" {\n\t\targRuleSpec.LogPrefix = uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(winRuleSpec.LogPrefix))) // nolint:staticcheck\n\t}\n\tif winRuleSpec.ProcessID > 0 {\n\t\tif winRuleSpec.ProcessIncludeChildrenOnly {\n\t\t\targRuleSpec.ProcessFlags = frontman.ProcessMatchChildren\n\t\t} else {\n\t\t\targRuleSpec.ProcessFlags = frontman.ProcessMatchProcess\n\t\t\tif winRuleSpec.ProcessIncludeChildren {\n\t\t\t\targRuleSpec.ProcessFlags |= frontman.ProcessMatchChildren\n\t\t\t}\n\t\t}\n\t}\n\tif len(winRuleSpec.IcmpMatch) > 0 {\n\t\targRuleSpec.IcmpRangeCount = int32(len(winRuleSpec.IcmpMatch))\n\t\ticmpRanges := make([]frontman.IcmpRange, argRuleSpec.IcmpRangeCount)\n\t\tfor i, im := range winRuleSpec.IcmpMatch {\n\t\t\tr := frontman.IcmpRange{}\n\t\t\tif !im.Nomatch {\n\t\t\t\tr.IcmpTypeSpecified = 1\n\t\t\t\tr.IcmpType = uint8(im.IcmpType)\n\t\t\t\tif im.IcmpCodeRange != nil {\n\t\t\t\t\tr.IcmpCodeSpecified = 1\n\t\t\t\t\tr.IcmpCodeLower = uint8(im.IcmpCodeRange.Start)\n\t\t\t\t\tr.IcmpCodeUpper = uint8(im.IcmpCodeRange.End)\n\t\t\t\t}\n\t\t\t}\n\t\t\ticmpRanges[i] = r\n\t\t}\n\t\targRuleSpec.IcmpRanges = &icmpRanges[0]\n\t}\n\targIpsetRuleSpecs := make([]frontman.IpsetRuleSpec, len(winRuleSpec.MatchSet))\n\tfor i, matchSet := range winRuleSpec.MatchSet {\n\t\targIpsetRuleSpecs[i].NotIpset = boolToUint8(matchSet.MatchSetNegate)\n\t\targIpsetRuleSpecs[i].IpsetDstIP = boolToUint8(matchSet.MatchSetDstIP)\n\t\targIpsetRuleSpecs[i].IpsetDstPort = 
boolToUint8(matchSet.MatchSetDstPort)\n\t\targIpsetRuleSpecs[i].IpsetSrcIP = boolToUint8(matchSet.MatchSetSrcIP)\n\t\targIpsetRuleSpecs[i].IpsetSrcPort = boolToUint8(matchSet.MatchSetSrcPort)\n\t\targIpsetRuleSpecs[i].IpsetName = uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(matchSet.MatchSetName))) // nolint:staticcheck\n\t}\n\n\tif winRuleSpec.GotoFilterName != \"\" {\n\t\targRuleSpec.GotoFilterName = uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(winRuleSpec.GotoFilterName))) // nolint:staticcheck\n\t}\n\n\treturn frontman.Wrapper.AppendFilterCriteria(chain, criteriaID, &argRuleSpec, argIpsetRuleSpecs)\n}\n\n// Insert will insert the rule in the corresponding position in the local\n// cache or call the corresponding iptables command, depending on the table.\nfunc (b *BatchProvider) Insert(table, chain string, pos int, rulespec ...string) error {\n\tchain = b.fixChainName(chain)\n\tzap.L().Debug(fmt.Sprintf(\"Insert not expected for table %s and chain %s\", table, chain))\n\treturn nil\n}\n\n// Delete will delete the rule from the local cache or the system.\nfunc (b *BatchProvider) Delete(table, chain string, rulespec ...string) error {\n\tchain = b.fixChainName(chain)\n\trulespec = b.fixRuleSpec(rulespec)\n\tcriteriaID := strings.Join(rulespec, \" \")\n\tzap.L().Debug(fmt.Sprintf(\"delete rule %s from table/chain %s/%s\", criteriaID, table, chain))\n\treturn frontman.Wrapper.DeleteFilterCriteria(chain, criteriaID)\n}\n\n// ListChains will provide a list of the current chains.\nfunc (b *BatchProvider) ListChains(table string) ([]string, error) {\n\tvar outbound bool\n\tif strings.HasPrefix(table, \"O\") || strings.HasPrefix(table, \"o\") {\n\t\toutbound = true\n\t} else if strings.HasPrefix(table, \"I\") || strings.HasPrefix(table, \"i\") {\n\t\toutbound = false\n\t} else {\n\t\treturn nil, fmt.Errorf(\"'%s' is not a valid table for ListChains\", table)\n\t}\n\n\treturn frontman.Wrapper.GetFilterList(outbound)\n}\n\n// ClearChain will clear the chains.\nfunc 
(b *BatchProvider) ClearChain(table, chain string) error {\n\tchain = b.fixChainName(chain)\n\treturn frontman.Wrapper.EmptyFilter(chain)\n}\n\n// DeleteChain will delete the chains.\nfunc (b *BatchProvider) DeleteChain(table, chain string) error {\n\tchain = b.fixChainName(chain)\n\treturn frontman.Wrapper.DestroyFilter(chain)\n}\n\n// NewChain creates a new chain.\nfunc (b *BatchProvider) NewChain(table, chain string) error {\n\tchain = b.fixChainName(chain)\n\tvar outbound bool\n\tif strings.HasPrefix(table, \"O\") || strings.HasPrefix(table, \"o\") {\n\t\toutbound = true\n\t} else if strings.HasPrefix(table, \"I\") || strings.HasPrefix(table, \"i\") {\n\t\toutbound = false\n\t} else {\n\t\treturn fmt.Errorf(\"'%s' is not a valid table for NewChain\", table)\n\t}\n\n\t// A goto filter is outside the normal filters and a rule can jump to a \"goto filter\",\n\t// so any filter that has -Net or -App is a goto filter.\n\tisGotoFilter := false\n\tif strings.Contains(chain, \"-Net\") || strings.Contains(chain, \"-App\") {\n\t\tisGotoFilter = true\n\t}\n\treturn frontman.Wrapper.AppendFilter(outbound, chain, isGotoFilter)\n}\n\n// Commit commits the rules to the system\nfunc (b *BatchProvider) Commit() error {\n\t// does nothing\n\treturn nil\n}\n\n// RetrieveTable allows a caller to retrieve the final table. Mostly\n// needed for debugging and unit tests.\nfunc (b *BatchProvider) RetrieveTable() map[string]map[string][]string {\n\t// not applicable for windows\n\treturn map[string]map[string][]string{}\n}\n\n// ResetRules returns nil in windows\nfunc (b *BatchProvider) ResetRules(subs string) error {\n\t// does nothing\n\treturn nil\n}\n\n// ListRules lists the rules in the table/chain passed to it\nfunc (b *BatchProvider) ListRules(table, chain string) ([]string, error) {\n\t// This is unimplemented on windows\n\treturn []string{}, nil\n}\n"
  },
  {
    "path": "controller/pkg/aclprovider/iptablesprovidermock.go",
    "content": "package provider\n\nimport (\n\t\"sync\"\n\t\"testing\"\n)\n\ntype iptablesProviderMockedMethods struct {\n\tappendMock        func(table, chain string, rulespec ...string) error\n\tinsertMock        func(table, chain string, pos int, rulespec ...string) error\n\tdeleteMock        func(table, chain string, rulespec ...string) error\n\tlistChainsMock    func(table string) ([]string, error)\n\tclearChainMock    func(table, chain string) error\n\tdeleteChainMock   func(table, chain string) error\n\tnewChainMock      func(table, chain string) error\n\tcommitMock        func() error\n\tretrieveTableMock func() map[string]map[string][]string\n\tresetMock         func(subs string) error\n\tlistRulesMock     func(table, chain string) ([]string, error)\n}\n\n// TestIptablesProvider is a test implementation for IptablesProvider\ntype TestIptablesProvider interface {\n\tIptablesProvider\n\tMockAppend(t *testing.T, impl func(table, chain string, rulespec ...string) error)\n\tMockInsert(t *testing.T, impl func(table, chain string, pos int, rulespec ...string) error)\n\tMockDelete(t *testing.T, impl func(table, chain string, rulespec ...string) error)\n\tMockListChains(t *testing.T, impl func(table string) ([]string, error))\n\tMockClearChain(t *testing.T, impl func(table, chain string) error)\n\tMockDeleteChain(t *testing.T, impl func(table, chain string) error)\n\tMockNewChain(t *testing.T, impl func(table, chain string) error)\n\tMockCommit(t *testing.T, impl func() error)\n\tMockReset(t *testing.T, impl func(subs string) error)\n\tMockListRules(t *testing.T, impl func(table, chain string) ([]string, error))\n}\n\n// A testIptablesProvider is an empty TransactionalManipulator that can be easily mocked.\ntype testIptablesProvider struct {\n\tmocks       map[*testing.T]*iptablesProviderMockedMethods\n\tlock        *sync.Mutex\n\tcurrentTest *testing.T\n}\n\n// NewTestIptablesProvider returns a new TestManipulator.\nfunc NewTestIptablesProvider() 
TestIptablesProvider {\n\treturn &testIptablesProvider{lock: &sync.Mutex{}, mocks: map[*testing.T]*iptablesProviderMockedMethods{}}\n}\n\nfunc (m *testIptablesProvider) MockListRules(t *testing.T, impl func(table, chain string) ([]string, error)) {\n\tm.currentMocks(t).listRulesMock = impl\n}\nfunc (m *testIptablesProvider) MockAppend(t *testing.T, impl func(table, chain string, rulespec ...string) error) {\n\n\tm.currentMocks(t).appendMock = impl\n}\n\nfunc (m *testIptablesProvider) MockReset(t *testing.T, impl func(subs string) error) {\n\tm.currentMocks(t).resetMock = impl\n}\n\nfunc (m *testIptablesProvider) MockInsert(t *testing.T, impl func(table, chain string, pos int, rulespec ...string) error) {\n\n\tm.currentMocks(t).insertMock = impl\n}\n\nfunc (m *testIptablesProvider) MockDelete(t *testing.T, impl func(table, chain string, rulespec ...string) error) {\n\n\tm.currentMocks(t).deleteMock = impl\n}\n\nfunc (m *testIptablesProvider) MockListChains(t *testing.T, impl func(table string) ([]string, error)) {\n\n\tm.currentMocks(t).listChainsMock = impl\n}\n\nfunc (m *testIptablesProvider) MockClearChain(t *testing.T, impl func(table, chain string) error) {\n\n\tm.currentMocks(t).clearChainMock = impl\n}\n\nfunc (m *testIptablesProvider) MockDeleteChain(t *testing.T, impl func(table, chain string) error) {\n\n\tm.currentMocks(t).deleteChainMock = impl\n}\n\nfunc (m *testIptablesProvider) MockNewChain(t *testing.T, impl func(table, chain string) error) {\n\n\tm.currentMocks(t).newChainMock = impl\n}\n\nfunc (m *testIptablesProvider) MockCommit(t *testing.T, impl func() error) {\n\tm.currentMocks(t).commitMock = impl\n}\n\nfunc (m *testIptablesProvider) Append(table, chain string, rulespec ...string) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.appendMock != nil {\n\t\treturn mock.appendMock(table, chain, rulespec...)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIptablesProvider) ListRules(table, chain string) ([]string, error) {\n\tif 
mock := m.currentMocks(m.currentTest); mock != nil && mock.listRulesMock != nil {\n\t\treturn mock.listRulesMock(table, chain)\n\t}\n\treturn []string{}, nil\n}\n\nfunc (m *testIptablesProvider) Insert(table, chain string, pos int, rulespec ...string) error {\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.insertMock != nil {\n\t\treturn mock.insertMock(table, chain, pos, rulespec...)\n\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIptablesProvider) ResetRules(subs string) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.resetMock != nil {\n\t\treturn mock.resetMock(subs)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIptablesProvider) Delete(table, chain string, rulespec ...string) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.deleteMock != nil {\n\t\treturn mock.deleteMock(table, chain, rulespec...)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIptablesProvider) ListChains(table string) ([]string, error) {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.listChainsMock != nil {\n\t\treturn mock.listChainsMock(table)\n\t}\n\n\treturn nil, nil\n}\n\nfunc (m *testIptablesProvider) ClearChain(table, chain string) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.clearChainMock != nil {\n\t\treturn mock.clearChainMock(table, chain)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIptablesProvider) DeleteChain(table, chain string) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.deleteChainMock != nil {\n\t\treturn mock.deleteChainMock(table, chain)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIptablesProvider) NewChain(table, chain string) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.newChainMock != nil {\n\t\treturn mock.newChainMock(table, chain)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIptablesProvider) Commit() error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.commitMock != nil {\n\t\treturn 
mock.commitMock()\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIptablesProvider) RetrieveTable() map[string]map[string][]string {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.retrieveTableMock != nil {\n\t\treturn mock.retrieveTableMock()\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIptablesProvider) currentMocks(t *testing.T) *iptablesProviderMockedMethods {\n\tm.lock.Lock()\n\tdefer m.lock.Unlock()\n\n\tmocks := m.mocks[t]\n\n\tif mocks == nil {\n\t\tmocks = &iptablesProviderMockedMethods{}\n\t\tm.mocks[t] = mocks\n\t}\n\n\tm.currentTest = t\n\treturn mocks\n}\n"
  },
  {
    "path": "controller/pkg/auth/auth.go",
    "content": "package auth\n\nimport (\n\t\"context\"\n\t\"crypto/x509\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/servicetokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/urisearch\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/usertokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\n// CallbackResponse captures all the response data of the call back processing.\ntype CallbackResponse struct {\n\tCookie    *http.Cookie\n\tStatus    int\n\tOriginURL string\n\tData      string\n\tMessage   string\n}\n\n// Processor holds all the local data of the authorization engine. A processor\n// can handle authorization for multiple services. The goal is to authenticate\n// a request based on both service and user credentials.\ntype Processor struct {\n\tapis                  *urisearch.APICache\n\tuserTokenHandler      usertokens.Verifier\n\tuserTokenMappings     map[string]string\n\tuserAuthorizationType policy.UserAuthorizationTypeValues\n\taporetoJWT            *servicetokens.Verifier\n\tsync.RWMutex\n}\n\n// NewProcessor creates an auth processor with PKI user tokens. The caller\n// must provide a valid secrets structure and an optional list of trustedCertificates\n// that can be used to validate tokens. 
If the list is empty, the CA from the secrets\n// will be used for token validation.\nfunc NewProcessor(s secrets.Secrets, trustedCertificate *x509.Certificate) *Processor {\n\treturn &Processor{\n\t\taporetoJWT: servicetokens.NewVerifier(s, trustedCertificate),\n\t}\n}\n\n// UpdateSecrets will update the Aporeto secrets for the validation of the\n// Aporeto tokens.\nfunc (p *Processor) UpdateSecrets(s secrets.Secrets, trustedCertificate *x509.Certificate) {\n\tp.aporetoJWT.UpdateSecrets(s, trustedCertificate)\n}\n\n// AddOrUpdateService adds or replaces a service in the authorization db.\nfunc (p *Processor) AddOrUpdateService(apis *urisearch.APICache, serviceType policy.UserAuthorizationTypeValues, handler usertokens.Verifier, mappings map[string]string) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.apis = apis\n\tp.userTokenMappings = mappings\n\tp.userTokenHandler = handler\n\tp.userAuthorizationType = serviceType\n}\n\n// UpdateServiceAPIs updates an existing service with a new API definition.\nfunc (p *Processor) UpdateServiceAPIs(apis *urisearch.APICache) error {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.apis = apis\n\treturn nil\n}\n\n// DecodeUserClaims decodes the user claims with the user authorization method.\nfunc (p *Processor) DecodeUserClaims(ctx context.Context, name, userToken string, certs []*x509.Certificate) ([]string, bool, string, error) {\n\n\tswitch p.userAuthorizationType {\n\tcase policy.UserAuthorizationMutualTLS, policy.UserAuthorizationJWT:\n\t\t// First parse any incoming certificates and retrieve attributes from them.\n\t\t// This is used in case of client authorization with certificates.\n\t\tattributes := []string{}\n\t\tfor _, cert := range certs {\n\t\t\tattributes = append(attributes, \"CN=\"+cert.Subject.CommonName)\n\t\t\tfor _, email := range cert.EmailAddresses {\n\t\t\t\tattributes = append(attributes, \"Email=\"+email)\n\t\t\t}\n\t\t\tfor _, org := range cert.Subject.Organization {\n\t\t\t\tattributes = append(attributes, 
\"O=\"+org)\n\t\t\t}\n\t\t\tfor _, org := range cert.Subject.OrganizationalUnit {\n\t\t\t\tattributes = append(attributes, \"OU=\"+org)\n\t\t\t}\n\t\t}\n\n\t\tif p.userAuthorizationType == policy.UserAuthorizationJWT && p.userTokenHandler != nil {\n\t\t\tjwtAttributes, _, _, err := p.userTokenHandler.Validate(ctx, userToken)\n\t\t\tif err != nil {\n\t\t\t\treturn attributes, false, userToken, fmt.Errorf(\"Unable to decode JWT: %s\", err)\n\t\t\t}\n\t\t\tattributes = append(attributes, jwtAttributes...)\n\t\t}\n\n\t\treturn attributes, false, userToken, nil\n\n\tcase policy.UserAuthorizationOIDC:\n\t\t// Now we can parse the user claims.\n\t\tif p.userTokenHandler == nil {\n\t\t\tzap.L().Error(\"Internal Server Error: OIDC User Token Handler not configured\")\n\t\t\treturn []string{}, false, userToken, nil\n\t\t}\n\t\treturn p.userTokenHandler.Validate(ctx, userToken)\n\tdefault:\n\t\treturn []string{}, false, userToken, nil\n\t}\n}\n\n// DecodeAporetoClaims decodes the Aporeto claims\nfunc (p *Processor) DecodeAporetoClaims(aporetoToken string, publicKey string) (string, []string, *policy.PingPayload, error) {\n\tif len(aporetoToken) == 0 || p.aporetoJWT == nil {\n\t\treturn \"\", []string{}, nil, nil\n\t}\n\n\t// Finally we can parse the Aporeto token.\n\tid, scopes, profile, pingPayload, err := p.aporetoJWT.ParseToken(aporetoToken, publicKey)\n\tif err != nil {\n\t\treturn \"\", []string{}, nil, fmt.Errorf(\"Invalid Aporeto Token: %s\", err)\n\t}\n\n\treturn id, append(profile, scopes...), pingPayload, nil\n}\n\n// Callback is the function called by an IDP auth provider to exchange the provided\n// authorization code with a JWT token. 
This closes the OAuth loop.\nfunc (p *Processor) Callback(ctx context.Context, u *url.URL) (*CallbackResponse, error) {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\tc := &CallbackResponse{}\n\n\t// Validate the JWT token through the handler.\n\ttoken, originURL, status, err := p.userTokenHandler.Callback(ctx, u)\n\tif err != nil {\n\t\tc.Status = http.StatusUnauthorized\n\t\tc.Message = fmt.Sprintf(\"Invalid code: %s\", err)\n\t\treturn c, err\n\t}\n\n\tc.OriginURL = originURL\n\tc.Status = status\n\tc.Cookie = &http.Cookie{\n\t\tName:     \"X-APORETO-AUTH\",\n\t\tValue:    token,\n\t\tHttpOnly: true,\n\t\tSecure:   true,\n\t\tPath:     \"/\",\n\t}\n\n\t// We transmit the information in the return payload for applications\n\t// that choose to use it directly without a cookie.\n\tdata, err := json.MarshalIndent(c.Cookie, \" \", \" \")\n\tif err != nil {\n\t\tc.Status = http.StatusInternalServerError\n\t\tc.Message = \"Unable to encode data\"\n\t\treturn c, err\n\t}\n\n\tc.Data = string(data)\n\n\treturn c, nil\n}\n\n// Check is the main method that will search the API cache and validate whether the call should\n// be allowed. It returns two values: whether the access is allowed, and whether the access\n// is public or not. 
This allows callers to decide what to do when there is a failure, and\n// potentially issue a redirect.\nfunc (p *Processor) Check(method, uri string, claims []string) (bool, bool) {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\treturn p.apis.FindAndMatchScope(method, uri, claims)\n}\n\n// RedirectURI returns the redirect URI in order to start the authentication dance.\nfunc (p *Processor) RedirectURI(originURL string) string {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\treturn p.userTokenHandler.IssueRedirect(originURL)\n}\n\n// UpdateRequestHeaders will update the request headers based on the user claims\n// and the corresponding mappings.\nfunc (p *Processor) UpdateRequestHeaders(h http.Header, claims []string) {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\tif len(p.userTokenMappings) == 0 {\n\t\treturn\n\t}\n\n\tfor _, claim := range claims {\n\t\tparts := strings.SplitN(claim, \"=\", 2)\n\t\tif header, ok := p.userTokenMappings[parts[0]]; ok && len(parts) == 2 {\n\t\t\th.Add(header, parts[1])\n\t\t}\n\t}\n}\n\n// RetrieveServiceHandler will retrieve the service that is stored in the serviceMap\nfunc (p *Processor) RetrieveServiceHandler() (usertokens.Verifier, error) {\n\treturn p.userTokenHandler, nil\n}\n"
  },
  {
    "path": "controller/pkg/bufferpool/bufferpool.go",
    "content": "package bufferpool\n\nimport (\n\t\"sync\"\n)\n\n// BufferPool implements the interface of httputil.BufferPool in order\n// to improve memory utilization in the reverse proxy.\ntype BufferPool struct {\n\ts sync.Pool\n}\n\n// NewPool creates a new BufferPool.\nfunc NewPool(size int) *BufferPool {\n\treturn &BufferPool{\n\t\ts: sync.Pool{\n\t\t\tNew: func() interface{} {\n\t\t\t\treturn make([]byte, size)\n\t\t\t},\n\t\t},\n\t}\n}\n\n// Get gets a buffer from the pool.\nfunc (b *BufferPool) Get() []byte {\n\treturn b.s.Get().([]byte)\n}\n\n// Put returns the buffer to the pool.\nfunc (b *BufferPool) Put(buf []byte) {\n\tb.s.Put(buf) // nolint\n}\n"
  },
  {
    "path": "controller/pkg/claimsheader/bytes.go",
    "content": "package claimsheader\n\n// HeaderBytes is the claimsheader in bytes\ntype HeaderBytes []byte\n\n// ToClaimsHeader parses the bytes and returns the ClaimsHeader\nfunc (h HeaderBytes) ToClaimsHeader() *ClaimsHeader {\n\n\tif h == nil || len(h) != maxHeaderLen {\n\t\treturn NewClaimsHeader()\n\t}\n\n\tcompressionTypeMask := compressionTypeMask(h.extractHeaderAttribute(h[0], compressionTypeBitMask.toUint8()))\n\tdatapathVersionMask := datapathVersionMask(h.extractHeaderAttribute(h[0], datapathVersionBitMask.toUint8()))\n\n\treturn &ClaimsHeader{\n\t\tcompressionType: compressionTypeMask.toType(),\n\t\tencrypt:         uint8ToBool(encrypt, h.extractHeaderAttribute(h[1], encryptionEnabledBit)),\n\t\tping:            uint8ToBool(ping, h.extractHeaderAttribute(h[1], pingEnabledBit)),\n\t\tdatapathVersion: datapathVersionMask.toType(),\n\t}\n}\n\n// extractHeaderAttribute returns the attribute from byte\n// mask - mask specific to the attribute\nfunc (h HeaderBytes) extractHeaderAttribute(attr byte, mask uint8) uint8 {\n\n\treturn attr & mask\n}\n"
  },
  {
    "path": "controller/pkg/claimsheader/bytes_test.go",
    "content": "// +build !windows\n\npackage claimsheader\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestHeaderBytes(t *testing.T) {\n\n\tConvey(\"Given I create a new header bytes\", t, func() {\n\t\theader := NewClaimsHeader(\n\t\t\tOptionCompressionType(CompressionTypeV1),\n\t\t\tOptionEncrypt(true),\n\t\t\tOptionDatapathVersion(DatapathVersion1),\n\t\t\tOptionPing(true),\n\t\t).ToBytes()\n\n\t\tConvey(\"Then header bytes should not be nil\", func() {\n\t\t\tSo(header, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Given I convert bytes to claims header\", func() {\n\t\t\tch := header.ToClaimsHeader()\n\n\t\t\tConvey(\"Then it should be equal\", func() {\n\t\t\t\tSo(ch.compressionType, ShouldEqual, CompressionTypeV1)\n\t\t\t\tSo(ch.encrypt, ShouldEqual, true)\n\t\t\t\tSo(ch.datapathVersion, ShouldEqual, DatapathVersion1)\n\t\t\t\tSo(ch.ping, ShouldEqual, true)\n\t\t\t})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/claimsheader/claimsheader.go",
    "content": "package claimsheader\n\n// NewClaimsHeader returns claims header handler\nfunc NewClaimsHeader(opts ...Option) *ClaimsHeader {\n\n\tc := &ClaimsHeader{}\n\n\tfor _, opt := range opts {\n\t\topt(c)\n\t}\n\n\treturn c\n}\n\n// ToBytes generates the 32-bit header in bytes\n//    0             1              2               3               4\n//  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1\n//  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n//  |     D     |CT |E|P|            R (reserved)                   |\n//  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n//  D  [0:5]   - Datapath version\n//  CT [6,7]   - Compressed tag type\n//  E  [8]     - Encryption enabled\n//  P  [9]     - Ping enabled\n//  R  [10:31] - Reserved\nfunc (c *ClaimsHeader) ToBytes() HeaderBytes {\n\n\tclaimsHeaderData := make([]byte, maxHeaderLen)\n\tclaimsHeaderData[0] |= c.datapathVersion.toMask().toUint8()\n\tclaimsHeaderData[0] |= c.compressionType.toMask().toUint8()\n\tclaimsHeaderData[1] |= boolToUint8(encrypt, c.encrypt)\n\tclaimsHeaderData[1] |= boolToUint8(ping, c.ping)\n\n\treturn claimsHeaderData\n}\n\n// CompressionType is the compression type\nfunc (c *ClaimsHeader) CompressionType() CompressionType {\n\n\treturn c.compressionType\n}\n\n// Encrypt is the encrypt in bool\nfunc (c *ClaimsHeader) Encrypt() bool {\n\n\treturn c.encrypt\n}\n\n// DatapathVersion is the datapath version\nfunc (c *ClaimsHeader) DatapathVersion() DatapathVersion {\n\n\treturn c.datapathVersion\n}\n\n// Ping returns the ping in bool\nfunc (c *ClaimsHeader) Ping() bool {\n\n\treturn c.ping\n}\n\n// SetCompressionType sets the compression type\nfunc (c *ClaimsHeader) SetCompressionType(ct CompressionType) {\n\n\tc.compressionType = ct\n}\n\n// SetEncrypt sets the encrypt\nfunc (c *ClaimsHeader) SetEncrypt(e bool) {\n\n\tc.encrypt = e\n}\n\n// SetDatapathVersion sets the datapath version\nfunc (c *ClaimsHeader) SetDatapathVersion(dv 
DatapathVersion) {\n\n\tc.datapathVersion = dv\n}\n\n// SetPing sets the ping\nfunc (c *ClaimsHeader) SetPing(ping bool) {\n\n\tc.ping = ping\n}\n"
  },
  {
    "path": "controller/pkg/claimsheader/claimsheader_test.go",
    "content": "// +build !windows\n\npackage claimsheader\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestHeader(t *testing.T) {\n\n\tConvey(\"Given I create a new claims header\", t, func() {\n\t\theader := NewClaimsHeader(\n\t\t\tOptionEncrypt(true),\n\t\t\tOptionCompressionType(CompressionTypeV1),\n\t\t\tOptionDatapathVersion(DatapathVersion1),\n\t\t\tOptionPing(true),\n\t\t).ToBytes()\n\n\t\tConvey(\"Then claims header should not be nil\", func() {\n\t\t\tSo(header, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Given I convert bytes to claims header\", func() {\n\t\t\tch := header.ToClaimsHeader()\n\n\t\t\tConvey(\"Then it should be equal\", func() {\n\t\t\t\tSo(ch.CompressionType(), ShouldEqual, CompressionTypeV1)\n\t\t\t\tSo(ch.Encrypt(), ShouldEqual, true)\n\t\t\t\tSo(ch.DatapathVersion(), ShouldEqual, DatapathVersion1)\n\t\t\t\tSo(ch.Ping(), ShouldEqual, true)\n\t\t\t})\n\t\t})\n\t})\n\n\tConvey(\"Given I create a new claims header and encrypt false\", t, func() {\n\t\theader := NewClaimsHeader(\n\t\t\tOptionEncrypt(false),\n\t\t\tOptionPing(true),\n\t\t\tOptionCompressionType(CompressionTypeV1),\n\t\t\tOptionDatapathVersion(DatapathVersion1),\n\t\t).ToBytes()\n\n\t\tConvey(\"Then claims header should not be nil\", func() {\n\t\t\tSo(header, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Given I convert bytes to claims header\", func() {\n\t\t\tch := header.ToClaimsHeader()\n\n\t\t\tConvey(\"Then it should be equal\", func() {\n\t\t\t\tSo(ch.CompressionType(), ShouldEqual, CompressionTypeV1)\n\t\t\t\tSo(ch.Encrypt(), ShouldEqual, false)\n\t\t\t\tSo(ch.Ping(), ShouldEqual, true)\n\t\t\t\tSo(ch.DatapathVersion(), ShouldEqual, DatapathVersion1)\n\t\t\t})\n\t\t})\n\t})\n\n\tConvey(\"Given I create a new claims header and with no ping\", t, func() {\n\t\theader := NewClaimsHeader(\n\t\t\tOptionEncrypt(false),\n\t\t\tOptionDatapathVersion(DatapathVersion1),\n\t\t).ToBytes()\n\n\t\tConvey(\"Then claims header should not be nil\", 
func() {\n\t\t\tSo(header, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Given I convert bytes to claims header\", func() {\n\t\t\tch := header.ToClaimsHeader()\n\n\t\t\tConvey(\"Then it should be equal\", func() {\n\t\t\t\tSo(ch.CompressionType(), ShouldEqual, CompressionTypeV1)\n\t\t\t\tSo(ch.Encrypt(), ShouldEqual, false)\n\t\t\t\tSo(ch.Ping(), ShouldEqual, false)\n\t\t\t\tSo(ch.DatapathVersion(), ShouldEqual, DatapathVersion1)\n\t\t\t})\n\t\t})\n\t})\n\n\tConvey(\"Given I create a new claims header and encrypt false\", t, func() {\n\t\theader := NewClaimsHeader(\n\t\t\tOptionEncrypt(false),\n\t\t\tOptionPing(false),\n\t\t\tOptionCompressionType(CompressionTypeV1),\n\t\t\tOptionDatapathVersion(DatapathVersion1),\n\t\t).ToBytes()\n\n\t\tConvey(\"Then claims header should not be nil\", func() {\n\t\t\tSo(header, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Given I convert bytes to claims header\", func() {\n\t\t\tch := header.ToClaimsHeader()\n\n\t\t\tConvey(\"Then it should be equal\", func() {\n\t\t\t\tSo(ch.CompressionType(), ShouldEqual, CompressionTypeV1)\n\t\t\t\tSo(ch.Encrypt(), ShouldEqual, false)\n\t\t\t\tSo(ch.Ping(), ShouldEqual, false)\n\t\t\t\tSo(ch.DatapathVersion(), ShouldEqual, DatapathVersion1)\n\t\t\t})\n\t\t})\n\t})\n\n\tConvey(\"Given I create a new claims header and change it later\", t, func() {\n\t\theader := NewClaimsHeader(\n\t\t\tOptionEncrypt(false),\n\t\t\tOptionDatapathVersion(DatapathVersion1),\n\t\t\tOptionPing(false),\n\t\t)\n\n\t\tConvey(\"Then claims header should not be nil\", func() {\n\t\t\tSo(header, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Given I convert bytes to claims header\", func() {\n\t\t\tch := header.ToBytes().ToClaimsHeader()\n\n\t\t\tConvey(\"Then it should be equal\", func() {\n\t\t\t\tSo(ch.CompressionType(), ShouldEqual, CompressionTypeV1)\n\t\t\t\tSo(ch.Encrypt(), ShouldEqual, false)\n\t\t\t\tSo(ch.DatapathVersion(), ShouldEqual, DatapathVersion1)\n\t\t\t\tSo(ch.Ping(), ShouldEqual, 
false)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"Given I set different compression type and encrypt\", func() {\n\t\t\theader.SetCompressionType(CompressionTypeV1)\n\t\t\theader.SetEncrypt(true)\n\t\t\theader.SetPing(true)\n\t\t\tch := header.ToBytes().ToClaimsHeader()\n\n\t\t\tConvey(\"Then it should be equal\", func() {\n\t\t\t\tSo(ch.CompressionType(), ShouldEqual, CompressionTypeV1)\n\t\t\t\tSo(ch.Encrypt(), ShouldEqual, true)\n\t\t\t\tSo(ch.DatapathVersion(), ShouldEqual, DatapathVersion1)\n\t\t\t\tSo(ch.Ping(), ShouldEqual, true)\n\t\t\t})\n\t\t})\n\t})\n\n\tConvey(\"Given I try retrieve fields without any data\", t, func() {\n\t\tch := &ClaimsHeader{}\n\n\t\tConvey(\"Then it should be equal\", func() {\n\t\t\tSo(ch.CompressionType(), ShouldEqual, 0)\n\t\t\tSo(ch.Encrypt(), ShouldEqual, false)\n\t\t\tSo(ch.DatapathVersion(), ShouldEqual, DatapathVersion1)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/claimsheader/constants.go",
    "content": "package claimsheader\n\nconst (\n\t// maxHeaderLen is the maximum claims header length\n\tmaxHeaderLen = 4\n\t// zeroBit is the value 0\n\tzeroBit = 0x00\n\t// encryptionEnabledBit is the bit that is set in the bytes when encryption is enabled\n\tencryptionEnabledBit = 0x01\n\t// pingEnabledBit is the bit that is set in the bytes when ping is enabled\n\tpingEnabledBit = 0x02\n)\n"
  },
  {
    "path": "controller/pkg/claimsheader/ct.go",
    "content": "package claimsheader\n\nimport \"strconv\"\n\n// CompressionType defines the compression used.\ntype CompressionType int\n\nconst (\n\n\t// CompressionTypeV1 is version 1 of compression\n\tCompressionTypeV1 CompressionType = iota\n)\n\nconst (\n\t// CompressedTagLengthV1 is version 1 length of tags\n\tCompressedTagLengthV1 int = 12\n\n\t// CompressedTagLengthV2 is version 2 length of tags\n\tCompressedTagLengthV2 int = 8\n)\n\n// toMask returns the mask based on the type\nfunc (ct CompressionType) toMask() compressionTypeMask {\n\n\treturn compressionTypeV1Mask\n\n}\n\nfunc (ct CompressionType) toString() string {\n\n\treturn strconv.Itoa(int(ct))\n}\n\n// compressionTypeMask defines the compression mask.\ntype compressionTypeMask uint8\n\nconst (\n\t// compressionTypeV1Mask mask that identifies compression type v1\n\tcompressionTypeV1Mask compressionTypeMask = 0x80\n\t// compressionTypeBitMask mask used to check relevant compression types\n\tcompressionTypeBitMask compressionTypeMask = 0xC0\n)\n\n// toType returns the type based on mask\nfunc (cm compressionTypeMask) toType() CompressionType {\n\treturn CompressionTypeV1\n}\n\n// toUint8 returns uint8 from compressiontypemask\nfunc (cm compressionTypeMask) toUint8() uint8 {\n\n\treturn uint8(cm)\n}\n\n// CompressionTypeToTagLength converts CompressionType to length.\nfunc CompressionTypeToTagLength(t CompressionType) int {\n\n\treturn CompressedTagLengthV1\n\n}\n\n// String2CompressionType is a helper to convert string to compression type\nfunc String2CompressionType(s string) CompressionType {\n\n\treturn CompressionTypeV1\n\n}\n"
  },
  {
    "path": "controller/pkg/claimsheader/datapath_version.go",
    "content": "package claimsheader\n\n// DatapathVersion defines the datapath version\ntype DatapathVersion int\n\n// DatapathVersion constants\nconst (\n\tDatapathVersion1 DatapathVersion = iota\n\tDatapathVersion2\n)\n\nfunc (dv DatapathVersion) toMask() datapathVersionMask { // nolint\n\n\tif dv == DatapathVersion1 {\n\t\treturn datapathVersion\n\t}\n\n\treturn zeroBit\n}\n\n// datapathVersionMask is the mask for the enforcer datapath version\n// TODO: Enable this in datapath\ntype datapathVersionMask uint8\n\nconst (\n\tdatapathVersion        datapathVersionMask = 0x00\n\tdatapathVersionBitMask datapathVersionMask = 0x3F\n)\n\nfunc (dm datapathVersionMask) toType() DatapathVersion {\n\n\tif dm == datapathVersion {\n\t\treturn DatapathVersion1\n\t}\n\n\treturn -1\n}\n\n// toUint8 returns uint8 from datapathVersionMask\nfunc (dm datapathVersionMask) toUint8() uint8 {\n\n\treturn uint8(dm)\n}\n"
  },
  {
    "path": "controller/pkg/claimsheader/options.go",
    "content": "package claimsheader\n\n// Option is used to set claimsheader fields\ntype Option func(*ClaimsHeader)\n\n// OptionCompressionType sets compression Type\nfunc OptionCompressionType(compressionType CompressionType) Option {\n\n\treturn func(c *ClaimsHeader) {\n\t\tc.compressionType = compressionType\n\t}\n}\n\n// OptionEncrypt sets encryption\nfunc OptionEncrypt(encrypt bool) Option {\n\n\treturn func(c *ClaimsHeader) {\n\t\tc.encrypt = encrypt\n\t}\n}\n\n// OptionDatapathVersion sets handshake version\nfunc OptionDatapathVersion(datapathVersion DatapathVersion) Option {\n\n\treturn func(c *ClaimsHeader) {\n\t\tc.datapathVersion = datapathVersion\n\t}\n}\n\n// OptionPing sets ping\nfunc OptionPing(ping bool) Option {\n\n\treturn func(c *ClaimsHeader) {\n\t\tc.ping = ping\n\t}\n}\n"
  },
  {
    "path": "controller/pkg/claimsheader/ping_type.go",
    "content": "package claimsheader\n\n// PingType defines the ping type.\ntype PingType int\n\n// PingType options.\nconst (\n\tPingTypeNone PingType = iota\n\tPingTypeDefaultIdentity\n\tPingTypeCustomIdentity\n\tPingTypeDefaultIdentityPassthrough\n)\n\n// toMask returns the mask based on the type\nfunc (pt PingType) toMask() pingTypeMask {\n\n\tswitch pt {\n\tcase PingTypeNone:\n\t\treturn pingTypeNoneMask\n\tcase PingTypeDefaultIdentity:\n\t\treturn pingTypeDefaultIdentityMask\n\tcase PingTypeCustomIdentity:\n\t\treturn pingTypeCustomIdentityMask\n\tcase PingTypeDefaultIdentityPassthrough:\n\t\treturn pingTypeDefaultIdentityPassthroughMask\n\tdefault:\n\t\treturn pingTypeNoneMask\n\t}\n}\n\n// String returns the string representation of the type\nfunc (pt PingType) String() string {\n\n\tswitch pt {\n\tcase PingTypeNone:\n\t\treturn \"None\"\n\tcase PingTypeDefaultIdentity:\n\t\treturn \"DefaultIdentity\"\n\tcase PingTypeCustomIdentity:\n\t\treturn \"CustomIdentity\"\n\tcase PingTypeDefaultIdentityPassthrough:\n\t\treturn \"DefaultIdentityPassthrough\"\n\tdefault:\n\t\treturn \"None\"\n\t}\n}\n\n// pingTypeMask defines the ping type mask.\ntype pingTypeMask uint8\n\n// PingTypeMask options.\nconst (\n\tpingTypeNoneMask                       pingTypeMask = 0x02\n\tpingTypeDefaultIdentityMask            pingTypeMask = 0x04\n\tpingTypeCustomIdentityMask             pingTypeMask = 0x06\n\tpingTypeDefaultIdentityPassthroughMask pingTypeMask = 0x08\n\tpingTypeBitMask                        pingTypeMask = 0x3E\n)\n\n// toType returns the type based on mask\nfunc (pm pingTypeMask) toType() PingType {\n\n\tswitch pm {\n\tcase pingTypeNoneMask:\n\t\treturn PingTypeNone\n\tcase pingTypeDefaultIdentityMask:\n\t\treturn PingTypeDefaultIdentity\n\tcase pingTypeCustomIdentityMask:\n\t\treturn PingTypeCustomIdentity\n\tcase pingTypeDefaultIdentityPassthroughMask:\n\t\treturn PingTypeDefaultIdentityPassthrough\n\tdefault:\n\t\treturn PingTypeNone\n\t}\n}\n\n// toUint8 returns uint8 from 
pingTypeMask\nfunc (pm pingTypeMask) toUint8() uint8 {\n\n\treturn uint8(pm)\n}\n"
  },
  {
    "path": "controller/pkg/claimsheader/types.go",
    "content": "package claimsheader\n\n// ClaimsHeader holds header sub attributes\ntype ClaimsHeader struct {\n\t// CompressionType represents compressed tag mode attribute\n\tcompressionType CompressionType\n\t// Encrypt represents encryption enabled attribute\n\tencrypt bool\n\t// Handshake type represents datapath version\n\tdatapathVersion DatapathVersion\n\t// Ping represents whether ping is set\n\tping bool\n}\n\ntype boolType int\n\nconst (\n\tencrypt boolType = iota\n\tping\n)\n\n// boolToUint8 converts bool to uint8\n// to populate the bits based on v\nfunc boolToUint8(t boolType, v bool) uint8 {\n\n\tif !v {\n\t\treturn zeroBit\n\t}\n\n\tswitch t {\n\tcase encrypt:\n\t\treturn encryptionEnabledBit\n\tcase ping:\n\t\treturn pingEnabledBit\n\tdefault:\n\t\treturn zeroBit\n\t}\n}\n\n// uint8ToBool converts uint8 to bool\n// to populate the struct based on n\nfunc uint8ToBool(t boolType, n uint8) bool {\n\n\tswitch t {\n\tcase encrypt:\n\t\treturn n == encryptionEnabledBit\n\tcase ping:\n\t\treturn n == pingEnabledBit\n\tdefault:\n\t\treturn false\n\t}\n}\n"
  },
  {
    "path": "controller/pkg/cleaner/cleaner.go",
    "content": "package cleaner\n\nimport (\n\t\"fmt\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor/iptablesctrl\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// CleanAllTriremeACLs cleans up all previous Trireme ACLs. It can be called from\n// other packages for housekeeping.\n// TODO: fix this, this was ok before, but it's ugly now because we have to\n//       inject iptablesLockfile here.\n//       iptables and its configuration is part of trireme and iptables cleanup should\n//       be done when the trireme instance starts up.\nfunc CleanAllTriremeACLs(iptablesLockfile string) error {\n\n\tfq := fqconfig.NewFilterQueue(0, nil)\n\n\tipt, err := iptablesctrl.NewInstance(fq, constants.LocalServer, true, nil, iptablesLockfile, policy.None)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to initialize cleaning iptables controller: %s\", err)\n\t}\n\n\treturn ipt.CleanUp()\n}\n"
  },
  {
    "path": "controller/pkg/connection/connection.go",
    "content": "package connection\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"strconv\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/afinetrawsocket\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pingconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n\t\"go.uber.org/zap\"\n)\n\n// TCPFlowState identifies the constants of the state of a TCP connection\ntype TCPFlowState int\n\n// UDPFlowState identifies the constants of the state of a UDP connection.\ntype UDPFlowState int\n\n// ProxyConnState identifies the constants of the state of a proxied connection\ntype ProxyConnState int\n\nconst (\n\n\t// TCPSynSend is the state where the Syn packet has been sent, but no response has been received\n\tTCPSynSend TCPFlowState = iota\n\n\t// TCPSynReceived indicates that the syn packet has been received\n\tTCPSynReceived\n\n\t// TCPSynAckSend indicates that the SynAck packet has been sent\n\tTCPSynAckSend\n\n\t// TCPSynAckReceived is the state where the SynAck has been received\n\tTCPSynAckReceived\n\n\t// TCPAckSend indicates that the ack packet has been sent\n\tTCPAckSend\n\n\t// TCPAckProcessed is the state where the negotiation has been completed\n\tTCPAckProcessed\n\n\t// TCPData indicates that the packets are now data packets\n\tTCPData\n\n\t// UnknownState indicates that this is an existing connection in the unknown 
state.\n\tUnknownState\n)\n\nconst (\n\t// ClientTokenSend -- Init token send for client\n\tClientTokenSend ProxyConnState = iota\n\n\t// ServerReceivePeerToken -- waiting to receive peer token\n\tServerReceivePeerToken\n\n\t// ServerSendToken -- Send our own token and the client tokens\n\tServerSendToken\n\n\t// ClientPeerTokenReceive -- Receive signed tokens from server\n\tClientPeerTokenReceive\n\n\t// ClientSendSignedPair -- Sign the (token/nonce pair) and send\n\tClientSendSignedPair\n\n\t// ServerAuthenticatePair -- Authenticate pair of tokens\n\tServerAuthenticatePair\n)\n\nconst (\n\t// UDPStart is the state where a syn will be sent.\n\tUDPStart UDPFlowState = iota\n\n\t// UDPClientSendSyn is the state where a syn has been sent.\n\tUDPClientSendSyn\n\n\t// UDPClientSendAck is the state where the application side has sent the ACK.\n\tUDPClientSendAck\n\n\t// UDPReceiverSendSynAck is the state where the syn ack packet has been sent.\n\tUDPReceiverSendSynAck\n\n\t// UDPReceiverProcessedAck is the state where the negotiation has been completed.\n\tUDPReceiverProcessedAck\n\n\t// UDPData is the state where data is being transmitted.\n\tUDPData\n\n\t// UDPRST is the state when we have received an rst from the peer. 
This connection is dead\n\tUDPRST\n)\n\n// MaximumUDPQueueLen is the maximum number of UDP packets buffered.\nconst MaximumUDPQueueLen = 50\n\n// AuthInfo keeps authentication information about a connection\ntype AuthInfo struct {\n\tNonce                        [tokens.NonceLength]byte\n\tRemoteNonce                  []byte\n\tRemoteContextID              string\n\tRemoteIP                     string\n\tRemotePort                   string\n\tLocalDatapathPrivateKey      *ephemeralkeys.PrivateKey\n\tSecretKey                    []byte\n\tLocalDatapathPublicKeyV1     []byte\n\tLocalDatapathPublicKeySignV1 []byte\n\tLocalDatapathPublicKeyV2     []byte\n\tLocalDatapathPublicKeySignV2 []byte\n\tConnectionClaims             tokens.ConnectionClaims\n\tSynAckToken                  []byte\n\tAckToken                     []byte\n\tProto314                     bool\n}\n\n//TCPTuple contains the 4 tuple for tcp connection\ntype TCPTuple struct {\n\tSourceAddress      net.IP\n\tDestinationAddress net.IP\n\tSourcePort         uint16\n\tDestinationPort    uint16\n}\n\n// TCPConnection is information regarding TCP Connection\ntype TCPConnection struct {\n\tsync.RWMutex\n\n\tstate TCPFlowState\n\tAuth  AuthInfo\n\n\t// ServiceData allows services to associate state with a connection\n\tServiceData interface{}\n\n\t// Context is the pucontext.PUContext that is associated with this connection\n\t// Minimizes the number of caches and lookups\n\tContext *pucontext.PUContext\n\n\t// TimeOut signals the timeout to be used by the state machines\n\tTimeOut time.Duration\n\n\t// ServiceConnection indicates that this connection is handled by a service\n\tServiceConnection bool\n\n\t// LoopbackConnection indicates that this connection is within the same pu context.\n\tloopbackConnection bool\n\n\t// ReportFlowPolicy holds the last matched observed policy\n\tReportFlowPolicy *policy.FlowPolicy\n\n\t// PacketFlowPolicy holds the last matched actual policy\n\tPacketFlowPolicy 
*policy.FlowPolicy\n\n\t// MarkForDeletion -- this is used only in conjunction with serviceconnection. It's a hint for us if we have a fin for an earlier connection\n\t// and this is a reused port flow.\n\tMarkForDeletion bool\n\n\tRetransmittedSynAck bool\n\n\texpiredConnection bool\n\n\t// TCPtuple is the tcp tuple\n\tTCPtuple *TCPTuple\n\n\t// PingConfig is the config that holds ping related information.\n\tPingConfig *pingconfig.PingConfig\n\n\tSecrets secrets.Secrets\n\n\tSourceController      string\n\tDestinationController string\n\tinitialSequenceNumber uint32\n\ttimer                 *time.Timer\n\tcounter               uint32\n\treportReason          string\n\tconnectionTimeout     time.Duration\n\tEncodedBuf            [tokens.ClaimsEncodedBufSize]byte\n}\n\n//DefaultConnectionTimeout is used as the timeout for connections in the cache.\nvar DefaultConnectionTimeout = 24 * time.Second\n\n//StartTimer starts the timer with the connection timeout and\n//on expiry will call the function passed in the argument.\nfunc (c *TCPConnection) StartTimer(f func()) {\n\tif c.timer == nil {\n\t\tc.timer = time.AfterFunc(c.connectionTimeout, f)\n\t} else {\n\t\tc.timer.Reset(c.connectionTimeout)\n\t}\n}\n\n//StopTimer will stop the timer in the connection object.\nfunc (c *TCPConnection) StopTimer() {\n\tif c.timer != nil {\n\t\tc.timer.Stop()\n\t}\n}\n\n//ResetTimer resets the timer\nfunc (c *TCPConnection) ResetTimer(newTimeout time.Duration) {\n\tif c.timer != nil {\n\t\tc.timer.Reset(newTimeout)\n\t}\n}\n\nfunc (tcpTuple *TCPTuple) String() string {\n\treturn \"sip: \" + tcpTuple.SourceAddress.String() + \" dip: \" + tcpTuple.DestinationAddress.String() + \" sport: \" + strconv.Itoa(int(tcpTuple.SourcePort)) + \" dport: \" + strconv.Itoa(int(tcpTuple.DestinationPort))\n}\n\n// String returns a printable version of connection\nfunc (c *TCPConnection) String() string {\n\treturn fmt.Sprintf(\"state:%d auth: %+v\", c.state, c.Auth)\n}\n\n// PingEnabled returns true if ping is enabled 
for this connection\nfunc (c *TCPConnection) PingEnabled() bool {\n\treturn c.PingConfig != nil\n}\n\n// GetState is used to return the state\nfunc (c *TCPConnection) GetState() TCPFlowState {\n\treturn c.state\n}\n\n// GetStateString is used to return the state as string\nfunc (c *TCPConnection) GetStateString() string {\n\n\tswitch c.state {\n\tcase TCPSynSend:\n\t\treturn \"TCPSynSend\"\n\n\tcase TCPSynReceived:\n\t\treturn \"TCPSynReceived\"\n\n\tcase TCPSynAckSend:\n\t\treturn \"TCPSynAckSend\"\n\n\tcase TCPSynAckReceived:\n\t\treturn \"TCPSynAckReceived\"\n\n\tcase TCPAckSend:\n\t\treturn \"TCPAckSend\"\n\n\tcase TCPAckProcessed:\n\t\treturn \"TCPAckProcessed\"\n\n\tcase TCPData:\n\t\treturn \"TCPData\"\n\n\tdefault:\n\t\treturn \"UnknownState\"\n\t}\n}\n\n// GetInitialSequenceNumber returns the initial sequence number that was found on the syn packet\n// corresponding to this TCP Connection\nfunc (c *TCPConnection) GetInitialSequenceNumber() uint32 {\n\treturn c.initialSequenceNumber\n}\n\n// GetMarkForDeletion returns the state of markForDeletion flag\nfunc (c *TCPConnection) GetMarkForDeletion() bool {\n\tc.RLock()\n\tdefer c.RUnlock()\n\treturn c.MarkForDeletion\n}\n\n// IncrementCounter increments counter for this connection\nfunc (c *TCPConnection) IncrementCounter() {\n\tatomic.AddUint32(&c.counter, 1)\n}\n\n// GetCounterAndReset returns the counter and resets it to zero\nfunc (c *TCPConnection) GetCounterAndReset() uint32 {\n\treturn atomic.SwapUint32(&c.counter, 0)\n}\n\n// GetReportReason returns the reason for reporting this connection\nfunc (c *TCPConnection) GetReportReason() string {\n\n\tc.RLock()\n\tdefer c.RUnlock()\n\n\treturn c.reportReason\n}\n\n// SetReportReason sets the reason for reporting this connection\nfunc (c *TCPConnection) SetReportReason(reason string) {\n\n\tc.Lock()\n\tc.reportReason = reason\n\tc.Unlock()\n}\n\n// SetState is used to setup the state for the TCP connection\nfunc (c *TCPConnection) SetState(state TCPFlowState) 
{\n\tc.state = state\n}\n\n// Cleanup will provide information when a connection is removed by a timer.\nfunc (c *TCPConnection) Cleanup() {\n\n\tc.Lock()\n\tif !c.expiredConnection && c.state != TCPData {\n\t\tc.expiredConnection = true\n\t\tif c.Context != nil {\n\t\t\tc.Context.Counters().IncrementCounter(counters.ErrTCPConnectionsExpired)\n\t\t}\n\t}\n\tc.Unlock()\n}\n\n// SetLoopbackConnection sets the loopbackConnection field.\nfunc (c *TCPConnection) SetLoopbackConnection(isLoopback bool) {\n\tc.loopbackConnection = isLoopback\n}\n\n// IsLoopbackConnection returns the loopbackConnection field.\nfunc (c *TCPConnection) IsLoopbackConnection() bool {\n\treturn c.loopbackConnection\n}\n\n// SetLoopbackConnection sets the loopbackConnection field.\nfunc (c *UDPConnection) SetLoopbackConnection(isLoopback bool) {\n\tc.loopbackConnection = isLoopback\n}\n\n// IsLoopbackConnection returns the loopbackConnection field.\nfunc (c *UDPConnection) IsLoopbackConnection() bool {\n\treturn c.loopbackConnection\n}\n\n// NewTCPConnection returns a TCPConnection information struct\nfunc NewTCPConnection(context *pucontext.PUContext, p *packet.Packet) *TCPConnection {\n\n\tvar initialSeqNumber uint32\n\n\t// Default tuple in case the packet is nil.\n\ttuple := &TCPTuple{}\n\n\tif p != nil {\n\t\ttuple.SourceAddress = p.SourceAddress()\n\t\ttuple.DestinationAddress = p.DestinationAddress()\n\t\ttuple.SourcePort = p.SourcePort()\n\t\ttuple.DestinationPort = p.DestPort()\n\t\tinitialSeqNumber = p.TCPSequenceNumber()\n\t}\n\n\ttcp := &TCPConnection{\n\t\tstate:                 TCPSynSend,\n\t\tContext:               context,\n\t\tAuth:                  AuthInfo{},\n\t\tinitialSequenceNumber: initialSeqNumber,\n\t\tTCPtuple:              tuple,\n\t\tconnectionTimeout:     DefaultConnectionTimeout,\n\t}\n\n\tcrypto.Nonce().GenerateNonce16Bytes(tcp.Auth.Nonce[:])\n\treturn tcp\n\n}\n\n// ProxyConnection 
is a record to keep state of proxy auth\ntype ProxyConnection struct {\n\tsync.Mutex\n\n\tstate            ProxyConnState\n\tAuth             AuthInfo\n\tReportFlowPolicy *policy.FlowPolicy\n\tPacketFlowPolicy *policy.FlowPolicy\n\treported         bool\n\tSecrets          secrets.Secrets\n}\n\n// NewProxyConnection returns a new Proxy Connection\nfunc NewProxyConnection(keyPair ephemeralkeys.KeyAccessor) *ProxyConnection {\n\n\tp := &ProxyConnection{\n\t\tstate: ClientTokenSend,\n\t\tAuth: AuthInfo{\n\t\t\tLocalDatapathPublicKeyV1: keyPair.DecodingKeyV1(),\n\t\t\tLocalDatapathPrivateKey:  keyPair.PrivateKey(),\n\t\t},\n\t}\n\n\tcrypto.Nonce().GenerateNonce16Bytes(p.Auth.Nonce[:])\n\n\treturn p\n}\n\n// GetState returns the state of a proxy connection\nfunc (c *ProxyConnection) GetState() ProxyConnState {\n\n\treturn c.state\n}\n\n// SetState is used to set up the state for the Proxy Connection\nfunc (c *ProxyConnection) SetState(state ProxyConnState) {\n\n\tc.state = state\n}\n\n// SetReported sets the flag to reported when the conn is reported\nfunc (c *ProxyConnection) SetReported(reported bool) {\n\tc.reported = reported\n}\n\n// UDPConnection is information regarding UDP connection.\ntype UDPConnection struct {\n\tsync.RWMutex\n\n\tstate   UDPFlowState\n\tContext *pucontext.PUContext\n\tAuth    AuthInfo\n\n\tReportFlowPolicy *policy.FlowPolicy\n\tPacketFlowPolicy *policy.FlowPolicy\n\t// ServiceData allows services to associate state with a connection\n\tServiceData interface{}\n\n\t// PacketQueue holds app UDP packets queued while authorization is in progress.\n\tPacketQueue chan *packet.Packet\n\tWriter      afinetrawsocket.SocketWriter\n\t// ServiceConnection indicates that this connection is handled by a service\n\tServiceConnection bool\n\t// LoopbackConnection indicates that this connection is within the same pu context.\n\tloopbackConnection bool\n\t// Stop channels for retransmissions\n\tsynStop    chan bool\n\tsynAckStop chan bool\n\tackStop    
chan bool\n\n\tTestIgnore           bool\n\tudpQueueFullDropCntr uint64\n\texpiredConnection    bool\n\n\tSecrets secrets.Secrets\n\n\tSourceController      string\n\tDestinationController string\n\tEncodedBuf            [tokens.ClaimsEncodedBufSize]byte\n}\n\n// NewUDPConnection returns UDPConnection struct.\nfunc NewUDPConnection(context *pucontext.PUContext, writer afinetrawsocket.SocketWriter) *UDPConnection {\n\n\tu := &UDPConnection{\n\t\tstate:       UDPStart,\n\t\tContext:     context,\n\t\tPacketQueue: make(chan *packet.Packet, MaximumUDPQueueLen),\n\t\tWriter:      writer,\n\t\tAuth:        AuthInfo{},\n\t\tsynStop:     make(chan bool),\n\t\tsynAckStop:  make(chan bool),\n\t\tackStop:     make(chan bool),\n\t\tTestIgnore:  true,\n\t}\n\n\tcrypto.Nonce().GenerateNonce16Bytes(u.Auth.Nonce[:])\n\treturn u\n}\n\n// SynStop issues a stop on the synStop channel.\nfunc (c *UDPConnection) SynStop() {\n\tselect {\n\tcase c.synStop <- true:\n\tdefault:\n\t\tzap.L().Debug(\"Packet loss - channel was already done\")\n\t}\n\n}\n\n// SynAckStop issues a stop in the synAckStop channel.\nfunc (c *UDPConnection) SynAckStop() {\n\tselect {\n\tcase c.synAckStop <- true:\n\tdefault:\n\t\tzap.L().Debug(\"Packet loss - channel was already done\")\n\t}\n}\n\n// AckStop issues a stop in the Ack channel.\nfunc (c *UDPConnection) AckStop() {\n\tselect {\n\tcase c.ackStop <- true:\n\tdefault:\n\t\tzap.L().Debug(\"Packet loss - channel was already done\")\n\t}\n\n}\n\n// SynChannel returns the SynStop channel.\nfunc (c *UDPConnection) SynChannel() chan bool {\n\treturn c.synStop\n\n}\n\n// SynAckChannel returns the SynAck stop channel.\nfunc (c *UDPConnection) SynAckChannel() chan bool {\n\treturn c.synAckStop\n}\n\n// AckChannel returns the Ack stop channel.\nfunc (c *UDPConnection) AckChannel() chan bool {\n\treturn c.ackStop\n}\n\n// GetState is used to get state of UDP Connection.\nfunc (c *UDPConnection) GetState() UDPFlowState {\n\treturn c.state\n}\n\n// SetState is used to 
setup the state for the UDP Connection.\nfunc (c *UDPConnection) SetState(state UDPFlowState) {\n\tc.state = state\n}\n\n// QueuePackets queues UDP packets till the flow is authenticated.\nfunc (c *UDPConnection) QueuePackets(udpPacket *packet.Packet) (err error) {\n\tbuffer := make([]byte, len(udpPacket.GetBuffer(0)))\n\tcopy(buffer, udpPacket.GetBuffer(0))\n\n\tcopyPacket, err := packet.New(packet.PacketTypeApplication, buffer, udpPacket.Mark, true)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Unable to copy packets to queue:%s\", err)\n\t}\n\n\tif udpPacket.PlatformMetadata != nil {\n\t\tcopyPacket.PlatformMetadata = udpPacket.PlatformMetadata.Clone()\n\t}\n\n\tselect {\n\tcase c.PacketQueue <- copyPacket:\n\tdefault:\n\t\t// connection object is always locked.\n\t\tc.udpQueueFullDropCntr++\n\t\treturn fmt.Errorf(\"Queue is full\")\n\t}\n\n\treturn nil\n}\n\n// DropPackets drops packets on errors during Authorization.\nfunc (c *UDPConnection) DropPackets() {\n\tclose(c.PacketQueue)\n\tc.PacketQueue = make(chan *packet.Packet, MaximumUDPQueueLen)\n}\n\n// ReadPacket reads a packet from the queue.\nfunc (c *UDPConnection) ReadPacket() *packet.Packet {\n\tselect {\n\tcase p := <-c.PacketQueue:\n\t\treturn p\n\tdefault:\n\t\treturn nil\n\t}\n}\n\n// Cleanup is called on cache expiry of the connection to record incomplete connections\nfunc (c *UDPConnection) Cleanup() {\n\n\tc.Lock()\n\tif !c.expiredConnection && c.state != UDPData {\n\t\tc.expiredConnection = true\n\t\tif c.Context != nil {\n\t\t\tc.Context.Counters().IncrementCounter(counters.ErrUDPConnectionsExpired)\n\t\t}\n\t}\n\tc.Unlock()\n}\n\n// String returns a printable version of connection\nfunc (c *UDPConnection) String() string {\n\n\treturn fmt.Sprintf(\"udp-conn state:%d auth: %+v\", c.state, c.Auth)\n}\n\n// UDPConnectionExpirationNotifier expiration notifier when cache entry expires\nfunc UDPConnectionExpirationNotifier(c cache.DataStore, id interface{}, item interface{}) {\n\n\tif conn, ok := 
item.(*UDPConnection); ok {\n\t\tconn.Cleanup()\n\t}\n}\n\n// ChangeConnectionTimeout is used by test code to change the default\n// connection timeout\nfunc (c *TCPConnection) ChangeConnectionTimeout(t time.Duration) {\n\tc.connectionTimeout = t\n}\n"
  },
  {
    "path": "controller/pkg/connection/connection_test.go",
    "content": "// +build !windows\n\npackage connection\n"
  },
  {
    "path": "controller/pkg/connection/connectioncache.go",
    "content": "package connection\n\nimport (\n\t\"sync\"\n)\n\n//TCPCache is an interface to store tcp connections\n//keyed with the string.\ntype TCPCache interface {\n\tPut(string, *TCPConnection)\n\tGet(string) (*TCPConnection, bool)\n\tRemove(string)\n\tLen() int\n}\n\ntype tcpCache struct {\n\tm map[string]*TCPConnection\n\tsync.RWMutex\n}\n\n//NewTCPConnectionCache initializes the tcp connection cache\nfunc NewTCPConnectionCache() TCPCache {\n\treturn &tcpCache{m: map[string]*TCPConnection{}}\n}\n\n//Put stores the connection object with the key string\nfunc (c *tcpCache) Put(key string, conn *TCPConnection) {\n\tc.Lock()\n\tc.m[key] = conn\n\tc.Unlock()\n}\n\n//Get gets the tcp connection object keyed with the key string\nfunc (c *tcpCache) Get(key string) (*TCPConnection, bool) {\n\tc.RLock()\n\tconn, exists := c.m[key]\n\tc.RUnlock()\n\n\treturn conn, exists\n}\n\n//Remove removes the connection object keyed with the key string\nfunc (c *tcpCache) Remove(key string) {\n\tc.Lock()\n\tdelete(c.m, key)\n\tc.Unlock()\n}\n\n//Len returns the number of connections in the cache\nfunc (c *tcpCache) Len() int {\n\tc.RLock()\n\tsize := len(c.m)\n\tc.RUnlock()\n\n\treturn size\n}\n"
  },
  {
    "path": "controller/pkg/counters/counters.go",
    "content": "package counters\n\nimport (\n\t\"sync/atomic\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n)\n\n// NewCounters initializes new counters handler. Thread safe.\nfunc NewCounters() *Counters {\n\treturn &Counters{}\n}\n\n// CounterNames returns an array of names\nfunc CounterNames() []string {\n\tnames := make([]string, errMax+1)\n\tvar ct CounterType\n\tfor ct = 0; ct <= errMax; ct++ {\n\t\tnames[ct] = ct.String()\n\t}\n\treturn names[:errMax]\n}\n\n// CounterError is a convenience function that increments the counter and returns the error.\nfunc (c *Counters) CounterError(t CounterType, err error) error {\n\tc.IncrementCounter(t)\n\treturn err\n}\n\n// IncrementCounter increments counters for a given PU\nfunc (c *Counters) IncrementCounter(t CounterType) {\n\tatomic.AddUint32(&c.counters[int(t)], 1)\n}\n\n// GetErrorCounters returns the error counters and resets the counters to zero\nfunc (c *Counters) GetErrorCounters() []collector.Counters {\n\n\tc.Lock()\n\n\treport := make([]collector.Counters, errMax+1)\n\tfor index := range c.counters {\n\t\treport[index] = collector.Counters(atomic.SwapUint32(&c.counters[index], 0))\n\t}\n\n\tc.Unlock()\n\treturn report[:errMax]\n}\n"
  },
  {
    "path": "controller/pkg/counters/counters_test.go",
    "content": "package counters\n\nimport (\n\t\"errors\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc Test_NewCounters(t *testing.T) {\n\n\tConvey(\"When I create new error counters\", t, func() {\n\t\tec := NewCounters()\n\t\tSo(ec, ShouldNotBeNil)\n\t\tSo(len(ec.counters), ShouldEqual, errMax+1)\n\t})\n}\n\nfunc Test_CounterError(t *testing.T) {\n\n\tConvey(\"When I create new error counters\", t, func() {\n\t\tec := NewCounters()\n\t\tSo(ec, ShouldNotBeNil)\n\t\tSo(len(ec.counters), ShouldEqual, errMax+1)\n\n\t\tConvey(\"When I increment counter\", func() {\n\t\t\terr := ec.CounterError(ErrInvalidProtocol, errors.New(\"unknown protocol\"))\n\t\t\tec.IncrementCounter(ErrInvalidProtocol)\n\t\t\tSo(err, ShouldResemble, errors.New(\"unknown protocol\"))\n\t\t\tSo(ec.counters[ErrInvalidProtocol], ShouldEqual, 2)\n\t\t})\n\t})\n}\n\nfunc Test_GetErrorCounter(t *testing.T) {\n\n\tConvey(\"When I create new error counters\", t, func() {\n\t\tec := NewCounters()\n\t\tSo(ec, ShouldNotBeNil)\n\t\tSo(len(ec.counters), ShouldEqual, errMax+1)\n\n\t\tConvey(\"When I increment counter and get error\", func() {\n\t\t\terr := ec.CounterError(ErrInvalidProtocol, errors.New(\"unknown protocol\"))\n\t\t\tec.IncrementCounter(ErrInvalidProtocol)\n\t\t\tec.IncrementCounter(ErrInvalidProtocol)\n\n\t\t\tSo(err, ShouldResemble, errors.New(\"unknown protocol\"))\n\t\t\tSo(ec.counters[ErrInvalidProtocol], ShouldEqual, 3)\n\n\t\t\tc := ec.GetErrorCounters()\n\n\t\t\tSo(len(c), ShouldEqual, errMax)\n\t\t\tSo(c[ErrInvalidProtocol], ShouldEqual, 3)\n\t\t\tSo(ec.counters[ErrInvalidProtocol], ShouldEqual, 0)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/counters/countertype_string.go",
    "content": "// Code generated by \"stringer -type=CounterType -trimprefix Err\"; DO NOT EDIT.\n\npackage counters\n\nimport \"strconv\"\n\nfunc _() {\n\t// An \"invalid array index\" compiler error signifies that the constant values have changed.\n\t// Re-run the stringer command to generate them again.\n\tvar x [1]struct{}\n\t_ = x[ErrUnknownError-0]\n\t_ = x[ErrNonPUTraffic-1]\n\t_ = x[ErrNoConnFound-2]\n\t_ = x[ErrRejectPacket-3]\n\t_ = x[ErrMarkNotFound-4]\n\t_ = x[ErrPortNotFound-5]\n\t_ = x[ErrContextIDNotFound-6]\n\t_ = x[ErrInvalidProtocol-7]\n\t_ = x[ErrConnectionsProcessed-8]\n\t_ = x[ErrEncrConnectionsProcessed-9]\n\t_ = x[ErrUDPDropFin-10]\n\t_ = x[ErrUDPSynDroppedInvalidToken-11]\n\t_ = x[ErrUDPSynAckInvalidToken-12]\n\t_ = x[ErrUDPAckInvalidToken-13]\n\t_ = x[ErrUDPConnectionsProcessed-14]\n\t_ = x[ErrUDPContextIDNotFound-15]\n\t_ = x[ErrUDPDropQueueFull-16]\n\t_ = x[ErrUDPDropInNfQueue-17]\n\t_ = x[ErrAppServicePreProcessorFailed-18]\n\t_ = x[ErrAppServicePostProcessorFailed-19]\n\t_ = x[ErrNetServicePreProcessorFailed-20]\n\t_ = x[ErrNetServicePostProcessorFailed-21]\n\t_ = x[ErrSynTokenFailed-22]\n\t_ = x[ErrSynDroppedInvalidToken-23]\n\t_ = x[ErrSynDroppedTCPOption-24]\n\t_ = x[ErrSynDroppedInvalidFormat-25]\n\t_ = x[ErrSynRejectPacket-26]\n\t_ = x[ErrSynUnexpectedPacket-27]\n\t_ = x[ErrInvalidNetSynState-28]\n\t_ = x[ErrNetSynNotSeen-29]\n\t_ = x[ErrSynToExtNetAccept-30]\n\t_ = x[ErrSynFromExtNetAccept-31]\n\t_ = x[ErrSynToExtNetReject-32]\n\t_ = x[ErrSynFromExtNetReject-33]\n\t_ = x[ErrSynAckTokenFailed-34]\n\t_ = x[ErrOutOfOrderSynAck-35]\n\t_ = x[ErrInvalidSynAck-36]\n\t_ = x[ErrSynAckInvalidToken-37]\n\t_ = x[ErrSynAckMissingToken-38]\n\t_ = x[ErrSynAckNoTCPAuthOption-39]\n\t_ = x[ErrSynAckInvalidFormat-40]\n\t_ = x[ErrSynAckEncryptionMismatch-41]\n\t_ = x[ErrSynAckRejected-42]\n\t_ = x[ErrSynAckToExtNetAccept-43]\n\t_ = x[ErrSynAckFromExtNetAccept-44]\n\t_ = x[ErrSynAckFromExtNetReject-45]\n\t_ = x[ErrAckTokenFailed-46]\n\t_ = 
x[ErrAckRejected-47]\n\t_ = x[ErrAckTCPNoTCPAuthOption-48]\n\t_ = x[ErrAckInvalidFormat-49]\n\t_ = x[ErrAckInvalidToken-50]\n\t_ = x[ErrAckInUnknownState-51]\n\t_ = x[ErrAckFromExtNetAccept-52]\n\t_ = x[ErrAckFromExtNetReject-53]\n\t_ = x[ErrUDPAppPreProcessingFailed-54]\n\t_ = x[ErrUDPAppPostProcessingFailed-55]\n\t_ = x[ErrUDPNetPreProcessingFailed-56]\n\t_ = x[ErrUDPNetPostProcessingFailed-57]\n\t_ = x[ErrUDPSynInvalidToken-58]\n\t_ = x[ErrUDPSynMissingClaims-59]\n\t_ = x[ErrUDPSynDroppedPolicy-60]\n\t_ = x[ErrUDPSynAckNoConnection-61]\n\t_ = x[ErrUDPSynAckPolicy-62]\n\t_ = x[ErrDroppedTCPPackets-63]\n\t_ = x[ErrDroppedUDPPackets-64]\n\t_ = x[ErrDroppedICMPPackets-65]\n\t_ = x[ErrDroppedDNSPackets-66]\n\t_ = x[ErrDroppedDHCPPackets-67]\n\t_ = x[ErrDroppedNTPPackets-68]\n\t_ = x[ErrTCPConnectionsExpired-69]\n\t_ = x[ErrUDPConnectionsExpired-70]\n\t_ = x[ErrSynTokenEncodeFailed-71]\n\t_ = x[ErrSynTokenHashFailed-72]\n\t_ = x[ErrSynTokenSignFailed-73]\n\t_ = x[ErrSynSharedSecretMissing-74]\n\t_ = x[ErrSynInvalidSecret-75]\n\t_ = x[ErrSynInvalidTokenLength-76]\n\t_ = x[ErrSynMissingSignature-77]\n\t_ = x[ErrSynInvalidSignature-78]\n\t_ = x[ErrSynCompressedTagMismatch-79]\n\t_ = x[ErrSynDatapathVersionMismatch-80]\n\t_ = x[ErrSynTokenDecodeFailed-81]\n\t_ = x[ErrSynTokenExpired-82]\n\t_ = x[ErrSynSharedKeyHashFailed-83]\n\t_ = x[ErrSynPublicKeyFailed-84]\n\t_ = x[ErrSynAckTokenEncodeFailed-85]\n\t_ = x[ErrSynAckTokenHashFailed-86]\n\t_ = x[ErrSynAckTokenSignFailed-87]\n\t_ = x[ErrSynAckSharedSecretMissing-88]\n\t_ = x[ErrSynAckInvalidSecret-89]\n\t_ = x[ErrSynAckInvalidTokenLength-90]\n\t_ = x[ErrSynAckMissingSignature-91]\n\t_ = x[ErrSynAckInvalidSignature-92]\n\t_ = x[ErrSynAckCompressedTagMismatch-93]\n\t_ = x[ErrSynAckDatapathVersionMismatch-94]\n\t_ = x[ErrSynAckTokenDecodeFailed-95]\n\t_ = x[ErrSynAckTokenExpired-96]\n\t_ = x[ErrSynAckSharedKeyHashFailed-97]\n\t_ = x[ErrSynAckPublicKeyFailed-98]\n\t_ = x[ErrAckTokenEncodeFailed-99]\n\t_ = 
x[ErrAckTokenHashFailed-100]\n\t_ = x[ErrAckTokenSignFailed-101]\n\t_ = x[ErrAckSharedSecretMissing-102]\n\t_ = x[ErrAckInvalidSecret-103]\n\t_ = x[ErrAckInvalidTokenLength-104]\n\t_ = x[ErrAckMissingSignature-105]\n\t_ = x[ErrAckCompressedTagMismatch-106]\n\t_ = x[ErrAckDatapathVersionMismatch-107]\n\t_ = x[ErrAckTokenDecodeFailed-108]\n\t_ = x[ErrAckTokenExpired-109]\n\t_ = x[ErrAckSignatureMismatch-110]\n\t_ = x[ErrUDPSynTokenFailed-111]\n\t_ = x[ErrUDPSynTokenEncodeFailed-112]\n\t_ = x[ErrUDPSynTokenHashFailed-113]\n\t_ = x[ErrUDPSynTokenSignFailed-114]\n\t_ = x[ErrUDPSynSharedSecretMissing-115]\n\t_ = x[ErrUDPSynInvalidSecret-116]\n\t_ = x[ErrUDPSynInvalidTokenLength-117]\n\t_ = x[ErrUDPSynMissingSignature-118]\n\t_ = x[ErrUDPSynInvalidSignature-119]\n\t_ = x[ErrUDPSynCompressedTagMismatch-120]\n\t_ = x[ErrUDPSynDatapathVersionMismatch-121]\n\t_ = x[ErrUDPSynTokenDecodeFailed-122]\n\t_ = x[ErrUDPSynTokenExpired-123]\n\t_ = x[ErrUDPSynSharedKeyHashFailed-124]\n\t_ = x[ErrUDPSynPublicKeyFailed-125]\n\t_ = x[ErrUDPSynAckTokenFailed-126]\n\t_ = x[ErrUDPSynAckTokenEncodeFailed-127]\n\t_ = x[ErrUDPSynAckTokenHashFailed-128]\n\t_ = x[ErrUDPSynAckTokenSignFailed-129]\n\t_ = x[ErrUDPSynAckSharedSecretMissing-130]\n\t_ = x[ErrUDPSynAckInvalidSecret-131]\n\t_ = x[ErrUDPSynAckInvalidTokenLength-132]\n\t_ = x[ErrUDPSynAckMissingSignature-133]\n\t_ = x[ErrUDPSynAckInvalidSignature-134]\n\t_ = x[ErrUDPSynAckCompressedTagMismatch-135]\n\t_ = x[ErrUDPSynAckDatapathVersionMismatch-136]\n\t_ = x[ErrUDPSynAckTokenDecodeFailed-137]\n\t_ = x[ErrUDPSynAckTokenExpired-138]\n\t_ = x[ErrUDPSynAckSharedKeyHashFailed-139]\n\t_ = x[ErrUDPSynAckPublicKeyFailed-140]\n\t_ = x[ErrUDPAckTokenFailed-141]\n\t_ = x[ErrUDPAckTokenEncodeFailed-142]\n\t_ = x[ErrUDPAckTokenHashFailed-143]\n\t_ = x[ErrUDPAckSharedSecretMissing-144]\n\t_ = x[ErrUDPAckInvalidSecret-145]\n\t_ = x[ErrUDPAckInvalidTokenLength-146]\n\t_ = x[ErrUDPAckMissingSignature-147]\n\t_ = x[ErrUDPAckCompressedTagMismatch-148]\n\t_ = 
x[ErrUDPAckDatapathVersionMismatch-149]\n\t_ = x[ErrUDPAckTokenDecodeFailed-150]\n\t_ = x[ErrUDPAckTokenExpired-151]\n\t_ = x[ErrUDPAckSignatureMismatch-152]\n\t_ = x[ErrAppSynAuthOptionSet-153]\n\t_ = x[ErrAckToFinAck-154]\n\t_ = x[ErrIgnoreFin-155]\n\t_ = x[ErrInvalidNetState-156]\n\t_ = x[ErrInvalidNetAckState-157]\n\t_ = x[ErrAppSynAckAuthOptionSet-158]\n\t_ = x[ErrDuplicateAckDrop-159]\n\t_ = x[ErrDNSForwardFailed-160]\n\t_ = x[ErrDNSResponseFailed-161]\n\t_ = x[ErrNfLogError-162]\n\t_ = x[ErrSegmentServerContainerEventExceedsProcessingTime-163]\n\t_ = x[ErrCorruptPacket-164]\n\t_ = x[ErrSynMissingTCPOption-165]\n\t_ = x[ErrUDPDropRst-166]\n\t_ = x[ErrNonPUUDPTraffic-167]\n\t_ = x[ErrIPTablesReset-168]\n\t_ = x[ErrDNSInvalidRequest-169]\n\t_ = x[errMax-170]\n}\n\nconst _CounterType_name = \"UnknownErrorNonPUTrafficNoConnFoundRejectPacketMarkNotFoundPortNotFoundContextIDNotFoundInvalidProtocolConnectionsProcessedEncrConnectionsProcessedUDPDropFinUDPSynDroppedInvalidTokenUDPSynAckInvalidTokenUDPAckInvalidTokenUDPConnectionsProcessedUDPContextIDNotFoundUDPDropQueueFullUDPDropInNfQueueAppServicePreProcessorFailedAppServicePostProcessorFailedNetServicePreProcessorFailedNetServicePostProcessorFailedSynTokenFailedSynDroppedInvalidTokenSynDroppedTCPOptionSynDroppedInvalidFormatSynRejectPacketSynUnexpectedPacketInvalidNetSynStateNetSynNotSeenSynToExtNetAcceptSynFromExtNetAcceptSynToExtNetRejectSynFromExtNetRejectSynAckTokenFailedOutOfOrderSynAckInvalidSynAckSynAckInvalidTokenSynAckMissingTokenSynAckNoTCPAuthOptionSynAckInvalidFormatSynAckEncryptionMismatchSynAckRejectedSynAckToExtNetAcceptSynAckFromExtNetAcceptSynAckFromExtNetRejectAckTokenFailedAckRejectedAckTCPNoTCPAuthOptionAckInvalidFormatAckInvalidTokenAckInUnknownStateAckFromExtNetAcceptAckFromExtNetRejectUDPAppPreProcessingFailedUDPAppPostProcessingFailedUDPNetPreProcessingFailedUDPNetPostProcessingFailedUDPSynInvalidTokenUDPSynMissingClaimsUDPSynDroppedPolicyUDPSynAckNoConnectionUDPSynAckPolicyDroppedTCPPacketsD
roppedUDPPacketsDroppedICMPPacketsDroppedDNSPacketsDroppedDHCPPacketsDroppedNTPPacketsTCPConnectionsExpiredUDPConnectionsExpiredSynTokenEncodeFailedSynTokenHashFailedSynTokenSignFailedSynSharedSecretMissingSynInvalidSecretSynInvalidTokenLengthSynMissingSignatureSynInvalidSignatureSynCompressedTagMismatchSynDatapathVersionMismatchSynTokenDecodeFailedSynTokenExpiredSynSharedKeyHashFailedSynPublicKeyFailedSynAckTokenEncodeFailedSynAckTokenHashFailedSynAckTokenSignFailedSynAckSharedSecretMissingSynAckInvalidSecretSynAckInvalidTokenLengthSynAckMissingSignatureSynAckInvalidSignatureSynAckCompressedTagMismatchSynAckDatapathVersionMismatchSynAckTokenDecodeFailedSynAckTokenExpiredSynAckSharedKeyHashFailedSynAckPublicKeyFailedAckTokenEncodeFailedAckTokenHashFailedAckTokenSignFailedAckSharedSecretMissingAckInvalidSecretAckInvalidTokenLengthAckMissingSignatureAckCompressedTagMismatchAckDatapathVersionMismatchAckTokenDecodeFailedAckTokenExpiredAckSignatureMismatchUDPSynTokenFailedUDPSynTokenEncodeFailedUDPSynTokenHashFailedUDPSynTokenSignFailedUDPSynSharedSecretMissingUDPSynInvalidSecretUDPSynInvalidTokenLengthUDPSynMissingSignatureUDPSynInvalidSignatureUDPSynCompressedTagMismatchUDPSynDatapathVersionMismatchUDPSynTokenDecodeFailedUDPSynTokenExpiredUDPSynSharedKeyHashFailedUDPSynPublicKeyFailedUDPSynAckTokenFailedUDPSynAckTokenEncodeFailedUDPSynAckTokenHashFailedUDPSynAckTokenSignFailedUDPSynAckSharedSecretMissingUDPSynAckInvalidSecretUDPSynAckInvalidTokenLengthUDPSynAckMissingSignatureUDPSynAckInvalidSignatureUDPSynAckCompressedTagMismatchUDPSynAckDatapathVersionMismatchUDPSynAckTokenDecodeFailedUDPSynAckTokenExpiredUDPSynAckSharedKeyHashFailedUDPSynAckPublicKeyFailedUDPAckTokenFailedUDPAckTokenEncodeFailedUDPAckTokenHashFailedUDPAckSharedSecretMissingUDPAckInvalidSecretUDPAckInvalidTokenLengthUDPAckMissingSignatureUDPAckCompressedTagMismatchUDPAckDatapathVersionMismatchUDPAckTokenDecodeFailedUDPAckTokenExpiredUDPAckSignatureMismatchAppSynAuthOptionSetAckToFinAckIgnoreFinInvali
dNetStateInvalidNetAckStateAppSynAckAuthOptionSetDuplicateAckDropDNSForwardFailedDNSResponseFailedNfLogErrorSegmentServerContainerEventExceedsProcessingTimeCorruptPacketSynMissingTCPOptionUDPDropRstNonPUUDPTrafficIPTablesResetDNSInvalidRequesterrMax\"\n\nvar _CounterType_index = [...]uint16{0, 12, 24, 35, 47, 59, 71, 88, 103, 123, 147, 157, 182, 203, 221, 244, 264, 280, 296, 324, 353, 381, 410, 424, 446, 465, 488, 503, 522, 540, 553, 570, 589, 606, 625, 642, 658, 671, 689, 707, 728, 747, 771, 785, 805, 827, 849, 863, 874, 895, 911, 926, 943, 962, 981, 1006, 1032, 1057, 1083, 1101, 1120, 1139, 1160, 1175, 1192, 1209, 1227, 1244, 1262, 1279, 1300, 1321, 1341, 1359, 1377, 1399, 1415, 1436, 1455, 1474, 1498, 1524, 1544, 1559, 1581, 1599, 1622, 1643, 1664, 1689, 1708, 1732, 1754, 1776, 1803, 1832, 1855, 1873, 1898, 1919, 1939, 1957, 1975, 1997, 2013, 2034, 2053, 2077, 2103, 2123, 2138, 2158, 2175, 2198, 2219, 2240, 2265, 2284, 2308, 2330, 2352, 2379, 2408, 2431, 2449, 2474, 2495, 2515, 2541, 2565, 2589, 2617, 2639, 2666, 2691, 2716, 2746, 2778, 2804, 2825, 2853, 2877, 2894, 2917, 2938, 2963, 2982, 3006, 3028, 3055, 3084, 3107, 3125, 3148, 3167, 3178, 3187, 3202, 3220, 3242, 3258, 3274, 3291, 3301, 3349, 3362, 3381, 3391, 3406, 3419, 3436, 3442}\n\nfunc (i CounterType) String() string {\n\tif i < 0 || i >= CounterType(len(_CounterType_index)-1) {\n\t\treturn \"CounterType(\" + strconv.FormatInt(int64(i), 10) + \")\"\n\t}\n\treturn _CounterType_name[_CounterType_index[i]:_CounterType_index[i+1]]\n}\n"
  },
  {
    "path": "controller/pkg/counters/default.go",
    "content": "package counters\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n)\n\n// defaultCounters is a global instance of counters.\n// These are used when we don't have a PU context.\nvar defaultCounters = NewCounters()\n\n// CounterError is a convenience function that returns the error and increments the counter.\nfunc CounterError(t CounterType, err error) error { // nolint\n\treturn defaultCounters.CounterError(t, err)\n}\n\n// IncrementCounter increments counters for a given PU\nfunc IncrementCounter(err CounterType) {\n\tdefaultCounters.IncrementCounter(err)\n}\n\n// GetErrorCounters returns the error counters and resets the counters to zero\nfunc GetErrorCounters() []collector.Counters {\n\treturn defaultCounters.GetErrorCounters()\n}\n"
  },
  {
    "path": "controller/pkg/counters/default_test.go",
    "content": "package counters\n\nimport (\n\t\"errors\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc Test_DefaultCounterError(t *testing.T) {\n\n\tConvey(\"When I increment counter\", t, func() {\n\t\terr := CounterError(ErrInvalidProtocol, errors.New(\"unknown protocol\"))\n\t\tIncrementCounter(ErrInvalidProtocol)\n\t\tSo(err, ShouldResemble, errors.New(\"unknown protocol\"))\n\t\tSo(defaultCounters.counters[ErrInvalidProtocol], ShouldEqual, 2)\n\n\t\t// Reset the global counters\n\t\tGetErrorCounters() // nolint\n\t})\n}\n\nfunc Test_DefaultGetErrorCounter(t *testing.T) {\n\n\tConvey(\"When I increment counter\", t, func() {\n\t\terr := CounterError(ErrInvalidProtocol, errors.New(\"unknown protocol\"))\n\t\tIncrementCounter(ErrInvalidProtocol)\n\t\tIncrementCounter(ErrInvalidProtocol)\n\t\tSo(err, ShouldResemble, errors.New(\"unknown protocol\"))\n\t\tSo(defaultCounters.counters[ErrInvalidProtocol], ShouldEqual, 3)\n\n\t\tConvey(\"When I get the error counter\", func() {\n\t\t\tc := GetErrorCounters()\n\t\t\tSo(len(c), ShouldEqual, errMax)\n\t\t\tSo(c[ErrInvalidProtocol], ShouldEqual, 3)\n\t\t\tSo(defaultCounters.counters[ErrInvalidProtocol], ShouldEqual, 0)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/counters/types.go",
    "content": "package counters\n\nimport \"sync\"\n\n// CounterType custom counter error type.\ntype CounterType int\n\n//go:generate stringer -type=CounterType -trimprefix Err\n// Error counters used in the enforcerd\nconst (\n\tErrUnknownError CounterType = iota\n\tErrNonPUTraffic\n\tErrNoConnFound\n\tErrRejectPacket\n\tErrMarkNotFound\n\tErrPortNotFound\n\tErrContextIDNotFound\n\tErrInvalidProtocol\n\tErrConnectionsProcessed\n\tErrEncrConnectionsProcessed\n\tErrUDPDropFin\n\tErrUDPSynDroppedInvalidToken\n\tErrUDPSynAckInvalidToken\n\tErrUDPAckInvalidToken\n\tErrUDPConnectionsProcessed\n\tErrUDPContextIDNotFound\n\tErrUDPDropQueueFull\n\tErrUDPDropInNfQueue\n\tErrAppServicePreProcessorFailed\n\tErrAppServicePostProcessorFailed\n\tErrNetServicePreProcessorFailed\n\tErrNetServicePostProcessorFailed\n\tErrSynTokenFailed\n\tErrSynDroppedInvalidToken\n\tErrSynDroppedTCPOption\n\tErrSynDroppedInvalidFormat\n\tErrSynRejectPacket\n\tErrSynUnexpectedPacket\n\tErrInvalidNetSynState\n\tErrNetSynNotSeen\n\tErrSynToExtNetAccept\n\tErrSynFromExtNetAccept\n\tErrSynToExtNetReject\n\tErrSynFromExtNetReject\n\tErrSynAckTokenFailed\n\tErrOutOfOrderSynAck\n\tErrInvalidSynAck\n\tErrSynAckInvalidToken\n\tErrSynAckMissingToken\n\tErrSynAckNoTCPAuthOption\n\tErrSynAckInvalidFormat\n\tErrSynAckEncryptionMismatch\n\tErrSynAckRejected\n\tErrSynAckToExtNetAccept\n\tErrSynAckFromExtNetAccept\n\tErrSynAckFromExtNetReject\n\tErrAckTokenFailed\n\tErrAckRejected\n\tErrAckTCPNoTCPAuthOption 
//50\n\tErrAckInvalidFormat\n\tErrAckInvalidToken\n\tErrAckInUnknownState\n\tErrAckFromExtNetAccept\n\tErrAckFromExtNetReject\n\tErrUDPAppPreProcessingFailed\n\tErrUDPAppPostProcessingFailed\n\tErrUDPNetPreProcessingFailed\n\tErrUDPNetPostProcessingFailed\n\tErrUDPSynInvalidToken\n\tErrUDPSynMissingClaims\n\tErrUDPSynDroppedPolicy\n\tErrUDPSynAckNoConnection\n\tErrUDPSynAckPolicy\n\tErrDroppedTCPPackets\n\tErrDroppedUDPPackets\n\tErrDroppedICMPPackets\n\tErrDroppedDNSPackets\n\tErrDroppedDHCPPackets\n\tErrDroppedNTPPackets\n\tErrTCPConnectionsExpired\n\tErrUDPConnectionsExpired\n\tErrSynTokenEncodeFailed\n\tErrSynTokenHashFailed\n\tErrSynTokenSignFailed\n\tErrSynSharedSecretMissing\n\tErrSynInvalidSecret\n\tErrSynInvalidTokenLength\n\tErrSynMissingSignature\n\tErrSynInvalidSignature\n\tErrSynCompressedTagMismatch\n\tErrSynDatapathVersionMismatch\n\tErrSynTokenDecodeFailed\n\tErrSynTokenExpired\n\tErrSynSharedKeyHashFailed\n\tErrSynPublicKeyFailed\n\tErrSynAckTokenEncodeFailed\n\tErrSynAckTokenHashFailed\n\tErrSynAckTokenSignFailed\n\tErrSynAckSharedSecretMissing\n\tErrSynAckInvalidSecret\n\tErrSynAckInvalidTokenLength\n\tErrSynAckMissingSignature\n\tErrSynAckInvalidSignature\n\tErrSynAckCompressedTagMismatch\n\tErrSynAckDatapathVersionMismatch\n\tErrSynAckTokenDecodeFailed\n\tErrSynAckTokenExpired\n\tErrSynAckSharedKeyHashFailed\n\tErrSynAckPublicKeyFailed 
//100\n\tErrAckTokenEncodeFailed\n\tErrAckTokenHashFailed\n\tErrAckTokenSignFailed\n\tErrAckSharedSecretMissing\n\tErrAckInvalidSecret\n\tErrAckInvalidTokenLength\n\tErrAckMissingSignature\n\tErrAckCompressedTagMismatch\n\tErrAckDatapathVersionMismatch\n\tErrAckTokenDecodeFailed\n\tErrAckTokenExpired\n\tErrAckSignatureMismatch\n\tErrUDPSynTokenFailed\n\tErrUDPSynTokenEncodeFailed\n\tErrUDPSynTokenHashFailed\n\tErrUDPSynTokenSignFailed\n\tErrUDPSynSharedSecretMissing\n\tErrUDPSynInvalidSecret\n\tErrUDPSynInvalidTokenLength\n\tErrUDPSynMissingSignature\n\tErrUDPSynInvalidSignature\n\tErrUDPSynCompressedTagMismatch\n\tErrUDPSynDatapathVersionMismatch\n\tErrUDPSynTokenDecodeFailed\n\tErrUDPSynTokenExpired\n\tErrUDPSynSharedKeyHashFailed\n\tErrUDPSynPublicKeyFailed\n\tErrUDPSynAckTokenFailed\n\tErrUDPSynAckTokenEncodeFailed\n\tErrUDPSynAckTokenHashFailed\n\tErrUDPSynAckTokenSignFailed\n\tErrUDPSynAckSharedSecretMissing\n\tErrUDPSynAckInvalidSecret\n\tErrUDPSynAckInvalidTokenLength\n\tErrUDPSynAckMissingSignature\n\tErrUDPSynAckInvalidSignature\n\tErrUDPSynAckCompressedTagMismatch\n\tErrUDPSynAckDatapathVersionMismatch\n\tErrUDPSynAckTokenDecodeFailed\n\tErrUDPSynAckTokenExpired\n\tErrUDPSynAckSharedKeyHashFailed\n\tErrUDPSynAckPublicKeyFailed\n\tErrUDPAckTokenFailed\n\tErrUDPAckTokenEncodeFailed\n\tErrUDPAckTokenHashFailed\n\tErrUDPAckSharedSecretMissing\n\tErrUDPAckInvalidSecret\n\tErrUDPAckInvalidTokenLength\n\tErrUDPAckMissingSignature\n\tErrUDPAckCompressedTagMismatch 
//150\n\tErrUDPAckDatapathVersionMismatch\n\tErrUDPAckTokenDecodeFailed\n\tErrUDPAckTokenExpired\n\tErrUDPAckSignatureMismatch\n\tErrAppSynAuthOptionSet\n\tErrAckToFinAck\n\tErrIgnoreFin\n\tErrInvalidNetState\n\tErrInvalidNetAckState\n\tErrAppSynAckAuthOptionSet\n\tErrDuplicateAckDrop\n\tErrDNSForwardFailed\n\tErrDNSResponseFailed\n\tErrNfLogError\n\tErrSegmentServerContainerEventExceedsProcessingTime\n\tErrCorruptPacket\n\tErrSynMissingTCPOption\n\tErrUDPDropRst\n\tErrNonPUUDPTraffic\n\tErrIPTablesReset\n\tErrDNSInvalidRequest\n\t// !!!! ADD NEW ERRORS ABOVE THIS LINE !!!!\n\t// errMax must be the last error counter defined.\n\terrMax\n)\n\n// Counters holds the counters value.\ntype Counters struct {\n\tsync.Mutex\n\tcounters [errMax + 1]uint32\n}\n"
  },
  {
    "path": "controller/pkg/dmesgparser/dmesgparser.go",
    "content": "package dmesgparser\n\nimport (\n\t\"fmt\"\n\t\"os/exec\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n)\n\n// Dmesg is the handle for the dmesg parser\ntype Dmesg struct {\n\tchanSize          int\n\tlastProcessedTime float64\n\tsync.Mutex\n}\n\nfunc getEntryTime(line string) float64 {\n\tleftindex := strings.Index(line, \"[\")\n\trightIndex := strings.Index(line, \"]\")\n\tval, _ := strconv.ParseFloat(strings.TrimSpace(line[leftindex+1:rightIndex]), 64)\n\treturn val\n}\n\n// TODO: move to dmesg -w mode later\n// func (r *Dmesg) runDmesgCommandFollowMode(outputChan chan string, interval time.Duration) {\n// \tcmdCtx,cancel := context.WithTimeout(ctx, interval)\n// \tdefer cancel()\n// \tcmd := exec.CommandContext(, \"dmesg\", \"-w\", \"-l\", \"warn\")\n\n// }\n\n// RunDmesgCommand runs the dmesg command to capture raw dmesg output\nfunc (d *Dmesg) RunDmesgCommand() ([]string, error) {\n\n\toutput, err := exec.Command(\"dmesg\").CombinedOutput()\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"Cannot run Dmesg cmd %s\", err)\n\t}\n\n\treturn d.ParseDmesgOutput(string(output))\n}\n\n// ParseDmesgOutput will parse dmesg output\nfunc (d *Dmesg) ParseDmesgOutput(dmesgOutput string) ([]string, error) {\n\tlines := strings.Split(strings.TrimSuffix(dmesgOutput, \"\\n\"), \"\\n\")\n\toutputslice := make([]string, len(lines))\n\telementsadded := 0\n\n\tfor _, line := range lines {\n\t\tline = strings.TrimSpace(line)\n\t\tif !isTraceOutput(line) {\n\t\t\tcontinue\n\t\t}\n\t\tif d.lastProcessedTime < getEntryTime(line) {\n\t\t\toutputslice[elementsadded] = line\n\t\t\telementsadded++\n\t\t}\n\t}\n\treturn outputslice[:elementsadded], nil\n}\n\nfunc isTraceOutput(line string) bool {\n\ti := strings.Index(line, \"]\")\n\tif i < 0 {\n\t\treturn false\n\t}\n\tsubstring := strings.TrimSpace(line[:strings.Index(line, \"]\")+1])\n\treturn strings.HasPrefix(line, substring+\" TRACE:\")\n}\n\n// New returns an initialized Dmesg\nfunc New() *Dmesg {\n\treturn 
&Dmesg{\n\t\tchanSize:          10000,\n\t\tlastProcessedTime: 0,\n\t}\n}\n"
  },
  {
    "path": "controller/pkg/ebpf/bpfbuild/socket-filter-bpf.go",
    "content": "// Code generated by go-bindata.\n// sources:\n// ../dist/socket-filter-bpf.o\n// DO NOT EDIT!\n\npackage bpfbuild\n\nimport (\n\t\"bytes\"\n\t\"compress/gzip\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"time\"\n)\n\nfunc bindataRead(data []byte, name string) ([]byte, error) {\n\tgz, err := gzip.NewReader(bytes.NewBuffer(data))\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"Read %q: %v\", name, err)\n\t}\n\n\tvar buf bytes.Buffer\n\t_, err = io.Copy(&buf, gz)\n\tclErr := gz.Close()\n\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"Read %q: %v\", name, err)\n\t}\n\tif clErr != nil {\n\t\treturn nil, err\n\t}\n\n\treturn buf.Bytes(), nil\n}\n\ntype asset struct {\n\tbytes []byte\n\tinfo  os.FileInfo\n}\n\ntype bindataFileInfo struct {\n\tname    string\n\tsize    int64\n\tmode    os.FileMode\n\tmodTime time.Time\n}\n\nfunc (fi bindataFileInfo) Name() string {\n\treturn fi.name\n}\nfunc (fi bindataFileInfo) Size() int64 {\n\treturn fi.size\n}\nfunc (fi bindataFileInfo) Mode() os.FileMode {\n\treturn fi.mode\n}\nfunc (fi bindataFileInfo) ModTime() time.Time {\n\treturn fi.modTime\n}\nfunc (fi bindataFileInfo) IsDir() bool {\n\treturn false\n}\nfunc (fi bindataFileInfo) Sys() interface{} {\n\treturn nil\n}\n\nvar _socketFilterBpfO = 
[]byte(\"\\x1f\\x8b\\x08\\x00\\x00\\x09\\x6e\\x88\\x00\\xff\\xe4\\x94\\xb1\\x8f\\x12\\x41\\x14\\xc6\\xbf\\x61\\x41\\x16\\xa4\\xa0\\x10\\x83\\xd1\\x82\\x92\\xc2\\x2c\\x46\\x8d\\xb1\\x32\\x84\\x44\\x2a\\x4c\\x8c\\xb1\\x30\\xb1\\x20\\x2b\\x59\\x83\\x59\\x11\\xc2\\x6c\\xa1\\xc4\\xc4\\xca\\xc6\\x8a\\xc6\\x86\\xd2\\xca\\xff\\xe0\\xae\\xdb\\xf6\\xfe\\x0c\\x8a\\x2b\\x2e\\x57\\x51\\x5c\\x72\\x97\\x6b\\xe6\\xb2\\x33\\x6f\\xd8\\x65\\x76\\xd9\\xbb\\xfe\\xbe\\x82\\xb7\\xef\\x37\\xfb\\x76\\xde\\x7c\\x79\\xc3\\xaf\\xd7\\x83\\x7e\\x81\\x31\\x68\\x31\\x9c\\x23\\xce\\x62\\x1d\\x59\\xf1\\x73\\x97\\x7e\\x2b\\x60\\x08\\xef\\x2b\\xd6\\x02\\x50\\x03\\x10\\x96\\x55\\x3e\\x5a\\xac\\x85\\xe6\\xf5\\x88\\xdb\\xc4\\xff\\x1c\\x4b\\xde\\x06\\x70\\x2f\\xe2\\x15\\xc5\\xfd\\xd5\\xc9\\x96\\x47\\x9f\\xf4\\xab\\xa7\\x42\\xbd\\xbf\\x51\\x71\\x71\\x26\\xa3\\x5f\\xbd\\x50\\x71\\x75\\x29\\x63\\xf8\\x4f\\xd5\\x97\\x0b\\xc0\\x5a\\x08\\xd1\\x34\\x9a\\xff\\x2d\\xcf\\x04\\x34\\xe8\\x54\\x25\\xa8\\x0d\\x93\\x75\\x9b\\x9c\\xba\\x90\\x78\\x13\\x80\\x10\\x42\\xe8\\xf5\\x06\\x79\\x76\\x40\\xf9\\xdf\\xad\\x7f\\xca\\x07\\xb9\\x5a\\xcf\\x30\\xf2\\x96\\x4a\\x5b\\xc1\\x16\\xef\\x60\\xff\\xbc\\xcb\\x1e\\x12\\x6b\\xea\\xf5\\xac\\xa1\\x4b\\xe8\\xbd\\xfc\\xb5\\x70\\x68\\xf0\\x37\\xc4\\xcd\\xa1\\xed\\x13\\xb7\\x33\\xbe\\x6b\\xc1\\x4a\\x31\\x35\\xa7\\x69\\x5e\\x93\\xbc\\x94\\xe2\\x4b\\x7d\\x1e\\x00\\x77\\xa2\\xfb\\x61\\xe4\\x8f\\x28\\xaf\\x02\\x28\\x46\\x0f\\x4e\\xe0\\x7d\\x0f\\x30\\x71\\x67\\xbc\\xc3\\x3d\\xce\\xbf\\x4c\\xbf\\x71\\x38\\x73\\xef\\x2b\\x9f\\x8e\\x7c\\x2f\\xe8\\xb8\\xb3\\xd9\\xd0\\x1d\\xf9\\x12\\x39\\xde\\x78\\xf8\\x79\\xee\\x4e\\x3c\\x38\\x3c\\x98\\x07\\xee\\x27\\x38\\xfc\\xc7\\x24\\x8a\\x83\\x5e\\xef\\xc9\\xf0\\xb9\\x0a\\xcf\\x54\\x78\\x9a\\xef\\xdb\\x4d\\xf5\\x42\\xb9\\x98\\xd2\\x98\\xe0\\x47\\x83\\x9b\\xb6\\xb2\\xc4\\xd9\\x93\\xea\\xee\\xd9\\xaf\\x68\\xe4\\x0f\\xae\\xa9\\x37\\xe7\\xc3\\x36\\xde\\x6b\\x00\\x74\\xb3\\x77\\xf5\\x92\\xfa\\x6f\\x25\\xea\\xac\\x44\\xbd\\x9e\\xcb\\x32\\xed\\x6f\\x7a\\xf0\\x56\\xdf\
\x7f\\x96\\xdf\\xff\\x63\\xaa\\x2f\\x18\\x7c\\x4c\\xa0\\x8d\\xfc\\xfe\\xdb\\x7b\\xfa\\xff\\x60\\xed\\xf6\\x69\\x93\\x47\\x66\\xff\\xaf\\x32\\xf6\\x8e\\xb4\\x24\\xf8\\x9f\\x72\\x26\\xff\\x03\\xe3\\x7a\\x7d\\xff\\xae\\x02\\x00\\x00\\xff\\xff\\x2e\\x02\\xe6\\xdc\\x08\\x06\\x00\\x00\")\n\nfunc socketFilterBpfOBytes() ([]byte, error) {\n\treturn bindataRead(\n\t\t_socketFilterBpfO,\n\t\t\"socket-filter-bpf.o\",\n\t)\n}\n\nfunc socketFilterBpfO() (*asset, error) {\n\tbytes, err := socketFilterBpfOBytes()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tinfo := bindataFileInfo{name: \"socket-filter-bpf.o\", size: 1544, mode: os.FileMode(420), modTime: time.Unix(1, 0)}\n\ta := &asset{bytes: bytes, info: info}\n\treturn a, nil\n}\n\n// Asset loads and returns the asset for the given name.\n// It returns an error if the asset could not be found or\n// could not be loaded.\nfunc Asset(name string) ([]byte, error) {\n\tcannonicalName := strings.Replace(name, \"\\\\\", \"/\", -1)\n\tif f, ok := _bindata[cannonicalName]; ok {\n\t\ta, err := f()\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"Asset %s can't read by error: %v\", name, err)\n\t\t}\n\t\treturn a.bytes, nil\n\t}\n\treturn nil, fmt.Errorf(\"Asset %s not found\", name)\n}\n\n// MustAsset is like Asset but panics when Asset would return an error.\n// It simplifies safe initialization of global variables.\nfunc MustAsset(name string) []byte {\n\ta, err := Asset(name)\n\tif err != nil {\n\t\tpanic(\"asset: Asset(\" + name + \"): \" + err.Error())\n\t}\n\n\treturn a\n}\n\n// AssetInfo loads and returns the asset info for the given name.\n// It returns an error if the asset could not be found or\n// could not be loaded.\nfunc AssetInfo(name string) (os.FileInfo, error) {\n\tcannonicalName := strings.Replace(name, \"\\\\\", \"/\", -1)\n\tif f, ok := _bindata[cannonicalName]; ok {\n\t\ta, err := f()\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"AssetInfo %s can't read by error: %v\", name, err)\n\t\t}\n\t\treturn 
a.info, nil\n\t}\n\treturn nil, fmt.Errorf(\"AssetInfo %s not found\", name)\n}\n\n// AssetNames returns the names of the assets.\nfunc AssetNames() []string {\n\tnames := make([]string, 0, len(_bindata))\n\tfor name := range _bindata {\n\t\tnames = append(names, name)\n\t}\n\treturn names\n}\n\n// _bindata is a table, holding each asset generator, mapped to its name.\nvar _bindata = map[string]func() (*asset, error){\n\t\"socket-filter-bpf.o\": socketFilterBpfO,\n}\n\n// AssetDir returns the file names below a certain\n// directory embedded in the file by go-bindata.\n// For example if you run go-bindata on data/... and data contains the\n// following hierarchy:\n//     data/\n//       foo.txt\n//       img/\n//         a.png\n//         b.png\n// then AssetDir(\"data\") would return []string{\"foo.txt\", \"img\"}\n// AssetDir(\"data/img\") would return []string{\"a.png\", \"b.png\"}\n// AssetDir(\"foo.txt\") and AssetDir(\"notexist\") would return an error\n// AssetDir(\"\") will return []string{\"data\"}.\nfunc AssetDir(name string) ([]string, error) {\n\tnode := _bintree\n\tif len(name) != 0 {\n\t\tcannonicalName := strings.Replace(name, \"\\\\\", \"/\", -1)\n\t\tpathList := strings.Split(cannonicalName, \"/\")\n\t\tfor _, p := range pathList {\n\t\t\tnode = node.Children[p]\n\t\t\tif node == nil {\n\t\t\t\treturn nil, fmt.Errorf(\"Asset %s not found\", name)\n\t\t\t}\n\t\t}\n\t}\n\tif node.Func != nil {\n\t\treturn nil, fmt.Errorf(\"Asset %s not found\", name)\n\t}\n\trv := make([]string, 0, len(node.Children))\n\tfor childName := range node.Children {\n\t\trv = append(rv, childName)\n\t}\n\treturn rv, nil\n}\n\ntype bintree struct {\n\tFunc     func() (*asset, error)\n\tChildren map[string]*bintree\n}\nvar _bintree = &bintree{nil, map[string]*bintree{\n\t\"socket-filter-bpf.o\": &bintree{socketFilterBpfO, map[string]*bintree{}},\n}}\n\n// RestoreAsset restores an asset under the given directory\nfunc RestoreAsset(dir, name string) error {\n\tdata, err := 
Asset(name)\n\tif err != nil {\n\t\treturn err\n\t}\n\tinfo, err := AssetInfo(name)\n\tif err != nil {\n\t\treturn err\n\t}\n\terr = os.MkdirAll(_filePath(dir, filepath.Dir(name)), os.FileMode(0755))\n\tif err != nil {\n\t\treturn err\n\t}\n\terr = ioutil.WriteFile(_filePath(dir, name), data, info.Mode())\n\tif err != nil {\n\t\treturn err\n\t}\n\terr = os.Chtimes(_filePath(dir, name), info.ModTime(), info.ModTime())\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// RestoreAssets restores an asset under the given directory recursively\nfunc RestoreAssets(dir, name string) error {\n\tchildren, err := AssetDir(name)\n\t// File\n\tif err != nil {\n\t\treturn RestoreAsset(dir, name)\n\t}\n\t// Dir\n\tfor _, child := range children {\n\t\terr = RestoreAssets(dir, filepath.Join(name, child))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc _filePath(dir, name string) string {\n\tcannonicalName := strings.Replace(name, \"\\\\\", \"/\", -1)\n\treturn filepath.Join(append([]string{dir}, strings.Split(cannonicalName, \"/\")...)...)\n}\n\n"
  },
  {
    "path": "controller/pkg/ebpf/ebpf_darwin.go",
    "content": "// +build darwin\n\npackage ebpf\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n)\n\n// BPFPath holds the BPF path\nvar BPFPath = \"/sys/fs/bpf/app-ack\"\n\ntype ebpfDarwin struct {\n}\n\n// IsEBPFSupported returns false for Darwin.\nfunc IsEBPFSupported() bool {\n\treturn false\n}\n\n// LoadBPF is not supported on Darwin.\nfunc LoadBPF() BPFModule {\n\treturn nil\n}\n\nfunc (*ebpfDarwin) CreateFlow(*packet.Packet) {\n}\n\nfunc (*ebpfDarwin) RemoveFlow(*packet.Packet) {\n}\n\nfunc (*ebpfDarwin) Cleanup() {\n}\n\nfunc (*ebpfDarwin) GetBPFPath() string {\n\treturn \"\"\n}\n"
  },
  {
    "path": "controller/pkg/ebpf/ebpf_linux.go",
    "content": "// +build !rhel6\n\npackage ebpf\n\nimport (\n\t\"bytes\"\n\t\"encoding/binary\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"unsafe\"\n\n\tbpflib \"github.com/iovisor/gobpf/elf\"\n\t\"github.com/iovisor/gobpf/pkg/bpffs\"\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/ebpf/bpfbuild\"\n\t\"go.uber.org/zap\"\n)\n\ntype ebpfModule struct {\n\tm          *bpflib.Module\n\tsessionMap *bpflib.Map\n\tbpfPath    string\n}\n\ntype flow struct {\n\tsrcIP   uint32\n\tdstIP   uint32\n\tsrcPort uint16\n\tdstPort uint16\n}\n\nconst bpfPath = \"/sys/fs/bpf/\"\nconst bpfPrefix = \"app-ack\"\n\nfunc removeOldBPFFiles() {\n\n\tremoveFiles := func(path string, info os.FileInfo, err error) error {\n\t\tif strings.Contains(path, bpfPrefix) {\n\t\t\tif err := os.Remove(path); err != nil {\n\t\t\t\tzap.L().Debug(\"Failed to remove file\", zap.String(\"path\", path), zap.Error(err))\n\t\t\t}\n\t\t}\n\t\treturn nil\n\t}\n\n\tfilepath.Walk(bpfPath, removeFiles) // nolint\n}\n\n// IsEBPFSupported is called once by the master enforcer to test if\n// the system supports eBPF.\nfunc IsEBPFSupported() bool {\n\n\tif err := bpffs.Mount(); err != nil {\n\t\tzap.L().Info(\"bpf mount failed\", zap.Error(err))\n\t\treturn false\n\t}\n\n\tvar bpf BPFModule\n\n\tif bpf = LoadBPF(); bpf == nil {\n\t\treturn false\n\t}\n\n\tif err := provider.TestIptablesPinned(bpf.GetBPFPath()); err != nil {\n\t\tzap.L().Info(\"Kernel doesn't support iptables pinned path\", zap.Error(err))\n\t\treturn false\n\t}\n\n\tremoveOldBPFFiles()\n\treturn true\n}\n\n// LoadBPF loads the bpf object in the memory and also pins the bpf to the file system.\nfunc LoadBPF() BPFModule {\n\tbpf := &ebpfModule{}\n\n\tbpf.bpfPath = bpfPath + bpfPrefix + strconv.Itoa(os.Getpid())\n\tif err := os.Remove(bpf.bpfPath); err != nil {\n\t\tif 
!os.IsNotExist(err) {\n\t\t\tzap.L().Debug(\"Failed to remove bpf file\", zap.Error(err))\n\t\t}\n\t}\n\n\tbuf, err := bpfbuild.Asset(\"socket-filter-bpf.o\")\n\tif err != nil {\n\t\tzap.L().Info(\"Failed to locate asset socket-filter-bpf\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\treader := bytes.NewReader(buf)\n\tm := bpflib.NewModuleFromReader(reader)\n\n\tif err := m.Load(nil); err != nil {\n\t\tzap.L().Info(\"Failed to load BPF in kernel\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\tsfAppAck := m.SocketFilter(\"socket/app_ack\")\n\tif sfAppAck == nil {\n\t\tzap.L().Info(\"Failed to load socket filter app_ack\")\n\t\treturn nil\n\t}\n\n\tif err := bpflib.PinObject(sfAppAck.Fd(), bpf.bpfPath); err != nil {\n\t\tzap.L().Info(\"Failed to pin bpf to file system\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\tsessionMap := m.Map(\"sessions\")\n\tif sessionMap == nil {\n\t\tzap.L().Info(\"Failed to load sessions map\")\n\t\treturn nil\n\t}\n\n\tbpf.m = m\n\tbpf.sessionMap = sessionMap\n\n\treturn bpf\n}\n\nfunc (ebpf *ebpfModule) CreateFlow(tcpTuple *connection.TCPTuple) {\n\tvar key flow\n\tvar val uint8\n\n\tkey.srcIP = binary.BigEndian.Uint32(tcpTuple.SourceAddress)\n\tkey.dstIP = binary.BigEndian.Uint32(tcpTuple.DestinationAddress)\n\tkey.srcPort = tcpTuple.SourcePort\n\tkey.dstPort = tcpTuple.DestinationPort\n\n\tval = 1\n\n\terr := ebpf.m.UpdateElement(ebpf.sessionMap, unsafe.Pointer(&key), unsafe.Pointer(&val), 0)\n\tif err != nil {\n\t\tzap.L().Debug(\"Update bpf map failed\",\n\t\t\tzap.String(\"packet\", tcpTuple.String()),\n\t\t\tzap.Error(err))\n\t}\n}\n\nfunc (ebpf *ebpfModule) RemoveFlow(tcpTuple *connection.TCPTuple) {\n\tvar key flow\n\n\tkey.srcIP = binary.BigEndian.Uint32(tcpTuple.SourceAddress)\n\tkey.dstIP = binary.BigEndian.Uint32(tcpTuple.DestinationAddress)\n\tkey.srcPort = tcpTuple.SourcePort\n\tkey.dstPort = tcpTuple.DestinationPort\n\n\terr := ebpf.m.DeleteElement(ebpf.sessionMap, unsafe.Pointer(&key))\n\tif err != nil 
{\n\t\tzap.L().Debug(\"Delete bpf map failed\",\n\t\t\tzap.String(\"packet\", tcpTuple.String()),\n\t\t\tzap.Error(err))\n\t}\n}\n\nfunc (ebpf *ebpfModule) GetBPFPath() string {\n\treturn ebpf.bpfPath\n}\n\nfunc (ebpf *ebpfModule) Cleanup() {\n\tif err := os.Remove(ebpf.bpfPath); err != nil {\n\t\tzap.L().Error(\"Failed to remove bpf file during cleanup\", zap.Error(err))\n\t}\n}\n"
  },
  {
    "path": "controller/pkg/ebpf/ebpf_rhel6.go",
    "content": "// +build rhel6\n\npackage ebpf\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n)\n\n// BPFPath holds the BPF path\nvar BPFPath = \"/sys/fs/bpf/app-ack\"\n\ntype ebpfRhel6 struct {\n}\n\n// IsEBPFSupported returns false for RHEL6.\nfunc IsEBPFSupported() bool {\n\treturn false\n}\n\n// LoadBPF is not supported on RHEL6.\nfunc LoadBPF() BPFModule {\n\treturn nil\n}\n\nfunc (*ebpfRhel6) CreateFlow(*packet.Packet) {\n}\n\nfunc (*ebpfRhel6) RemoveFlow(*packet.Packet) {\n}\n\nfunc (*ebpfRhel6) Cleanup() {\n}\n\nfunc (*ebpfRhel6) GetBPFPath() string {\n\treturn \"\"\n}\n"
  },
  {
    "path": "controller/pkg/ebpf/ebpf_windows.go",
    "content": "package ebpf\n\n// IsEBPFSupported returns false for Windows.\nfunc IsEBPFSupported() bool {\n\treturn false\n}\n\n// LoadBPF is not supported on Windows.\nfunc LoadBPF() BPFModule {\n\treturn nil\n}\n"
  },
  {
    "path": "controller/pkg/ebpf/interface.go",
    "content": "package ebpf\n\nimport \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\n//BPFModule interface exposes the functionality to datapath\ntype BPFModule interface {\n\tGetBPFPath() string\n\tCreateFlow(*connection.TCPTuple)\n\tRemoveFlow(*connection.TCPTuple)\n\tCleanup()\n}\n"
  },
  {
    "path": "controller/pkg/env/parameters.go",
    "content": "package env\n\nimport (\n\t\"os\"\n\t\"strconv\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n)\n\n// RemoteParameters holds all configuration objects that must be passed\n// during the initialization of the monitor.\ntype RemoteParameters struct {\n\tLogWithID      bool\n\tLogLevel       string\n\tLogFormat      string\n\tCompressedTags claimsheader.CompressionType\n}\n\n// GetParameters retrieves log parameters for Remote Enforcer.\nfunc GetParameters() (string, string, string, claimsheader.CompressionType, int) {\n\n\tvar logID, logLevel, logFormat string\n\tvar compressedTagsVersion claimsheader.CompressionType\n\tvar numQueues int\n\n\tlogLevel = os.Getenv(constants.EnvLogLevel)\n\tif logLevel == \"\" {\n\t\tlogLevel = \"info\"\n\t}\n\tlogFormat = os.Getenv(constants.EnvLogFormat)\n\tif logFormat == \"\" {\n\t\tlogFormat = \"json\"\n\t}\n\n\tlogID = os.Getenv(constants.EnvLogID)\n\tcompressedTagsVersion = claimsheader.CompressionTypeV1\n\n\tif num, err := strconv.Atoi(os.Getenv(constants.EnvEnforcerdNFQueues)); err == nil {\n\t\tnumQueues = num\n\t} else {\n\t\tnumQueues = 4\n\t}\n\n\treturn logID, logLevel, logFormat, compressedTagsVersion, numQueues\n}\n"
  },
  {
    "path": "controller/pkg/flowtracking/flowtracking.go",
    "content": "// +build linux\n\npackage flowtracking\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\n\t\"github.com/mdlayher/netlink\"\n\t\"github.com/ti-mo/conntrack\"\n)\n\n// Client is a flow update client\ntype Client struct {\n\tconn *conntrack.Conn\n}\n\n// NewClient creates a new flow tracking client.\nfunc NewClient(ctx context.Context) (*Client, error) {\n\tc, err := conntrack.Dial(&netlink.Config{\n\t\t// Enable this when the netlink PR is merged.\n\t\tDisableNSLockThread: true,\n\t})\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"flow tracker is unable to dial netlink: %s\", err)\n\t}\n\n\tclient := &Client{conn: c}\n\tgo func() {\n\t\t<-ctx.Done()\n\t\tclient.conn.Close() // nolint errcheck\n\t}()\n\n\treturn client, nil\n}\n\n// Close will close the connection of the client.\nfunc (c *Client) Close() error {\n\treturn c.conn.Close()\n}\n\n// UpdateMark updates the mark of the flow. Caller must indicate if this is an application\n// flow or a network flow.\nfunc (c *Client) UpdateMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32, network bool) error {\n\n\tif network {\n\t\treturn c.UpdateNetworkFlowMark(ipSrc, ipDst, protonum, srcport, dstport, newmark)\n\t}\n\n\treturn c.UpdateApplicationFlowMark(ipSrc, ipDst, protonum, srcport, dstport, newmark)\n}\n\n// GetOriginalDest gets the original destination ip, port and the mark on the packet\nfunc (c *Client) GetOriginalDest(ipSrc, ipDst net.IP, srcport, dstport uint16, protonum uint8) (net.IP, uint16, uint32, error) {\n\n\tflow := conntrack.NewFlow(protonum, 0, ipSrc, ipDst, srcport, dstport, 0, 0)\n\torigFlow, err := c.conn.Get(flow)\n\treturn origFlow.TupleOrig.IP.DestinationAddress, origFlow.TupleOrig.Proto.DestinationPort, origFlow.Mark, err\n}\n\n// UpdateNetworkFlowMark will update the mark for a flow based on packet information received\n// from the network. 
It will use the reverse tables in conntrack for that.\nfunc (c *Client) UpdateNetworkFlowMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32) error {\n\n\tf := newReplyFlow(protonum, 0, ipSrc, ipDst, srcport, dstport, 0, newmark)\n\n\treturn c.conn.Update(f)\n}\n\n// UpdateApplicationFlowMark will update the mark for a flow based on the packet information\n// received from an application. It will use the forward entries of conntrack for that.\nfunc (c *Client) UpdateApplicationFlowMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32) error {\n\n\tf := conntrack.NewFlow(protonum, 0, ipSrc, ipDst, srcport, dstport, 0, newmark)\n\n\treturn c.conn.Update(f)\n}\n\n// newReplyFlow will create a flow based on the reply tuple only. This will help us\n// update the mark without requiring knowledge of nats.\nfunc newReplyFlow(proto uint8, status conntrack.StatusFlag, srcAddr, destAddr net.IP, srcPort, destPort uint16, timeout, mark uint32) conntrack.Flow {\n\n\tvar f conntrack.Flow\n\n\tf.Status.Value = status\n\n\tf.Timeout = timeout\n\tf.Mark = mark\n\n\t// Set up TupleReply with source and destination inverted\n\tf.TupleReply.IP.SourceAddress = srcAddr\n\tf.TupleReply.IP.DestinationAddress = destAddr\n\tf.TupleReply.Proto.SourcePort = srcPort\n\tf.TupleReply.Proto.DestinationPort = destPort\n\tf.TupleReply.Proto.Protocol = proto\n\n\treturn f\n}\n"
  },
  {
    "path": "controller/pkg/flowtracking/flowtracking_nonlinux.go",
    "content": "// +build !linux\n\npackage flowtracking\n\nimport (\n\t\"context\"\n\t\"net\"\n)\n\n// Client is a flow update client.\n// For Windows, we can't use conntrack.\ntype Client struct {\n}\n\n// NewClient creates a new flow tracking client.\nfunc NewClient(ctx context.Context) (*Client, error) {\n\treturn &Client{}, nil\n}\n\n// Close will close the connection of the client.\nfunc (c *Client) Close() error {\n\treturn nil\n}\n\n// UpdateMark updates the mark of the flow. Caller must indicate if this is an application\n// flow or a network flow. Not used in Windows.\nfunc (c *Client) UpdateMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32, network bool) error {\n\treturn nil\n}\n\n// UpdateNetworkFlowMark will update the mark for a flow based on packet information received\n// from the network. It will use the reverse tables in conntrack for that. Not used in Windows.\nfunc (c *Client) UpdateNetworkFlowMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32) error {\n\treturn nil\n}\n\n// UpdateApplicationFlowMark will update the mark for a flow based on the packet information\n// received from an application. It will use the forward entries of conntrack for that. Not used in Windows.\nfunc (c *Client) UpdateApplicationFlowMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32) error {\n\treturn nil\n}\n\n// GetOriginalDest gets the original destination ip, port and the mark on the packet. Not used in Windows.\n// TODO(windows): we may need to support this?\nfunc (c *Client) GetOriginalDest(ipSrc, ipDst net.IP, srcport, dstport uint16, protonum uint8) (net.IP, uint16, uint32, error) {\n\treturn nil, 0, 0, nil\n}\n"
  },
  {
    "path": "controller/pkg/flowtracking/interfaces.go",
    "content": "package flowtracking\n\nimport \"net\"\n\n// FlowClient defines an interface that trireme uses to communicate with the conntrack\ntype FlowClient interface {\n\t// Close will close the connection of the client.\n\tClose() error\n\t// UpdateMark updates the mark of the flow. Caller must indicate if this is an application\n\t// flow or a network flow.\n\tUpdateMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32, network bool) error\n\t// GetOriginalDest gets the original destination ip, port and the mark on the packet\n\tGetOriginalDest(ipSrc, ipDst net.IP, srcport, dstport uint16, protonum uint8) (net.IP, uint16, uint32, error)\n\t// UpdateNetworkFlowMark will update the mark for a flow based on packet information received\n\t// from the network. It will use the reverse tables in conntrack for that.\n\tUpdateNetworkFlowMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32) error\n\t// UpdateApplicationFlowMark will update the mark for a flow based on the packet information\n\t// received from an application. It will use the forward entries of conntrack for that.\n\tUpdateApplicationFlowMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32) error\n}\n"
  },
  {
    "path": "controller/pkg/flowtracking/mockflowclient/mockflowclient.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/pkg/flowtracking/interfaces.go\n\n// Package mockflowclient is a generated GoMock package.\npackage mockflowclient\n\nimport (\n\tnet \"net\"\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n)\n\n// MockFlowClient is a mock of FlowClient interface\n// nolint\ntype MockFlowClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockFlowClientMockRecorder\n}\n\n// MockFlowClientMockRecorder is the mock recorder for MockFlowClient\n// nolint\ntype MockFlowClientMockRecorder struct {\n\tmock *MockFlowClient\n}\n\n// NewMockFlowClient creates a new mock instance\n// nolint\nfunc NewMockFlowClient(ctrl *gomock.Controller) *MockFlowClient {\n\tmock := &MockFlowClient{ctrl: ctrl}\n\tmock.recorder = &MockFlowClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockFlowClient) EXPECT() *MockFlowClientMockRecorder {\n\treturn m.recorder\n}\n\n// Close mocks base method\n// nolint\nfunc (m *MockFlowClient) Close() error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Close\")\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Close indicates an expected call of Close\n// nolint\nfunc (mr *MockFlowClientMockRecorder) Close() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Close\", reflect.TypeOf((*MockFlowClient)(nil).Close))\n}\n\n// UpdateMark mocks base method\n// nolint\nfunc (m *MockFlowClient) UpdateMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32, network bool) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UpdateMark\", ipSrc, ipDst, protonum, srcport, dstport, newmark, network)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UpdateMark indicates an expected call of UpdateMark\n// nolint\nfunc (mr *MockFlowClientMockRecorder) UpdateMark(ipSrc, ipDst, protonum, srcport, 
dstport, newmark, network interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateMark\", reflect.TypeOf((*MockFlowClient)(nil).UpdateMark), ipSrc, ipDst, protonum, srcport, dstport, newmark, network)\n}\n\n// GetOriginalDest mocks base method\n// nolint\nfunc (m *MockFlowClient) GetOriginalDest(ipSrc, ipDst net.IP, srcport, dstport uint16, protonum uint8) (net.IP, uint16, uint32, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetOriginalDest\", ipSrc, ipDst, srcport, dstport, protonum)\n\tret0, _ := ret[0].(net.IP)\n\tret1, _ := ret[1].(uint16)\n\tret2, _ := ret[2].(uint32)\n\tret3, _ := ret[3].(error)\n\treturn ret0, ret1, ret2, ret3\n}\n\n// GetOriginalDest indicates an expected call of GetOriginalDest\n// nolint\nfunc (mr *MockFlowClientMockRecorder) GetOriginalDest(ipSrc, ipDst, srcport, dstport, protonum interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetOriginalDest\", reflect.TypeOf((*MockFlowClient)(nil).GetOriginalDest), ipSrc, ipDst, srcport, dstport, protonum)\n}\n\n// UpdateNetworkFlowMark mocks base method\n// nolint\nfunc (m *MockFlowClient) UpdateNetworkFlowMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UpdateNetworkFlowMark\", ipSrc, ipDst, protonum, srcport, dstport, newmark)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UpdateNetworkFlowMark indicates an expected call of UpdateNetworkFlowMark\n// nolint\nfunc (mr *MockFlowClientMockRecorder) UpdateNetworkFlowMark(ipSrc, ipDst, protonum, srcport, dstport, newmark interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateNetworkFlowMark\", reflect.TypeOf((*MockFlowClient)(nil).UpdateNetworkFlowMark), ipSrc, ipDst, protonum, srcport, dstport, newmark)\n}\n\n// UpdateApplicationFlowMark mocks base 
method\n// nolint\nfunc (m *MockFlowClient) UpdateApplicationFlowMark(ipSrc, ipDst net.IP, protonum uint8, srcport, dstport uint16, newmark uint32) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UpdateApplicationFlowMark\", ipSrc, ipDst, protonum, srcport, dstport, newmark)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UpdateApplicationFlowMark indicates an expected call of UpdateApplicationFlowMark\n// nolint\nfunc (mr *MockFlowClientMockRecorder) UpdateApplicationFlowMark(ipSrc, ipDst, protonum, srcport, dstport, newmark interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateApplicationFlowMark\", reflect.TypeOf((*MockFlowClient)(nil).UpdateApplicationFlowMark), ipSrc, ipDst, protonum, srcport, dstport, newmark)\n}\n"
  },
  {
    "path": "controller/pkg/fqconfig/fqconfig.go",
    "content": "package fqconfig\n\n// FilterQueue captures runtime configuration, such as the number of NFQUEUEs and the DNS server addresses.\ntype FilterQueue interface {\n\tGetNumQueues() int\n\tGetDNSServerAddresses() []string\n}\n\ntype filterQueue struct {\n\t// numNFQueues is the number of NFQUEUEs.\n\tnumNFQueues int\n\t// DNSServerAddress holds the DNS server addresses.\n\tDNSServerAddress []string\n}\n\n// NewFilterQueue returns an instance of FilterQueue.\nfunc NewFilterQueue(numNFQueues int, dnsServerAddress []string) FilterQueue {\n\treturn &filterQueue{\n\t\tnumNFQueues:      numNFQueues,\n\t\tDNSServerAddress: dnsServerAddress,\n\t}\n}\n\n// GetNumQueues returns the number of NFQUEUEs.\nfunc (f *filterQueue) GetNumQueues() int {\n\treturn f.numNFQueues\n}\n\n// GetDNSServerAddresses returns the DNS server addresses.\nfunc (f *filterQueue) GetDNSServerAddresses() []string {\n\treturn f.DNSServerAddress\n}\n"
  },
  {
    "path": "controller/pkg/fqconfig/fqconfig_test.go",
    "content": "// +build !windows\n\npackage fqconfig\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestFqDefaultConfig(t *testing.T) {\n\n\tConvey(\"Given I create a new default filter queue config\", t, func() {\n\t\tfqc := NewFilterQueueWithDefaults()\n\t\tConvey(\"Then I should see a config\", func() {\n\n\t\t\tSo(fqc, ShouldNotBeNil)\n\n\t\t\tSo(fqc.GetMarkValue(), ShouldEqual, DefaultMarkValue)\n\n\t\t\tSo(fqc.GetApplicationQueueSize(), ShouldEqual, DefaultQueueSize)\n\t\t\tSo(fqc.GetNumApplicationQueues(), ShouldEqual, DefaultNumberOfQueues*4)\n\t\t\tSo(fqc.GetApplicationQueueStart(), ShouldEqual, 0)\n\t\t\tSo(fqc.GetApplicationQueueSynStr(), ShouldEqual, \"0:3\")\n\t\t\tSo(fqc.GetApplicationQueueAckStr(), ShouldEqual, \"4:7\")\n\t\t\tSo(fqc.GetApplicationQueueSynAckStr(), ShouldEqual, \"8:11\")\n\t\t\tSo(fqc.GetApplicationQueueSvcStr(), ShouldEqual, \"12:15\")\n\n\t\t\tSo(fqc.GetNetworkQueueSize(), ShouldEqual, DefaultQueueSize)\n\t\t\tSo(fqc.GetNumNetworkQueues(), ShouldEqual, DefaultNumberOfQueues*4)\n\t\t\tSo(fqc.GetNetworkQueueStart(), ShouldEqual, fqc.GetNumApplicationQueues())\n\t\t\tSo(fqc.GetNetworkQueueSynStr(), ShouldEqual, \"16:19\")\n\t\t\tSo(fqc.GetNetworkQueueAckStr(), ShouldEqual, \"20:23\")\n\t\t\tSo(fqc.GetNetworkQueueSynAckStr(), ShouldEqual, \"24:27\")\n\t\t\tSo(fqc.GetNetworkQueueSvcStr(), ShouldEqual, \"28:31\")\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/ipsetmanager/constants_nonwindows.go",
    "content": "// +build !windows\n\npackage ipsetmanager\n\nconst (\n\tportSetIpsetType      = \"\"\n\tproxySetPortIpsetType = \"\"\n)\n"
  },
  {
    "path": "controller/pkg/ipsetmanager/constants_windows.go",
    "content": "// +build windows\n\npackage ipsetmanager\n\nconst (\n\tportSetIpsetType      = \"hash:net\"\n\tproxySetPortIpsetType = \"hash:port\"\n)\n"
  },
  {
    "path": "controller/pkg/ipsetmanager/helpers.go",
    "content": "package ipsetmanager\n\nimport (\n\t\"strings\"\n)\n\nfunc addToIPset(set Ipset, data string) error {\n\n\t// ipset can not program this rule\n\tif data == IPv4DefaultIP {\n\t\tif err := addToIPset(set, \"0.0.0.0/1\"); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn addToIPset(set, \"128.0.0.0/1\")\n\t}\n\n\t// ipset can not program this rule\n\tif data == IPv6DefaultIP {\n\t\tif err := addToIPset(set, \"::/1\"); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn addToIPset(set, \"8000::/1\")\n\t}\n\n\tif strings.HasPrefix(data, \"!\") {\n\t\treturn set.AddOption(data[1:], \"nomatch\", 0)\n\t}\n\n\treturn set.Add(data, 0)\n}\n\nfunc delFromIPset(set Ipset, data string) error {\n\n\tif data == IPv4DefaultIP {\n\t\tif err := delFromIPset(set, \"0.0.0.0/1\"); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn delFromIPset(set, \"128.0.0.0/1\")\n\t}\n\n\tif data == IPv6DefaultIP {\n\t\tif err := delFromIPset(set, \"::/1\"); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn delFromIPset(set, \"8000::/1\")\n\t}\n\n\treturn set.Del(strings.TrimPrefix(data, \"!\"))\n}\n"
  },
  {
    "path": "controller/pkg/ipsetmanager/ipsetmanager.go",
    "content": "package ipsetmanager\n\nimport (\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"strings\"\n\t\"sync\"\n\n\tipsetpackage \"github.com/aporeto-inc/go-ipset/ipset\"\n\t\"github.com/spaolacci/murmur3\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\t//IPv6DefaultIP is the default ip of v6\n\tIPv6DefaultIP = \"::/0\"\n\t//IPv4DefaultIP is the  default ip for v4\n\tIPv4DefaultIP = \"0.0.0.0/0\"\n\t//IPsetV4 version for ipv4\n\tIPsetV4 = iota\n\t//IPsetV6 version for ipv6\n\tIPsetV6\n\n\tprocessPortSetPrefix = \"ProcPort-\"\n\tproxyPortSetPrefix   = \"Proxy-\"\n\ttargetTCPSuffix      = \"TargetTCP\"\n\ttargetUDPSuffix      = \"TargetUDP\"\n\texcludedSuffix       = \"Excluded\"\n)\n\n//TargetAndExcludedNetworks interface is used to interact with target and excluded networks\ntype TargetAndExcludedNetworks interface {\n\t//CreateIPsetsForTargetAndExcludedNetworks creates the ipsets for target and excluded networks\n\tCreateIPsetsForTargetAndExcludedNetworks() error\n\t//UpdateIPsetsForTargetAndExcludedNetworks updates the ipsets accordingly.\n\tUpdateIPsetsForTargetAndExcludedNetworks([]string, []string, []string) error\n\t//GetIPsetNamesForTargetAndExcludedNetworks returns the ipsets names for tcp, udp and excluded networks\n\tGetIPsetNamesForTargetAndExcludedNetworks() (string, string, string)\n}\n\n//ServerL3 interface is used to interact with the ipsets required to program\n//ports that the server(PU) listens on in L3 datapath.\ntype ServerL3 interface {\n\t//CreateServerPortSet creates the ipset.\n\tCreateServerPortSet(contextID string) error\n\t//GetServerPortSetName returns the name of the portset created\n\tGetServerPortSetName(contextID string) string\n\t//DestroyServerPortSet destroys the server port set.\n\tDestroyServerPortSet(contextID string) error\n\t//AddPortToServerPortSet adds port to the 
portset.\n\tAddPortToServerPortSet(contextID string, port string) error\n\t//DeletePortFromServerPortSet deletes the port from port set.\n\tDeletePortFromServerPortSet(contextID string, port string) error\n}\n\n// ACLL3 interface is used to interact with the ipsets required for\n// application and network ACLs in L3.\ntype ACLL3 interface {\n\t//RegisterExternalNets registers the ipsets corresponding to the external networks.\n\tRegisterExternalNets(contextID string, extnets policy.IPRuleList) error\n\t//UpdateACLIPsets adds the IPs in the ipsets corresponding to the external network service ID.\n\tUpdateACLIPsets([]string, string)\n\t//DestroyUnusedIPsets will remove the unused ipsets.\n\tDestroyUnusedIPsets()\n\t//RemoveExternalNets removes the external networks corresponding to the PU contextID.\n\tRemoveExternalNets(contextID string)\n\t//GetACLIPsetsNames returns the ipset names that correspond to the external networks in the argument.\n\tGetACLIPsetsNames(extnets policy.IPRuleList) []string\n\t// DeleteEntryFromIPset deletes an entry from an ipset.\n\tDeleteEntryFromIPset(ips []string, serviceID string)\n}\n\n//ProxyL4 interface is used to interact with the ipsets required for\n//L4/L7 services. 
These include dependent services and exposed services.\ntype ProxyL4 interface {\n\t//CreateProxySets creates the ipsets to implement L4/L7 services.\n\tCreateProxySets(contextID string) error\n\t//GetProxySetNames returns the ipset names that correspond to the PU.\n\tGetProxySetNames(contextID string) (string, string)\n\t//DestroyProxySets destroys the ipsets being used for L4/L7 services.\n\tDestroyProxySets(contextID string)\n\t//FlushProxySets flushes the proxy IPsets.\n\tFlushProxySets(contextID string)\n\t//AddIPPortToDependentService adds an IP and port to the dependent service.\n\tAddIPPortToDependentService(contextID string, ip *net.IPNet, port string) error\n\t//AddPortToExposedService adds the port that this service is exposing.\n\tAddPortToExposedService(contextID string, port string) error\n}\n\n//DestroyAll destroys all the ipsets created.\ntype DestroyAll interface {\n\t//DestroyAllIPsets destroys the created ipsets.\n\tDestroyAllIPsets() error\n}\n\n//IPsetPrefix returns the prefix used to construct the ipset.\ntype IPsetPrefix interface {\n\t//GetIPsetPrefix returns the prefix.\n\tGetIPsetPrefix() string\n}\n\n//IPSetManager interface is used by the supervisor. 
This interface provides the supervisor to\n//create ipsets corresponding to service ID.\ntype IPSetManager interface {\n\tTargetAndExcludedNetworks\n\tServerL3\n\tACLL3\n\tProxyL4\n\tDestroyAll\n\tIPsetPrefix\n\n\tReset()\n}\n\ntype ipsetInfo struct {\n\tcontextIDs map[string]bool\n\tname       string\n\taddresses  map[string]bool\n}\n\ntype aclHandler struct {\n\tserviceIDtoACLIPset   map[string]*ipsetInfo\n\tcontextIDtoServiceIDs map[string]map[string]bool\n\ttoDestroy             []string\n}\n\ntype targetNetwork struct {\n\ttcp []string\n\tudp []string\n}\n\ntype excludedNetwork struct {\n\texcluded []string\n}\n\ntype handler struct {\n\tsync.RWMutex\n\n\tipsetPrefix string\n\tipFilter    func(net.IP) bool\n\tipsetParams *ipsetpackage.Params\n\n\tacl aclHandler\n\ttn  targetNetwork\n\ten  excludedNetwork\n\n\tdynamicUpdates map[string][]string\n}\n\nconst (\n\tipv4String = \"v4-\"\n\tipv6String = \"v6-\"\n)\n\nvar ipv4Handler = &handler{\n\tipsetPrefix: constants.ChainPrefix + ipv4String,\n\tipFilter: func(ip net.IP) bool {\n\t\treturn (ip.To4() != nil)\n\t},\n\tipsetParams: &ipsetpackage.Params{},\n\n\tacl: aclHandler{\n\t\tserviceIDtoACLIPset:   map[string]*ipsetInfo{},\n\t\tcontextIDtoServiceIDs: map[string]map[string]bool{},\n\t},\n\ttn:             targetNetwork{tcp: []string{}, udp: []string{}},\n\ten:             excludedNetwork{excluded: []string{}},\n\tdynamicUpdates: map[string][]string{},\n}\n\nvar ipv6Handler = &handler{\n\tipsetPrefix: constants.ChainPrefix + ipv6String,\n\tipFilter: func(ip net.IP) bool {\n\t\treturn (ip.To4() == nil)\n\t},\n\tipsetParams: &ipsetpackage.Params{HashFamily: \"inet6\"},\n\n\tacl: aclHandler{\n\t\tserviceIDtoACLIPset:   map[string]*ipsetInfo{},\n\t\tcontextIDtoServiceIDs: map[string]map[string]bool{},\n\t},\n\ttn:             targetNetwork{tcp: []string{}, udp: []string{}},\n\ten:             excludedNetwork{excluded: []string{}},\n\tdynamicUpdates: map[string][]string{},\n}\n\n//V4 returns the ipv4 instance of 
ipsetmanager\nfunc V4() IPSetManager {\n\treturn ipv4Handler\n}\n\n//V6 returns the ipv6 instance of ipsetmanager\nfunc V6() IPSetManager {\n\treturn ipv6Handler\n}\n\nfunc (ipHandler *handler) DestroyAllIPsets() error {\n\n\tif err := destroyAll(ipHandler.ipsetPrefix); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc (ipHandler *handler) Reset() {\n\tipHandler.Lock()\n\n\tipHandler.acl = aclHandler{\n\t\tserviceIDtoACLIPset:   map[string]*ipsetInfo{},\n\t\tcontextIDtoServiceIDs: map[string]map[string]bool{},\n\t}\n\n\tipHandler.tn = targetNetwork{tcp: []string{}, udp: []string{}}\n\tipHandler.en = excludedNetwork{excluded: []string{}}\n\n\tipHandler.Unlock()\n}\n\nfunc (ipHandler *handler) CreateIPsetsForTargetAndExcludedNetworks() error {\n\n\ttargetTCPName := ipHandler.ipsetPrefix + targetTCPSuffix\n\ttargetUDPName := ipHandler.ipsetPrefix + targetUDPSuffix\n\texcludedName := ipHandler.ipsetPrefix + excludedSuffix\n\n\texistingSets, err := listIPSets()\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to read current sets: %s\", err)\n\t}\n\n\tsetIndex := map[string]struct{}{}\n\tfor _, s := range existingSets {\n\t\tsetIndex[s] = struct{}{}\n\t}\n\n\tcreateIPSet := func(name string) error {\n\t\tvar ipset Ipset\n\t\tvar err error\n\n\t\tif _, ok := setIndex[name]; !ok {\n\t\t\tipset, err = newIpset(name, \"hash:net\", ipHandler.ipsetParams)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t} else {\n\t\t\tipset = getIpset(name)\n\t\t}\n\n\t\tif err = ipset.Flush(); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn nil\n\t}\n\n\tif err := createIPSet(targetTCPName); err != nil {\n\t\treturn err\n\t}\n\n\tif err := createIPSet(targetUDPName); err != nil {\n\t\treturn err\n\t}\n\n\tif err := createIPSet(excludedName); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc updateIPSets(ipset Ipset, old []string, new []string) error {\n\t// We need to delete first, because of nomatch.\n\t// For example, if old has 1.2.3.4 and new has 
!1.2.3.4, then we delete the 1.2.3.4 first\n\t// before we can add the 1.2.3.4 with the nomatch option.\n\n\tdeleteMap := map[string]bool{}\n\taddMap := map[string]bool{}\n\tfor _, net := range old {\n\t\tdeleteMap[net] = true\n\t}\n\tfor _, net := range new {\n\t\tif _, ok := deleteMap[net]; ok {\n\t\t\tdeleteMap[net] = false\n\t\t\tcontinue\n\t\t}\n\t\taddMap[net] = true\n\t}\n\n\tfor net, delete := range deleteMap {\n\t\tif delete {\n\t\t\tif err := delFromIPset(ipset, net); err != nil {\n\t\t\t\tzap.L().Debug(\"unable to remove network from set\", zap.Error(err))\n\t\t\t}\n\t\t}\n\t}\n\n\tfor net, add := range addMap {\n\t\tif add {\n\t\t\tif err := addToIPset(ipset, net); err != nil {\n\t\t\t\treturn fmt.Errorf(\"unable to update target set: %s\", err)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (ipHandler *handler) UpdateIPsetsForTargetAndExcludedNetworks(tcp []string, udp []string, excluded []string) error {\n\n\tfilterIPs := func(ips []string) []string {\n\t\tvar filteredIPs []string\n\n\t\tfor _, ip := range ips {\n\t\t\tparsable := ip\n\t\t\tif strings.HasPrefix(ip, \"!\") {\n\t\t\t\tparsable = ip[1:]\n\t\t\t}\n\t\t\tnetIP := net.ParseIP(parsable)\n\t\t\tif netIP == nil {\n\t\t\t\tnetIP, _, _ = net.ParseCIDR(parsable)\n\t\t\t}\n\n\t\t\tif ipHandler.ipFilter(netIP) {\n\t\t\t\tfilteredIPs = append(filteredIPs, ip)\n\t\t\t}\n\t\t}\n\n\t\treturn filteredIPs\n\t}\n\n\ttcpSet := getIpset(ipHandler.ipsetPrefix + targetTCPSuffix)\n\tudpSet := getIpset(ipHandler.ipsetPrefix + targetUDPSuffix)\n\texcludedSet := getIpset(ipHandler.ipsetPrefix + excludedSuffix)\n\n\ttcpFilterIPs := filterIPs(tcp)\n\tif err := updateIPSets(tcpSet, ipHandler.tn.tcp, tcpFilterIPs); err != nil {\n\t\treturn err\n\t}\n\n\tudpFilterIPs := filterIPs(udp)\n\tif err := updateIPSets(udpSet, ipHandler.tn.udp, udpFilterIPs); err != nil {\n\t\treturn err\n\t}\n\n\texcludedFilterIPs := filterIPs(excluded)\n\tif err := updateIPSets(excludedSet, ipHandler.en.excluded, excludedFilterIPs); err != 
nil {\n\t\treturn err\n\t}\n\n\tipHandler.tn.tcp = tcpFilterIPs\n\tipHandler.tn.udp = udpFilterIPs\n\tipHandler.en.excluded = excludedFilterIPs\n\n\treturn nil\n}\n\nfunc (ipHandler *handler) GetIPsetNamesForTargetAndExcludedNetworks() (string, string, string) {\n\treturn ipHandler.ipsetPrefix + targetTCPSuffix, ipHandler.ipsetPrefix + targetUDPSuffix, ipHandler.ipsetPrefix + excludedSuffix\n}\n\nfunc (ipHandler *handler) getServerPortSetName(contextID string) string {\n\n\tprefix := ipHandler.ipsetPrefix + processPortSetPrefix\n\n\treturn createName(contextID, prefix)\n}\n\nfunc (ipHandler *handler) getProxyIPSetNames(contextID string) (string, string) {\n\tprefix := ipHandler.ipsetPrefix + proxyPortSetPrefix\n\tname := createName(contextID, prefix)\n\n\treturn name + \"-dst\", name + \"-srv\"\n}\n\nfunc (ipHandler *handler) GetProxySetNames(contextID string) (string, string) {\n\treturn ipHandler.getProxyIPSetNames(contextID)\n}\n\nfunc (ipHandler *handler) DestroyProxySets(contextID string) {\n\tdestSetName, srvSetName := ipHandler.getProxyIPSetNames(contextID)\n\n\tips := getIpset(destSetName)\n\tif err := ips.Destroy(); err != nil {\n\t\tzap.L().Warn(\"Failed to destroy proxyPortSet\", zap.String(\"SetName\", destSetName), zap.Error(err))\n\t}\n\n\tips = getIpset(srvSetName)\n\tif err := ips.Destroy(); err != nil {\n\t\tzap.L().Warn(\"Failed to clear proxy port set\", zap.String(\"set name\", srvSetName), zap.Error(err))\n\t}\n}\n\n//CreateProxySets creates the ipsets for L4/L7 services\nfunc (ipHandler *handler) CreateProxySets(contextID string) error {\n\n\tdestSetName, srvSetName := ipHandler.getProxyIPSetNames(contextID)\n\n\tif _, err := newIpset(destSetName, \"hash:net,port\", ipHandler.ipsetParams); err != nil {\n\t\treturn fmt.Errorf(\"unable to create ipset for %s: %s\", destSetName, err)\n\t}\n\n\t// create ipset for port match\n\tif _, err := newIpset(srvSetName, proxySetPortIpsetType, nil); err != nil {\n\t\treturn fmt.Errorf(\"unable to create 
ipset for %s: %s\", srvSetName, err)\n\t}\n\n\treturn nil\n}\n\nfunc (ipHandler *handler) FlushProxySets(contextID string) {\n\tdestSetName, srvSetName := ipHandler.getProxyIPSetNames(contextID)\n\n\tips := getIpset(destSetName)\n\tif err := ips.Flush(); err != nil {\n\t\tzap.L().Warn(\"Failed to flush dest proxy port set\", zap.String(\"SetName\", destSetName), zap.Error(err))\n\t}\n\n\tips = getIpset(srvSetName)\n\tif err := ips.Flush(); err != nil {\n\t\tzap.L().Warn(\"Failed to flush server proxy port set\", zap.String(\"set name\", srvSetName), zap.Error(err))\n\t}\n}\n\nfunc (ipHandler *handler) AddIPPortToDependentService(contextID string, addr *net.IPNet, port string) error {\n\n\tdestSetName, _ := ipHandler.getProxyIPSetNames(contextID)\n\tips := getIpset(destSetName)\n\n\tif ipHandler.ipFilter(addr.IP) {\n\t\tpair := addr.String() + \",\" + port\n\t\tif err := ips.Add(pair, 0); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to add dependent ip %s to ipset: %s\", pair, err)\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (ipHandler *handler) AddPortToExposedService(contextID string, port string) error {\n\t_, srvSetName := ipHandler.getProxyIPSetNames(contextID)\n\tips := getIpset(srvSetName)\n\n\tif err := ips.Add(port, 0); err != nil {\n\t\treturn fmt.Errorf(\"unable to add port %s to exposed service: %s\", port, err)\n\t}\n\n\treturn nil\n}\n\nfunc (ipHandler *handler) GetServerPortSetName(contextID string) string {\n\treturn ipHandler.getServerPortSetName(contextID)\n}\n\nfunc (ipHandler *handler) CreateServerPortSet(contextID string) error {\n\n\tif _, err := newIpset(ipHandler.getServerPortSetName(contextID), portSetIpsetType, nil); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\nfunc (ipHandler *handler) DestroyServerPortSet(contextID string) error {\n\n\tportSetName := ipHandler.getServerPortSetName(contextID)\n\tips := getIpset(portSetName)\n\n\tif err := ips.Destroy(); err != nil {\n\t\treturn fmt.Errorf(\"failed to delete pu port set 
%s: %s\", portSetName, err)\n\t}\n\n\treturn nil\n}\n\nfunc (ipHandler *handler) AddPortToServerPortSet(contextID string, port string) error {\n\n\tips := getIpset(ipHandler.getServerPortSetName(contextID))\n\n\tif err := ips.Add(port, 0); err != nil {\n\t\treturn fmt.Errorf(\"unable to add port to portset: %s\", err)\n\t}\n\n\treturn nil\n}\n\nfunc (ipHandler *handler) DeletePortFromServerPortSet(contextID string, port string) error {\n\n\tips := getIpset(ipHandler.getServerPortSetName(contextID))\n\n\tif err := ips.Del(port); err != nil {\n\t\treturn fmt.Errorf(\"unable to delete port from portset: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// RegisterExternalNets registers the contextID and the corresponding serviceIDs\nfunc (ipHandler *handler) RegisterExternalNets(contextID string, extnets policy.IPRuleList) error {\n\tipHandler.Lock()\n\tdefer ipHandler.Unlock()\n\n\tprocessExtnets := func() error {\n\t\tfor _, extnet := range extnets {\n\t\t\tvar ipset *ipsetInfo\n\n\t\t\tserviceID := extnet.Policy.ServiceID\n\t\t\tif ipset = ipHandler.acl.serviceIDtoACLIPset[serviceID]; ipset == nil {\n\t\t\t\tvar err error\n\t\t\t\tif ipset, err = ipHandler.createACLIPset(serviceID); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t// make sure to include updates that were added dynamically by the DNS proxy\n\t\t\taddrs := extnet.Addresses\n\t\t\tif dynamicAddrs, ok := ipHandler.dynamicUpdates[serviceID]; ok {\n\t\t\t\taddrs = append(addrs, dynamicAddrs...)\n\t\t\t}\n\n\t\t\tipHandler.synchronizeIPsinIpset(ipset, addrs)\n\t\t\t// have a backreference from serviceID to contextID\n\t\t\tipset.contextIDs[contextID] = true\n\t\t}\n\n\t\treturn nil\n\t}\n\n\tprocessOlderExtnets := func() {\n\t\tnewExtnets := map[string]bool{}\n\n\t\tfor _, extnet := range extnets {\n\n\t\t\tserviceID := extnet.Policy.ServiceID\n\t\t\tnewExtnets[serviceID] = true\n\t\t\tm, ok := ipHandler.acl.contextIDtoServiceIDs[contextID]\n\n\t\t\tif ok && m[serviceID] {\n\t\t\t\tdelete(m, 
serviceID)\n\t\t\t}\n\t\t}\n\n\t\tfor serviceID := range ipHandler.acl.contextIDtoServiceIDs[contextID] {\n\t\t\tipHandler.reduceReferenceFromServiceID(contextID, serviceID)\n\t\t}\n\n\t\tipHandler.acl.contextIDtoServiceIDs[contextID] = newExtnets\n\t}\n\n\tif err := processExtnets(); err != nil {\n\t\treturn err\n\t}\n\n\tprocessOlderExtnets()\n\n\treturn nil\n}\n\n// deleteDynamicAddresses must only be called by DeleteEntryFromIPset to update the internal map of dynamic addresses\nfunc (ipHandler *handler) deleteDynamicAddresses(ips []string, serviceID string) {\n\tif dynAddrs, ok := ipHandler.dynamicUpdates[serviceID]; ok {\n\t\tipMap := make(map[string]struct{}, len(ips))\n\t\tfor _, ip := range ips {\n\t\t\tipMap[ip] = struct{}{}\n\t\t}\n\n\t\tnewAddrs := make([]string, 0, len(dynAddrs))\n\t\tfor _, dynAddr := range dynAddrs {\n\t\t\tif _, ok := ipMap[dynAddr]; ok {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tnewAddrs = append(newAddrs, dynAddr)\n\t\t}\n\n\t\tipHandler.dynamicUpdates[serviceID] = newAddrs\n\t}\n}\n\n// DeleteEntryFromIPset deletes an entry from an ipset\nfunc (ipHandler *handler) DeleteEntryFromIPset(ips []string, serviceID string) {\n\tipHandler.Lock()\n\tdefer ipHandler.Unlock()\n\n\tipHandler.deleteDynamicAddresses(ips, serviceID)\n\n\tfor _, address := range ips {\n\t\tparsableAddress := address\n\t\tif strings.HasPrefix(address, \"!\") {\n\t\t\tparsableAddress = address[1:]\n\t\t}\n\n\t\tnetIP := net.ParseIP(parsableAddress)\n\t\tif netIP == nil {\n\t\t\tnetIP, _, _ = net.ParseCIDR(parsableAddress)\n\t\t}\n\t\tif ipset := ipHandler.acl.serviceIDtoACLIPset[serviceID]; ipset != nil {\n\t\t\tipsetHandler := getIpset(ipset.name)\n\t\t\tdelFromIPset(ipsetHandler, netIP.String()) // nolint\n\t\t\tdelete(ipset.addresses, address)\n\t\t}\n\t}\n}\n\n// updateDynamicAddresses must only be called by UpdateACLIPsets to update the internal map of dynamic addresses\nfunc (ipHandler *handler) updateDynamicAddresses(addresses []string, serviceID string) {\n\t// 
no need to lock, already done by the caller\n\tif dynAddrs, ok := ipHandler.dynamicUpdates[serviceID]; ok {\n\t\tipMap := make(map[string]struct{}, len(dynAddrs))\n\t\tfor _, ip := range dynAddrs {\n\t\t\tipMap[ip] = struct{}{}\n\t\t}\n\n\t\tnewAddrs := make([]string, 0, len(addresses))\n\t\tfor _, ip := range addresses {\n\t\t\tif _, ok := ipMap[ip]; ok {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tnewAddrs = append(newAddrs, ip)\n\t\t}\n\n\t\tipHandler.dynamicUpdates[serviceID] = append(dynAddrs, newAddrs...)\n\t} else {\n\t\tipHandler.dynamicUpdates[serviceID] = addresses\n\t}\n}\n\n//UpdateACLIPsets updates the ip addresses in the ipsets corresponding to the serviceID\nfunc (ipHandler *handler) UpdateACLIPsets(addresses []string, serviceID string) {\n\tipHandler.Lock()\n\tdefer ipHandler.Unlock()\n\n\tipHandler.updateDynamicAddresses(addresses, serviceID)\n\n\tfor _, address := range addresses {\n\t\tparsableAddress := address\n\t\tif strings.HasPrefix(address, \"!\") {\n\t\t\tparsableAddress = address[1:]\n\t\t}\n\n\t\tnetIP := net.ParseIP(parsableAddress)\n\t\tif netIP == nil {\n\t\t\tnetIP, _, _ = net.ParseCIDR(parsableAddress)\n\t\t}\n\n\t\tif !ipHandler.ipFilter(netIP) {\n\t\t\tcontinue\n\t\t}\n\n\t\tif ipset := ipHandler.acl.serviceIDtoACLIPset[serviceID]; ipset != nil {\n\t\t\tipsetHandler := getIpset(ipset.name)\n\t\t\tif err := addToIPset(ipsetHandler, address); err != nil {\n\t\t\t\tzap.L().Error(\"Error adding IPs to ipset\", zap.String(\"ipset\", ipset.name), zap.String(\"address\", address))\n\t\t\t}\n\n\t\t\tipset.addresses[address] = true\n\t\t}\n\t}\n}\n\nfunc hashServiceID(serviceID string) string {\n\thash := murmur3.New64()\n\tif _, err := io.WriteString(hash, serviceID); err != nil {\n\t\treturn \"\"\n\t}\n\n\treturn base64.URLEncoding.EncodeToString(hash.Sum(nil))\n}\n\nfunc (ipHandler *handler) synchronizeIPsinIpset(ipsetInfo *ipsetInfo, addresses []string) {\n\tnewips := map[string]bool{}\n\tipsetHandler := getIpset(ipsetInfo.name)\n\n\tvar 
addrToAdd, addrToDelete []string\n\n\tfor _, address := range addresses {\n\t\tparsableAddress := address\n\t\tif strings.HasPrefix(address, \"!\") {\n\t\t\tparsableAddress = address[1:]\n\t\t}\n\n\t\tnetIP := net.ParseIP(parsableAddress)\n\t\tif netIP == nil {\n\t\t\tnetIP, _, _ = net.ParseCIDR(parsableAddress)\n\t\t}\n\n\t\tif !ipHandler.ipFilter(netIP) {\n\t\t\tcontinue\n\t\t}\n\n\t\tnewips[address] = true\n\n\t\tif _, ok := ipsetInfo.addresses[address]; !ok {\n\t\t\taddrToAdd = append(addrToAdd, address)\n\t\t}\n\t\tdelete(ipsetInfo.addresses, address)\n\t}\n\n\tfor address, val := range ipsetInfo.addresses {\n\t\tif val {\n\t\t\taddrToDelete = append(addrToDelete, address)\n\t\t}\n\t}\n\n\tif err := updateIPSets(ipsetHandler, addrToDelete, addrToAdd); err != nil {\n\t\tzap.L().Error(\"Error updating ipset during sync\", zap.Error(err))\n\t}\n\n\tipsetInfo.addresses = newips\n}\n\nfunc (ipHandler *handler) createACLIPset(serviceID string) (*ipsetInfo, error) {\n\tipsetName := ipHandler.ipsetPrefix + \"ext-\" + hashServiceID(serviceID)\n\tif _, err := newIpset(ipsetName, \"hash:net\", ipHandler.ipsetParams); err != nil {\n\t\treturn nil, err\n\t}\n\n\tipset := &ipsetInfo{contextIDs: map[string]bool{}, name: ipsetName, addresses: map[string]bool{}}\n\tipHandler.acl.serviceIDtoACLIPset[serviceID] = ipset\n\n\treturn ipset, nil\n}\n\nfunc (ipHandler *handler) deleteServiceID(serviceID string) {\n\tipsetInfo := ipHandler.acl.serviceIDtoACLIPset[serviceID]\n\tipHandler.acl.toDestroy = append(ipHandler.acl.toDestroy, ipsetInfo.name)\n\tdelete(ipHandler.acl.serviceIDtoACLIPset, serviceID)\n}\n\n//reduceReferenceFromServiceID reduces the reference for the serviceID.\nfunc (ipHandler *handler) reduceReferenceFromServiceID(contextID string, serviceID string) {\n\tvar ipset *ipsetInfo\n\n\tif ipset = ipHandler.acl.serviceIDtoACLIPset[serviceID]; ipset == nil {\n\t\tzap.L().Error(\"Could not find ipset corresponding to serviceID\", zap.String(\"serviceID\", 
serviceID))\n\t\treturn\n\t}\n\n\tdelete(ipset.contextIDs, contextID)\n\n\t// there are no references from any pu. safe to destroy now\n\tif len(ipset.contextIDs) == 0 {\n\t\tipHandler.deleteServiceID(serviceID)\n\t}\n}\n\n// DestroyUnusedIPsets destroys the unused ipsets.\nfunc (ipHandler *handler) DestroyUnusedIPsets() {\n\tipHandler.Lock()\n\tdefer ipHandler.Unlock()\n\n\tfor _, ipsetName := range ipHandler.acl.toDestroy {\n\t\tipsetHandler := getIpset(ipsetName)\n\t\tif err := ipsetHandler.Destroy(); err != nil {\n\t\t\tzap.L().Warn(\"Failed to destroy ipset\", zap.String(\"ipset\", ipsetName), zap.Error(err))\n\t\t}\n\t}\n\n\tipHandler.acl.toDestroy = nil\n}\n\n// RemoveExternalNets is called when the contextID is being unsupervised such that all the external nets can be deleted.\nfunc (ipHandler *handler) RemoveExternalNets(contextID string) {\n\tipHandler.Lock()\n\n\tm, ok := ipHandler.acl.contextIDtoServiceIDs[contextID]\n\tif ok {\n\t\tfor serviceID := range m {\n\t\t\tipHandler.reduceReferenceFromServiceID(contextID, serviceID)\n\t\t}\n\t}\n\n\tdelete(ipHandler.acl.contextIDtoServiceIDs, contextID)\n\n\tipHandler.Unlock()\n\tipHandler.DestroyUnusedIPsets()\n}\n\nfunc (ipHandler *handler) GetIPsetPrefix() string {\n\treturn ipHandler.ipsetPrefix\n}\n\n// GetACLIPsets returns the ipset names corresponding to the serviceIDs.\nfunc (ipHandler *handler) GetACLIPsetsNames(extnets policy.IPRuleList) []string {\n\n\tipHandler.Lock()\n\tdefer ipHandler.Unlock()\n\n\tvar ipsets []string\n\n\tfor _, extnet := range extnets {\n\t\tserviceID := extnet.Policy.ServiceID\n\n\t\tipsetInfo, ok := ipHandler.acl.serviceIDtoACLIPset[serviceID]\n\t\tif ok {\n\t\t\tipsets = append(ipsets, ipsetInfo.name)\n\t\t}\n\t}\n\n\treturn ipsets\n}\n\n//createName takes the contextID and prefix and returns a name after processing\nfunc createName(contextID string, prefix string) string {\n\thash := murmur3.New64()\n\n\tif _, err := io.WriteString(hash, contextID); err != nil {\n\t\treturn 
\"\"\n\t}\n\n\toutput := base64.URLEncoding.EncodeToString(hash.Sum(nil))\n\n\tif len(contextID) > 4 {\n\t\tcontextID = contextID[:4] + output[:4]\n\t} else {\n\t\tcontextID = contextID + output[:4]\n\t}\n\n\tif len(prefix) > 16 {\n\t\tprefix = prefix[:16]\n\t}\n\n\treturn (prefix + contextID)\n}\n\n//V4test returns the test handler for ipv4\nfunc V4test() IPSetManager {\n\treturn &handler{\n\t\tipsetPrefix: \"TRI-\" + ipv4String,\n\t\tipFilter: func(ip net.IP) bool {\n\t\t\treturn (ip.To4() != nil)\n\t\t},\n\t\tipsetParams: &ipsetpackage.Params{},\n\n\t\tacl: aclHandler{\n\t\t\tserviceIDtoACLIPset:   map[string]*ipsetInfo{},\n\t\t\tcontextIDtoServiceIDs: map[string]map[string]bool{},\n\t\t},\n\t\ttn: targetNetwork{tcp: []string{}, udp: []string{}},\n\t\ten: excludedNetwork{excluded: []string{}},\n\t}\n}\n\n//V6test returns the test handler for ipv6\nfunc V6test() IPSetManager {\n\treturn &handler{\n\t\tipsetPrefix: \"TRI-\" + ipv6String,\n\t\tipFilter: func(ip net.IP) bool {\n\t\t\treturn (ip.To4() == nil)\n\t\t},\n\t\tipsetParams: &ipsetpackage.Params{HashFamily: \"inet6\"},\n\n\t\tacl: aclHandler{\n\t\t\tserviceIDtoACLIPset:   map[string]*ipsetInfo{},\n\t\t\tcontextIDtoServiceIDs: map[string]map[string]bool{},\n\t\t},\n\t\ttn: targetNetwork{tcp: []string{}, udp: []string{}},\n\t\ten: excludedNetwork{excluded: []string{}},\n\t}\n}\n"
  },
  {
    "path": "controller/pkg/ipsetmanager/ipsetmanager_test.go",
    "content": "package ipsetmanager\n\nimport (\n\t\"reflect\"\n\t\"testing\"\n)\n\nconst (\n\tservice     = \"blah\"\n\tip192_0_2_1 = \"192.0.2.1\"\n\tip192_0_2_2 = \"192.0.2.2\"\n\tip192_0_2_3 = \"192.0.2.3\"\n)\n\nfunc Test_handler_deleteDynamicAddresses(t *testing.T) {\n\ttype args struct {\n\t\tips       []string\n\t\tserviceID string\n\t}\n\n\ttests := []struct {\n\t\tname                 string\n\t\targs                 args\n\t\texistingDynamicAddrs map[string][]string\n\t\twantEntry            bool\n\t\twantIPs              []string\n\t}{\n\t\t{\n\t\t\tname: \"entry does not exist\",\n\t\t\targs: args{\n\t\t\t\tips:       []string{ip192_0_2_1},\n\t\t\t\tserviceID: service,\n\t\t\t},\n\t\t\texistingDynamicAddrs: map[string][]string{},\n\t\t\twantEntry:            false,\n\t\t\twantIPs:              nil,\n\t\t},\n\t\t{\n\t\t\tname: \"entry is empty\",\n\t\t\targs: args{\n\t\t\t\tips:       []string{ip192_0_2_1},\n\t\t\t\tserviceID: service,\n\t\t\t},\n\t\t\texistingDynamicAddrs: map[string][]string{\n\t\t\t\tservice: {},\n\t\t\t},\n\t\t\twantEntry: true,\n\t\t\twantIPs:   []string{},\n\t\t},\n\t\t{\n\t\t\tname: \"entry not in existing set\",\n\t\t\targs: args{\n\t\t\t\tips:       []string{ip192_0_2_1},\n\t\t\t\tserviceID: service,\n\t\t\t},\n\t\t\texistingDynamicAddrs: map[string][]string{\n\t\t\t\tservice: {\n\t\t\t\t\tip192_0_2_2,\n\t\t\t\t\tip192_0_2_3,\n\t\t\t\t},\n\t\t\t},\n\t\t\twantEntry: true,\n\t\t\twantIPs:   []string{ip192_0_2_2, ip192_0_2_3},\n\t\t},\n\t\t{\n\t\t\tname: \"entry is removed from set\",\n\t\t\targs: args{\n\t\t\t\tips:       []string{ip192_0_2_2},\n\t\t\t\tserviceID: service,\n\t\t\t},\n\t\t\texistingDynamicAddrs: map[string][]string{\n\t\t\t\tservice: {\n\t\t\t\t\tip192_0_2_1,\n\t\t\t\t\tip192_0_2_2,\n\t\t\t\t\tip192_0_2_3,\n\t\t\t\t},\n\t\t\t},\n\t\t\twantEntry: true,\n\t\t\twantIPs:   []string{ip192_0_2_1, ip192_0_2_3},\n\t\t},\n\t\t{\n\t\t\tname: \"entries are removed from set\",\n\t\t\targs: args{\n\t\t\t\tips:       
[]string{ip192_0_2_1, ip192_0_2_3},\n\t\t\t\tserviceID: service,\n\t\t\t},\n\t\t\texistingDynamicAddrs: map[string][]string{\n\t\t\t\tservice: {\n\t\t\t\t\tip192_0_2_1,\n\t\t\t\t\tip192_0_2_2,\n\t\t\t\t\tip192_0_2_3,\n\t\t\t\t},\n\t\t\t},\n\t\t\twantEntry: true,\n\t\t\twantIPs:   []string{ip192_0_2_2},\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tipv4Handler.dynamicUpdates = tt.existingDynamicAddrs\n\t\t\tipv4Handler.deleteDynamicAddresses(tt.args.ips, tt.args.serviceID)\n\t\t\thave, ok := ipv4Handler.dynamicUpdates[tt.args.serviceID]\n\t\t\tif ok != tt.wantEntry {\n\t\t\t\tt.Errorf(\"%q wantEntry: %#v, haveEntry: %#v\", tt.args.serviceID, tt.wantEntry, ok)\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(tt.wantIPs, have) {\n\t\t\t\tt.Errorf(\"want: %#v, have: %#v\", tt.wantIPs, have)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_handler_updateDynamicAddresses(t *testing.T) {\n\ttype args struct {\n\t\tips       []string\n\t\tserviceID string\n\t}\n\n\ttests := []struct {\n\t\tname                 string\n\t\targs                 args\n\t\texistingDynamicAddrs map[string][]string\n\t\twantIPs              []string\n\t}{\n\t\t{\n\t\t\tname: \"nothing to add should create empty service entry\",\n\t\t\targs: args{\n\t\t\t\tips:       []string{},\n\t\t\t\tserviceID: service,\n\t\t\t},\n\t\t\texistingDynamicAddrs: map[string][]string{},\n\t\t\twantIPs:              []string{},\n\t\t},\n\t\t{\n\t\t\tname: \"an empty entry should simply add the IPs\",\n\t\t\targs: args{\n\t\t\t\tips:       []string{ip192_0_2_1, ip192_0_2_2},\n\t\t\t\tserviceID: service,\n\t\t\t},\n\t\t\texistingDynamicAddrs: map[string][]string{},\n\t\t\twantIPs:              []string{ip192_0_2_1, ip192_0_2_2},\n\t\t},\n\t\t{\n\t\t\tname: \"different IPs should always get added to the list\",\n\t\t\targs: args{\n\t\t\t\tips:       []string{ip192_0_2_2, ip192_0_2_3},\n\t\t\t\tserviceID: service,\n\t\t\t},\n\t\t\texistingDynamicAddrs: map[string][]string{\n\t\t\t\tservice: 
{ip192_0_2_1},\n\t\t\t},\n\t\t\twantIPs: []string{ip192_0_2_1, ip192_0_2_2, ip192_0_2_3},\n\t\t},\n\t\t{\n\t\t\tname: \"existing IPs should not be added\",\n\t\t\targs: args{\n\t\t\t\tips:       []string{ip192_0_2_2, ip192_0_2_3},\n\t\t\t\tserviceID: service,\n\t\t\t},\n\t\t\texistingDynamicAddrs: map[string][]string{\n\t\t\t\tservice: {ip192_0_2_2, ip192_0_2_3},\n\t\t\t},\n\t\t\twantIPs: []string{ip192_0_2_2, ip192_0_2_3},\n\t\t},\n\t\t{\n\t\t\tname: \"mix of existing and new\",\n\t\t\targs: args{\n\t\t\t\tips:       []string{ip192_0_2_2, ip192_0_2_3},\n\t\t\t\tserviceID: service,\n\t\t\t},\n\t\t\texistingDynamicAddrs: map[string][]string{\n\t\t\t\tservice: {ip192_0_2_1, ip192_0_2_2},\n\t\t\t},\n\t\t\twantIPs: []string{ip192_0_2_1, ip192_0_2_2, ip192_0_2_3},\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tipv4Handler.dynamicUpdates = tt.existingDynamicAddrs\n\t\t\tipv4Handler.updateDynamicAddresses(tt.args.ips, tt.args.serviceID)\n\t\t\thave, ok := ipv4Handler.dynamicUpdates[tt.args.serviceID]\n\t\t\tif !ok {\n\t\t\t\tt.Errorf(\"no entry for service %q\", tt.args.serviceID)\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(tt.wantIPs, have) {\n\t\t\t\tt.Errorf(\"want: %#v, have: %#v\", tt.wantIPs, have)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "controller/pkg/ipsetmanager/ipsetprovider.go",
    "content": "// +build linux darwin\n\npackage ipsetmanager\n\nimport (\n\t\"fmt\"\n\t\"os/exec\"\n\t\"strings\"\n\n\t\"github.com/aporeto-inc/go-ipset/ipset\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.uber.org/zap\"\n)\n\nvar (\n\t// path to aporeto-ipset\n\tipsetBinPath string\n)\n\n// IpsetProvider returns a fabric for Ipset.\ntype IpsetProvider interface {\n\tNewIpset(name string, ipsetType string, p *ipset.Params) (Ipset, error)\n\tGetIpset(name string) Ipset\n\tDestroyAll(prefix string) error\n\tListIPSets() ([]string, error)\n}\n\n// Ipset is an abstraction of all the methods an implementation of userspace\n// ipsets need to provide.\ntype Ipset interface {\n\tAdd(entry string, timeout int) error\n\tAddOption(entry string, option string, timeout int) error\n\tDel(entry string) error\n\tDestroy() error\n\tFlush() error\n\tTest(entry string) (bool, error)\n}\n\ntype goIpsetProvider struct{}\n\nvar instance IpsetProvider = &goIpsetProvider{}\n\nfunc ipsetCreateBitmapPort(setname string) error {\n\t//Bitmap type is not supported by the ipset library\n\tout, err := exec.Command(ipsetBinPath, \"create\", setname, \"bitmap:port\", \"range\", \"0-65535\", \"timeout\", \"0\").CombinedOutput()\n\tif err != nil {\n\t\tif strings.Contains(string(out), \"set with the same name already exists\") {\n\t\t\tzap.L().Warn(\"Set already exists - cleaning up\", zap.String(\"set name\", setname))\n\t\t\t// Clean up the existing set\n\t\t\tif _, cerr := exec.Command(ipsetBinPath, \"-F\", setname).CombinedOutput(); cerr != nil {\n\t\t\t\treturn fmt.Errorf(\"Failed to clean up existing ipset: %s\", err)\n\t\t\t}\n\t\t\treturn nil\n\t\t}\n\t\tzap.L().Error(\"Unable to create set\", zap.String(\"set name\", setname), zap.String(\"ipset-output\", string(out)))\n\t}\n\treturn err\n}\n\n// NewIpset returns an IpsetProvider interface based on the go-ipset\n// external package.\nfunc (i *goIpsetProvider) NewIpset(name string, ipsetType string, p 
*ipset.Params) (Ipset, error) {\n\n\t// Check if hashtype is a type of hash\n\tif strings.HasPrefix(ipsetType, \"hash:\") {\n\t\treturn ipset.New(name, ipsetType, p)\n\t}\n\n\tif err := ipsetCreateBitmapPort(name); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &ipset.IPSet{Name: name}, nil\n}\n\n// GetIpset gets the ipset object from the name.\nfunc (i *goIpsetProvider) GetIpset(name string) Ipset {\n\treturn &ipset.IPSet{\n\t\tName: name,\n\t}\n}\n\n// DestroyAll destroys all the ipsets with the given prefix\nfunc (i *goIpsetProvider) DestroyAll(prefix string) error {\n\treturn ipset.DestroyAll(prefix)\n}\n\nfunc (i *goIpsetProvider) ListIPSets() ([]string, error) {\n\n\tout, err := exec.Command(ipsetBinPath, \"-L\", \"-name\").CombinedOutput()\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to list ipsets:%s\", err)\n\t}\n\n\treturn strings.Split(string(out), \"\\n\"), nil\n}\n\nfunc newIpset(name string, ipsetType string, p *ipset.Params) (Ipset, error) {\n\treturn instance.NewIpset(name, ipsetType, p)\n}\n\nfunc getIpset(name string) Ipset {\n\treturn instance.GetIpset(name)\n}\n\nfunc destroyAll(prefix string) error {\n\treturn instance.DestroyAll(prefix)\n}\n\nfunc listIPSets() ([]string, error) {\n\treturn instance.ListIPSets()\n}\n\n//SetIpsetTestInstance sets a test instance of ipsetprovider\nfunc SetIpsetTestInstance(ipsetprovider IpsetProvider) {\n\tinstance = ipsetprovider\n}\n\n//SetIPsetPath sets the path for aporeto-ipset\nfunc SetIPsetPath() {\n\tipsetBinPath, _ = exec.LookPath(constants.IpsetBinaryName) // nolint: errcheck\n\t// tell the go-ipset package which ipset binary to use\n\tipset.Init(ipsetBinPath) // nolint: errcheck\n}\n"
  },
  {
    "path": "controller/pkg/ipsetmanager/ipsetprovider_windows.go",
    "content": "// +build windows\n\npackage ipsetmanager\n\nimport (\n\t\"fmt\"\n\n\t\"go.uber.org/zap\"\n\n\t\"github.com/aporeto-inc/go-ipset/ipset\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/frontman\"\n)\n\n// IpsetProvider is a factory for Ipset objects.\ntype IpsetProvider interface {\n\tNewIpset(name string, ipsetType string, p *ipset.Params) (Ipset, error)\n\tGetIpset(name string) Ipset\n\tDestroyAll(prefix string) error\n\tListIPSets() ([]string, error)\n}\n\n// Ipset is an abstraction of all the methods an implementation of userspace\n// ipsets need to provide.\ntype Ipset interface {\n\tAdd(entry string, timeout int) error\n\tAddOption(entry string, option string, timeout int) error\n\tDel(entry string) error\n\tDestroy() error\n\tFlush() error\n\tTest(entry string) (bool, error)\n}\n\ntype ipsetProvider struct{}\n\nvar instance IpsetProvider = &ipsetProvider{}\n\ntype winIPSet struct {\n\thandle uintptr\n\tname   string\n}\n\n// NewIpset returns an Ipset implementation backed by the Windows Frontman driver.\nfunc (i *ipsetProvider) NewIpset(name string, ipsetType string, p *ipset.Params) (Ipset, error) {\n\tipsetHandle, err := frontman.Wrapper.NewIpset(name, ipsetType)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn &winIPSet{ipsetHandle, name}, nil\n}\n\n// GetIpset gets the ipset object from the name.\n// Note that the interface can't return an error here, but since the call can fail on Windows,\n// we log the error and return an incomplete object, expecting a failure from Frontman on a later call.\nfunc (i *ipsetProvider) GetIpset(name string) Ipset {\n\tipsetHandle, err := frontman.Wrapper.GetIpset(name)\n\tif err != nil {\n\t\tzap.L().Error(fmt.Sprintf(\"failed to get ipset %s\", name), zap.Error(err))\n\t\treturn &winIPSet{0, name}\n\t}\n\treturn &winIPSet{ipsetHandle, name}\n}\n\n// DestroyAll destroys all the ipsets - it will fail if there are existing references\nfunc (i *ipsetProvider) DestroyAll(prefix string) error {\n\treturn 
frontman.Wrapper.DestroyAllIpsets(prefix)\n}\n\nfunc (i *ipsetProvider) ListIPSets() ([]string, error) {\n\treturn frontman.Wrapper.ListIpsets()\n}\n\n// IPsetProvider Returns a Go IPSet Provider\nfunc IPsetProvider() IpsetProvider {\n\treturn instance\n}\n\nfunc (w *winIPSet) Add(entry string, timeout int) error {\n\treturn frontman.Wrapper.IpsetAdd(w.handle, entry, timeout)\n}\n\nfunc (w *winIPSet) AddOption(entry string, option string, timeout int) error {\n\treturn frontman.Wrapper.IpsetAddOption(w.handle, entry, option, timeout)\n}\n\nfunc (w *winIPSet) Del(entry string) error {\n\treturn frontman.Wrapper.IpsetDelete(w.handle, entry)\n}\n\nfunc (w *winIPSet) Destroy() error {\n\treturn frontman.Wrapper.IpsetDestroy(w.handle, w.name)\n}\n\nfunc (w *winIPSet) Flush() error {\n\treturn frontman.Wrapper.IpsetFlush(w.handle)\n}\n\nfunc (w *winIPSet) Test(entry string) (bool, error) {\n\treturn frontman.Wrapper.IpsetTest(w.handle, entry)\n}\n\nfunc newIpset(name string, ipsetType string, p *ipset.Params) (Ipset, error) {\n\treturn IPsetProvider().NewIpset(name, ipsetType, p)\n}\n\nfunc getIpset(name string) Ipset {\n\treturn IPsetProvider().GetIpset(name)\n}\n\nfunc destroyAll(prefix string) error {\n\treturn IPsetProvider().DestroyAll(prefix)\n}\n\nfunc listIPSets() ([]string, error) {\n\treturn IPsetProvider().ListIPSets()\n}\n\n//SetIpsetTestInstance sets the test instance for ipsets\nfunc SetIpsetTestInstance(ipsetprovider IpsetProvider) {\n\tinstance = ipsetprovider\n}\n\n//SetIPsetPath is a no-op for windows\nfunc SetIPsetPath() {\n}\n"
  },
  {
    "path": "controller/pkg/ipsetmanager/ipsetprovidermock.go",
    "content": "package ipsetmanager\n\nimport (\n\t\"sync\"\n\t\"testing\"\n\n\t\"github.com/aporeto-inc/go-ipset/ipset\"\n)\n\ntype ipsetProviderMockedMethods struct {\n\tnewMockIPset   func(name string, hasht string, p *ipset.Params) (Ipset, error)\n\tgetMockIPset   func(name string) Ipset\n\tdestroyAllMock func(prefix string) error\n\tlistIPSetsMock func() ([]string, error)\n}\n\n// TestIpsetProvider is a test implementation for IpsetProvider\ntype TestIpsetProvider interface {\n\tIpsetProvider\n\tMockNewIpset(t *testing.T, impl func(name string, hasht string, p *ipset.Params) (Ipset, error))\n\tMockGetIpset(t *testing.T, impl func(name string) Ipset)\n\tMockDestroyAll(t *testing.T, impl func(string) error)\n\tMockListIPSets(t *testing.T, impl func() ([]string, error))\n}\n\ntype testIpsetProvider struct {\n\tmocks       map[*testing.T]*ipsetProviderMockedMethods\n\tlock        *sync.Mutex\n\tcurrentTest *testing.T\n}\n\n// NewTestIpsetProvider returns a new TestManipulator.\nfunc NewTestIpsetProvider() TestIpsetProvider {\n\treturn &testIpsetProvider{\n\t\tlock:  &sync.Mutex{},\n\t\tmocks: map[*testing.T]*ipsetProviderMockedMethods{},\n\t}\n}\n\nfunc (m *testIpsetProvider) MockNewIpset(t *testing.T, impl func(name string, hasht string, p *ipset.Params) (Ipset, error)) {\n\n\tm.currentMocks(t).newMockIPset = impl\n}\n\nfunc (m *testIpsetProvider) MockGetIpset(t *testing.T, impl func(name string) Ipset) {\n\tm.currentMocks(t).getMockIPset = impl\n}\n\nfunc (m *testIpsetProvider) MockDestroyAll(t *testing.T, impl func(string) error) {\n\n\tm.currentMocks(t).destroyAllMock = impl\n}\n\nfunc (m *testIpsetProvider) MockListIPSets(t *testing.T, impl func() ([]string, error)) {\n\n\tm.currentMocks(t).listIPSetsMock = impl\n}\n\nfunc (m *testIpsetProvider) NewIpset(name string, hasht string, p *ipset.Params) (Ipset, error) {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.newMockIPset != nil {\n\t\treturn mock.newMockIPset(name, hasht, 
p)\n\t}\n\n\treturn NewTestIpset(), nil\n}\n\nfunc (m *testIpsetProvider) GetIpset(name string) Ipset {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.newMockIPset != nil {\n\t\treturn mock.getMockIPset(name)\n\t}\n\n\treturn NewTestIpset()\n}\n\nfunc (m *testIpsetProvider) DestroyAll(prefix string) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.destroyAllMock != nil {\n\t\treturn mock.destroyAllMock(prefix)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIpsetProvider) ListIPSets() ([]string, error) {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.listIPSetsMock != nil {\n\t\treturn mock.listIPSetsMock()\n\t}\n\n\treturn nil, nil\n}\n\nfunc (m *testIpsetProvider) currentMocks(t *testing.T) *ipsetProviderMockedMethods {\n\tm.lock.Lock()\n\tdefer m.lock.Unlock()\n\n\tmocks := m.mocks[t]\n\n\tif mocks == nil {\n\t\tmocks = &ipsetProviderMockedMethods{}\n\t\tm.mocks[t] = mocks\n\t}\n\n\tm.currentTest = t\n\treturn mocks\n}\n\ntype ipsetMockedMethods struct {\n\taddMock       func(entry string, timeout int) error\n\taddOptionMock func(entry string, option string, timeout int) error\n\tdelMock       func(entry string) error\n\tdestroyMock   func() error\n\tflushMock     func() error\n\ttestMock      func(entry string) (bool, error)\n}\n\n// TestIpset is a test implementation for Ipset\ntype TestIpset interface {\n\tIpset\n\tMockAdd(t *testing.T, impl func(entry string, timeout int) error)\n\tMockAddOption(t *testing.T, impl func(entry string, option string, timeout int) error)\n\tMockDel(t *testing.T, impl func(entry string) error)\n\tMockDestroy(t *testing.T, impl func() error)\n\tMockFlush(t *testing.T, impl func() error)\n\tMockTest(t *testing.T, impl func(entry string) (bool, error))\n}\n\ntype testIpset struct {\n\tmocks       map[*testing.T]*ipsetMockedMethods\n\tlock        *sync.Mutex\n\tcurrentTest *testing.T\n}\n\n// NewTestIpset returns a new TestManipulator.\nfunc NewTestIpset() TestIpset 
{\n\treturn &testIpset{\n\t\tlock:  &sync.Mutex{},\n\t\tmocks: map[*testing.T]*ipsetMockedMethods{},\n\t}\n}\n\nfunc (m *testIpset) MockAdd(t *testing.T, impl func(entry string, timeout int) error) {\n\n\tm.currentMocks(t).addMock = impl\n}\n\nfunc (m *testIpset) MockAddOption(t *testing.T, impl func(entry string, option string, timeout int) error) {\n\n\tm.currentMocks(t).addOptionMock = impl\n}\n\nfunc (m *testIpset) MockDel(t *testing.T, impl func(entry string) error) {\n\n\tm.currentMocks(t).delMock = impl\n}\n\nfunc (m *testIpset) MockDestroy(t *testing.T, impl func() error) {\n\n\tm.currentMocks(t).destroyMock = impl\n}\n\nfunc (m *testIpset) MockFlush(t *testing.T, impl func() error) {\n\n\tm.currentMocks(t).flushMock = impl\n}\n\nfunc (m *testIpset) MockTest(t *testing.T, impl func(entry string) (bool, error)) {\n\n\tm.currentMocks(t).testMock = impl\n}\n\nfunc (m *testIpset) Add(entry string, timeout int) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.addMock != nil {\n\t\treturn mock.addMock(entry, timeout)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIpset) AddOption(entry string, option string, timeout int) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.addOptionMock != nil {\n\t\treturn mock.addOptionMock(entry, option, timeout)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIpset) Del(entry string) error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.delMock != nil {\n\t\treturn mock.delMock(entry)\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIpset) Destroy() error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.destroyMock != nil {\n\t\treturn mock.destroyMock()\n\t}\n\treturn nil\n\n}\n\nfunc (m *testIpset) Flush() error {\n\n\tif mock := m.currentMocks(m.currentTest); mock != nil && mock.flushMock != nil {\n\t\treturn mock.flushMock()\n\t}\n\n\treturn nil\n}\n\nfunc (m *testIpset) Test(entry string) (bool, error) {\n\n\tif mock := m.currentMocks(m.currentTest); mock 
!= nil && mock.testMock != nil {\n\t\treturn mock.testMock(entry)\n\t}\n\n\treturn false, nil\n}\n\nfunc (m *testIpset) currentMocks(t *testing.T) *ipsetMockedMethods {\n\tm.lock.Lock()\n\tdefer m.lock.Unlock()\n\n\tmocks := m.mocks[t]\n\n\tif mocks == nil {\n\t\tmocks = &ipsetMockedMethods{}\n\t\tm.mocks[t] = mocks\n\t}\n\n\tm.currentTest = t\n\treturn mocks\n}\n"
  },
  {
    "path": "controller/pkg/ipsetmanager/ipsets.go",
    "content": "package ipsetmanager\n\nimport (\n\t\"encoding/base64\"\n\t\"io\"\n\t\"net\"\n\t\"sync\"\n\n\tipsetpackage \"github.com/aporeto-inc/go-ipset/ipset\"\n\t\"github.com/spaolacci/murmur3\"\n\tprovider \"go.aporeto.io/trireme-lib/controller/pkg/aclprovider\"\n\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\t// IPv6DefaultIP is the default (any) network for IPv6\n\tIPv6DefaultIP = \"::/0\"\n\t// IPv4DefaultIP is the default (any) network for IPv4\n\tIPv4DefaultIP = \"0.0.0.0/0\"\n\t// IPsetV4 selects the ipsets for IPv4\n\tIPsetV4 = iota\n\t// IPsetV6 selects the ipsets for IPv6\n\tIPsetV6\n)\n\n// ACLManager is the interface used by the supervisor to create and manage\n// the ipsets corresponding to service IDs.\ntype ACLManager interface {\n\tAddToIPset(set provider.Ipset, data string) error\n\tDelFromIPset(set provider.Ipset, data string) error\n\n\tRegisterExternalNets(contextID string, extnets policy.IPRuleList) error\n\tDestroyUnusedIPsets()\n\tRemoveExternalNets(contextID string)\n\tGetIPsets(extnets policy.IPRuleList, ipver int) []string\n\tUpdateIPsets([]string, string)\n}\n\ntype ipsetInfo struct {\n\tcontextIDs map[string]bool\n\tname       string\n\taddresses  map[string]bool\n}\n\ntype handler struct {\n\tserviceIDtoIPset      map[string]*ipsetInfo\n\tcontextIDtoServiceIDs map[string]map[string]bool\n\tipset                 provider.IpsetProvider\n\tipsetPrefix           string\n\tipFilter              func(net.IP) bool\n\tipsetParams           *ipsetpackage.Params\n\ttoDestroy             []string\n}\n\ntype managerType struct {\n\tipv4Handler *handler\n\tipv6Handler *handler\n\tsync.RWMutex\n}\n\nconst (\n\tipv4String = \"v4-\"\n\tipv6String = \"v6-\"\n)\n\n// CreateIPsetManager creates a manager implementing the ACLManager interface\nfunc CreateIPsetManager(ipsetv4 provider.IpsetProvider, ipsetv6 provider.IpsetProvider) ACLManager {\n\treturn &managerType{\n\t\tipv4Handler: &handler{\n\t\t\tserviceIDtoIPset:      
map[string]*ipsetInfo{},\n\t\t\tcontextIDtoServiceIDs: map[string]map[string]bool{},\n\t\t\tipset:                 ipsetv4,\n\t\t\tipsetPrefix:           ipv4String,\n\t\t\tipFilter: func(ip net.IP) bool {\n\t\t\t\treturn (ip.To4() != nil)\n\t\t\t},\n\t\t\tipsetParams: &ipsetpackage.Params{},\n\t\t},\n\t\tipv6Handler: &handler{\n\t\t\tserviceIDtoIPset:      map[string]*ipsetInfo{},\n\t\t\tcontextIDtoServiceIDs: map[string]map[string]bool{},\n\t\t\tipset:                 ipsetv6,\n\t\t\tipsetPrefix:           ipv6String,\n\t\t\tipFilter: func(ip net.IP) bool {\n\t\t\t\treturn (ip.To4() == nil)\n\t\t\t},\n\t\t\tipsetParams: &ipsetpackage.Params{HashFamily: \"inet6\"},\n\t\t},\n\t}\n\n}\n\nfunc hashServiceID(serviceID string) string {\n\thash := murmur3.New64()\n\tif _, err := io.WriteString(hash, serviceID); err != nil {\n\t\treturn \"\"\n\t}\n\n\treturn base64.URLEncoding.EncodeToString(hash.Sum(nil))\n}\n\n// AddToIPset is called with the ipset provider and the ip address to be added\nfunc (m *managerType) AddToIPset(set provider.Ipset, data string) error {\n\n\t// ipset can not program this rule\n\tif data == IPv4DefaultIP {\n\t\tif err := m.AddToIPset(set, \"0.0.0.0/1\"); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn m.AddToIPset(set, \"128.0.0.0/1\")\n\t}\n\n\t// ipset can not program this rule\n\tif data == IPv6DefaultIP {\n\t\tif err := m.AddToIPset(set, \"::/1\"); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn m.AddToIPset(set, \"8000::/1\")\n\t}\n\n\treturn set.Add(data, 0)\n}\n\n// DelFromIPset is called with the ipset set provider and the ip to be removed from ipset\nfunc (m *managerType) DelFromIPset(set provider.Ipset, data string) error {\n\n\tif data == IPv4DefaultIP {\n\t\tif err := m.DelFromIPset(set, \"0.0.0.0/1\"); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn m.DelFromIPset(set, \"128.0.0.0/1\")\n\t}\n\n\tif data == IPv6DefaultIP {\n\t\tif err := m.DelFromIPset(set, \"::/1\"); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn 
m.DelFromIPset(set, \"8000::/1\")\n\t}\n\n\treturn set.Del(data)\n}\n\nfunc (m *managerType) synchronizeIPsinIpset(ipHandler *handler, ipsetInfo *ipsetInfo, addresses []string) {\n\tnewips := map[string]bool{}\n\tipsetHandler := ipHandler.ipset.GetIpset(ipsetInfo.name)\n\n\tfor _, address := range addresses {\n\t\tnetIP := net.ParseIP(address)\n\t\tif netIP == nil {\n\t\t\tnetIP, _, _ = net.ParseCIDR(address)\n\t\t}\n\n\t\tif !ipHandler.ipFilter(netIP) {\n\t\t\tcontinue\n\t\t}\n\n\t\tnewips[address] = true\n\n\t\tif _, ok := ipsetInfo.addresses[address]; !ok {\n\t\t\tif err := m.AddToIPset(ipsetHandler, address); err != nil {\n\t\t\t\tzap.L().Error(\"Error adding IPs to ipset\", zap.String(\"ipset\", ipsetInfo.name), zap.String(\"address\", address))\n\t\t\t}\n\t\t}\n\t\tdelete(ipsetInfo.addresses, address)\n\t}\n\n\t// Remove the old entries\n\tfor address, val := range ipsetInfo.addresses {\n\t\tif val {\n\t\t\tif err := m.DelFromIPset(ipsetHandler, address); err != nil {\n\t\t\t\tzap.L().Error(\"Error removing IPs from ipset\", zap.String(\"ipset\", ipsetInfo.name), zap.String(\"address\", address))\n\t\t\t}\n\t\t}\n\t}\n\n\tipsetInfo.addresses = newips\n}\n\nfunc createIPset(ipHandler *handler, serviceID string) (*ipsetInfo, error) {\n\tipsetName := \"TRI-\" + ipHandler.ipsetPrefix + \"ext-\" + hashServiceID(serviceID)\n\t_, err := ipHandler.ipset.NewIpset(ipsetName, \"hash:net\", ipHandler.ipsetParams)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tipset := &ipsetInfo{contextIDs: map[string]bool{}, name: ipsetName, addresses: map[string]bool{}}\n\tipHandler.serviceIDtoIPset[serviceID] = ipset\n\n\treturn ipset, nil\n}\n\nfunc deleteServiceID(ipHandler *handler, serviceID string) {\n\tipsetInfo := ipHandler.serviceIDtoIPset[serviceID]\n\tipHandler.toDestroy = append(ipHandler.toDestroy, ipsetInfo.name)\n\tdelete(ipHandler.serviceIDtoIPset, serviceID)\n}\n\nfunc reduceReferenceFromServiceID(ipHandler *handler, contextID string, serviceID string) {\n\tvar 
ipset *ipsetInfo\n\n\tif ipset = ipHandler.serviceIDtoIPset[serviceID]; ipset == nil {\n\t\tzap.L().Error(\"Could not find ipset corresponding to serviceID\", zap.String(\"serviceID\", serviceID))\n\t\treturn\n\t}\n\n\tdelete(ipset.contextIDs, contextID)\n\n\t// there are no references from any pu. safe to destroy now\n\tif len(ipset.contextIDs) == 0 {\n\t\tdeleteServiceID(ipHandler, serviceID)\n\t}\n}\n\n// RegisterExternalNets registers the contextID and the corresponding serviceIDs\nfunc (m *managerType) RegisterExternalNets(contextID string, extnets policy.IPRuleList) error {\n\tm.Lock()\n\tdefer m.Unlock()\n\n\tprocessExtnets := func(ipHandler *handler) error {\n\t\tfor _, extnet := range extnets {\n\t\t\tvar ipset *ipsetInfo\n\n\t\t\tserviceID := extnet.Policy.ServiceID\n\t\t\tif ipset = ipHandler.serviceIDtoIPset[serviceID]; ipset == nil {\n\t\t\t\tvar err error\n\t\t\t\tif ipset, err = createIPset(ipHandler, serviceID); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tm.synchronizeIPsinIpset(ipHandler, ipset, extnet.Addresses)\n\t\t\t// have a backreference from serviceID to contextID\n\t\t\tipset.contextIDs[contextID] = true\n\t\t}\n\n\t\treturn nil\n\t}\n\n\tprocessOlderExtnets := func(ipHandler *handler) {\n\t\tnewExtnets := map[string]bool{}\n\n\t\tfor _, extnet := range extnets {\n\n\t\t\tserviceID := extnet.Policy.ServiceID\n\t\t\tnewExtnets[serviceID] = true\n\t\t\tm, ok := ipHandler.contextIDtoServiceIDs[contextID]\n\n\t\t\tif ok && m[serviceID] {\n\t\t\t\tdelete(m, serviceID)\n\t\t\t}\n\t\t}\n\n\t\tfor serviceID := range ipHandler.contextIDtoServiceIDs[contextID] {\n\t\t\treduceReferenceFromServiceID(ipHandler, contextID, serviceID)\n\t\t}\n\n\t\tipHandler.contextIDtoServiceIDs[contextID] = newExtnets\n\t}\n\n\tif err := processExtnets(m.ipv4Handler); err != nil {\n\t\treturn err\n\t}\n\n\tif err := processExtnets(m.ipv6Handler); err != nil {\n\t\treturn 
err\n\t}\n\n\tprocessOlderExtnets(m.ipv4Handler)\n\tprocessOlderExtnets(m.ipv6Handler)\n\n\treturn nil\n}\n\n// DestroyUnusedIPsets destroys the unused ipsets.\nfunc (m *managerType) DestroyUnusedIPsets() {\n\tm.Lock()\n\tdefer m.Unlock()\n\n\tdestroy := func(ipHandler *handler) {\n\t\tfor _, ipsetName := range ipHandler.toDestroy {\n\t\t\tipsetHandler := ipHandler.ipset.GetIpset(ipsetName)\n\t\t\tif err := ipsetHandler.Destroy(); err != nil {\n\t\t\t\tzap.L().Warn(\"Failed to destroy ipset\", zap.String(\"ipset\", ipsetName), zap.Error(err))\n\t\t\t}\n\t\t}\n\n\t\tipHandler.toDestroy = nil\n\t}\n\n\tdestroy(m.ipv4Handler)\n\tdestroy(m.ipv6Handler)\n}\n\n// RemoveExternalNets is called when the contextID is being unsupervised such that all the external nets can be deleted.\nfunc (m *managerType) RemoveExternalNets(contextID string) {\n\tm.Lock()\n\n\tprocess := func(ipHandler *handler) {\n\t\tm, ok := ipHandler.contextIDtoServiceIDs[contextID]\n\t\tif ok {\n\t\t\tfor serviceID := range m {\n\t\t\t\treduceReferenceFromServiceID(ipHandler, contextID, serviceID)\n\t\t\t}\n\t\t}\n\n\t\tdelete(ipHandler.contextIDtoServiceIDs, contextID)\n\t}\n\n\tprocess(m.ipv4Handler)\n\tprocess(m.ipv6Handler)\n\n\tm.Unlock()\n\tm.DestroyUnusedIPsets()\n}\n\n// GetIPsets returns the ipset names corresponding to the serviceIDs.\nfunc (m *managerType) GetIPsets(extnets policy.IPRuleList, ipver int) []string {\n\tm.Lock()\n\tdefer m.Unlock()\n\n\tvar ipHandler *handler\n\n\tif ipver == IPsetV4 {\n\t\tipHandler = m.ipv4Handler\n\t} else {\n\t\tipHandler = m.ipv6Handler\n\t}\n\n\tvar ipsets []string\n\n\tfor _, extnet := range extnets {\n\t\tserviceID := extnet.Policy.ServiceID\n\n\t\tipsetInfo, ok := ipHandler.serviceIDtoIPset[serviceID]\n\t\tif ok {\n\t\t\tipsets = append(ipsets, ipsetInfo.name)\n\t\t}\n\t}\n\n\treturn ipsets\n}\n\n// UpdateIPsets updates the ip addresses in the ipsets corresponding to the serviceID\nfunc (m *managerType) UpdateIPsets(addresses []string, serviceID string) 
{\n\tm.Lock()\n\tdefer m.Unlock()\n\n\tprocess := func(ipHandler *handler) {\n\t\tfor _, address := range addresses {\n\t\t\tnetIP := net.ParseIP(address)\n\t\t\tif netIP == nil {\n\t\t\t\tnetIP, _, _ = net.ParseCIDR(address)\n\t\t\t}\n\n\t\t\tif !ipHandler.ipFilter(netIP) {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tif ipset := ipHandler.serviceIDtoIPset[serviceID]; ipset != nil {\n\t\t\t\tipsetHandler := ipHandler.ipset.GetIpset(ipset.name)\n\t\t\t\tif err := m.AddToIPset(ipsetHandler, address); err != nil {\n\t\t\t\t\tzap.L().Error(\"Error adding IPs to ipset\", zap.String(\"ipset\", ipset.name), zap.String(\"address\", address))\n\t\t\t\t}\n\t\t\t\tipset.addresses[address] = true\n\t\t\t}\n\t\t}\n\t}\n\n\tprocess(m.ipv4Handler)\n\tprocess(m.ipv6Handler)\n}\n"
  },
  {
    "path": "controller/pkg/ipsetmanager/mock_ipsetmanager/ipsetmanagermock.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: ipsetmanager.go\n\n// Package mock_ipsetmanager is a generated GoMock package.\npackage mock_ipsetmanager\n\nimport (\n\tgomock \"github.com/golang/mock/gomock\"\n\tpolicy \"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\tnet \"net\"\n\treflect \"reflect\"\n)\n\n// MockServerL3 is a mock of ServerL3 interface\ntype MockServerL3 struct {\n\tctrl     *gomock.Controller\n\trecorder *MockServerL3MockRecorder\n}\n\n// MockServerL3MockRecorder is the mock recorder for MockServerL3\ntype MockServerL3MockRecorder struct {\n\tmock *MockServerL3\n}\n\n// NewMockServerL3 creates a new mock instance\nfunc NewMockServerL3(ctrl *gomock.Controller) *MockServerL3 {\n\tmock := &MockServerL3{ctrl: ctrl}\n\tmock.recorder = &MockServerL3MockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockServerL3) EXPECT() *MockServerL3MockRecorder {\n\treturn m.recorder\n}\n\n// CreateServerPortSet mocks base method\nfunc (m *MockServerL3) CreateServerPortSet(contextID string) error {\n\tret := m.ctrl.Call(m, \"CreateServerPortSet\", contextID)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// CreateServerPortSet indicates an expected call of CreateServerPortSet\nfunc (mr *MockServerL3MockRecorder) CreateServerPortSet(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateServerPortSet\", reflect.TypeOf((*MockServerL3)(nil).CreateServerPortSet), contextID)\n}\n\n// GetServerPortSetName mocks base method\nfunc (m *MockServerL3) GetServerPortSetName(contextID string) string {\n\tret := m.ctrl.Call(m, \"GetServerPortSetName\", contextID)\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// GetServerPortSetName indicates an expected call of GetServerPortSetName\nfunc (mr *MockServerL3MockRecorder) GetServerPortSetName(contextID interface{}) *gomock.Call {\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetServerPortSetName\", reflect.TypeOf((*MockServerL3)(nil).GetServerPortSetName), contextID)\n}\n\n// DestroyServerPortSet mocks base method\nfunc (m *MockServerL3) DestroyServerPortSet(contextID string) error {\n\tret := m.ctrl.Call(m, \"DestroyServerPortSet\", contextID)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// DestroyServerPortSet indicates an expected call of DestroyServerPortSet\nfunc (mr *MockServerL3MockRecorder) DestroyServerPortSet(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DestroyServerPortSet\", reflect.TypeOf((*MockServerL3)(nil).DestroyServerPortSet), contextID)\n}\n\n// AddPortToServerPortSet mocks base method\nfunc (m *MockServerL3) AddPortToServerPortSet(contextID, port string) error {\n\tret := m.ctrl.Call(m, \"AddPortToServerPortSet\", contextID, port)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// AddPortToServerPortSet indicates an expected call of AddPortToServerPortSet\nfunc (mr *MockServerL3MockRecorder) AddPortToServerPortSet(contextID, port interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AddPortToServerPortSet\", reflect.TypeOf((*MockServerL3)(nil).AddPortToServerPortSet), contextID, port)\n}\n\n// DeletePortFromServerPortSet mocks base method\nfunc (m *MockServerL3) DeletePortFromServerPortSet(contextID, port string) error {\n\tret := m.ctrl.Call(m, \"DeletePortFromServerPortSet\", contextID, port)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// DeletePortFromServerPortSet indicates an expected call of DeletePortFromServerPortSet\nfunc (mr *MockServerL3MockRecorder) DeletePortFromServerPortSet(contextID, port interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DeletePortFromServerPortSet\", reflect.TypeOf((*MockServerL3)(nil).DeletePortFromServerPortSet), contextID, port)\n}\n\n// MockAclL3 is a mock of AclL3 interface\ntype MockAclL3 struct 
{\n\tctrl     *gomock.Controller\n\trecorder *MockAclL3MockRecorder\n}\n\n// MockAclL3MockRecorder is the mock recorder for MockAclL3\ntype MockAclL3MockRecorder struct {\n\tmock *MockAclL3\n}\n\n// NewMockAclL3 creates a new mock instance\nfunc NewMockAclL3(ctrl *gomock.Controller) *MockAclL3 {\n\tmock := &MockAclL3{ctrl: ctrl}\n\tmock.recorder = &MockAclL3MockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockAclL3) EXPECT() *MockAclL3MockRecorder {\n\treturn m.recorder\n}\n\n// RegisterExternalNets mocks base method\nfunc (m *MockAclL3) RegisterExternalNets(contextID string, extnets policy.IPRuleList) error {\n\tret := m.ctrl.Call(m, \"RegisterExternalNets\", contextID, extnets)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// RegisterExternalNets indicates an expected call of RegisterExternalNets\nfunc (mr *MockAclL3MockRecorder) RegisterExternalNets(contextID, extnets interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RegisterExternalNets\", reflect.TypeOf((*MockAclL3)(nil).RegisterExternalNets), contextID, extnets)\n}\n\n// UpdateACLIPsets mocks base method\nfunc (m *MockAclL3) UpdateACLIPsets(arg0 []string, arg1 string) {\n\tm.ctrl.Call(m, \"UpdateACLIPsets\", arg0, arg1)\n}\n\n// UpdateACLIPsets indicates an expected call of UpdateACLIPsets\nfunc (mr *MockAclL3MockRecorder) UpdateACLIPsets(arg0, arg1 interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateACLIPsets\", reflect.TypeOf((*MockAclL3)(nil).UpdateACLIPsets), arg0, arg1)\n}\n\n// DestroyUnusedIPsets mocks base method\nfunc (m *MockAclL3) DestroyUnusedIPsets() {\n\tm.ctrl.Call(m, \"DestroyUnusedIPsets\")\n}\n\n// DestroyUnusedIPsets indicates an expected call of DestroyUnusedIPsets\nfunc (mr *MockAclL3MockRecorder) DestroyUnusedIPsets() *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DestroyUnusedIPsets\", 
reflect.TypeOf((*MockAclL3)(nil).DestroyUnusedIPsets))\n}\n\n// RemoveExternalNets mocks base method\nfunc (m *MockAclL3) RemoveExternalNets(contextID string) {\n\tm.ctrl.Call(m, \"RemoveExternalNets\", contextID)\n}\n\n// RemoveExternalNets indicates an expected call of RemoveExternalNets\nfunc (mr *MockAclL3MockRecorder) RemoveExternalNets(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RemoveExternalNets\", reflect.TypeOf((*MockAclL3)(nil).RemoveExternalNets), contextID)\n}\n\n// GetACLIPsetsNames mocks base method\nfunc (m *MockAclL3) GetACLIPsetsNames(extnets policy.IPRuleList) []string {\n\tret := m.ctrl.Call(m, \"GetACLIPsetsNames\", extnets)\n\tret0, _ := ret[0].([]string)\n\treturn ret0\n}\n\n// GetACLIPsetsNames indicates an expected call of GetACLIPsetsNames\nfunc (mr *MockAclL3MockRecorder) GetACLIPsetsNames(extnets interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetACLIPsetsNames\", reflect.TypeOf((*MockAclL3)(nil).GetACLIPsetsNames), extnets)\n}\n\n// MockProxyL4 is a mock of ProxyL4 interface\ntype MockProxyL4 struct {\n\tctrl     *gomock.Controller\n\trecorder *MockProxyL4MockRecorder\n}\n\n// MockProxyL4MockRecorder is the mock recorder for MockProxyL4\ntype MockProxyL4MockRecorder struct {\n\tmock *MockProxyL4\n}\n\n// NewMockProxyL4 creates a new mock instance\nfunc NewMockProxyL4(ctrl *gomock.Controller) *MockProxyL4 {\n\tmock := &MockProxyL4{ctrl: ctrl}\n\tmock.recorder = &MockProxyL4MockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockProxyL4) EXPECT() *MockProxyL4MockRecorder {\n\treturn m.recorder\n}\n\n// CreateProxySets mocks base method\nfunc (m *MockProxyL4) CreateProxySets(contextID string) error {\n\tret := m.ctrl.Call(m, \"CreateProxySets\", contextID)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// CreateProxySets indicates an expected call of CreateProxySets\nfunc 
(mr *MockProxyL4MockRecorder) CreateProxySets(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateProxySets\", reflect.TypeOf((*MockProxyL4)(nil).CreateProxySets), contextID)\n}\n\n// GetProxySetNames mocks base method\nfunc (m *MockProxyL4) GetProxySetNames(contextID string) (string, string) {\n\tret := m.ctrl.Call(m, \"GetProxySetNames\", contextID)\n\tret0, _ := ret[0].(string)\n\tret1, _ := ret[1].(string)\n\treturn ret0, ret1\n}\n\n// GetProxySetNames indicates an expected call of GetProxySetNames\nfunc (mr *MockProxyL4MockRecorder) GetProxySetNames(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetProxySetNames\", reflect.TypeOf((*MockProxyL4)(nil).GetProxySetNames), contextID)\n}\n\n// DestroyProxySets mocks base method\nfunc (m *MockProxyL4) DestroyProxySets(contextID string) {\n\tm.ctrl.Call(m, \"DestroyProxySets\", contextID)\n}\n\n// DestroyProxySets indicates an expected call of DestroyProxySets\nfunc (mr *MockProxyL4MockRecorder) DestroyProxySets(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DestroyProxySets\", reflect.TypeOf((*MockProxyL4)(nil).DestroyProxySets), contextID)\n}\n\n// FlushProxySets mocks base method\nfunc (m *MockProxyL4) FlushProxySets(contextID string) {\n\tm.ctrl.Call(m, \"FlushProxySets\", contextID)\n}\n\n// FlushProxySets indicates an expected call of FlushProxySets\nfunc (mr *MockProxyL4MockRecorder) FlushProxySets(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"FlushProxySets\", reflect.TypeOf((*MockProxyL4)(nil).FlushProxySets), contextID)\n}\n\n// AddIPPortToDependentService mocks base method\nfunc (m *MockProxyL4) AddIPPortToDependentService(contextID string, ip *net.IPNet, port string) error {\n\tret := m.ctrl.Call(m, \"AddIPPortToDependentService\", contextID, ip, port)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// 
AddIPPortToDependentService indicates an expected call of AddIPPortToDependentService\nfunc (mr *MockProxyL4MockRecorder) AddIPPortToDependentService(contextID, ip, port interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AddIPPortToDependentService\", reflect.TypeOf((*MockProxyL4)(nil).AddIPPortToDependentService), contextID, ip, port)\n}\n\n// AddPortToExposedService mocks base method\nfunc (m *MockProxyL4) AddPortToExposedService(contextID, port string) error {\n\tret := m.ctrl.Call(m, \"AddPortToExposedService\", contextID, port)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// AddPortToExposedService indicates an expected call of AddPortToExposedService\nfunc (mr *MockProxyL4MockRecorder) AddPortToExposedService(contextID, port interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AddPortToExposedService\", reflect.TypeOf((*MockProxyL4)(nil).AddPortToExposedService), contextID, port)\n}\n\n// MockIPSetManager is a mock of IPSetManager interface\ntype MockIPSetManager struct {\n\tctrl     *gomock.Controller\n\trecorder *MockIPSetManagerMockRecorder\n}\n\n// MockIPSetManagerMockRecorder is the mock recorder for MockIPSetManager\ntype MockIPSetManagerMockRecorder struct {\n\tmock *MockIPSetManager\n}\n\n// NewMockIPSetManager creates a new mock instance\nfunc NewMockIPSetManager(ctrl *gomock.Controller) *MockIPSetManager {\n\tmock := &MockIPSetManager{ctrl: ctrl}\n\tmock.recorder = &MockIPSetManagerMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockIPSetManager) EXPECT() *MockIPSetManagerMockRecorder {\n\treturn m.recorder\n}\n\n// CreateServerPortSet mocks base method\nfunc (m *MockIPSetManager) CreateServerPortSet(contextID string) error {\n\tret := m.ctrl.Call(m, \"CreateServerPortSet\", contextID)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// CreateServerPortSet indicates an expected call of 
CreateServerPortSet\nfunc (mr *MockIPSetManagerMockRecorder) CreateServerPortSet(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateServerPortSet\", reflect.TypeOf((*MockIPSetManager)(nil).CreateServerPortSet), contextID)\n}\n\n// GetServerPortSetName mocks base method\nfunc (m *MockIPSetManager) GetServerPortSetName(contextID string) string {\n\tret := m.ctrl.Call(m, \"GetServerPortSetName\", contextID)\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// GetServerPortSetName indicates an expected call of GetServerPortSetName\nfunc (mr *MockIPSetManagerMockRecorder) GetServerPortSetName(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetServerPortSetName\", reflect.TypeOf((*MockIPSetManager)(nil).GetServerPortSetName), contextID)\n}\n\n// DestroyServerPortSet mocks base method\nfunc (m *MockIPSetManager) DestroyServerPortSet(contextID string) error {\n\tret := m.ctrl.Call(m, \"DestroyServerPortSet\", contextID)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// DestroyServerPortSet indicates an expected call of DestroyServerPortSet\nfunc (mr *MockIPSetManagerMockRecorder) DestroyServerPortSet(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DestroyServerPortSet\", reflect.TypeOf((*MockIPSetManager)(nil).DestroyServerPortSet), contextID)\n}\n\n// AddPortToServerPortSet mocks base method\nfunc (m *MockIPSetManager) AddPortToServerPortSet(contextID, port string) error {\n\tret := m.ctrl.Call(m, \"AddPortToServerPortSet\", contextID, port)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// AddPortToServerPortSet indicates an expected call of AddPortToServerPortSet\nfunc (mr *MockIPSetManagerMockRecorder) AddPortToServerPortSet(contextID, port interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AddPortToServerPortSet\", reflect.TypeOf((*MockIPSetManager)(nil).AddPortToServerPortSet), 
contextID, port)\n}\n\n// DeletePortFromServerPortSet mocks base method\nfunc (m *MockIPSetManager) DeletePortFromServerPortSet(contextID, port string) error {\n\tret := m.ctrl.Call(m, \"DeletePortFromServerPortSet\", contextID, port)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// DeletePortFromServerPortSet indicates an expected call of DeletePortFromServerPortSet\nfunc (mr *MockIPSetManagerMockRecorder) DeletePortFromServerPortSet(contextID, port interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DeletePortFromServerPortSet\", reflect.TypeOf((*MockIPSetManager)(nil).DeletePortFromServerPortSet), contextID, port)\n}\n\n// RegisterExternalNets mocks base method\nfunc (m *MockIPSetManager) RegisterExternalNets(contextID string, extnets policy.IPRuleList) error {\n\tret := m.ctrl.Call(m, \"RegisterExternalNets\", contextID, extnets)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// RegisterExternalNets indicates an expected call of RegisterExternalNets\nfunc (mr *MockIPSetManagerMockRecorder) RegisterExternalNets(contextID, extnets interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RegisterExternalNets\", reflect.TypeOf((*MockIPSetManager)(nil).RegisterExternalNets), contextID, extnets)\n}\n\n// UpdateACLIPsets mocks base method\nfunc (m *MockIPSetManager) UpdateACLIPsets(arg0 []string, arg1 string) {\n\tm.ctrl.Call(m, \"UpdateACLIPsets\", arg0, arg1)\n}\n\n// UpdateACLIPsets indicates an expected call of UpdateACLIPsets\nfunc (mr *MockIPSetManagerMockRecorder) UpdateACLIPsets(arg0, arg1 interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateACLIPsets\", reflect.TypeOf((*MockIPSetManager)(nil).UpdateACLIPsets), arg0, arg1)\n}\n\n// DestroyUnusedIPsets mocks base method\nfunc (m *MockIPSetManager) DestroyUnusedIPsets() {\n\tm.ctrl.Call(m, \"DestroyUnusedIPsets\")\n}\n\n// DestroyUnusedIPsets indicates an expected call of DestroyUnusedIPsets\nfunc (mr 
*MockIPSetManagerMockRecorder) DestroyUnusedIPsets() *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DestroyUnusedIPsets\", reflect.TypeOf((*MockIPSetManager)(nil).DestroyUnusedIPsets))\n}\n\n// RemoveExternalNets mocks base method\nfunc (m *MockIPSetManager) RemoveExternalNets(contextID string) {\n\tm.ctrl.Call(m, \"RemoveExternalNets\", contextID)\n}\n\n// RemoveExternalNets indicates an expected call of RemoveExternalNets\nfunc (mr *MockIPSetManagerMockRecorder) RemoveExternalNets(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RemoveExternalNets\", reflect.TypeOf((*MockIPSetManager)(nil).RemoveExternalNets), contextID)\n}\n\n// GetACLIPsetsNames mocks base method\nfunc (m *MockIPSetManager) GetACLIPsetsNames(extnets policy.IPRuleList) []string {\n\tret := m.ctrl.Call(m, \"GetACLIPsetsNames\", extnets)\n\tret0, _ := ret[0].([]string)\n\treturn ret0\n}\n\n// GetACLIPsetsNames indicates an expected call of GetACLIPsetsNames\nfunc (mr *MockIPSetManagerMockRecorder) GetACLIPsetsNames(extnets interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetACLIPsetsNames\", reflect.TypeOf((*MockIPSetManager)(nil).GetACLIPsetsNames), extnets)\n}\n\n// CreateProxySets mocks base method\nfunc (m *MockIPSetManager) CreateProxySets(contextID string) error {\n\tret := m.ctrl.Call(m, \"CreateProxySets\", contextID)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// CreateProxySets indicates an expected call of CreateProxySets\nfunc (mr *MockIPSetManagerMockRecorder) CreateProxySets(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateProxySets\", reflect.TypeOf((*MockIPSetManager)(nil).CreateProxySets), contextID)\n}\n\n// GetProxySetNames mocks base method\nfunc (m *MockIPSetManager) GetProxySetNames(contextID string) (string, string) {\n\tret := m.ctrl.Call(m, \"GetProxySetNames\", contextID)\n\tret0, _ := 
ret[0].(string)\n\tret1, _ := ret[1].(string)\n\treturn ret0, ret1\n}\n\n// GetProxySetNames indicates an expected call of GetProxySetNames\nfunc (mr *MockIPSetManagerMockRecorder) GetProxySetNames(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetProxySetNames\", reflect.TypeOf((*MockIPSetManager)(nil).GetProxySetNames), contextID)\n}\n\n// DestroyProxySets mocks base method\nfunc (m *MockIPSetManager) DestroyProxySets(contextID string) {\n\tm.ctrl.Call(m, \"DestroyProxySets\", contextID)\n}\n\n// DestroyProxySets indicates an expected call of DestroyProxySets\nfunc (mr *MockIPSetManagerMockRecorder) DestroyProxySets(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DestroyProxySets\", reflect.TypeOf((*MockIPSetManager)(nil).DestroyProxySets), contextID)\n}\n\n// FlushProxySets mocks base method\nfunc (m *MockIPSetManager) FlushProxySets(contextID string) {\n\tm.ctrl.Call(m, \"FlushProxySets\", contextID)\n}\n\n// FlushProxySets indicates an expected call of FlushProxySets\nfunc (mr *MockIPSetManagerMockRecorder) FlushProxySets(contextID interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"FlushProxySets\", reflect.TypeOf((*MockIPSetManager)(nil).FlushProxySets), contextID)\n}\n\n// AddIPPortToDependentService mocks base method\nfunc (m *MockIPSetManager) AddIPPortToDependentService(contextID string, ip *net.IPNet, port string) error {\n\tret := m.ctrl.Call(m, \"AddIPPortToDependentService\", contextID, ip, port)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// AddIPPortToDependentService indicates an expected call of AddIPPortToDependentService\nfunc (mr *MockIPSetManagerMockRecorder) AddIPPortToDependentService(contextID, ip, port interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AddIPPortToDependentService\", reflect.TypeOf((*MockIPSetManager)(nil).AddIPPortToDependentService), contextID, ip, 
port)\n}\n\n// AddPortToExposedService mocks base method\nfunc (m *MockIPSetManager) AddPortToExposedService(contextID, port string) error {\n\tret := m.ctrl.Call(m, \"AddPortToExposedService\", contextID, port)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// AddPortToExposedService indicates an expected call of AddPortToExposedService\nfunc (mr *MockIPSetManagerMockRecorder) AddPortToExposedService(contextID, port interface{}) *gomock.Call {\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AddPortToExposedService\", reflect.TypeOf((*MockIPSetManager)(nil).AddPortToExposedService), contextID, port)\n}\n"
  },
  {
    "path": "controller/pkg/packet/constants.go",
    "content": "package packet\n\nconst (\n\t// minTCPIPPacketLen is the min IP packet size for a TCP packet\n\tminTCPIPPacketLen = 20\n\n\t// minUDPIPPacketLen is the min IP packet size for a UDP packet\n\tminUDPIPPacketLen = 8\n\n\t// minICMPIPPacketLen is the min IP packet size for an ICMP packet\n\tminICMPIPPacketLen = 8\n\n\t// minIPv4HdrSize is the min IPv4 header size\n\tminIPv4HdrSize = 20\n\n\t// minTCPHeaderLen is the min TCP header length\n\tminTCPHeaderLen = 20\n)\n\n// IP Header field position constants\nconst (\n\t// ipv4HdrLenPos is the location of the IP header length\n\tipv4HdrLenPos = 0\n\n\t// ipv4LengthPos is the location of the IP (entire packet) length\n\tipv4LengthPos = 2\n\n\t// ipv4IDPos is the location of the IP identifier\n\tipv4IDPos = 4\n\n\t// ipv4ProtoPos is the location of the IP protocol\n\tipv4ProtoPos = 9\n\n\t// ipv4ChecksumPos is the location of the IP checksum\n\tipv4ChecksumPos = 10\n\n\t// ipv4SourceAddrPos is the location of the source IP address\n\tipv4SourceAddrPos = 12\n\n\t// ipv4DestAddrPos is the location of the destination IP address\n\tipv4DestAddrPos = 16\n)\n\nconst (\n\t// ipv6HeaderLen is the length of the ipv6 header\n\tipv6HeaderLen = 40\n\n\t// ipv6VersionPos is the position in the buffer where the version is stored\n\tipv6VersionPos = 0\n\n\t// ipv6PayloadLenPos is the position in the buffer where the size of the payload is stored\n\tipv6PayloadLenPos = 4\n\n\t// ipv6ProtoPos is the position in the buffer where the protocol (tcp/udp) is stored\n\tipv6ProtoPos = 6\n\n\t// ipv6SourceAddrPos is the position in the buffer where the ipv6 source address is stored\n\tipv6SourceAddrPos = 8\n\n\t// ipv6DestAddrPos is the position in the buffer where the ipv6 dest address is stored\n\tipv6DestAddrPos = 24\n)\n\n// IP Protocol numbers\nconst (\n\t// IPProtocolTCP defines the constant for the TCP protocol number\n\tIPProtocolTCP = 6\n\n\t// IPProtocolUDP defines the constant for the UDP protocol number\n\tIPProtocolUDP = 17\n\n\t// IPProtocolICMP defines the constant for the ICMP protocol number\n\tIPProtocolICMP = 1\n)\n\n// IP Header 
masks\nconst (\n\tipv4HdrLenMask = 0x0F\n\tipv4ProtoMask  = 0xF0\n)\n\n// TCP Header field position constants\nconst (\n\t// tcpSourcePortPos is the location of source port\n\ttcpSourcePortPos = 0\n\n\t// tcpDestPortPos is the location of destination port\n\ttcpDestPortPos = 2\n\n\t// tcpSeqPos is the location of seq\n\ttcpSeqPos = 4\n\n\t// tcpAckPos is the location of ack\n\ttcpAckPos = 8\n\n\t// tcpDataOffsetPos is the location of the TCP data offset\n\ttcpDataOffsetPos = 12\n\n\t// tcpFlagsOffsetPos is the location of the TCP flags\n\ttcpFlagsOffsetPos = 13\n\n\t// tcpChecksumPos is the location of the TCP checksum\n\ttcpChecksumPos = 16\n)\n\n// TCP Header masks\nconst (\n\t// tcpDataOffsetMask is a mask for TCP data offset field\n\ttcpDataOffsetMask = 0xF0\n\n\t// TCPSynMask is a mask for the TCP Syn flags\n\tTCPSynMask = 0x2\n\n\t// TCPSynAckMask mask identifies a TCP SYN-ACK packet\n\tTCPSynAckMask = 0x12\n\n\t// TCPRstMask mask that identifies RST packets\n\tTCPRstMask = 0x4\n\n\t// TCPAckMask mask that identifies ACK packets\n\tTCPAckMask = 0x10\n\n\t// TCPFinMask mask that identifies FIN packets\n\tTCPFinMask = 0x1\n\n\t// TCPPshMask mask that identifies PSH packets\n\tTCPPshMask = 0x8\n)\n\n// TCP Options Related constants\nconst (\n\t// TCPAuthenticationOption is the option number we will be using\n\tTCPAuthenticationOption = uint8(34)\n\n\t// TCPMssOption is the type for MSS option\n\tTCPMssOption = uint8(2)\n\n\t// TCPMssOptionLen is the length of the MSS option\n\tTCPMssOptionLen = uint8(4)\n)\n\n// UDP related constants.\nconst (\n\t// udpSourcePortPos is the location of source port\n\tudpSourcePortPos = 0\n\t// udpDestPortPos is the location of destination port\n\tudpDestPortPos = 2\n\t// udpLengthPos is the location of UDP length\n\tudpLengthPos = 4\n\t// udpChecksumPos is the location of UDP checksum\n\tudpChecksumPos = 6\n\t// UDPDataPos is the location of UDP data\n\tUDPDataPos = 8\n\t// UDPSynMask is a mask for the UDP Syn flags\n\tUDPSynMask = 
0x10\n\t// UDPSynAckMask mask identifies a UDP SYN-ACK packet\n\tUDPSynAckMask = 0x20\n\t// UDPAckMask mask that identifies ACK packets.\n\tUDPAckMask = 0x30\n\t// UDPFinAckMask mask that identifies the FinAck packets\n\tUDPFinAckMask = 0x40\n\t// UDPPolicyRejectMask mask that identifies policy reject info from the remote end\n\tUDPPolicyRejectMask = 0x50\n\t// UDPDataPacket is a simple data packet\n\tUDPDataPacket = 0x80\n\t// UDPPacketMask identifies the type of UDP packet.\n\tUDPPacketMask = 0xF0\n)\n\nconst (\n\t// UDPAuthMarker is the 18 byte Aporeto signature for UDP\n\tUDPAuthMarker = \"n30njxq7bmiwr6dtxq\"\n\t// UDPAuthMarkerLen is the length of the UDP marker.\n\tUDPAuthMarkerLen = 18\n\t// UDPSignatureLen is the length of the signature on a UDP control packet.\n\tUDPSignatureLen = 20\n\t// udpAuthMarkerOffset is the beginning of the UDPAuthMarker\n\tudpAuthMarkerOffset = 10\n\t// udpSignatureEnd is the end of the UDP signature.\n\tudpSignatureEnd = UDPDataPos + UDPSignatureLen\n)\n\nconst (\n\t// UDPAporetoOption is the option kind for Aporeto option\n\tUDPAporetoOption = uint8(34)\n\t// UDPAporetoOptionLengthFirstByte is the first length byte if the length is greater than 255\n\tUDPAporetoOptionLengthFirstByte = uint8(0xff)\n\t// UDPAporetoOptionShortLength is the length of the option header if payload length is less than UDPAporetoOptionLengthFirstByte\n\tUDPAporetoOptionShortLength = 2\n\t// UDPAporetoOptionLongLength is the length of the option header if payload length is greater than UDPAporetoOptionLengthFirstByte\n\tUDPAporetoOptionLongLength = 6\n)\n"
  },
  {
    "path": "controller/pkg/packet/helpers.go",
    "content": "package packet\n\nimport (\n\t\"bytes\"\n\t\"encoding/binary\"\n\t\"encoding/hex\"\n\t\"fmt\"\n\t\"strconv\"\n\n\t\"go.uber.org/zap\"\n\t\"golang.org/x/net/ipv4\"\n)\n\n// Helper functions for the package, mainly for debugging and validation.\n// They are not used by the main package.\n\n// VerifyIPv4Checksum returns true if the IP header checksum is correct\n// for this packet, false otherwise. Note that the checksum is not\n// modified.\nfunc (p *Packet) VerifyIPv4Checksum() bool {\n\n\tsum := p.computeIPv4Checksum()\n\n\treturn sum == p.ipHdr.ipChecksum\n}\n\n// UpdateIPv4Checksum computes the IP header checksum and updates the\n// packet with the value.\nfunc (p *Packet) UpdateIPv4Checksum() {\n\n\tp.ipHdr.ipChecksum = p.computeIPv4Checksum()\n\n\tbinary.BigEndian.PutUint16(p.ipHdr.Buffer[ipv4ChecksumPos:ipv4ChecksumPos+2], p.ipHdr.ipChecksum)\n}\n\n// VerifyTCPChecksum returns true if the TCP header checksum is correct\n// for this packet, false otherwise. Note that the checksum is not\n// modified.\nfunc (p *Packet) VerifyTCPChecksum() bool {\n\n\tsum := p.computeTCPChecksum()\n\n\treturn sum == p.tcpHdr.tcpChecksum\n}\n\n// UpdateTCPChecksum computes the TCP header checksum and updates the\n// packet with the value.\nfunc (p *Packet) UpdateTCPChecksum() {\n\tbuffer := p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]\n\tp.tcpHdr.tcpChecksum = p.computeTCPChecksum()\n\tbinary.BigEndian.PutUint16(buffer[tcpChecksumPos:tcpChecksumPos+2], p.tcpHdr.tcpChecksum)\n}\n\n// updateTCPFlags updates the TCP flags in the packet buffer.\nfunc (p *Packet) updateTCPFlags(tcpFlags uint8) {\n\tbuffer := p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]\n\tbuffer[tcpFlagsOffsetPos] = tcpFlags\n}\n\n// ConvertAcktoFinAck removes the data from the packet.\n// It is called only if the packet is an ACK or PSH/ACK and\n// converts a PSH/ACK to a FIN/ACK packet.\nfunc (p *Packet) ConvertAcktoFinAck() error {\n\tvar tcpFlags uint8\n\n\ttcpFlags = tcpFlags | TCPFinMask\n\ttcpFlags = tcpFlags | 
TCPAckMask\n\n\tp.updateTCPFlags(tcpFlags)\n\tp.tcpHdr.tcpFlags = tcpFlags\n\n\tp.TCPDataDetach(0)\n\n\treturn nil\n}\n\n// ConvertToRst function converts the packet to\n// a RST packet\nfunc (p *Packet) ConvertToRst() {\n\tvar tcpFlags uint8\n\n\ttcpFlags = tcpFlags | TCPRstMask\n\n\tp.updateTCPFlags(tcpFlags)\n\tp.tcpHdr.tcpFlags = tcpFlags\n\n\tp.TCPDataDetach(0)\n}\n\n//PacketToStringTCP returns a string representation of fields contained in this packet.\nfunc (p *Packet) PacketToStringTCP() string {\n\n\tvar buf bytes.Buffer\n\tbuf.WriteString(\"(error)\")\n\n\theader, err := ipv4.ParseHeader(p.ipHdr.Buffer)\n\n\tif err == nil {\n\t\tbuf.Reset()\n\t\tbuf.WriteString(header.String())\n\t\tbuf.WriteString(\" srcport=\")\n\t\tbuf.WriteString(strconv.Itoa(int(p.SourcePort())))\n\t\tbuf.WriteString(\" dstport=\")\n\t\tbuf.WriteString(strconv.Itoa(int(p.DestPort())))\n\t\tbuf.WriteString(\" tcpcksum=\")\n\t\tbuf.WriteString(fmt.Sprintf(\"0x%0x\", p.tcpHdr.tcpChecksum))\n\t\tbuf.WriteString(\" data\")\n\t\tbuf.WriteString(hex.EncodeToString(p.GetTCPBytes()))\n\t}\n\treturn buf.String()\n}\n\n// Computes the IP header checksum. The packet is not modified.\nfunc (p *Packet) computeIPv4Checksum() uint16 {\n\n\t// IP packet checksum is computed with the checksum value set to zero\n\tp.ipHdr.Buffer[ipv4ChecksumPos] = 0\n\tp.ipHdr.Buffer[ipv4ChecksumPos+1] = 0\n\n\t// Compute checksum, over IP header only\n\tsum := checksum(p.ipHdr.Buffer[:p.ipHdr.ipHeaderLen])\n\n\t// Restore the previous checksum (whether correct or not, as this function doesn't change it)\n\tbinary.BigEndian.PutUint16(p.ipHdr.Buffer[ipv4ChecksumPos:ipv4ChecksumPos+2], p.ipHdr.ipChecksum)\n\n\treturn sum\n}\n\n// Computes the TCP header checksum. 
The packet is not modified.\nfunc (p *Packet) computeTCPChecksum() uint16 {\n\tvar csum uint32\n\tvar buf [2]byte\n\tbuffer := p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]\n\ttcpBufSize := uint16(len(buffer))\n\n\toldCsumLow := buffer[tcpChecksumPos]\n\toldCsumHigh := buffer[tcpChecksumPos+1]\n\n\t// Put 0 to calculate the checksum. We will reset it back after the checksum\n\tbuffer[tcpChecksumPos] = 0\n\tbuffer[tcpChecksumPos+1] = 0\n\n\tif p.ipHdr.version == V4 {\n\t\tcsum = partialChecksum(0, p.ipHdr.Buffer[ipv4SourceAddrPos:ipv4SourceAddrPos+4])\n\t\tcsum = partialChecksum(csum, p.ipHdr.Buffer[ipv4DestAddrPos:ipv4DestAddrPos+4])\n\t} else {\n\t\tcsum = partialChecksum(0, p.ipHdr.Buffer[ipv6SourceAddrPos:ipv6SourceAddrPos+16])\n\t\tcsum = partialChecksum(csum, p.ipHdr.Buffer[ipv6DestAddrPos:ipv6DestAddrPos+16])\n\t}\n\n\t// reserved 0 byte of the pseudo header\n\tbuf[0] = 0\n\t// tcp protocol number 6\n\tbuf[1] = 6\n\n\tcsum = partialChecksum(csum, buf[:])\n\tbinary.BigEndian.PutUint16(buf[:], tcpBufSize)\n\tcsum = partialChecksum(csum, buf[:])\n\n\tcsum = partialChecksum(csum, buffer)\n\n\tcsum16 := finalizeChecksum(csum)\n\n\t// restore the checksum\n\tbuffer[tcpChecksumPos] = oldCsumLow\n\tbuffer[tcpChecksumPos+1] = oldCsumHigh\n\n\treturn csum16\n}\n\n// incCsum16 implements rfc1624, equation 3.\nfunc incCsum16(start, old, new uint16) uint16 {\n\n\tstart = start ^ 0xffff\n\told = old ^ 0xffff\n\n\tcsum := uint32(start) + uint32(old) + uint32(new)\n\tfor (csum >> 16) > 0 {\n\t\tcsum = (csum & 0xffff) + ((csum >> 16) & 0xffff)\n\t}\n\tcsum = csum ^ 0xffff\n\treturn uint16(csum)\n}\n\nfunc csumConvert32To16bit(sum uint32) uint16 {\n\tfor sum > 0xffff {\n\t\tsum = (sum >> 16) + (sum & 0xffff)\n\t}\n\n\treturn uint16(sum)\n}\n\n// Computes a sum of 16 bit numbers\nfunc checksumDelta(init uint32, buf []byte) uint32 {\n\n\tsum := init\n\n\tfor ; len(buf) >= 2; buf = buf[2:] {\n\t\tsum += uint32(buf[0])<<8 | uint32(buf[1])\n\t}\n\n\tif len(buf) > 0 {\n\t\tsum += uint32(buf[0]) << 8\n\t}\n\n\treturn 
sum\n}\n\n// Computes a checksum over the given slice.\nfunc checksum(buf []byte) uint16 {\n\tsum32 := checksumDelta(0, buf)\n\tsum16 := csumConvert32To16bit(sum32)\n\n\tcsum := ^sum16\n\treturn csum\n}\n\nfunc partialChecksum(csum uint32, buf []byte) uint32 {\n\treturn checksumDelta(csum, buf)\n}\n\nfunc finalizeChecksum(csum32 uint32) uint16 {\n\tcsum16 := csumConvert32To16bit(csum32)\n\tcsum := ^csum16\n\n\treturn csum\n}\n\nfunc (p *Packet) updateUDPv6Checksum() {\n\tvar csum uint32\n\tvar tmp [4]byte\n\n\tbuffer := p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]\n\n\tcsum = partialChecksum(0, p.ipHdr.sourceAddress)\n\tcsum = partialChecksum(csum, p.ipHdr.destinationAddress)\n\n\tbinary.BigEndian.PutUint32(tmp[:], uint32(len(buffer)))\n\tcsum = partialChecksum(csum, tmp[:])\n\n\ttmp = [4]byte{0, 0, 0, 17}\n\tcsum = partialChecksum(csum, tmp[:])\n\n\tbuffer[udpChecksumPos] = 0\n\tbuffer[udpChecksumPos+1] = 0\n\tcsum = partialChecksum(csum, buffer[:])\n\n\tp.udpHdr.udpChecksum = finalizeChecksum(csum)\n\t// update checksum\n\tbinary.BigEndian.PutUint16(buffer[udpChecksumPos:udpChecksumPos+2], p.udpHdr.udpChecksum)\n}\n\nfunc (p *Packet) fixupUDPHdr() {\n\t// checksum set to 0, ignored by the stack\n\tbuffer := p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]\n\t//binary.BigEndian.PutUint16(buffer[udpLengthPos:udpLengthPos+2], uint16(len(buffer)))\n\tbinary.BigEndian.PutUint16(buffer[udpLengthPos:udpLengthPos+2], 8)\n}\n\nfunc (p *Packet) fixupUDPChecksum() {\n\tif p.ipHdr.version == V4 {\n\t\tbuffer := p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]\n\n\t\tbuffer[udpChecksumPos] = 0\n\t\tbuffer[udpChecksumPos+1] = 0\n\t} else {\n\t\tp.updateUDPv6Checksum()\n\t}\n}\n\n// ReadUDPToken Parsing using format specified in https://tools.ietf.org/html/draft-ietf-tsvwg-udp-options-08\n// ReadUDPToken return the UDP token. Gets called only during the handshake process.\nfunc (p *Packet) ReadUDPToken() []byte {\n\t// length of options .. 
skip udp data\n\tudpOptions := p.ipHdr.Buffer[uint16(p.ipHdr.ipHeaderLen)+8 : p.ipHdr.ipTotalLength]\n\t// zap.L().Error(\"Received Packet\", zap.String(\"IP\\n\", hex.Dump(p.ipHdr.Buffer)))\n\t// zap.L().Error(\"UDP Options\", zap.String(\"UDP\\n\", hex.Dump(udpOptions)))\n\tfor i := 0; i < len(udpOptions); i++ {\n\t\tif udpOptions[i] == 0 || udpOptions[i] == 1 {\n\t\t\ti++\n\t\t\tcontinue\n\t\t}\n\t\tif udpOptions[i] != UDPAporetoOption {\n\t\t\tif len(udpOptions) < i+2 {\n\t\t\t\treturn []byte{}\n\t\t\t}\n\t\t\tif udpOptions[i+1] != 0xff {\n\t\t\t\ti = i + int(udpOptions[i+1])\n\t\t\t\tcontinue\n\t\t\t} else {\n\t\t\t\ti = i + int(binary.LittleEndian.Uint16(udpOptions[i+2:]))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t} else {\n\t\t\tif len(udpOptions) < i+2 {\n\t\t\t\treturn []byte{}\n\t\t\t}\n\t\t\tif udpOptions[i+1] != 0xff {\n\n\t\t\t\tif len(udpOptions[i:]) >= int(udpOptions[i+1]) {\n\t\t\t\t\treturn udpOptions[i+UDPSignatureLen+2:]\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif len(udpOptions[i:]) >= int(binary.LittleEndian.Uint16(udpOptions[i+2:])) {\n\t\t\t\t\treturn udpOptions[i+4+UDPSignatureLen:]\n\t\t\t\t}\n\n\t\t\t}\n\t\t}\n\t}\n\treturn []byte{}\n}\n\n// UDPTokenAttach attached udp packet signature and tokens.\nfunc (p *Packet) UDPTokenAttach(udpdata []byte, udptoken []byte) {\n\tif udpdata[0] == 0x22 && udpdata[1] == 0xff {\n\t\tcopy(udpdata[4+UDPSignatureLen:], udptoken)\n\t} else {\n\t\tcopy(udpdata[2+UDPSignatureLen:], udptoken)\n\t}\n\t//p.udpHdr.udpData = append([]byte(\"APORETO!\"), udpdata...)\n\n\tp.udpHdr.udpLength = 0\n\tpacketLenIncrease := uint16(len(udpdata))\n\t// IP Header Processing\n\tp.FixupIPHdrOnDataModify(p.ipHdr.ipTotalLength, p.ipHdr.ipTotalLength+packetLenIncrease)\n\n\t// Attach Data @ the end of current buffer\n\tp.ipHdr.Buffer = append(p.ipHdr.Buffer, udpdata...)\n\n\tp.fixupUDPHdr()\n\tp.fixupUDPChecksum()\n\n}\n\n// UDPDataAttach Attaches UDP data post encryption.\nfunc (p *Packet) UDPDataAttach(header, udpdata []byte) {\n\n\t// Attach 
Data @ the end of current buffer\n\tp.ipHdr.Buffer = append(p.ipHdr.Buffer, header...)\n\tp.ipHdr.Buffer = append(p.ipHdr.Buffer, udpdata...)\n\t// IP Header Processing\n\tp.FixupIPHdrOnDataModify(p.ipHdr.ipTotalLength, uint16(len(p.ipHdr.Buffer)))\n\n\tp.fixupUDPHdr()\n\tp.fixupUDPChecksum()\n}\n\n// UDPDataDetach detaches UDP payload from the Buffer. Called only during Encrypt/Decrypt.\nfunc (p *Packet) UDPDataDetach() {\n\t// Create constants for IP header + UDP header. copy ?\n\tp.ipHdr.Buffer = p.ipHdr.Buffer[:p.ipHdr.ipHeaderLen+UDPDataPos]\n}\n\n// CreateReverseFlowPacket modifies the packet for reverse flow.\nfunc (p *Packet) CreateReverseFlowPacket() {\n\n\tbuffer := p.ipHdr.Buffer\n\n\t// reverse ip addresses\n\tif p.ipHdr.version == V4 {\n\t\tcopy(buffer[ipv4SourceAddrPos:ipv4SourceAddrPos+4], p.ipHdr.destinationAddress)\n\t\tcopy(buffer[ipv4DestAddrPos:ipv4DestAddrPos+4], p.ipHdr.sourceAddress)\n\t} else {\n\t\tcopy(buffer[ipv6SourceAddrPos:ipv6SourceAddrPos+16], p.ipHdr.destinationAddress)\n\t\tcopy(buffer[ipv6DestAddrPos:ipv6DestAddrPos+16], p.ipHdr.sourceAddress)\n\t}\n\n\tbuffer = buffer[p.ipHdr.ipHeaderLen:]\n\n\t// reverse ports\n\tbinary.BigEndian.PutUint16(buffer[udpSourcePortPos:udpSourcePortPos+2], p.udpHdr.destinationPort)\n\tbinary.BigEndian.PutUint16(buffer[udpDestPortPos:udpDestPortPos+2], p.udpHdr.sourcePort)\n\n\t// swap in our packet structures\n\tp.ipHdr.sourceAddress, p.ipHdr.destinationAddress = p.ipHdr.destinationAddress, p.ipHdr.sourceAddress\n\tp.udpHdr.sourcePort, p.udpHdr.destinationPort = p.udpHdr.destinationPort, p.udpHdr.sourcePort\n\n\t// Just get the IP/UDP header. Ignore the rest. 
No need for packet\n\tp.ipHdr.Buffer = p.ipHdr.Buffer[:p.ipHdr.ipHeaderLen+UDPDataPos]\n\n\tp.FixupIPHdrOnDataModify(p.ipHdr.ipTotalLength, uint16(p.ipHdr.ipHeaderLen+UDPDataPos))\n\n\tif p.ipHdr.version == V4 {\n\t\tp.UpdateIPv4Checksum()\n\t}\n\n\tp.fixupUDPHdr()\n\tp.fixupUDPChecksum()\n}\n\n// GetUDPType returns udp type of packet.\nfunc (p *Packet) GetUDPType() byte {\n\tvar offset uint8\n\t// Every UDP control packet has a 20 byte packet signature. The\n\t// first 2 bytes represent the following control information.\n\t// Byte 0 : Bits 0,1 are reserved fields.\n\t//          Bits 2,3,4 represent version information.\n\t//          Bits 5,6 represent udp packet type,\n\t//          Bit 7 represents encryption. (currently unused).\n\t// Byte 1: reserved for future use.\n\t// Bytes [2:20]: Packet signature.\n\t//zap.L().Error(\"Packet\", zap.String(\"P/n\", hex.Dump(p.ipHdr.Buffer)))\n\tif p.ipHdr.ipTotalLength > uint16(p.IPHeaderLen())+8+255 {\n\t\toffset = 4\n\t} else {\n\t\toffset = 2\n\t}\n\n\treturn GetUDPTypeFromBuffer(p.ipHdr.Buffer[p.ipHdr.ipHeaderLen+offset:])\n}\n\n// GetUDPTypeFromBuffer gets the UDP packet type from a raw buffer.\nfunc GetUDPTypeFromBuffer(buffer []byte) byte {\n\tif len(buffer) < (UDPDataPos + UDPSignatureLen) {\n\t\treturn 0\n\t}\n\n\tmarker := buffer[UDPDataPos:udpSignatureEnd]\n\n\t// check for packet signature.\n\tif !bytes.Equal(buffer[udpAuthMarkerOffset:udpSignatureEnd], []byte(UDPAuthMarker)) {\n\t\tzap.L().Debug(\"Not an Aporeto control Packet\")\n\t\treturn 0\n\t}\n\t// control packet. 
byte 0 has packet type information.\n\treturn marker[0] & UDPPacketMask\n}\n\n//GetTCPFlags returns the tcp flags from the packet\nfunc (p *Packet) GetTCPFlags() uint8 {\n\treturn p.tcpHdr.tcpFlags\n}\n\n//SetTCPFlags allows to set the tcp flags on the packet\nfunc (p *Packet) SetTCPFlags(flags uint8) {\n\tp.tcpHdr.tcpFlags = flags\n}\n\n// CreateUDPAuthMarker creates a UDP auth marker.\nfunc CreateUDPAuthMarker(packetType uint8, payloadLength uint16) []byte {\n\tvar options []byte\n\t// Every UDP control packet has a 20 byte packet signature. The\n\t// first 2 bytes represent the following control information.\n\t// Byte 0 : Bits 0,1 are reserved fields.\n\t//          Bits 2,3,4 represent version information.\n\t//          Bits 5, 6, 7 represent udp packet type,\n\t// Byte 1: reserved for future use.\n\t// Bytes [2:20]: Packet signature.\n\n\t//marker := make([]byte, UDPSignatureLen)\n\t// ignore version info as of now.\n\tif payloadLength < uint16(UDPAporetoOptionLengthFirstByte) {\n\t\toptions = make([]byte, int(payloadLength)+UDPAporetoOptionShortLength+UDPSignatureLen)\n\t\toptions[0] = UDPAporetoOption\n\t\toptions[1] = uint8(payloadLength) + UDPAporetoOptionShortLength + UDPSignatureLen\n\n\t\toptions[2] |= packetType // byte 0\n\t\toptions[3] = 0           // byte 1\n\t\tcopy(options[4:], []byte(UDPAuthMarker))\n\t} else {\n\t\toptions = make([]byte, int(payloadLength)+UDPAporetoOptionLongLength+len(UDPAuthMarker))\n\t\toptions[0] = UDPAporetoOption\n\t\toptions[1] = UDPAporetoOptionLengthFirstByte\n\t\tbinary.LittleEndian.PutUint16(options[2:], payloadLength+20)\n\t\toptions[4] |= packetType // byte 0\n\t\toptions[5] = 0           // byte 1\n\t\tcopy(options[UDPAporetoOptionLongLength:], []byte(UDPAuthMarker))\n\n\t}\n\n\treturn options\n}\n\n// GetICMPTypeCode returns the icmp type and icmp code\nfunc (p *Packet) GetICMPTypeCode() (int8, int8) {\n\treturn p.icmpHdr.icmpType, p.icmpHdr.icmpCode\n}\n"
  },
  {
    "path": "controller/pkg/packet/packet.go",
    "content": "// Package packet support for TCP/IP packet manipulations\n// needed by the Aporeto infrastructure.\npackage packet\n\nimport (\n\t\"encoding/binary\"\n\t\"encoding/hex\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"strconv\"\n\n\t\"go.uber.org/zap\"\n)\n\n// printCount prints the debug header for packets every few lines that it prints\nvar printCount int\n\nvar errIPPacketCorrupt = errors.New(\"IP packet is smaller than min IP size of 20\")\nvar errTCPAuthOption = errors.New(\"tcp authentication option not found\")\n\n//NewPacket is a method called on Packet which decodes the packet into the struct\nfunc (p *Packet) NewPacket(context uint64, bytes []byte, mark string, lengthValidate bool) (err error) {\n\n\t// Get the mark value\n\tp.Mark = mark\n\n\t// Set the context\n\tp.context = context\n\tif len(bytes) < ipv4HdrLenPos {\n\t\treturn fmt.Errorf(\"invalid packet length %d\", len(bytes))\n\t}\n\tif bytes[ipv4HdrLenPos]&ipv4ProtoMask == 0x40 {\n\t\tp.ipHdr.version = V4\n\t\treturn p.parseIPv4Packet(bytes, lengthValidate)\n\t}\n\n\tp.ipHdr.version = V6\n\treturn p.parseIPv6Packet(bytes, lengthValidate)\n}\n\n// New returns a pointer to Packet structure built from the\n// provided bytes buffer which is expected to contain valid TCP/IP\n// packet bytes.\n// WARNING: This package takes control of the bytes buffer passed. The caller has\n// to be aware calling any function that returns a slice will NOT be a copy rather\n// a sub-slice of the bytes buffer passed. 
It is the responsibility of the caller\n// to copy the slice If and when necessary.\nfunc New(context uint64, bytes []byte, mark string, lengthValidate bool) (packet *Packet, err error) {\n\n\tvar p Packet\n\n\t// Get the mark value\n\tp.Mark = mark\n\n\t// Set the context\n\tp.context = context\n\tif len(bytes) < ipv4HdrLenPos {\n\t\treturn nil, fmt.Errorf(\"invalid packet length %d\", len(bytes))\n\t}\n\tif bytes[ipv4HdrLenPos]&ipv4ProtoMask == 0x40 {\n\t\tp.ipHdr.version = V4\n\t\treturn &p, p.parseIPv4Packet(bytes, lengthValidate)\n\t}\n\n\tp.ipHdr.version = V6\n\treturn &p, p.parseIPv6Packet(bytes, lengthValidate)\n}\n\nfunc (p *Packet) parseTCP(bytes []byte) {\n\t// TCP Header Processing\n\ttcpBuffer := bytes[p.ipHdr.ipHeaderLen:]\n\n\tp.tcpHdr.tcpChecksum = binary.BigEndian.Uint16(tcpBuffer[tcpChecksumPos : tcpChecksumPos+2])\n\tp.tcpHdr.sourcePort = binary.BigEndian.Uint16(tcpBuffer[tcpSourcePortPos : tcpSourcePortPos+2])\n\tp.tcpHdr.destinationPort = binary.BigEndian.Uint16(tcpBuffer[tcpDestPortPos : tcpDestPortPos+2])\n\tp.tcpHdr.tcpAck = binary.BigEndian.Uint32(tcpBuffer[tcpAckPos : tcpAckPos+4])\n\tp.tcpHdr.tcpSeq = binary.BigEndian.Uint32(tcpBuffer[tcpSeqPos : tcpSeqPos+4])\n\tp.tcpHdr.tcpDataOffset = (tcpBuffer[tcpDataOffsetPos] & tcpDataOffsetMask) >> 4\n\tp.tcpHdr.tcpTotalLength = uint16(len(p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]))\n\n\tp.SetTCPFlags(tcpBuffer[tcpFlagsOffsetPos])\n}\n\nfunc parseIP(s string, ipv4 bool) (net.IP, error) {\n\tip := net.ParseIP(s)\n\tif ip == nil {\n\t\treturn nil, fmt.Errorf(\"invalid IP address : %s\", s)\n\t}\n\tif ipv4 {\n\t\tip = ip.To4()\n\t\tif ip == nil {\n\t\t\treturn nil, fmt.Errorf(\"not a valid IPv4 address : %s\", s)\n\t\t}\n\t} else {\n\t\tip = ip.To16()\n\t\tif ip == nil {\n\t\t\treturn nil, fmt.Errorf(\"not a valid IPv6 address : %s\", s)\n\t\t}\n\t}\n\treturn ip, nil\n}\n\n// NewIpv4TCPPacket creates an Ipv4/TCP packet\nfunc NewIpv4TCPPacket(context uint64, tcpFlags uint8, src, dst string, srcPort, 
desPort uint16) (*Packet, error) {\n\n\tvar p Packet\n\tp.context = context\n\tp.ipHdr.version = V4\n\n\tsrcIP, err := parseIP(src, true)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"source address : %s\", err)\n\t}\n\tdstIP, err := parseIP(dst, true)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"destination address : %s\", err)\n\t}\n\n\tbuffer := make([]byte, minIPv4HdrSize+minTCPHeaderLen)\n\tbuffer[ipv4HdrLenPos] = 0x45\n\tcopy(buffer[ipv4SourceAddrPos:ipv4SourceAddrPos+4], srcIP)\n\tcopy(buffer[ipv4DestAddrPos:ipv4DestAddrPos+4], dstIP)\n\n\tbuffer[ipv4ProtoPos] = uint8(IPProtocolTCP)\n\tbinary.BigEndian.PutUint16(buffer[ipv4LengthPos:ipv4LengthPos+2], uint16(minIPv4HdrSize+minTCPHeaderLen))\n\n\t// TCP data\n\ttcpBuffer := buffer[minIPv4HdrSize:]\n\ttcpBuffer[tcpFlagsOffsetPos] = tcpFlags\n\tbinary.BigEndian.PutUint16(tcpBuffer[tcpSourcePortPos:tcpSourcePortPos+2], srcPort)\n\tbinary.BigEndian.PutUint16(tcpBuffer[tcpDestPortPos:tcpDestPortPos+2], desPort)\n\ttcpBuffer[tcpDataOffsetPos] = 5 << 4\n\n\tif err := p.parseIPv4Packet(buffer, true); err != nil {\n\t\treturn nil, err\n\t}\n\n\tp.UpdateIPv4Checksum()\n\tp.UpdateTCPChecksum()\n\n\treturn &p, nil\n}\n\n// NewIpv6TCPPacket creates an Ipv6/TCP packet\nfunc NewIpv6TCPPacket(context uint64, tcpFlags uint8, src, dst string, srcPort, desPort uint16) (*Packet, error) {\n\n\tvar p Packet\n\tp.context = context\n\tp.ipHdr.version = V6\n\n\tsrcIP, err := parseIP(src, false)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"source address : %s\", err)\n\t}\n\tdstIP, err := parseIP(dst, false)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"destination address : %s\", err)\n\t}\n\n\tbuffer := make([]byte, ipv6HeaderLen+minTCPHeaderLen)\n\n\tbuffer[ipv6VersionPos] = 6\n\tbinary.BigEndian.PutUint16(buffer[ipv6PayloadLenPos:ipv6PayloadLenPos+2], minTCPHeaderLen)\n\tcopy(buffer[ipv6SourceAddrPos:ipv6SourceAddrPos+16], srcIP.To16())\n\tcopy(buffer[ipv6DestAddrPos:ipv6DestAddrPos+16], 
dstIP.To16())\n\n\tbuffer[ipv6ProtoPos] = uint8(IPProtocolTCP)\n\n\t// TCP data\n\ttcpBuffer := buffer[ipv6HeaderLen:]\n\ttcpBuffer[tcpFlagsOffsetPos] = tcpFlags\n\tbinary.BigEndian.PutUint16(tcpBuffer[tcpSourcePortPos:tcpSourcePortPos+2], srcPort)\n\tbinary.BigEndian.PutUint16(tcpBuffer[tcpDestPortPos:tcpDestPortPos+2], desPort)\n\ttcpBuffer[tcpDataOffsetPos] = 5 << 4\n\n\tif err := p.parseIPv6Packet(buffer, true); err != nil {\n\t\treturn nil, err\n\t}\n\n\tp.UpdateTCPChecksum()\n\n\treturn &p, nil\n}\n\nfunc (p *Packet) parseICMP(bytes []byte) {\n\n\ticmpBuffer := bytes[p.ipHdr.ipHeaderLen:]\n\tp.icmpHdr.icmpType = int8(icmpBuffer[0])\n\tp.icmpHdr.icmpCode = int8(icmpBuffer[1])\n}\n\nfunc (p *Packet) parseUDP(bytes []byte) {\n\t// UDP Header Processing\n\tudpBuffer := bytes[p.ipHdr.ipHeaderLen:]\n\n\tp.udpHdr.udpChecksum = binary.BigEndian.Uint16(udpBuffer[udpChecksumPos : udpChecksumPos+2])\n\tp.udpHdr.udpLength = binary.BigEndian.Uint16(udpBuffer[udpLengthPos : udpLengthPos+2])\n\tp.udpHdr.udpData = []byte{}\n\n\tp.udpHdr.sourcePort = binary.BigEndian.Uint16(udpBuffer[udpSourcePortPos : udpSourcePortPos+2])\n\tp.udpHdr.destinationPort = binary.BigEndian.Uint16(udpBuffer[udpDestPortPos : udpDestPortPos+2])\n}\n\nfunc (p *Packet) parseIPv4Packet(bytes []byte, lengthValidate bool) (err error) {\n\n\t// IP Header Processing\n\tif len(bytes) < minIPv4HdrSize {\n\t\treturn errIPPacketCorrupt\n\t}\n\n\tp.ipHdr.ipHeaderLen = (bytes[ipv4HdrLenPos] & ipv4HdrLenMask) * 4\n\tp.ipHdr.ipProto = bytes[ipv4ProtoPos]\n\tp.ipHdr.ipTotalLength = binary.BigEndian.Uint16(bytes[ipv4LengthPos : ipv4LengthPos+2])\n\tp.ipHdr.ipID = binary.BigEndian.Uint16(bytes[ipv4IDPos : ipv4IDPos+2])\n\tp.ipHdr.ipChecksum = binary.BigEndian.Uint16(bytes[ipv4ChecksumPos : ipv4ChecksumPos+2])\n\tp.ipHdr.sourceAddress = append([]byte{}, bytes[ipv4SourceAddrPos:ipv4SourceAddrPos+4]...)\n\tp.ipHdr.destinationAddress = append([]byte{}, bytes[ipv4DestAddrPos:ipv4DestAddrPos+4]...)\n\n\tif 
p.ipHdr.ipHeaderLen != minIPv4HdrSize {\n\t\treturn fmt.Errorf(\"packets with ip options not supported: hdrlen=%d\", p.ipHdr.ipHeaderLen)\n\t}\n\n\tp.ipHdr.Buffer = bytes\n\n\tif lengthValidate && p.ipHdr.ipTotalLength != uint16(len(p.ipHdr.Buffer)) {\n\t\tif p.ipHdr.ipTotalLength < uint16(len(p.ipHdr.Buffer)) {\n\t\t\tp.ipHdr.Buffer = p.ipHdr.Buffer[:p.ipHdr.ipTotalLength]\n\t\t} else {\n\t\t\treturn fmt.Errorf(\"stated ip packet length %d differs from bytes available %d\", p.ipHdr.ipTotalLength, len(p.ipHdr.Buffer))\n\t\t}\n\t}\n\n\t// Some sanity checking for TCP.\n\tif p.ipHdr.ipProto == IPProtocolTCP {\n\t\tif p.ipHdr.ipTotalLength-uint16(p.ipHdr.ipHeaderLen) < minTCPIPPacketLen {\n\t\t\treturn fmt.Errorf(\"tcp ip packet too small: hdrlen=%d\", p.ipHdr.ipHeaderLen)\n\t\t}\n\n\t\tp.parseTCP(bytes)\n\t}\n\n\t// Some sanity checking for UDP.\n\tif p.ipHdr.ipProto == IPProtocolUDP {\n\t\tif p.ipHdr.ipTotalLength-uint16(p.ipHdr.ipHeaderLen) < minUDPIPPacketLen {\n\t\t\treturn fmt.Errorf(\"udp ip packet too small: hdrlen=%d\", p.ipHdr.ipHeaderLen)\n\t\t}\n\n\t\tp.parseUDP(bytes)\n\t}\n\n\tif p.ipHdr.ipProto == IPProtocolICMP {\n\t\tif p.ipHdr.ipTotalLength-uint16(p.ipHdr.ipHeaderLen) < minICMPIPPacketLen {\n\t\t\treturn fmt.Errorf(\"icmp ip packet too small: hdrlen=%d\", p.ipHdr.ipHeaderLen)\n\t\t}\n\n\t\tp.parseICMP(bytes)\n\t}\n\n\t// Caching the result of the flow hash for performance optimization.\n\t// It is called in multiple places.\n\tp.l4flowhash = p.l4FlowHash()\n\n\treturn nil\n}\n\nfunc (p *Packet) parseIPv6Packet(bytes []byte, lengthValidate bool) (err error) {\n\t// IP Header Processing\n\tp.ipHdr.ipHeaderLen = ipv6HeaderLen\n\tp.ipHdr.ipProto = bytes[ipv6ProtoPos]\n\tp.ipHdr.ipTotalLength = ipv6HeaderLen + binary.BigEndian.Uint16(bytes[ipv6PayloadLenPos:ipv6PayloadLenPos+2])\n\tp.ipHdr.sourceAddress = append([]byte{}, bytes[ipv6SourceAddrPos:ipv6SourceAddrPos+16]...)\n\tp.ipHdr.destinationAddress = append([]byte{}, 
bytes[ipv6DestAddrPos:ipv6DestAddrPos+16]...)\n\n\tp.ipHdr.Buffer = bytes\n\n\tif lengthValidate && p.ipHdr.ipTotalLength != uint16(len(p.ipHdr.Buffer)) {\n\t\tif p.ipHdr.ipTotalLength < uint16(len(p.ipHdr.Buffer)) {\n\t\t\tp.ipHdr.Buffer = p.ipHdr.Buffer[:p.ipHdr.ipTotalLength]\n\t\t} else {\n\t\t\treturn fmt.Errorf(\"stated ip packet length %d differs from bytes available %d\", p.ipHdr.ipTotalLength, len(p.ipHdr.Buffer))\n\t\t}\n\t}\n\n\t// Some sanity checking for TCP.\n\tif p.ipHdr.ipProto == IPProtocolTCP {\n\t\tif p.ipHdr.ipTotalLength-uint16(p.ipHdr.ipHeaderLen) < minTCPIPPacketLen {\n\t\t\treturn fmt.Errorf(\"tcp ip packet too small: hdrlen=%d\", p.ipHdr.ipHeaderLen)\n\t\t}\n\n\t\tp.parseTCP(bytes)\n\t}\n\n\t// Some sanity checking for UDP.\n\tif p.ipHdr.ipProto == IPProtocolUDP {\n\t\tif p.ipHdr.ipTotalLength-uint16(p.ipHdr.ipHeaderLen) < minUDPIPPacketLen {\n\t\t\treturn fmt.Errorf(\"udp ip packet too small: hdrlen=%d\", p.ipHdr.ipHeaderLen)\n\t\t}\n\t\tp.parseUDP(bytes)\n\t}\n\n\t// Caching the result of the flow hash for performance optimization.\n\t// It is called in multiple places.\n\tp.l4flowhash = p.l4FlowHash()\n\n\treturn nil\n}\n\n// IsEmptyTCPPayload returns true if the TCP payload is empty\nfunc (p *Packet) IsEmptyTCPPayload() bool {\n\treturn p.TCPDataStartBytes() == p.tcpHdr.tcpTotalLength\n}\n\n// GetUDPData returns additional data in packet\nfunc (p *Packet) GetUDPData() []byte {\n\treturn p.ipHdr.Buffer[p.ipHdr.ipHeaderLen+UDPDataPos:]\n}\n\n// GetUDPDataStartBytes returns start of UDP data\nfunc (p *Packet) GetUDPDataStartBytes() uint16 {\n\treturn UDPDataPos\n}\n\n// TCPDataStartBytes provides the tcp data start offset in bytes\nfunc (p *Packet) TCPDataStartBytes() uint16 {\n\treturn uint16(p.tcpHdr.tcpDataOffset) * 4\n}\n\n// GetIPLength returns the IP length\nfunc (p *Packet) GetIPLength() uint16 {\n\treturn p.ipHdr.ipTotalLength\n}\n\n// Print is a print helper function\nfunc (p *Packet) Print(context uint64, 
packetLogLevel bool) {\n\n\tif p.ipHdr.ipProto != IPProtocolTCP {\n\t\treturn\n\t}\n\n\tlogPkt := false\n\tdetailed := false\n\n\tif packetLogLevel || context == 0 {\n\t\tlogPkt = true\n\t\tdetailed = true\n\t}\n\n\tvar buf string\n\tprint := false\n\n\tif logPkt {\n\t\tif printCount%200 == 0 {\n\t\t\tbuf += fmt.Sprintf(\"Packet: %5s %5s %25s %15s %5s %15s %5s %6s %20s %20s %6s %20s %20s %2s %5s %5s\\n\",\n\t\t\t\t\"IPID\", \"Dir\", \"Comment\", \"SIP\", \"SP\", \"DIP\", \"DP\", \"Flags\", \"TCPSeq\", \"TCPAck\", \"TCPLen\", \"ExpAck\", \"ExpSeq\", \"DO\", \"Acsum\", \"Ccsum\")\n\t\t}\n\t\tprintCount++\n\t\toffset := 0\n\n\t\tif (p.GetTCPFlags() & TCPSynMask) == TCPSynMask {\n\t\t\toffset = 1\n\t\t}\n\n\t\texpAck := p.tcpHdr.tcpSeq + uint32(uint16(len(p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]))-p.TCPDataStartBytes()) + uint32(offset)\n\t\tccsum := p.computeTCPChecksum()\n\t\tcsumValidationStr := \"\"\n\n\t\tif p.tcpHdr.tcpChecksum != ccsum {\n\t\t\tcsumValidationStr = \"Bad Checksum\"\n\t\t}\n\n\t\tbuf += fmt.Sprintf(\"Packet: %5d %5s %25s %15s %5d %15s %5d %6s %20d %20d %6d %20d %20d %2d %5d %5d %12s\\n\",\n\t\t\tp.ipHdr.ipID,\n\t\t\tflagsToDir(p.context|context),\n\t\t\tflagsToStr(p.context|context),\n\t\t\tp.ipHdr.sourceAddress.String(), p.tcpHdr.sourcePort,\n\t\t\tp.ipHdr.destinationAddress.String(), p.tcpHdr.destinationPort,\n\t\t\ttcpFlagsToStr(p.GetTCPFlags()),\n\t\t\tp.tcpHdr.tcpSeq, p.tcpHdr.tcpAck, uint16(len(p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]))-p.TCPDataStartBytes(),\n\t\t\texpAck, expAck, p.tcpHdr.tcpDataOffset,\n\t\t\tp.tcpHdr.tcpChecksum, ccsum, csumValidationStr)\n\t\tprint = true\n\t}\n\n\tif detailed {\n\t\tpktBytes := []byte{0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 2, 8, 0}\n\t\tpktBytes = append(pktBytes, p.ipHdr.Buffer...)\n\t\tbuf += fmt.Sprintf(\"%s\\n\", hex.Dump(pktBytes))\n\t\tprint = true\n\t}\n\n\tif print {\n\t\tzap.L().Debug(buf)\n\t}\n}\n\n//UpdatePacketBuffer updates the packet with the new updates buffer.\nfunc (p *Packet) 
UpdatePacketBuffer(buffer []byte, tcpOptionsLen uint16) error {\n\n\tif tcpOptionsLen != 0 {\n\t\t// If the packet has a payload then we can't insert options\n\t\tif p.TCPDataStartBytes() != uint16(len(p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:])) {\n\t\t\treturn fmt.Errorf(\"cannot insert options with existing data: optionlength=%d, iptotallength=%d\", tcpOptionsLen, p.ipHdr.ipTotalLength)\n\t\t}\n\t} else {\n\t\t// This case is for adding a payload to a packet which may or may not have options\n\t\ttcpOptionsLen := p.TCPDataStartBytes() - minTCPHeaderLen\n\t\t// Working with unsigned numbers so make sure we didn't go negative basically\n\t\tif p.TCPDataStartBytes() < tcpOptionsLen {\n\t\t\treturn fmt.Errorf(\"cannot attach payload: bad options length: optionlength=%d\", tcpOptionsLen)\n\t\t}\n\t}\n\n\tpacketLenIncrease := uint16(len(buffer) - len(p.ipHdr.Buffer))\n\tp.ipHdr.Buffer = buffer\n\n\t// IP Header Processing\n\tp.FixupIPHdrOnDataModify(p.ipHdr.ipTotalLength, p.ipHdr.ipTotalLength+packetLenIncrease)\n\n\t// TCP Header Processing\n\tp.FixuptcpHdrOnTCPDataAttach(tcpOptionsLen)\n\n\tp.UpdateTCPChecksum()\n\treturn nil\n}\n\n//GetTCPBytes returns the bytes in the packet. 
It consolidates in case of changes as well\nfunc (p *Packet) GetTCPBytes() []byte {\n\n\tpktBytes := []byte{}\n\tpktBytes = append(pktBytes, p.ipHdr.Buffer...)\n\treturn pktBytes\n}\n\n// ReadTCPDataString returns the payload in a string variable\n// It does not remove the payload from the packet\nfunc (p *Packet) ReadTCPDataString() string {\n\treturn string(p.ReadTCPData())\n}\n\n// ReadTCPData returns the payload as a byte slice\n// It does not remove the payload from the packet\nfunc (p *Packet) ReadTCPData() []byte {\n\treturn p.ipHdr.Buffer[uint16(p.ipHdr.ipHeaderLen)+p.TCPDataStartBytes():]\n}\n\n// CheckTCPAuthenticationOption ensures authentication option exists at the offset provided\nfunc (p *Packet) CheckTCPAuthenticationOption(iOptionLength int) (err error) {\n\ttcpDataStart := p.TCPDataStartBytes()\n\n\tif tcpDataStart <= minTCPIPPacketLen {\n\t\treturn errTCPAuthOption\n\t}\n\tif len(p.ipHdr.Buffer) < int(p.ipHdr.ipHeaderLen)+20 {\n\t\treturn errors.New(\"Invalid IP Packet\")\n\t}\n\tif int(p.ipHdr.ipHeaderLen)+int(p.tcpHdr.tcpDataOffset*4) > len(p.ipHdr.Buffer) {\n\t\treturn errors.New(\"Invalid TCP Packet\")\n\t}\n\tbuffer := p.ipHdr.Buffer[p.ipHdr.ipHeaderLen+20 : int(p.ipHdr.ipHeaderLen)+int(p.tcpHdr.tcpDataOffset*4)]\n\n\tfor i := 0; i < len(buffer); {\n\t\tif buffer[i] == 0 || buffer[i] == 1 {\n\t\t\ti++\n\t\t\tcontinue\n\t\t}\n\n\t\tif buffer[i] != TCPAuthenticationOption {\n\t\t\tif len(buffer) <= i+1 {\n\t\t\t\treturn errTCPAuthOption\n\t\t\t}\n\t\t\tif int(buffer[i+1]) == 0 {\n\t\t\t\tzap.L().Debug(\"Bad Packet Option\", zap.String(\"Buffer\", hex.Dump(buffer)))\n\t\t\t\treturn errors.New(\"Invalid TCP Option Packet\")\n\t\t\t}\n\t\t\ti = i + int(buffer[i+1])\n\t\t\tcontinue\n\t\t}\n\n\t\tif buffer[i] == TCPAuthenticationOption {\n\t\t\treturn nil\n\t\t}\n\t\treturn errTCPAuthOption\n\n\t}\n\treturn errTCPAuthOption\n}\n\n// FixupIPHdrOnDataModify modifies the IP header fields and checksum\nfunc (p *Packet) FixupIPHdrOnDataModify(old, 
new uint16) {\n\t// IP Header Processing\n\t// IP checksum fixup.\n\tp.ipHdr.ipChecksum = incCsum16(p.ipHdr.ipChecksum, old, new)\n\t// Update IP Total Length.\n\tp.ipHdr.ipTotalLength = p.ipHdr.ipTotalLength + new - old\n\n\tif p.ipHdr.version == V6 {\n\t\tbinary.BigEndian.PutUint16(p.ipHdr.Buffer[ipv6PayloadLenPos:ipv6PayloadLenPos+2], p.ipHdr.ipTotalLength-uint16(p.ipHdr.ipHeaderLen))\n\t} else {\n\t\tbinary.BigEndian.PutUint16(p.ipHdr.Buffer[ipv4LengthPos:ipv4LengthPos+2], p.ipHdr.ipTotalLength)\n\t\tbinary.BigEndian.PutUint16(p.ipHdr.Buffer[ipv4ChecksumPos:ipv4ChecksumPos+2], p.ipHdr.ipChecksum)\n\t}\n}\n\n// TCPSequenceNumber returns the initial sequence number\nfunc (p *Packet) TCPSequenceNumber() uint32 {\n\tif p.ipHdr.ipProto != IPProtocolTCP {\n\t\treturn 0\n\t}\n\treturn p.tcpHdr.tcpSeq\n}\n\n// SetTCPSeq sets the TCP seq number\nfunc (p *Packet) SetTCPSeq(seq uint32) {\n\tp.tcpHdr.tcpSeq = seq\n\tbuffer := p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]\n\tbinary.BigEndian.PutUint32(buffer[tcpSeqPos:tcpSeqPos+4], p.tcpHdr.tcpSeq)\n}\n\n// IncreaseTCPSeq increases TCP seq number by incr\nfunc (p *Packet) IncreaseTCPSeq(incr uint32) {\n\tp.SetTCPSeq(p.tcpHdr.tcpSeq + incr)\n}\n\n// DecreaseTCPSeq decreases TCP seq number by decr\nfunc (p *Packet) DecreaseTCPSeq(decr uint32) {\n\tp.SetTCPSeq(p.tcpHdr.tcpSeq - decr)\n}\n\n// SetTCPAck sets the TCP ack number\nfunc (p *Packet) SetTCPAck(ack uint32) {\n\tp.tcpHdr.tcpAck = ack\n\tbuffer := p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]\n\tbinary.BigEndian.PutUint32(buffer[tcpAckPos:tcpAckPos+4], p.tcpHdr.tcpAck)\n}\n\n// IncreaseTCPAck increases TCP ack number by incr\nfunc (p *Packet) IncreaseTCPAck(incr uint32) {\n\tp.SetTCPAck(p.tcpHdr.tcpAck + incr)\n}\n\n// DecreaseTCPAck decreases TCP ack number by decr\nfunc (p *Packet) DecreaseTCPAck(decr uint32) {\n\tp.SetTCPAck(p.tcpHdr.tcpAck - decr)\n}\n\n// FixuptcpHdrOnTCPDataDetach modifies the TCP header fields and checksum\nfunc (p *Packet) FixuptcpHdrOnTCPDataDetach(optionLength 
uint16) {\n\n\t// Update DataOffset\n\tbuffer := p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]\n\tp.tcpHdr.tcpDataOffset = p.tcpHdr.tcpDataOffset - uint8(optionLength/4)\n\tbuffer[tcpDataOffsetPos] = p.tcpHdr.tcpDataOffset << 4\n}\n\n// tcpDataDetach splits the p.Buffer into p.Buffer (header + some options), p.tcpOptions (optionLength) and p.TCPData (dataLength)\nfunc (p *Packet) tcpDataDetach(optionLength uint16, dataLength uint16) { //nolint\n\tp.ipHdr.Buffer = p.ipHdr.Buffer[:uint16(p.ipHdr.ipHeaderLen)+p.TCPDataStartBytes()-optionLength]\n}\n\n// TCPDataDetach performs the following:\n//   - Removes all TCP data from Buffer to TCPData.\n//   - Removes \"optionLength\" bytes of options from TCP header to tcpOptions\n//   - Updates IP Hdr (lengths, checksums)\n//   - Updates TCP header (checksums)\nfunc (p *Packet) TCPDataDetach(optionLength uint16) {\n\t// Length\n\tdataLength := uint16(len(p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:])) - p.TCPDataStartBytes()\n\n\t// detach TCP data\n\tp.tcpDataDetach(optionLength, dataLength)\n\n\t// Process TCP Header fields and metadata\n\tp.FixuptcpHdrOnTCPDataDetach(optionLength)\n\n\t// Process IP Header fields\n\tp.FixupIPHdrOnDataModify(p.ipHdr.ipTotalLength, p.ipHdr.ipTotalLength-(dataLength+optionLength))\n\n\tp.UpdateTCPChecksum()\n}\n\n// FixuptcpHdrOnTCPDataAttach modifies the TCP header fields and checksum\nfunc (p *Packet) FixuptcpHdrOnTCPDataAttach(tcpOptionsLen uint16) {\n\tbuffer := p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]\n\tnumberOfOptions := tcpOptionsLen / 4\n\n\t// Modify the fields\n\tp.tcpHdr.tcpDataOffset = p.tcpHdr.tcpDataOffset + uint8(numberOfOptions)\n\tbuffer[tcpDataOffsetPos] = p.tcpHdr.tcpDataOffset << 4\n\n\t// Modify the tcp header length\n\tp.tcpHdr.tcpTotalLength = uint16(len(p.ipHdr.Buffer[p.ipHdr.ipHeaderLen:]))\n}\n\n// L4FlowHash calculates a hash string based on the 4-tuple. It returns the cached\n// value and does not re-calculate it. 
This leads to performance gains.\nfunc (p *Packet) L4FlowHash() string {\n\treturn p.l4flowhash\n}\n\nfunc (p *Packet) l4FlowHash() string {\n\treturn p.ipHdr.sourceAddress.String() + \":\" + p.ipHdr.destinationAddress.String() + \":\" + strconv.Itoa(int(p.SourcePort())) + \":\" + strconv.Itoa(int(p.DestPort()))\n}\n\n// L4ReverseFlowHash calculates a hash string based on the 4-tuple by reversing source and destination information\nfunc (p *Packet) L4ReverseFlowHash() string {\n\treturn p.ipHdr.destinationAddress.String() + \":\" + p.ipHdr.sourceAddress.String() + \":\" + strconv.Itoa(int(p.DestPort())) + \":\" + strconv.Itoa(int(p.SourcePort()))\n}\n\n// SourcePortHash calculates a hash based on dest ip/port for net packet and src ip/port for app packet.\nfunc (p *Packet) SourcePortHash(stage uint64) string {\n\tif stage == PacketTypeNetwork {\n\t\treturn p.L4ReverseFlowHash()\n\t}\n\n\treturn p.L4FlowHash()\n}\n\n// ID returns the IP ID of the packet\nfunc (p *Packet) ID() string {\n\treturn strconv.Itoa(int(p.ipHdr.ipID))\n}\n\n//SourcePort -- returns the appropriate source port\nfunc (p *Packet) SourcePort() uint16 {\n\tif p.ipHdr.ipProto == IPProtocolTCP {\n\t\treturn p.tcpHdr.sourcePort\n\t}\n\n\treturn p.udpHdr.sourcePort\n}\n\n//DestPort -- returns the appropriate destination port\nfunc (p *Packet) DestPort() uint16 {\n\tif p.ipHdr.ipProto == IPProtocolTCP {\n\t\treturn p.tcpHdr.destinationPort\n\t}\n\n\treturn p.udpHdr.destinationPort\n}\n\n//SourceAddress returns the source IP\nfunc (p *Packet) SourceAddress() net.IP {\n\treturn p.ipHdr.sourceAddress\n}\n\n//DestinationAddress returns the destination address\nfunc (p *Packet) DestinationAddress() net.IP {\n\treturn p.ipHdr.destinationAddress\n}\n\n//TCPSeqNum returns tcp sequence number\nfunc (p *Packet) TCPSeqNum() uint32 {\n\treturn p.tcpHdr.tcpSeq\n}\n\n//TCPAckNum returns tcp ack number\nfunc (p *Packet) TCPAckNum() uint32 {\n\treturn p.tcpHdr.tcpAck\n}\n\n//IPProto returns the L4 protocol\nfunc (p 
*Packet) IPProto() uint8 {\n\treturn p.ipHdr.ipProto\n}\n\n//IPTotalLen returns the total length of the packet\nfunc (p *Packet) IPTotalLen() uint16 {\n\treturn p.ipHdr.ipTotalLength\n}\n\n//IPHeaderLen returns the ip header length\nfunc (p *Packet) IPHeaderLen() uint8 {\n\treturn p.ipHdr.ipHeaderLen\n}\n\n//GetBuffer returns the slice representing the buffer at offset specified\nfunc (p *Packet) GetBuffer(offset int) []byte {\n\treturn p.ipHdr.Buffer[offset:]\n}\n\n// IPversion returns the version of ip packet\nfunc (p *Packet) IPversion() IPver {\n\treturn p.ipHdr.version\n}\n\n//TestGetTCPPacket is used by other test code when they need to create a packet\nfunc TestGetTCPPacket(srcIP, dstIP net.IP, srcPort, dstPort uint16) *Packet {\n\tp := &Packet{}\n\n\tp.ipHdr.destinationAddress = dstIP\n\tp.tcpHdr.destinationPort = dstPort\n\tp.ipHdr.sourceAddress = srcIP\n\tp.tcpHdr.sourcePort = srcPort\n\tp.ipHdr.ipProto = IPProtocolTCP\n\n\treturn p\n}\n"
  },
  {
    "path": "controller/pkg/packet/packet_test.go",
    "content": "// +build !windows\n\npackage packet\n\nimport (\n\t\"crypto/rand\"\n\t\"encoding/hex\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n)\n\ntype SamplePacketName int\n\nconst (\n\tsynBadTCPChecksum SamplePacketName = iota\n\tsynGoodTCPChecksum\n\tsynIHLTooBig\n\tsynIPLenTooSmall\n\tsynMissingBytes\n\tsynBadIPChecksum\n\tloopbackAddress = \"127.0.0.1\"\n)\n\nvar ipv6UDPPacket = \"60000000009f113f20010470e5bf10960002009900c1001020010470e5bf10011cc773ff65f5a2f700a1b4d1009fd3c93081940201033011020429cdb180020300ffcf0401030201030441303f041480004f4db1aadcadbc89affa118dbd53824c6b050201030203010a1d040774616368796f6e040c9069a445532f20d9a57844f704088c8c110c5bbf5ed80439aafc5aa6c6c8364b13f14c807562e50793abc31e99170affd717a969b032112f5df9f2a5a9e661243cfa4d37614e0aca880c74881325222831\"\n\nvar testPackets = [][]byte{\n\t// SYN packet captured from 'telnet localhost 99'.\n\t// TCP checksum is wrong.\n\t{0x45, 0x10, 0x00, 0x3c, 0xaa, 0x2e, 0x40, 0x00, 0x40, 0x06, 0x92,\n\t\t0x7b, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00, 0x00, 0x01, 0xb2, 0x64, 0x00, 0x63, 0x58, 0xd1,\n\t\t0x24, 0xd9, 0x00, 0x00, 0x00, 0x00, 0xa0, 0x02, 0xaa, 0xaa, 0xfe, 0x30, 0x00, 0x00, 0x02,\n\t\t0x04, 0xff, 0xd7, 0x04, 0x02, 0x08, 0x0a, 0x00, 0xc5, 0x8e, 0xf7, 0x00, 0x00, 0x00, 0x00,\n\t\t0x01, 0x03, 0x03, 0x07},\n\n\t// SYN packet captured from 'telnet localhost 99'.\n\t// Everything is correct.\n\t{0x45, 0x10, 0x00, 0x3c, 0xec, 0x6c, 0x40, 0x00, 0x40, 0x06, 0x50,\n\t\t0x3d, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00, 0x00, 0x01, 0x8c, 0x80, 0x00, 0x63, 0x2c, 0x32,\n\t\t0xa8, 0xd6, 0x00, 0x00, 0x00, 0x00, 0xa0, 0x02, 0xaa, 0xaa, 0xfe, 0x88, 0x00, 0x00, 0x02,\n\t\t0x04, 0xff, 0xd7, 0x04, 0x02, 0x08, 0x0a, 0xff, 0xff, 0x44, 0xba, 0x00, 0x00, 0x00, 0x00,\n\t\t0x01, 0x03, 0x03, 0x07},\n\n\t// SYN packet captured from 'telnet localhost 99'.\n\t// IHL (IP header length) is wrong (too big, value = 6 should be 5)\n\t{0x46, 0x10, 0x00, 0x3c, 0xaa, 0x2e, 0x40, 0x00, 0x40, 0x06, 0x92,\n\t\t0x7b, 0x7f, 
0x00, 0x00, 0x01, 0x7f, 0x00, 0x00, 0x01, 0xb2, 0x64, 0x00, 0x63, 0x58, 0xd1,\n\t\t0x24, 0xd9, 0x00, 0x00, 0x00, 0x00, 0xa0, 0x02, 0xaa, 0xaa, 0xfe, 0x30, 0x00, 0x00, 0x02,\n\t\t0x04, 0xff, 0xd7, 0x04, 0x02, 0x08, 0x0a, 0x00, 0xc5, 0x8e, 0xf7, 0x00, 0x00, 0x00, 0x00,\n\t\t0x01, 0x03, 0x03, 0x07},\n\n\t// The IP packet length is incorrect (too small, value=38, should be 40)\n\t{0x45, 0x10, 0x00, 0x26, 0xaa, 0x2e, 0x40, 0x00, 0x40, 0x06, 0x92,\n\t\t0x7b, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00, 0x00, 0x01, 0xb2, 0x64, 0x00, 0x63, 0x58, 0xd1,\n\t\t0x24, 0xd9, 0x00, 0x00, 0x00, 0x00, 0xa0, 0x02, 0xaa, 0xaa, 0xfe, 0x30},\n\n\t// SYN packet captured from 'telnet localhost 99'.\n\t// Packet is too short, missing two bytes.\n\t{0x45, 0x10, 0x00, 0x3c, 0xaa, 0x2e, 0x40, 0x00, 0x40, 0x06, 0x92,\n\t\t0x7b, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00, 0x00, 0x01, 0xb2, 0x64, 0x00, 0x63, 0x58, 0xd1,\n\t\t0x24, 0xd9, 0x00, 0x00, 0x00, 0x00, 0xa0, 0x02, 0xaa, 0xaa, 0xfe, 0x30, 0x00, 0x00, 0x02,\n\t\t0x04, 0xff, 0xd7, 0x04, 0x02, 0x08, 0x0a, 0x00, 0xc5, 0x8e, 0xf7, 0x00, 0x00, 0x00, 0x00,\n\t\t0x01, 0x03},\n\n\t// SYN packet captured from 'telnet localhost 99'.\n\t// IP checksum is wrong (set to zero)\n\t{0x45, 0x10, 0x00, 0x3c, 0xaa, 0x2e, 0x40, 0x00, 0x40, 0x06, 0x00,\n\t\t0x00, 0x7f, 0x00, 0x00, 0x01, 0x7f, 0x00, 0x00, 0x01, 0xb2, 0x64, 0x00, 0x63, 0x58, 0xd1,\n\t\t0x24, 0xd9, 0x00, 0x00, 0x00, 0x00, 0xa0, 0x02, 0xaa, 0xaa, 0xfe, 0x30, 0x00, 0x00, 0x02,\n\t\t0x04, 0xff, 0xd7, 0x04, 0x02, 0x08, 0x0a, 0x00, 0xc5, 0x8e, 0xf7, 0x00, 0x00, 0x00, 0x00,\n\t\t0x01, 0x03, 0x03, 0x07}}\n\nfunc TestGoodPacket(t *testing.T) {\n\n\tt.Parallel()\n\tpkt := getTestPacket(t, synGoodTCPChecksum)\n\tt.Log(pkt.PacketToStringTCP())\n\n\tif !pkt.VerifyIPv4Checksum() {\n\t\tt.Error(\"Test packet IP checksum failed\")\n\t}\n\n\tif !pkt.VerifyTCPChecksum() {\n\t\tt.Error(\"TCP checksum failed\")\n\t}\n\n\tif pkt.DestPort() != 99 {\n\t\tt.Error(\"Unexpected destination port\")\n\t}\n\n\tif pkt.SourcePort() != 35968 
{\n\t\tt.Error(\"Unexpected source port\")\n\t}\n}\n\nfunc TestBadTCPChecknum(t *testing.T) {\n\n\tt.Parallel()\n\tpkt := getTestPacket(t, synBadTCPChecksum)\n\tif !pkt.VerifyIPv4Checksum() {\n\t\tt.Error(\"Test packet IP checksum failed\")\n\t}\n\n\tif pkt.VerifyTCPChecksum() {\n\t\tt.Error(\"Expected TCP checksum failure\")\n\t}\n}\n\nfunc TestPartialChecksum(t *testing.T) {\n\t// Computes a checksum over the given slice.\n\tchecksum := func(buf []byte) uint16 {\n\t\tchecksumDelta := func(buf []byte) uint16 {\n\n\t\t\tsum := uint32(0)\n\n\t\t\tfor ; len(buf) >= 2; buf = buf[2:] {\n\t\t\t\tsum += uint32(buf[0])<<8 | uint32(buf[1])\n\t\t\t}\n\t\t\tif len(buf) > 0 {\n\t\t\t\tsum += uint32(buf[0]) << 8\n\t\t\t}\n\t\t\tfor sum > 0xffff {\n\t\t\t\tsum = (sum >> 16) + (sum & 0xffff)\n\t\t\t}\n\t\t\treturn uint16(sum)\n\t\t}\n\n\t\tsum := checksumDelta(buf)\n\t\tcsum := ^sum\n\t\treturn csum\n\t}\n\n\tfor i := 0; i < 1000; i++ {\n\t\tvar randBytes [1500]byte\n\n\t\trand.Read(randBytes[:]) // nolint\n\n\t\tcsum := checksum(randBytes[:])\n\n\t\tpCsum := partialChecksum(0, randBytes[:500])\n\t\tpCsum = partialChecksum(pCsum, randBytes[500:1000])\n\t\tpCsum = partialChecksum(pCsum, randBytes[1000:])\n\t\tfCSum := finalizeChecksum(pCsum)\n\n\t\tif csum != fCSum {\n\t\t\tt.Error(\"Checksum failed\")\n\t\t}\n\t}\n\n}\n\nfunc TestAddresses(t *testing.T) {\n\n\tt.Parallel()\n\tpkt := getTestPacket(t, synBadTCPChecksum)\n\n\tsrc := pkt.SourceAddress().String()\n\tif src != loopbackAddress {\n\t\tt.Errorf(\"Unexpected source address %s\", src)\n\t}\n\tdest := pkt.DestinationAddress().String()\n\tif dest != loopbackAddress {\n\t\tt.Errorf(\"Unexpected destination address %s\", src)\n\t}\n}\n\nfunc TestEmptyPacketNoPayload(t *testing.T) {\n\n\tt.Parallel()\n\tpkt := getTestPacket(t, synBadTCPChecksum)\n\n\tdata := pkt.ipHdr.Buffer\n\tif len(data) != 60 {\n\t\tt.Error(\"Test SYN packet should have no TCP payload\")\n\t}\n}\n\n/*\nfunc TestEmptyPacketNoTags(t *testing.T) 
{\n\n\tt.Parallel()\n\tpkt := getTestPacket(t, synBadTCPChecksum)\n\n\tlabels := pkt.ReadPayloadTags()\n\tif len(labels) != 0 {\n\t\tt.Error(\"Test SYN packet should have no labels\")\n\t}\n\n\textracted := pkt.ExtractPayloadTags()\n\tif len(extracted) != 0 {\n\t\tt.Error(\"Test SYN packet should have no extractable labels\")\n\t}\n\n\tpkt.connection.TCPDataDetach()\n\tif len(pkt.Bytes) != 60 {\n\t\tt.Error(\"Test SYN packet should have no TCP data at all\")\n\t}\n}\n*/\n\nfunc TestExtractedBytesStillGood(t *testing.T) {\n\n\tt.Parallel()\n\tpkt := getTestPacket(t, synBadTCPChecksum)\n\n\t// Extract unmodified bytes and feed them back in\n\tbytes := pkt.ipHdr.Buffer\n\tpkt2, err := New(0, bytes, \"0\", true)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif !pkt2.VerifyIPv4Checksum() {\n\t\tt.Error(\"Test packet2 IP checksum failed\")\n\t}\n}\n\nfunc TestLongerIPHeader(t *testing.T) {\n\n\tt.Parallel()\n\terr := getTestPacketWithError(synIHLTooBig)\n\tt.Log(err)\n\tif err == nil {\n\t\tt.Error(\"Expected failure given too long IP header length\")\n\t}\n}\n\nfunc TestShortPacketLength(t *testing.T) {\n\n\tt.Parallel()\n\terr := getTestPacketWithError(synIPLenTooSmall)\n\tt.Log(err)\n\tif err == nil {\n\t\tt.Error(\"Expected failure given too short IP header length\")\n\t}\n}\n\nfunc TestShortBuffer(t *testing.T) {\n\n\tt.Parallel()\n\terr := getTestPacketWithError(synMissingBytes)\n\tt.Log(err)\n\tif err == nil {\n\t\tt.Error(\"Expected failure given short (truncated) packet\")\n\t}\n}\n\nfunc TestSetChecksum(t *testing.T) {\n\n\tt.Parallel()\n\tpkt := getTestPacket(t, synBadIPChecksum)\n\tt.Log(pkt.PacketToStringTCP())\n\tif pkt.VerifyIPv4Checksum() {\n\t\tt.Error(\"Expected bad IP checksum given it is wrong\")\n\t}\n\n\tpkt.UpdateIPv4Checksum()\n\tt.Log(pkt.PacketToStringTCP())\n\tif !pkt.VerifyIPv4Checksum() {\n\t\tt.Error(\"IP checksum is wrong after update\")\n\t}\n}\n\nfunc TestSetTCPChecksum(t *testing.T) {\n\n\tt.Parallel()\n\tpkt := getTestPacket(t, 
synBadTCPChecksum)\n\tt.Log(pkt.PacketToStringTCP())\n\tif pkt.VerifyTCPChecksum() {\n\t\tt.Error(\"Expected bad TCP checksum given it is wrong\")\n\t}\n\n\tpkt.UpdateTCPChecksum()\n\tt.Log(pkt.PacketToStringTCP())\n\tif !pkt.VerifyTCPChecksum() {\n\t\tt.Error(\"TCP checksum is wrong after update\")\n\t}\n}\n\nfunc TestAddTag(t *testing.T) {\n\n\t/*\n\t\tt.Parallel()\n\t\tlabels := []string{\"TAG1\"}\n\t\tpkt := getTestPacket(t, synBadTCPChecksum)\n\t\tif !pkt.VerifyIPv4Checksum() {\n\t\t\tt.Error(\"Test packet IP checksum failed\")\n\t\t}\n\n\t\ts := pkt.String()\n\t\tt.Log(s)\n\n\t\tpkt.AttachPayloadTags(labels)\n\t\tif !pkt.VerifyIPv4Checksum() {\n\t\t\tt.Error(\"Tagged packet IP checksum failed\")\n\t\t}\n\n\t\ts2 := pkt.String()\n\t\tt.Log(s2)\n\n\t\tdata := string(pkt.Bytes[pkt.connection.TCPDataStartBytes():])\n\t\tt.Log(\"Tag extracted from payload:\", data)\n\t\tif data != \" \"+labels[0] {\n\t\t\tt.Error(\"Tag extracted from payload data doesn't match input\")\n\t\t}\n\t*/\n}\n\nfunc TestExtractTags(t *testing.T) {\n\t/*\n\t\tt.Parallel()\n\t\tlabels := []string{\"TAG1\", \"TAG2\", \"TAG3\"}\n\t\tpkt := getTestPacket(t, synGoodTCPChecksum)\n\t\tt.Log(\"Initial packet\", pkt)\n\n\t\tpkt.AttachPayloadTags(labels)\n\t\tt.Log(\"With tags\", pkt)\n\n\t\tif !pkt.VerifyIPv4Checksum() {\n\t\t\tt.Error(\"Tagged packet checksum failed\")\n\t\t}\n\n\t\tif !pkt.VerifyTCPChecksum() {\n\t\t\tt.Error(\"Packet TCP checksum failed after adding tags\")\n\t\t}\n\n\t\tlabelsRead := pkt.ExtractPayloadTags()\n\t\tt.Log(\"Tags extracted\", pkt)\n\n\t\tif len(labelsRead) != 3 {\n\t\t\tt.Errorf(\"Wrote 3 labels but read %d\", len(labelsRead))\n\t\t}\n\n\t\tfor i := range labels {\n\t\t\tif labels[i] != labelsRead[i] {\n\t\t\t\tt.Error(\"Labels read do not match labels written\")\n\t\t\t}\n\t\t}\n\n\t\tif !pkt.VerifyIPv4Checksum() {\n\t\t\tt.Error(\"Packet IP checksum failed after extracting tags\")\n\t\t}\n\n\t\tif !pkt.VerifyTCPChecksum() {\n\t\t\tt.Error(\"Packet TCP checksum 
failed after extracting tags\")\n\t\t}\n\n\t\tlabelsGone := pkt.ReadPayloadTags()\n\t\tif len(labelsGone) != 0 {\n\t\t\tt.Error(\"Labels still present after extraction\")\n\t\t}\n\t*/\n}\n\nfunc TestAddTags(t *testing.T) {\n\t/*\n\t\tt.Parallel()\n\t\tlabels := []string{\"TAG1\", \"TAG2\", \"TAG3\"}\n\t\tpkt := getTestPacket(t, synBadTCPChecksum)\n\t\tif !pkt.VerifyIPv4Checksum() {\n\t\t\tt.Error(\"Test packet IP checksum failed\")\n\t\t}\n\n\t\tt.Log(pkt.String())\n\n\t\tpkt.AttachPayloadTags(labels)\n\t\tif !pkt.VerifyIPv4Checksum() {\n\t\t\tt.Error(\"Tagged packet checksum failed\")\n\t\t}\n\n\t\tt.Log(pkt.String())\n\n\t\t// Just reading tags does not remove them, so try reading twice to make sure\n\t\tfor n := 0; n < 2; n++ {\n\t\t\tlabelsRead := pkt.ReadPayloadTags()\n\t\t\tif len(labelsRead) != 3 {\n\t\t\t\tt.Errorf(\"Wrote 3 labels but read %d\", len(labelsRead))\n\t\t\t}\n\n\t\t\tfor i := range labels {\n\t\t\t\tif labels[i] != labelsRead[i] {\n\t\t\t\t\tt.Error(\"Labels read do not match labels written\")\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t*/\n}\n\nfunc TestUDP(t *testing.T) {\n\tudpPacket, _ := hex.DecodeString(\"4500004b1a294000401108b90a8080800a0c82b400350e1700371e316e4f8180000100010000000003617069066272616e636802696f0000010001c00c000100010000003b00046354e9fa\")\n\n\tpkt, _ := New(0, udpPacket, \"0\", true)\n\n\tif pkt.SourceAddress().String() != \"10.128.128.128\" {\n\t\tt.Error(\"source address udp parsing incorrect\")\n\t}\n\n\tif pkt.DestinationAddress().String() != \"10.12.130.180\" {\n\t\tt.Error(\"destination address udp parsing incorrect\")\n\t}\n\n\tif pkt.SourcePort() != uint16(53) {\n\t\tt.Error(\"source port incorrect udp\")\n\t}\n\n\tif pkt.DestPort() != uint16(3607) {\n\t\tt.Error(\"destination port incorrect udp\")\n\t}\n}\n\nfunc TestRawChecksums(t *testing.T) {\n\n\tt.Parallel()\n\tvar buf = []byte{0x01, 0x00, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7, 0x00, 0x00}\n\tc := checksum(buf)\n\tif c != 0x210E {\n\t\tt.Error(\"First checksum calculation 
failed\")\n\t}\n\n\tvar buf2 = []byte{0x01, 0x00, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7, 0x00}\n\tc2 := checksum(buf2)\n\tif c2 != 0x210E {\n\t\tt.Error(\"Second checksum calculation (odd bytes) failed\")\n\t}\n\n\tvar buf3 = []byte{0x45, 0x00, 0x00, 0x3c, 0x1c, 0x46, 0x40, 0x00, 0x40, 0x06,\n\t\t0x00, 0x00, 0xac, 0x10, 0x0a, 0x63, 0xac, 0x10, 0x0a, 0x0c}\n\tc3 := checksum(buf3)\n\tif c3 != 0xB1E6 {\n\t\tt.Error(\"Third checksum calculation failed\")\n\t}\n}\n\nfunc TestAuthOptions(t *testing.T) {\n\n\tAuthpacketByte := []byte{0x45, 0x00, 0x00, 0x40, 0x7b, 0xd6, 0x40, 0x00, 0x40, 0x06, 0xcd, 0x9d, 0x0a, 0xb0, 0x48, 0x05, 0x0a, 0xb4, 0x93, 0xdf, 0x01, 0xbb, 0x87, 0x78, 0x2d, 0xa9, 0xaf, 0xf1, 0x4d, 0x89, 0xcc, 0xeb, 0xb0, 0x12, 0xff, 0xff, 0xe7, 0x64, 0x00, 0x00, 0x02, 0x04, 0x05, 0x50, 0x01, 0x03, 0x03, 0x0b, 0x04, 0x02, 0x08, 0x0a, 0xbd, 0x1c, 0x73, 0x43, 0x22, 0x63, 0x9b, 0x9d, 0x22, 0x4, 0x0, 0x0}\n\n\tNonAuthpacketByte := []byte{0x45, 0x00, 0x00, 0x40, 0x7b, 0xd6, 0x40, 0x00, 0x40, 0x06, 0xcd, 0x9d, 0x0a, 0xb0, 0x48, 0x05, 0x0b, 0xb4, 0x93, 0xdf, 0x01, 0xbb, 0x87, 0x78, 0x2d, 0xa9, 0xaf, 0xf1, 0x4d, 0x89, 0xcc, 0xeb, 0xb0, 0x12, 0xff, 0xff, 0xe7, 0x64, 0x00, 0x00, 0x02, 0x04, 0x05, 0x50, 0x01, 0x03, 0x03, 0x0b, 0x04, 0x02, 0x08, 0x0a, 0xbd, 0x1c, 0x73, 0x43, 0x22, 0x63, 0x9b, 0x0, 0x2, 0x0, 0x0, 0x0}\n\n\tpkt, err := New(1, AuthpacketByte, \"2\", true)\n\tif err != nil {\n\t\tt.Errorf(\"Packet not parsed %s\", err)\n\t}\n\tif err := pkt.CheckTCPAuthenticationOption(4); err != nil {\n\t\tt.Errorf(\"There is no TCP AUTH Option\")\n\t}\n\tpkt, err = New(1, NonAuthpacketByte, \"2\", true)\n\tif err != nil {\n\t\tt.Errorf(\"Packet not parsed %s\", err)\n\t}\n\tif err = pkt.CheckTCPAuthenticationOption(4); err == nil {\n\t\tt.Errorf(\"There is no TCP AUTH Option but we are reporting it\")\n\t}\n\n}\n\nfunc TestNewPacketFunctions(t *testing.T) {\n\tpkt := getTestPacket(t, synGoodTCPChecksum)\n\tpkt.Print(123456, true)\n\n\tif pkt.SourcePort() != 35968 
{\n\t\tt.Error(\"Test packet source ip didnt match\")\n\t}\n\n\tif pkt.DestPort() != 99 {\n\t\tt.Error(\"Test packet dest port didnt match\")\n\t}\n\n\tif pkt.SourceAddress().String() != loopbackAddress {\n\t\tt.Error(\"Test packet source ip didnt match\")\n\t}\n\n\tif pkt.DestinationAddress().String() != loopbackAddress {\n\t\tt.Error(\"Test packet dest ip didnt match\")\n\t}\n\n\tif pkt.IPProto() != IPProtocolTCP {\n\t\tt.Error(\"Test packet ip proto didnt match\")\n\t}\n\n\tif pkt.IPTotalLen() != 60 {\n\t\tt.Error(\"Test packet total length is wrong\")\n\t}\n\n\tif pkt.IPHeaderLen() != 20 {\n\t\tt.Error(\"Test packet ip header length should be 20\")\n\t}\n\n\tif pkt.GetTCPFlags() != 2 {\n\t\tt.Error(\"test packet tcp flags didnt match\")\n\t}\n\n}\n\nfunc getTestPacket(t *testing.T, id SamplePacketName) *Packet {\n\n\ttmp := make([]byte, len(testPackets[id]))\n\tcopy(tmp, testPackets[id])\n\n\tpkt, err := New(0, tmp, \"0\", true)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\treturn pkt\n}\n\nfunc getTestPacketWithError(id SamplePacketName) error {\n\n\ttmp := make([]byte, len(testPackets[id]))\n\tcopy(tmp, testPackets[id])\n\n\t_, err := New(0, tmp, \"0\", true)\n\treturn err\n}\n\nfunc TestIPV6PacketParsing(t *testing.T) {\n\tbytes, _ := hex.DecodeString(ipv6UDPPacket)\n\tpkt, _ := New(0, bytes, \"0\", true)\n\n\tassert.Equal(t, pkt.SourceAddress().String(), \"2001:470:e5bf:1096:2:99:c1:10\", \"src addr did not match\")\n\tassert.Equal(t, pkt.DestinationAddress().String(), \"2001:470:e5bf:1001:1cc7:73ff:65f5:a2f7\", \"dst addr did not match\")\n}\n\nfunc TestReverseFlowPacket(t *testing.T) {\n\tbytes, _ := hex.DecodeString(ipv6UDPPacket)\n\tpkt, _ := New(0, bytes, \"0\", true)\n\n\tpkt.CreateReverseFlowPacket()\n\n\tassert.Equal(t, pkt.SourceAddress().String(), \"2001:470:e5bf:1001:1cc7:73ff:65f5:a2f7\", \"src addr did not match\")\n\tassert.Equal(t, pkt.DestinationAddress().String(), \"2001:470:e5bf:1096:2:99:c1:10\", \"dst addr did not match\")\n}\n\nfunc 
TestUDPTokenAttach(t *testing.T) {\n\tbytes, _ := hex.DecodeString(ipv6UDPPacket)\n\tpkt, _ := New(0, bytes, \"0\", true)\n\tdata := []byte(\"helloworld\")\n\t// Create UDP Option\n\tudpOptions := CreateUDPAuthMarker(UDPSynAckMask, uint16(len(data)))\n\n\tpkt.CreateReverseFlowPacket()\n\n\t// Attach the UDP data and token\n\tpkt.UDPTokenAttach(udpOptions, data)\n\n\tassert.Equal(t, string(pkt.ReadUDPToken()), \"helloworld\", \"token should match helloworld\")\n\n}\n\nfunc TestNewIpv4TCPPacket(t *testing.T) {\n\n\tsrcIP := \"10.0.0.30\"\n\tdstIP := \"10.0.0.25\"\n\tsrcPort := uint16(3000)\n\tdstPort := uint16(80)\n\ttcpFlags := uint8(0x2)\n\tcontext := uint64(100)\n\n\tp, err := NewIpv4TCPPacket(context, tcpFlags, srcIP, dstIP, srcPort, dstPort)\n\tassert.NoError(t, err)\n\tif p != nil {\n\t\tassert.Equal(t, V4, p.ipHdr.version, \"Version should equal\")\n\t\tassert.Equal(t, context, p.context, \"Context should equal\")\n\t\tassert.Equal(t, uint8(IPProtocolTCP), p.ipHdr.ipProto, \"Protocol should equal\")\n\t\tassert.Equal(t, srcIP, p.ipHdr.sourceAddress.String(), \"Source address should equal\")\n\t\tassert.Equal(t, dstIP, p.ipHdr.destinationAddress.String(), \"Destination address should equal\")\n\t\tassert.Equal(t, srcPort, p.tcpHdr.sourcePort, \"Source port should equal\")\n\t\tassert.Equal(t, dstPort, p.tcpHdr.destinationPort, \"Destination port should equal\")\n\t\tassert.Equal(t, tcpFlags, p.tcpHdr.tcpFlags, \"TCP flags should equal\")\n\t}\n}\n\nfunc TestNewIpv6TCPPacket(t *testing.T) {\n\n\tsrcIP := \"2001:db8:5002::8a2e:370:7334\"\n\tdstIP := \"2002:db9:5002::8a2b:370:7335\"\n\tsrcPort := uint16(3000)\n\tdstPort := uint16(80)\n\ttcpFlags := uint8(0x2)\n\tcontext := uint64(100)\n\n\tp, err := NewIpv6TCPPacket(context, tcpFlags, srcIP, dstIP, srcPort, dstPort)\n\tassert.NoError(t, err)\n\tif p != nil {\n\t\tassert.Equal(t, V6, p.ipHdr.version, \"Version should equal\")\n\t\tassert.Equal(t, context, p.context, \"Context should equal\")\n\t\tassert.Equal(t, 
uint8(IPProtocolTCP), p.ipHdr.ipProto, \"Protocol should equal\")\n\t\tassert.Equal(t, srcIP, p.ipHdr.sourceAddress.String(), \"Source address should equal\")\n\t\tassert.Equal(t, dstIP, p.ipHdr.destinationAddress.String(), \"Destination address should equal\")\n\t\tassert.Equal(t, srcPort, p.tcpHdr.sourcePort, \"Source port should equal\")\n\t\tassert.Equal(t, dstPort, p.tcpHdr.destinationPort, \"Destination port should equal\")\n\t\tassert.Equal(t, tcpFlags, p.tcpHdr.tcpFlags, \"TCP flags should equal\")\n\t}\n}\n"
  },
  {
    "path": "controller/pkg/packet/types.go",
    "content": "package packet\n\nimport \"net\"\n\nconst (\n\t// PacketTypeNetwork is enum for from-network packets\n\tPacketTypeNetwork = 0x1000\n\t// PacketTypeApplication is enum for from-application packets\n\tPacketTypeApplication = 0x2000\n\n\t// PacketStageIncoming is an enum for incoming stage\n\tPacketStageIncoming = 0x0100\n\t// PacketStageAuth is an enum for authentication stage\n\tPacketStageAuth = 0x0200\n\t// PacketStageService is an enum for crypto stage\n\tPacketStageService = 0x0400\n\t// PacketStageOutgoing is an enum for outgoing stage\n\tPacketStageOutgoing = 0x0800\n\n\t// PacketFailureCreate is the drop reason for packet\n\tPacketFailureCreate = 0x0010\n\t// PacketFailureAuth is a drop reason for packet due to authentication error\n\tPacketFailureAuth = 0x0020\n\t// PacketFailureService is a drop reason for packet due to crypto error\n\tPacketFailureService = 0x00040\n)\n\nfunc flagsToDir(flags uint64) string {\n\n\tif flags&PacketTypeApplication != 0 {\n\t\treturn \"<<<<<\"\n\t} else if flags&PacketTypeNetwork != 0 {\n\t\treturn \">>>>>\"\n\t}\n\treturn \"xxxxx\"\n}\n\nfunc flagsToStr(flags uint64) string {\n\n\ts := \"\"\n\tif flags&PacketTypeApplication != 0 {\n\t\ts = s + \"Application\"\n\t} else if flags&PacketTypeNetwork != 0 {\n\t\ts = s + \"Network\"\n\t}\n\n\tif flags&PacketStageIncoming != 0 {\n\t\ts = s + \"-Incoming\"\n\t} else if flags&PacketStageOutgoing != 0 {\n\t\ts = s + \"-Outgoing\"\n\t} else if flags&PacketStageAuth != 0 {\n\t\ts = s + \"-Auth\"\n\t} else if flags&PacketStageService != 0 {\n\t\ts = s + \"-Service\"\n\t}\n\n\tif flags&PacketFailureCreate != 0 {\n\t\ts = s + \"-(Fail Create)\"\n\t} else if flags&PacketFailureAuth != 0 {\n\t\ts = s + \"-(Fail Auth)\"\n\t} else if flags&PacketFailureService != 0 {\n\t\ts = s + \"-(Fail Service)\"\n\t}\n\treturn s\n}\n\nfunc tcpFlagsToStr(flags uint8) string {\n\ts := \"\"\n\tif flags&0x20 == 0 {\n\t\ts = s + \".\"\n\t} else {\n\t\ts = s + \"U\"\n\t}\n\tif flags&0x10 == 0 
{\n\t\ts = s + \".\"\n\t} else {\n\t\ts = s + \"A\"\n\t}\n\tif flags&0x08 == 0 {\n\t\ts = s + \".\"\n\t} else {\n\t\ts = s + \"P\"\n\t}\n\tif flags&0x04 == 0 {\n\t\ts = s + \".\"\n\t} else {\n\t\ts = s + \"R\"\n\t}\n\tif flags&0x02 == 0 {\n\t\ts = s + \".\"\n\t} else {\n\t\ts = s + \"S\"\n\t}\n\tif flags&0x01 == 0 {\n\t\ts = s + \".\"\n\t} else {\n\t\ts = s + \"F\"\n\t}\n\treturn s\n}\n\n// TCPFlagsToStr converts the TCP Flags to a string value that is human readable\nfunc TCPFlagsToStr(flags uint8) string {\n\ts := \"\"\n\tif flags&0x20 == 0 {\n\t\ts = s + \".\"\n\t} else {\n\t\ts = s + \"U\"\n\t}\n\tif flags&0x10 == 0 {\n\t\ts = s + \".\"\n\t} else {\n\t\ts = s + \"A\"\n\t}\n\tif flags&0x08 == 0 {\n\t\ts = s + \".\"\n\t} else {\n\t\ts = s + \"P\"\n\t}\n\tif flags&0x04 == 0 {\n\t\ts = s + \".\"\n\t} else {\n\t\ts = s + \"R\"\n\t}\n\tif flags&0x02 == 0 {\n\t\ts = s + \".\"\n\t} else {\n\t\ts = s + \"S\"\n\t}\n\tif flags&0x01 == 0 {\n\t\ts = s + \".\"\n\t} else {\n\t\ts = s + \"F\"\n\t}\n\treturn s\n}\n\n// IPver is the type defined for ip version\ntype IPver int\n\nconst (\n\t// V4 is the flag for ipv4\n\tV4 IPver = iota\n\t// V6 is the flag for ipv6\n\tV6\n)\n\ntype iphdr struct {\n\tBuffer []byte\n\n\t// IP Header fields\n\tipHeaderLen        uint8\n\tipProto            uint8\n\tipTotalLength      uint16\n\tipID               uint16\n\tipChecksum         uint16\n\tversion            IPver\n\tsourceAddress      net.IP\n\tdestinationAddress net.IP\n}\n\ntype tcphdr struct {\n\tsourcePort      uint16\n\tdestinationPort uint16\n\n\ttcpSeq         uint32\n\ttcpAck         uint32\n\ttcpDataOffset  uint8\n\ttcpFlags       uint8\n\ttcpChecksum    uint16\n\ttcpTotalLength uint16\n}\n\ntype udphdr struct {\n\tsourcePort      uint16\n\tdestinationPort uint16\n\tudpChecksum     uint16\n\tudpLength       uint16\n\tudpData         []byte\n}\n\ntype icmphdr struct {\n\ticmpType int8\n\ticmpCode int8\n}\n\n// PlatformMetadata structure\ntype PlatformMetadata interface 
{\n\tClone() PlatformMetadata\n}\n\n// Packet is the main packet structure\ntype Packet struct {\n\t// Metadata\n\tcontext uint64\n\n\t// Mark is the nfqueue Mark\n\tMark        string\n\tSetConnmark bool\n\tipHdr       iphdr\n\ttcpHdr      tcphdr\n\tudpHdr      udphdr\n\ticmpHdr     icmphdr\n\tl4flowhash  string\n\t// Service Metadata\n\tSvcMetadata interface{}\n\t// Connection Metadata\n\tConnectionMetadata interface{}\n\t// Platform Metadata (needed for Windows)\n\tPlatformMetadata PlatformMetadata\n}\n"
  },
  {
    "path": "controller/pkg/packetprocessor/packetprocessor.go",
    "content": "package packetprocessor\n\nimport (\n\tprovider \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/aclprovider\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/connection\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pucontext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n)\n\n// PacketProcessor is an interface for extending packet processing functions such\n// as encryption, deep packet inspection, etc. These functions are run inline during packet\n// processing. A services processor must implement this interface.\ntype PacketProcessor interface {\n\t// Initialize  initializes any ACLs that the processor requires\n\tInitialize(fq fqconfig.FilterQueue, p []provider.IptablesProvider)\n\n\t// Stop stops the packet processor\n\tStop() error\n\n\t// PreProcessTCPAppPacket will be called for application packets and return value of false means drop packet.\n\tPreProcessTCPAppPacket(p *packet.Packet, context *pucontext.PUContext, conn *connection.TCPConnection) bool\n\n\t// PostProcessTCPAppPacket will be called for application packets and return value of false means drop packet.\n\tPostProcessTCPAppPacket(p *packet.Packet, action interface{}, context *pucontext.PUContext, conn *connection.TCPConnection) bool\n\n\t// PreProcessTCPNetPacket will be called for network packets and return value of false means drop packet\n\tPreProcessTCPNetPacket(p *packet.Packet, context *pucontext.PUContext, conn *connection.TCPConnection) bool\n\n\t// PostProcessTCPNetPacket will be called for network packets and return value of false means drop packet\n\tPostProcessTCPNetPacket(p *packet.Packet, action interface{}, claims *tokens.ConnectionClaims, context *pucontext.PUContext, conn *connection.TCPConnection) bool\n\n\t// PreProcessUDPAppPacket will be called for application packets and return value 
of false means drop packet\n\tPreProcessUDPAppPacket(p *packet.Packet, context *pucontext.PUContext, conn *connection.UDPConnection, packetType uint8) bool\n\n\t// PostProcessUDPAppPacket will be called for application packets and return value of false means drop packet.\n\tPostProcessUDPAppPacket(p *packet.Packet, action interface{}, context *pucontext.PUContext, conn *connection.UDPConnection) bool\n\n\t// PreProcessUDPNetPacket will be called for network packets and return value of false means drop packet\n\tPreProcessUDPNetPacket(p *packet.Packet, context *pucontext.PUContext, conn *connection.UDPConnection) bool\n\n\t// PostProcessUDPNetPacket will be called for network packets and return value of false means drop packet\n\tPostProcessUDPNetPacket(p *packet.Packet, action interface{}, claims *tokens.ConnectionClaims, context *pucontext.PUContext, conn *connection.UDPConnection) bool\n}\n"
  },
  {
    "path": "controller/pkg/packettracing/packettracing.go",
    "content": "package packettracing\n\n// TracingDirection is used to configure the direction for which we want to trace packets\ntype TracingDirection int\n\n// TracingDirection enumerates all possible states\nconst (\n\tDisabled        TracingDirection = 0\n\tNetworkOnly     TracingDirection = 1\n\tApplicationOnly TracingDirection = 2\n\tInvalid         TracingDirection = 4\n)\n\n// PacketEvent is a string for our packet decision\ntype PacketEvent string\n\n// Enum for all packet events\nconst (\n\tPacketDropped  PacketEvent = \"Dropped\"\n\tPacketReceived PacketEvent = \"Received\"\n\tPacketSent     PacketEvent = \"Transmitted\"\n)\n\n// IsNetworkPacketTraced checks if network mode packet tracing is enabled\nfunc IsNetworkPacketTraced(direction TracingDirection) bool {\n\treturn (direction&NetworkOnly != 0)\n}\n\n// IsApplicationPacketTraced checks if application mode packet tracing is enabled\nfunc IsApplicationPacketTraced(direction TracingDirection) bool {\n\treturn (direction&ApplicationOnly != 0)\n}\n"
  },
  {
    "path": "controller/pkg/pingconfig/pingconfig.go",
    "content": "package pingconfig\n\nimport (\n\t\"sync\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n)\n\n// PingConfig holds ping configuration for this connection.\ntype PingConfig struct {\n\tsocketFd     uintptr\n\tsocketClosed bool\n\tpingID       string\n\titerationID  int\n\tappListening bool\n\tseqNum       uint32\n\tpingReport   *collector.PingReport\n\n\tStartTime time.Time\n\n\tsync.RWMutex\n}\n\n// New returns a new locked access to PingConfig handle.\nfunc New() *PingConfig {\n\treturn &PingConfig{}\n}\n\n// SocketFd returns socket file descriptor.\nfunc (p *PingConfig) SocketFd() uintptr {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\treturn p.socketFd\n}\n\n// SetSocketFd sets socket file descriptor.\nfunc (p *PingConfig) SetSocketFd(socketFd uintptr) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.socketFd = socketFd\n}\n\n// SocketClosed returns socket closed.\nfunc (p *PingConfig) SocketClosed() bool {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\treturn p.socketClosed\n}\n\n// SetSocketClosed sets socket closed.\nfunc (p *PingConfig) SetSocketClosed(socketClosed bool) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.socketClosed = socketClosed\n}\n\n// PingID returns ping ID.\nfunc (p *PingConfig) PingID() string {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\treturn p.pingID\n}\n\n// SetPingID sets ping ID.\nfunc (p *PingConfig) SetPingID(pingID string) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.pingID = pingID\n}\n\n// IterationID returns iteration ID.\nfunc (p *PingConfig) IterationID() int {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\treturn p.iterationID\n}\n\n// SetIterationID sets iteration ID.\nfunc (p *PingConfig) SetIterationID(iterationID int) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.iterationID = iterationID\n}\n\n// ApplicationListening returns true if an app is listening.\nfunc (p *PingConfig) ApplicationListening() bool {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\treturn p.appListening\n}\n\n// SetApplicationListening sets appListening.\nfunc (p 
*PingConfig) SetApplicationListening(appListening bool) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.appListening = appListening\n}\n\n// SeqNum returns tcp sequence number.\nfunc (p *PingConfig) SeqNum() uint32 {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\treturn p.seqNum\n}\n\n// SetSeqNum sets tcp sequence number.\nfunc (p *PingConfig) SetSeqNum(seqNum uint32) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.seqNum = seqNum\n}\n\n// PingReport returns ping report.\nfunc (p *PingConfig) PingReport() *collector.PingReport {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\treturn p.pingReport\n}\n\n// SetPingReport sets ping report.\nfunc (p *PingConfig) SetPingReport(pingReport *collector.PingReport) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.pingReport = pingReport\n}\n"
  },
  {
    "path": "controller/pkg/pingconfig/pingconfig_test.go",
    "content": "package pingconfig\n\nimport (\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/require\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n)\n\nfunc Test_NewPingConfig(t *testing.T) {\n\n\tp := New()\n\trequire.NotNil(t, p)\n\n\tp.SetSocketFd(4)\n\trequire.Equal(t, uintptr(4), p.SocketFd())\n\n\tp.SetSocketClosed(true)\n\trequire.True(t, p.SocketClosed())\n\n\tp.SetPingID(\"abc\")\n\trequire.Equal(t, \"abc\", p.PingID())\n\n\tp.SetIterationID(2)\n\trequire.Equal(t, 2, p.IterationID())\n\n\tp.SetApplicationListening(true)\n\trequire.True(t, p.ApplicationListening())\n\n\tp.SetSeqNum(2323)\n\trequire.Equal(t, uint32(2323), p.SeqNum())\n\n\tpr := &collector.PingReport{PingID: \"xyz\"}\n\tp.SetPingReport(pr)\n\trequire.Equal(t, pr, p.PingReport())\n}\n"
  },
  {
    "path": "controller/pkg/pkiverifier/pkiverifier.go",
    "content": "package pkiverifier\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"crypto/elliptic\"\n\t\"crypto/x509\"\n\t\"errors\"\n\t\"math/big\"\n\t\"time\"\n\n\tjwt \"github.com/dgrijalva/jwt-go\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\t// defaultValidity is the default cache validity in seconds\n\tdefaultValidity = 1\n)\n\n// PKITokenIssuer is the interface of an object that can issue a PKI token.\ntype PKITokenIssuer interface {\n\tCreateTokenFromCertificate(*x509.Certificate, []string) ([]byte, error)\n}\n\n// PKITokenVerifier is the interface of an object that can verify a PKI token.\ntype PKITokenVerifier interface {\n\tVerify([]byte) (*DatapathKey, error)\n}\n\ntype verifierClaims struct {\n\tX    *big.Int\n\tY    *big.Int\n\tTags []string `json:\"tags,omitempty\"`\n\tjwt.StandardClaims\n}\n\n// PKIControllerInfo holds the controller information about public keys\ntype PKIControllerInfo struct {\n\tNamespace      string // The namespace of the public key.\n\tController     string // The controller or control plane of the public key.\n\tSameController bool   // Does the public key come from the same controller\n}\n\n// PKIPublicKey holds information about public keys\ntype PKIPublicKey struct {\n\tPublicKey  *ecdsa.PublicKey\n\tController *PKIControllerInfo\n}\n\ntype tokenManager struct {\n\tpublicKeys []*PKIPublicKey\n\tprivateKey *ecdsa.PrivateKey\n\tsignMethod jwt.SigningMethod\n\tkeycache   cache.DataStore\n\tvalidity   time.Duration\n}\n\n// DatapathKey holds the data path key with the corresponding claims.\ntype DatapathKey struct {\n\tPublicKey  *ecdsa.PublicKey\n\tTags       []string\n\tExpiration time.Time\n\tController *PKIControllerInfo\n}\n\n// NewPKIIssuer initializes a new signer structure\nfunc NewPKIIssuer(privateKey *ecdsa.PrivateKey) PKITokenIssuer {\n\n\treturn &tokenManager{\n\t\tprivateKey: privateKey,\n\t\tsignMethod: jwt.SigningMethodES256,\n\t}\n}\n\n// NewPKIVerifier returns a new 
PKITokenVerifier.\nfunc NewPKIVerifier(publicKeys []*PKIPublicKey, cacheValidity time.Duration) PKITokenVerifier {\n\n\tvalidity := defaultValidity * time.Second\n\tif cacheValidity > 0 {\n\t\tvalidity = cacheValidity\n\t}\n\n\treturn &tokenManager{\n\t\tpublicKeys: publicKeys,\n\t\tsignMethod: jwt.SigningMethodES256,\n\t\tkeycache:   cache.NewCacheWithExpiration(\"PKIVerifierKey\", validity),\n\t\tvalidity:   validity,\n\t}\n}\n\n// Verify verifies a token and returns the public key\nfunc (p *tokenManager) Verify(token []byte) (*DatapathKey, error) {\n\n\ttokenString := string(token)\n\tif pk, err := p.keycache.Get(tokenString); err == nil {\n\t\treturn pk.(*DatapathKey), err\n\t}\n\n\tclaims := &verifierClaims{}\n\tvar t *jwt.Token\n\tvar err error\n\tfor _, pk := range p.publicKeys {\n\n\t\tif pk == nil || pk.PublicKey == nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tt, err = jwt.ParseWithClaims(tokenString, claims, func(*jwt.Token) (interface{}, error) { // nolint\n\t\t\treturn pk.PublicKey, nil\n\t\t})\n\t\tif err != nil || !t.Valid {\n\t\t\tcontinue\n\t\t}\n\n\t\texpTime := time.Unix(claims.ExpiresAt, 0)\n\t\tdp := &DatapathKey{\n\t\t\tPublicKey: &ecdsa.PublicKey{\n\t\t\t\tCurve: elliptic.P256(),\n\t\t\t\tX:     claims.X,\n\t\t\t\tY:     claims.Y,\n\t\t\t},\n\t\t\tTags:       claims.Tags,\n\t\t\tExpiration: expTime,\n\t\t\tController: pk.Controller,\n\t\t}\n\n\t\tp.keycache.AddOrUpdate(tokenString, dp)\n\n\t\t// if the token expires before our default validity, update the timer\n\t\t// so that we expire it no longer than its validity.\n\t\tif time.Now().Add(p.validity).After(expTime) {\n\t\t\tif err := p.keycache.SetTimeOut(tokenString, time.Until(expTime)); err != nil {\n\t\t\t\tzap.L().Warn(\"Failed to update cache validity for token\", zap.Error(err))\n\t\t\t}\n\t\t}\n\n\t\treturn dp, nil\n\t}\n\n\treturn nil, errors.New(\"unable to verify token against any available public key\")\n}\n\n// CreateTokenFromCertificate creates and signs a token\nfunc (p *tokenManager) 
CreateTokenFromCertificate(cert *x509.Certificate, tags []string) ([]byte, error) {\n\n\t// Combine the application claims with the standard claims\n\tclaims := &verifierClaims{\n\t\tX:    cert.PublicKey.(*ecdsa.PublicKey).X,\n\t\tY:    cert.PublicKey.(*ecdsa.PublicKey).Y,\n\t\tTags: tags,\n\t}\n\tclaims.ExpiresAt = cert.NotAfter.Unix()\n\n\t// Create the token and sign with our key\n\tstrtoken, err := jwt.NewWithClaims(p.signMethod, claims).SignedString(p.privateKey)\n\tif err != nil {\n\t\treturn []byte{}, err\n\t}\n\n\treturn []byte(strtoken), nil\n}\n"
  },
  {
    "path": "controller/pkg/pkiverifier/pkiverifier_test.go",
    "content": "// +build !windows\n\npackage pkiverifier\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"testing\"\n\t\"time\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n)\n\nvar (\n\tkeyPEM = `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIPkiHqtH372JJdAG/IxJlE1gv03cdwa8Lhg2b3m/HmbyoAoGCCqGSM49\nAwEHoUQDQgAEAfAL+AfPj/DnxrU6tUkEyzEyCxnflOWxhouy1bdzhJ7vxMb1vQ31\n8ZbW/WvMN/ojIXqXYrEpISoojznj46w64w==\n-----END EC PRIVATE KEY-----`\n\tcaPool = `-----BEGIN CERTIFICATE-----\nMIIBhTCCASwCCQC8b53yGlcQazAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABJxneTUqhbtgEIwpKUUzwz3h92SqcOdIw3mfQkMjg3Vobvr6JKlpXYe9xhsN\nrygJmLhMAN9gjF9qM9ybdbe+m3owCgYIKoZIzj0EAwIDRwAwRAIgC1fVMqdBy/o3\njNUje/Hx0fZF9VDyUK4ld+K/wF3QdK4CID1ONj/Kqinrq2OpjYdkgIjEPuXoOoR1\ntCym8dnq4wtH\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIB3jCCAYOgAwIBAgIJALsW7pyC2ERQMAoGCCqGSM49BAMCMEsxCzAJBgNVBAYT\nAlVTMQswCQYDVQQIDAJDQTEMMAoGA1UEBwwDU0pDMRAwDgYDVQQKDAdUcmlyZW1l\nMQ8wDQYDVQQDDAZ1YnVudHUwHhcNMTYwOTI3MjI0OTAwWhcNMjYwOTI1MjI0OTAw\nWjBLMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4G\nA1UECgwHVHJpcmVtZTEPMA0GA1UEAwwGdWJ1bnR1MFkwEwYHKoZIzj0CAQYIKoZI\nzj0DAQcDQgAE4c2Fd7XeIB1Vfs51fWwREfLLDa55J+NBalV12CH7YEAnEXjl47aV\ncmNqcAtdMUpf2oz9nFVI81bgO+OSudr3CqNQME4wHQYDVR0OBBYEFOBftuI09mmu\nrXjqDyIta1gT8lqvMB8GA1UdIwQYMBaAFOBftuI09mmurXjqDyIta1gT8lqvMAwG\nA1UdEwQFMAMBAf8wCgYIKoZIzj0EAwIDSQAwRgIhAMylAHhbFA0KqhXIFiXNpEbH\nJKaELL6UXXdeQ5yup8q+AiEAh5laB9rbgTymjaANcZ2YzEZH4VFS3CKoSdVqgnwC\ndW4=\n-----END CERTIFICATE-----`\n\n\tcertPEM = `-----BEGIN 
CERTIFICATE-----\nMIIBhjCCASwCCQCPCdgp39gHJTAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABAHwC/gHz4/w58a1OrVJBMsxMgsZ35TlsYaLstW3c4Se78TG9b0N9fGW1v1r\nzDf6IyF6l2KxKSEqKI854+OsOuMwCgYIKoZIzj0EAwIDSAAwRQIgQwQn0jnK/XvD\nKxgQd/0pW5FOAaB41cMcw4/XVlphO1oCIQDlGie+WlOMjCzrV0Xz+XqIIi1pIgPT\nIG7Nv+YlTVp5qA==\n-----END CERTIFICATE-----`\n)\n\nfunc TestNewConfig(t *testing.T) {\n\tConvey(\"When I create a new PKI configuration\", t, func() {\n\t\tkey, cert, _, err := crypto.LoadAndVerifyECSecrets([]byte(keyPEM), []byte(certPEM), []byte(caPool))\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"When I use NewPKIIssuer with valid keys, it should succeed \", func() {\n\t\t\tp := NewPKIIssuer(key).(*tokenManager)\n\t\t\tSo(p, ShouldNotBeNil)\n\t\t\tSo(p.validity, ShouldEqual, 0)\n\t\t\tSo(p.privateKey, ShouldEqual, key)\n\t\t\tSo(p.publicKeys, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I use NewPKIVerifier with valid keys, it should succeed \", func() {\n\t\t\tpkiPublicKey := &PKIPublicKey{PublicKey: cert.PublicKey.(*ecdsa.PublicKey)}\n\t\t\tp := NewPKIVerifier([]*PKIPublicKey{pkiPublicKey}, -1).(*tokenManager)\n\t\t\tSo(p, ShouldNotBeNil)\n\t\t\tSo(p.validity, ShouldEqual, defaultValidity*time.Second)\n\t\t\tSo(p.privateKey, ShouldBeNil)\n\t\t\tSo(p.publicKeys, ShouldResemble, []*PKIPublicKey{pkiPublicKey})\n\t\t})\n\t\tConvey(\"When I use NewPKIVerifier with valid keys and a custom validity, it should succeed \", func() {\n\t\t\tpkiPublicKey := &PKIPublicKey{PublicKey: cert.PublicKey.(*ecdsa.PublicKey)}\n\t\t\tp := NewPKIVerifier([]*PKIPublicKey{pkiPublicKey}, 10*time.Second).(*tokenManager)\n\t\t\tSo(p, ShouldNotBeNil)\n\t\t\tSo(p.validity, ShouldEqual, 10*time.Second)\n\t\t\tSo(p.privateKey, ShouldBeNil)\n\t\t\tSo(p.publicKeys, 
ShouldResemble, []*PKIPublicKey{pkiPublicKey})\n\t\t})\n\t})\n}\n\nfunc TestCreateAndVerify(t *testing.T) {\n\tConvey(\"Given a valid verifier\", t, func() {\n\t\tkey, cert, _, err := crypto.LoadAndVerifyECSecrets([]byte(keyPEM), []byte(certPEM), []byte(caPool))\n\t\tSo(err, ShouldBeNil)\n\t\tp := NewPKIIssuer(key)\n\t\tpkiPublicKey := &PKIPublicKey{PublicKey: cert.PublicKey.(*ecdsa.PublicKey)}\n\t\tv := NewPKIVerifier([]*PKIPublicKey{pkiPublicKey}, -1).(*tokenManager)\n\t\tSo(p, ShouldNotBeNil)\n\t\tConvey(\"When I create a token\", func() {\n\t\t\ttoken, err1 := p.CreateTokenFromCertificate(cert, []string{\"sometag\"})\n\t\t\tSo(err1, ShouldBeNil)\n\t\t\trxtoken, err2 := v.Verify(token)\n\t\t\tSo(err2, ShouldBeNil)\n\t\t\tSo(*rxtoken.PublicKey.X, ShouldResemble, *cert.PublicKey.(*ecdsa.PublicKey).X)\n\t\t\tSo(*rxtoken.PublicKey.Y, ShouldResemble, *cert.PublicKey.(*ecdsa.PublicKey).Y)\n\t\t\tSo(rxtoken.PublicKey.Curve, ShouldResemble, cert.PublicKey.(*ecdsa.PublicKey).Curve)\n\t\t\tSo(rxtoken.Tags, ShouldResemble, []string{\"sometag\"})\n\t\t})\n\t})\n\n\tConvey(\"Given a valid verifier\", t, func() {\n\t\tkey, cert, _, err := crypto.LoadAndVerifyECSecrets([]byte(keyPEM), []byte(certPEM), []byte(caPool))\n\t\tSo(err, ShouldBeNil)\n\t\tp := NewPKIIssuer(key)\n\t\tpkiPublicKey := &PKIPublicKey{PublicKey: cert.PublicKey.(*ecdsa.PublicKey)}\n\t\tv := NewPKIVerifier([]*PKIPublicKey{pkiPublicKey}, -1).(*tokenManager)\n\t\tSo(p, ShouldNotBeNil)\n\t\tConvey(\"When I receive a bad token, I should get an error\", func() {\n\t\t\ttoken, err1 := p.CreateTokenFromCertificate(cert, []string{})\n\t\t\tSo(err1, ShouldBeNil)\n\t\t\ttoken = token[:len(token)-10]\n\t\t\t_, err2 := v.Verify(token)\n\t\t\tSo(err2, ShouldNotBeNil)\n\t\t})\n\t})\n\n\tConvey(\"Given an invalid verifier\", t, func() {\n\t\tkey, cert, _, err := crypto.LoadAndVerifyECSecrets([]byte(keyPEM), []byte(certPEM), []byte(caPool))\n\t\tSo(err, ShouldBeNil)\n\t\tp := NewPKIIssuer(key)\n\t\tpkiPublicKey := 
&PKIPublicKey{PublicKey: nil}\n\t\tv := NewPKIVerifier([]*PKIPublicKey{pkiPublicKey}, -1).(*tokenManager)\n\t\tSo(p, ShouldNotBeNil)\n\t\tConvey(\"When I receive a valid token, I should get an error\", func() {\n\t\t\ttoken, err1 := p.CreateTokenFromCertificate(cert, []string{})\n\t\t\tSo(err1, ShouldBeNil)\n\t\t\t_, err2 := v.Verify(token)\n\t\t\tSo(err2, ShouldNotBeNil)\n\t\t})\n\t})\n}\n\nfunc TestCaching(t *testing.T) {\n\tConvey(\"Given a valid verifier with a 1-second validity for the cache\", t, func() {\n\t\tkey, cert, _, err := crypto.LoadAndVerifyECSecrets([]byte(keyPEM), []byte(certPEM), []byte(caPool))\n\t\tSo(err, ShouldBeNil)\n\t\tp := NewPKIIssuer(key)\n\t\tpkiPublicKey := &PKIPublicKey{PublicKey: cert.PublicKey.(*ecdsa.PublicKey)}\n\t\tv := NewPKIVerifier([]*PKIPublicKey{pkiPublicKey}, 1*time.Second).(*tokenManager)\n\n\t\tSo(p, ShouldNotBeNil)\n\n\t\tConvey(\"When I receive a token\", func() {\n\t\t\ttoken, err1 := p.CreateTokenFromCertificate(cert, []string{})\n\t\t\tSo(err1, ShouldBeNil)\n\t\t\t_, err2 := v.Verify(token)\n\t\t\tSo(err2, ShouldBeNil)\n\n\t\t\tConvey(\"The cache should have the token \", func() {\n\t\t\t\t_, err := v.keycache.Get(string(token))\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t_, err2 := v.Verify(token)\n\t\t\t\tSo(err2, ShouldBeNil)\n\t\t\t})\n\n\t\t\tConvey(\"The cache should not have the token after 2 seconds \", func() {\n\t\t\t\ttime.Sleep(2 * time.Second)\n\t\t\t\t_, err := v.keycache.Get(string(token))\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/pucontext/pucontext.go",
    "content": "package pucontext\n\nimport (\n\t\"context\"\n\t\"crypto/ecdsa\"\n\t\"fmt\"\n\t\"net\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/minio/minio/pkg/wildcard\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/acls\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/lookup\"\n\t\"go.aporeto.io/underwater/core/tagutils\"\n\t\"go.uber.org/zap\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/tokenaccessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n)\n\ntype policies struct {\n\tobserveRejectRules *lookup.PolicyDB // Packet: Continue       Report:    Drop\n\trejectRules        *lookup.PolicyDB // Packet:     Drop       Report:    Drop\n\tobserveAcceptRules *lookup.PolicyDB // Packet: Continue       Report: Forward\n\tacceptRules        *lookup.PolicyDB // Packet:  Forward       Report: Forward\n\tobserveApplyRules  *lookup.PolicyDB // Packet:  Forward       Report: Forward\n\tencryptRules       *lookup.PolicyDB // Packet: Encrypt       Report: Encrypt\n}\n\ntype synTokenInfo struct {\n\tdatapathSecret  secrets.Secrets\n\tprivateKey      *ephemeralkeys.PrivateKey\n\tpublicKeyV1     []byte\n\tpublicKeySignV1 []byte\n\tpublicKeyV2     []byte\n\tpublicKeySignV2 []byte\n\ttoken           []byte\n}\n\n// 
PUContext holds data indexed by the PU ID\ntype PUContext struct {\n\tid                      string\n\thashID                  string\n\tusername                string\n\tautoport                bool\n\tmanagementID            string\n\tmanagementNamespace     string\n\tmanagementNamespaceHash string\n\tidentity                *policy.TagStore\n\tannotations             *policy.TagStore\n\tcompressedTags          *policy.TagStore\n\ttxt                     *policies\n\trcv                     *policies\n\tApplicationACLs         *acls.ACLCache\n\tnetworkACLs             *acls.ACLCache\n\texternalIPCache         cache.DataStore\n\tDNSACLs                 policy.DNSRuleList\n\tDNSProxyPort            string\n\tmark                    string\n\ttcpPorts                []string\n\tudpPorts                []string\n\tpuType                  common.PUType\n\tjwt                     string\n\tjwtExpiration           time.Time\n\tscopes                  []string\n\tExtension               interface{}\n\tcounters                *counters.Counters\n\tpuInfo                  *policy.PUInfo\n\tsynToken                *synTokenInfo\n\tctxCancel               context.CancelFunc\n\ttokenAccessor           tokenaccessor.TokenAccessor\n\tappDefaultFlowPolicy    *policy.FlowPolicy\n\tnetDefaultFlowPolicy    *policy.FlowPolicy\n\tsync.RWMutex\n}\n\n// NewPU creates a new PU context\nfunc NewPU(contextID string, puInfo *policy.PUInfo, tokenAccessor tokenaccessor.TokenAccessor, timeout time.Duration) (*PUContext, error) {\n\n\thashID, err := policy.Fnv32Hash(contextID)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to hash contextID: %v\", err)\n\t}\n\n\tpu := &PUContext{\n\t\tid:                   contextID,\n\t\thashID:               hashID,\n\t\tusername:             puInfo.Runtime.Options().UserID,\n\t\tautoport:             puInfo.Runtime.Options().AutoPort,\n\t\tmanagementID:         puInfo.Policy.ManagementID(),\n\t\tmanagementNamespace:  
puInfo.Policy.ManagementNamespace(),\n\t\tpuType:               puInfo.Runtime.PUType(),\n\t\tidentity:             puInfo.Policy.Identity(),\n\t\tannotations:          puInfo.Policy.Annotations(),\n\t\tcompressedTags:       puInfo.Policy.CompressedTags(),\n\t\texternalIPCache:      cache.NewCacheWithExpiration(\"External IP Cache\", timeout),\n\t\tApplicationACLs:      acls.NewACLCache(),\n\t\tnetworkACLs:          acls.NewACLCache(),\n\t\tDNSACLs:              puInfo.Policy.DNSNameACLs(),\n\t\tmark:                 puInfo.Runtime.Options().CgroupMark,\n\t\tscopes:               puInfo.Policy.Scopes(),\n\t\tcounters:             counters.NewCounters(),\n\t\tpuInfo:               puInfo,\n\t\ttokenAccessor:        tokenAccessor,\n\t\tappDefaultFlowPolicy: &policy.FlowPolicy{Action: puInfo.Policy.AppDefaultPolicyAction(), PolicyID: \"default\", ServiceID: \"default\"},\n\t\tnetDefaultFlowPolicy: &policy.FlowPolicy{Action: puInfo.Policy.NetDefaultPolicyAction(), PolicyID: \"default\", ServiceID: \"default\"},\n\t}\n\n\tpu.CreateRcvRules(puInfo.Policy.ReceiverRules())\n\n\tpu.CreateTxtRules(puInfo.Policy.TransmitterRules())\n\n\ttcpPorts, udpPorts := common.ConvertServicesToProtocolPortList(puInfo.Runtime.Options().Services)\n\tpu.tcpPorts = strings.Split(tcpPorts, \",\")\n\tpu.udpPorts = strings.Split(udpPorts, \",\")\n\n\tif err := pu.UpdateApplicationACLs(puInfo.Policy.ApplicationACLs()); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif err := pu.UpdateNetworkACLs(puInfo.Policy.NetworkACLs()); err != nil {\n\t\treturn nil, err\n\t}\n\n\tnsHash, err := tagutils.Hash(pu.managementNamespace)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to hash namespace: %w\", err)\n\t}\n\tpu.managementNamespaceHash = nsHash\n\n\tctx, cancel := context.WithCancel(context.Background())\n\tpu.ctxCancel = cancel\n\n\t// tokenAccessor is nil with envoy authorizer enforcer. 
We\n\t// don't need our datapath in that case.\n\tif tokenAccessor != nil {\n\t\tpu.synToken = pu.createSynToken(nil, claimsheader.NewClaimsHeader())\n\n\t\tgo func() {\n\t\t\tfor {\n\t\t\t\tselect {\n\t\t\t\tcase <-ctx.Done():\n\t\t\t\t\treturn\n\t\t\t\tcase <-time.After(constants.SynTokenRefreshTime):\n\n\t\t\t\t\tsynToken := pu.createSynToken(nil, claimsheader.NewClaimsHeader())\n\t\t\t\t\tpu.Lock()\n\t\t\t\t\tpu.synToken = synToken\n\t\t\t\t\tpu.Unlock()\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\t}\n\n\treturn pu, nil\n}\n\nfunc (p *PUContext) createSynToken(pingPayload *policy.PingPayload, claimsHeader *claimsheader.ClaimsHeader) *synTokenInfo {\n\n\tvar datapathKeyPair ephemeralkeys.KeyAccessor\n\tvar err error\n\tvar nonce []byte\n\n\tfor {\n\t\tdatapathKeyPair, err = ephemeralkeys.New()\n\n\t\tif err != nil {\n\t\t\t// can generate errors only when the urandom io read buffer is full. retry till we succeed.\n\t\t\ttime.Sleep(10 * time.Millisecond)\n\t\t\tcontinue\n\t\t}\n\n\t\tbreak\n\t}\n\n\tfor {\n\t\t// can generate errors only when the urandom io read buffer is full. 
retry till we succeed.\n\t\tnonce, err = crypto.GenerateRandomBytes(16)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tbreak\n\t}\n\n\tclaims := &tokens.ConnectionClaims{\n\t\tLCL:   nonce,\n\t\tDEKV1: datapathKeyPair.DecodingKeyV1(),\n\t\tDEKV2: datapathKeyPair.DecodingKeyV2(),\n\t\tCT:    p.CompressedTags(),\n\t\tID:    p.ManagementID(),\n\t\tP:     pingPayload,\n\t}\n\n\tdatapathSecret := ephemeralkeys.GetDatapathSecret()\n\tvar encodedBuf [tokens.ClaimsEncodedBufSize]byte\n\n\ttoken, err := p.tokenAccessor.CreateSynPacketToken(claims, encodedBuf[:], nonce, claimsHeader, datapathSecret)\n\tif err != nil {\n\t\tzap.L().Error(\"Cannot create syn packet token\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\tephKeySignV1, err := p.tokenAccessor.Sign(datapathKeyPair.DecodingKeyV1(), datapathSecret.EncodingKey().(*ecdsa.PrivateKey))\n\tif err != nil {\n\t\tzap.L().Error(\"Cannot sign the ephemeral public key\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\tephKeySignV2, err := p.tokenAccessor.Sign(datapathKeyPair.DecodingKeyV2(), datapathSecret.EncodingKey().(*ecdsa.PrivateKey))\n\tif err != nil {\n\t\tzap.L().Error(\"Cannot sign the ephemeral public key\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\tprivateKey := datapathKeyPair.PrivateKey()\n\treturn &synTokenInfo{datapathSecret: datapathSecret,\n\t\tprivateKey:      privateKey,\n\t\tpublicKeyV1:     datapathKeyPair.DecodingKeyV1(),\n\t\tpublicKeySignV1: ephKeySignV1,\n\t\tpublicKeyV2:     datapathKeyPair.DecodingKeyV2(),\n\t\tpublicKeySignV2: ephKeySignV2,\n\t\ttoken:           token}\n}\n\n// StopProcessing cancels the context so that all the goroutines can return.\nfunc (p *PUContext) StopProcessing() {\n\tp.ctxCancel()\n}\n\n// GetSynToken returns the cached syn token if the datapath secret has not changed, or a fresh token if the ping payload is present.\nfunc (p *PUContext) GetSynToken(pingPayload *policy.PingPayload, nonce [16]byte, claimsHeader *claimsheader.ClaimsHeader) (secrets.Secrets, *ephemeralkeys.PrivateKey, []byte) 
{\n\n\tif pingPayload != nil {\n\t\tsynToken := p.createSynToken(pingPayload, claimsHeader)\n\t\treturn synToken.datapathSecret, synToken.privateKey, synToken.token\n\t}\n\n\tp.RLock()\n\tsynToken := p.synToken\n\tp.RUnlock()\n\n\tif synToken.datapathSecret != ephemeralkeys.GetDatapathSecret() {\n\t\tsynToken = p.createSynToken(nil, claimsheader.NewClaimsHeader())\n\t\tp.Lock()\n\t\tp.synToken = synToken\n\t\tp.Unlock()\n\t}\n\n\tp.tokenAccessor.Randomize(synToken.token, nonce[:]) //nolint\n\n\treturn synToken.datapathSecret, synToken.privateKey, synToken.token\n}\n\n// GetSecrets returns the datapath secret and ephemeral public and private key\nfunc (p *PUContext) GetSecrets() (secrets.Secrets, *ephemeralkeys.PrivateKey, []byte, []byte, []byte, []byte) {\n\tp.RLock()\n\tsynToken := p.synToken\n\tp.RUnlock()\n\n\tif synToken.datapathSecret != ephemeralkeys.GetDatapathSecret() {\n\t\tsynToken = p.createSynToken(nil, claimsheader.NewClaimsHeader())\n\t\tp.Lock()\n\t\tp.synToken = synToken\n\t\tp.Unlock()\n\t}\n\n\treturn ephemeralkeys.GetDatapathSecret(), synToken.privateKey, synToken.publicKeyV1, synToken.publicKeySignV1, synToken.publicKeyV2, synToken.publicKeySignV2\n}\n\n// GetPolicyFromFQDN gets the list of policies that are mapped with the hostname\nfunc (p *PUContext) GetPolicyFromFQDN(fqdn string) ([]policy.PortProtocolPolicy, string, error) {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\t// If we find a direct match, return policy\n\tif v, ok := p.DNSACLs[fqdn]; ok {\n\t\treturn v, fqdn, nil\n\t}\n\n\t// Try if there is a wildcard match\n\tfor policyName, policy := range p.DNSACLs {\n\t\tif wildcard.MatchSimple(policyName, fqdn) {\n\t\t\treturn policy, policyName, nil\n\t\t}\n\t}\n\n\treturn nil, \"\", fmt.Errorf(\"policy does not exist\")\n}\n\n// DependentServices searches if the PU has a dependent service on this FQDN. 
If yes,\n// it returns the ports for that service.\nfunc (p *PUContext) DependentServices(fqdn string) []*policy.ApplicationService {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\tdependentServices := []*policy.ApplicationService{}\n\n\tfor _, dependentService := range p.puInfo.Policy.DependentServices() {\n\t\tfor _, name := range dependentService.NetworkInfo.FQDNs {\n\t\t\tif fqdn == name {\n\t\t\t\tdependentServices = append(dependentServices, dependentService)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn dependentServices\n}\n\n// UsesFQDN indicates whether this PU policy has an ACL or Service that uses an FQDN\nfunc (p *PUContext) UsesFQDN() bool {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\tif len(p.DNSACLs) > 0 {\n\t\treturn true\n\t}\n\n\tfor _, dependentService := range p.puInfo.Policy.DependentServices() {\n\t\tfor _, name := range dependentService.NetworkInfo.FQDNs {\n\t\t\tif name != \"\" {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\n\treturn false\n}\n\n// ID returns the ID of the PU\nfunc (p *PUContext) ID() string {\n\treturn p.id\n}\n\n// HashID returns the hash of the ID of the PU\nfunc (p *PUContext) HashID() string {\n\treturn p.hashID\n}\n\n// Username returns the username of the PU\nfunc (p *PUContext) Username() string {\n\treturn p.username\n}\n\n// Autoport returns whether the auto port feature is set on the PU\nfunc (p *PUContext) Autoport() bool {\n\treturn p.autoport\n}\n\n// ManagementID returns the management ID\nfunc (p *PUContext) ManagementID() string {\n\treturn p.managementID\n}\n\n// ManagementNamespace returns the management namespace\nfunc (p *PUContext) ManagementNamespace() string {\n\treturn p.managementNamespace\n}\n\n// ManagementNamespaceHash returns the management namespace hash\nfunc (p *PUContext) ManagementNamespaceHash() string {\n\treturn p.managementNamespaceHash\n}\n\n// Type returns the PU type\nfunc (p *PUContext) Type() common.PUType {\n\treturn p.puType\n}\n\n// Identity returns the identity\nfunc (p *PUContext) Identity() *policy.TagStore 
{\n\treturn p.identity\n}\n\n// Mark returns the PU mark\nfunc (p *PUContext) Mark() string {\n\treturn p.mark\n}\n\n// TCPPorts returns the PU TCP ports\nfunc (p *PUContext) TCPPorts() []string {\n\treturn p.tcpPorts\n}\n\n// UDPPorts returns the PU UDP ports\nfunc (p *PUContext) UDPPorts() []string {\n\treturn p.udpPorts\n}\n\n// Annotations returns the annotations\nfunc (p *PUContext) Annotations() *policy.TagStore {\n\treturn p.annotations\n}\n\n// CompressedTags returns the compressed tags.\nfunc (p *PUContext) CompressedTags() *policy.TagStore {\n\treturn p.compressedTags\n}\n\n// RetrieveCachedExternalFlowPolicy returns the policy for an external IP\nfunc (p *PUContext) RetrieveCachedExternalFlowPolicy(id string) (interface{}, error) {\n\treturn p.externalIPCache.Get(id)\n}\n\n// NetworkACLPolicy retrieves the policy based on ACLs\nfunc (p *PUContext) NetworkACLPolicy(packet *packet.Packet) (report *policy.FlowPolicy, action *policy.FlowPolicy, err error) {\n\tdefer p.RUnlock()\n\tp.RLock()\n\n\treturn p.networkACLs.GetMatchingAction(packet.SourceAddress(), packet.DestPort(), packet.IPProto(), p.netDefaultFlowPolicy)\n}\n\n// NetworkACLPolicyFromAddr retrieve the policy given an address and port.\nfunc (p *PUContext) NetworkACLPolicyFromAddr(addr net.IP, port uint16, protocol uint8) (report *policy.FlowPolicy, action *policy.FlowPolicy, err error) {\n\tdefer p.RUnlock()\n\tp.RLock()\n\n\treturn p.networkACLs.GetMatchingAction(addr, port, protocol, p.netDefaultFlowPolicy)\n}\n\n// ApplicationICMPACLPolicy retrieve the policy for ICMP\nfunc (p *PUContext) ApplicationICMPACLPolicy(ip net.IP, icmpType, icmpCode int8) (report *policy.FlowPolicy, action *policy.FlowPolicy, err error) {\n\treturn p.ApplicationACLs.GetMatchingICMPAction(ip, icmpType, icmpCode, p.appDefaultFlowPolicy)\n}\n\n// NetworkICMPACLPolicy retrieve the policy for ICMP\nfunc (p *PUContext) NetworkICMPACLPolicy(ip net.IP, icmpType, icmpCode int8) (report *policy.FlowPolicy, action 
*policy.FlowPolicy, err error) {\n\treturn p.networkACLs.GetMatchingICMPAction(ip, icmpType, icmpCode, p.netDefaultFlowPolicy)\n}\n\n// ApplicationACLPolicyFromAddr retrieve the policy given an address and port.\nfunc (p *PUContext) ApplicationACLPolicyFromAddr(addr net.IP, port uint16, protocol uint8) (report *policy.FlowPolicy, action *policy.FlowPolicy, err error) {\n\tdefer p.RUnlock()\n\tp.RLock()\n\n\treturn p.ApplicationACLs.GetMatchingAction(addr, port, protocol, p.appDefaultFlowPolicy)\n}\n\n// UpdateApplicationACLs updates the application ACL policy\nfunc (p *PUContext) UpdateApplicationACLs(rules policy.IPRuleList) error {\n\tdefer p.Unlock()\n\tp.Lock()\n\n\treturn p.ApplicationACLs.AddRuleList(rules)\n}\n\n// FlushApplicationACL removes the application ACLs which are indexed with (ip, mask) key for all protocols and ports\nfunc (p *PUContext) FlushApplicationACL(addr net.IP, mask int) {\n\tdefer p.Unlock()\n\tp.Lock()\n\tp.ApplicationACLs.RemoveIPMask(addr, mask)\n}\n\n// RemoveApplicationACL removes the application ACLs for a specific IP address for all protocols and ports that match a policy.\n// NOTE: Rules need to be a full port/policy match in order to get removed. 
Partial port matches in ranges will not get removed.\nfunc (p *PUContext) RemoveApplicationACL(ipaddress string, protocols, ports []string, policy *policy.FlowPolicy) error {\n\tdefer p.Unlock()\n\tp.Lock()\n\n\taddress, err := acls.ParseAddress(ipaddress)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfor _, protocol := range protocols {\n\t\tif err := p.ApplicationACLs.RemoveRulesForAddress(address, protocol, ports, policy); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// UpdateNetworkACLs updates the network ACL policy\nfunc (p *PUContext) UpdateNetworkACLs(rules policy.IPRuleList) error {\n\tdefer p.Unlock()\n\tp.Lock()\n\treturn p.networkACLs.AddRuleList(rules)\n}\n\n// CacheExternalFlowPolicy will cache an external flow\nfunc (p *PUContext) CacheExternalFlowPolicy(packet *packet.Packet, plc interface{}) {\n\tp.externalIPCache.AddOrUpdate(packet.SourceAddress().String()+\":\"+strconv.Itoa(int(packet.SourcePort())), plc)\n}\n\n// GetProcessKeys returns the cache keys for a process\nfunc (p *PUContext) GetProcessKeys() (string, []string, []string) {\n\treturn p.mark, p.tcpPorts, p.udpPorts\n}\n\n// Scopes returns the scopes.\nfunc (p *PUContext) Scopes() []string {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\treturn p.scopes\n}\n\n// Counters returns the counters.\nfunc (p *PUContext) Counters() *counters.Counters {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\treturn p.counters\n}\n\n// GetJWT retrieves the JWT if it exists in the cache. 
Returns error otherwise.\nfunc (p *PUContext) GetJWT() (string, error) {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\tif p.jwtExpiration.After(time.Now()) && len(p.jwt) > 0 {\n\t\treturn p.jwt, nil\n\t}\n\n\treturn \"\", fmt.Errorf(\"expired token\")\n}\n\n// UpdateJWT updates the JWT and provides a new expiration date.\nfunc (p *PUContext) UpdateJWT(jwt string, expiration time.Time) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.jwt = jwt\n\tp.jwtExpiration = expiration\n}\n\n// createRuleDBs creates the database of rules from the policy\nfunc (p *PUContext) createRuleDBs(policyRules policy.TagSelectorList) *policies {\n\n\tpolicyDB := &policies{\n\t\trejectRules:        lookup.NewPolicyDB(),\n\t\tobserveRejectRules: lookup.NewPolicyDB(),\n\t\tacceptRules:        lookup.NewPolicyDB(),\n\t\tobserveAcceptRules: lookup.NewPolicyDB(),\n\t\tobserveApplyRules:  lookup.NewPolicyDB(),\n\t\tencryptRules:       lookup.NewPolicyDB(),\n\t}\n\n\tfor _, rule := range policyRules {\n\t\t// Add encrypt rule to encrypt table.\n\t\tif rule.Policy.Action.Encrypted() {\n\t\t\tpolicyDB.encryptRules.AddPolicy(rule)\n\t\t}\n\n\t\tif rule.Policy.ObserveAction.ObserveContinue() {\n\t\t\tif rule.Policy.Action.Accepted() {\n\t\t\t\tpolicyDB.observeAcceptRules.AddPolicy(rule)\n\t\t\t} else if rule.Policy.Action.Rejected() {\n\t\t\t\tpolicyDB.observeRejectRules.AddPolicy(rule)\n\t\t\t}\n\t\t} else if rule.Policy.ObserveAction.ObserveApply() {\n\t\t\tpolicyDB.observeApplyRules.AddPolicy(rule)\n\t\t} else if rule.Policy.Action.Accepted() {\n\t\t\tpolicyDB.acceptRules.AddPolicy(rule)\n\t\t} else if rule.Policy.Action.Rejected() {\n\t\t\tpolicyDB.rejectRules.AddPolicy(rule)\n\t\t} else {\n\t\t\tcontinue\n\t\t}\n\t}\n\treturn policyDB\n}\n\n// CreateRcvRules creates receive rules for this PU based on the update of the policy.\nfunc (p *PUContext) CreateRcvRules(policyRules policy.TagSelectorList) {\n\tp.rcv = p.createRuleDBs(policyRules)\n}\n\n// CreateTxtRules creates transmit rules for this PU based on the 
update of the policy.\nfunc (p *PUContext) CreateTxtRules(policyRules policy.TagSelectorList) {\n\tp.txt = p.createRuleDBs(policyRules)\n}\n\n// searchRules searches all reject, accept and observed rules and returns reporting and packet forwarding action\nfunc (p *PUContext) searchRules(\n\tpolicies *policies,\n\ttags *policy.TagStore,\n\tskipRejectPolicies bool,\n\tdefaultFlowReport *policy.FlowPolicy,\n) (report *policy.FlowPolicy, packet *policy.FlowPolicy) {\n\n\tvar reportingAction *policy.FlowPolicy\n\tvar packetAction *policy.FlowPolicy\n\n\tif !skipRejectPolicies {\n\t\t// Look for rejection rules\n\t\tobserveIndex, observeAction := policies.observeRejectRules.Search(tags)\n\t\tif observeIndex >= 0 {\n\t\t\treportingAction = observeAction.(*policy.FlowPolicy)\n\t\t}\n\n\t\tindex, action := policies.rejectRules.Search(tags)\n\t\tif index >= 0 {\n\t\t\tpacketAction = action.(*policy.FlowPolicy)\n\t\t\tif reportingAction == nil {\n\t\t\t\treportingAction = packetAction\n\t\t\t}\n\t\t\treturn reportingAction, packetAction\n\t\t}\n\t}\n\n\tif reportingAction == nil {\n\t\t// Look for allow rules\n\t\tobserveIndex, observeAction := policies.observeAcceptRules.Search(tags)\n\t\tif observeIndex >= 0 {\n\t\t\treportingAction = observeAction.(*policy.FlowPolicy)\n\t\t}\n\t}\n\n\tindex, action := policies.acceptRules.Search(tags)\n\tif index >= 0 {\n\t\tpacketAction = action.(*policy.FlowPolicy)\n\t\t// Look for encrypt rules\n\t\tencryptIndex, _ := policies.encryptRules.Search(tags)\n\t\tif encryptIndex >= 0 {\n\t\t\t// Do not overwrite the action for accept rules.\n\t\t\tfinalAction := action.(*policy.FlowPolicy)\n\t\t\tpacketAction = &policy.FlowPolicy{\n\t\t\t\tAction:    policy.Accept | policy.Encrypt,\n\t\t\t\tPolicyID:  finalAction.PolicyID,\n\t\t\t\tServiceID: finalAction.ServiceID,\n\t\t\t}\n\t\t\tif finalAction.Action.Logged() {\n\t\t\t\tpacketAction.Action = packetAction.Action | policy.Log\n\t\t\t}\n\t\t}\n\t\tif reportingAction == nil 
{\n\t\t\treportingAction = packetAction\n\t\t}\n\t\treturn reportingAction, packetAction\n\t}\n\n\t// Look for observe apply rules\n\tobserveIndex, observeAction := policies.observeApplyRules.Search(tags)\n\tif observeIndex >= 0 {\n\t\tpacketAction = observeAction.(*policy.FlowPolicy)\n\t\tif reportingAction == nil {\n\t\t\treportingAction = packetAction\n\t\t}\n\t\treturn reportingAction, packetAction\n\t}\n\n\t// Clone the default because someone is changing the returned one\n\tpacketAction = defaultFlowReport.Clone()\n\n\tif reportingAction == nil {\n\t\treportingAction = packetAction\n\t}\n\n\treturn reportingAction, packetAction\n}\n\n// SearchTxtRules searches both transmit and observed transmit rules and returns the index and action\nfunc (p *PUContext) SearchTxtRules(\n\ttags *policy.TagStore,\n\tskipRejectPolicies bool,\n) (report *policy.FlowPolicy, packet *policy.FlowPolicy) {\n\treturn p.searchRules(p.txt, tags, skipRejectPolicies, p.appDefaultFlowPolicy)\n}\n\n// SearchRcvRules searches both receive and observed receive rules and returns the index and action\nfunc (p *PUContext) SearchRcvRules(\n\ttags *policy.TagStore,\n) (report *policy.FlowPolicy, packet *policy.FlowPolicy) {\n\treturn p.searchRules(p.rcv, tags, false, p.netDefaultFlowPolicy)\n}\n\n// LookupLogPrefix looks up the log prefix from the key\nfunc (p *PUContext) LookupLogPrefix(key string) (string, bool) {\n\tp.Lock()\n\tdefer p.Unlock()\n\tif p.puInfo == nil || p.puInfo.Policy == nil {\n\t\treturn \"\", false\n\t}\n\treturn p.puInfo.Policy.LookupLogPrefix(key)\n}\n"
  },
  {
    "path": "controller/pkg/pucontext/pucontext_test.go",
    "content": "// +build !windows\n\npackage pucontext\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"crypto/x509\"\n\t\"testing\"\n\t\"time\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/nfqdatapath/tokenaccessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/compactpki\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n\t\"go.uber.org/zap\"\n\t\"gotest.tools/assert\"\n)\n\nfunc Test_NewPU(t *testing.T) {\n\n\tConvey(\"When I call NewPU with proper data\", t, func() {\n\n\t\tfp := &policy.PUInfo{\n\t\t\tRuntime: policy.NewPURuntimeWithDefaults(),\n\t\t\tPolicy:  policy.NewPUPolicy(\"\", \"/xyz\", policy.AllowAll, nil, nil, nil, nil, nil, nil, nil, nil, nil, 0, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject, policy.Reject),\n\t\t}\n\n\t\tpu, err := NewPU(\"pu1\", fp, nil, 24*time.Hour)\n\n\t\tConvey(\"I should not get error\", func() {\n\t\t\tSo(pu, ShouldNotBeNil)\n\t\t\tSo(pu.HashID(), ShouldEqual, pu.hashID)\n\t\t\tSo(pu.ManagementNamespaceHash(), ShouldEqual, \"JJ0iGN3c9I2d+bx4\")\n\t\t\tSo(pu.Counters(), ShouldNotBeNil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n}\n\nvar (\n\tkeyPEM = `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIPkiHqtH372JJdAG/IxJlE1gv03cdwa8Lhg2b3m/HmbyoAoGCCqGSM49\nAwEHoUQDQgAEAfAL+AfPj/DnxrU6tUkEyzEyCxnflOWxhouy1bdzhJ7vxMb1vQ31\n8ZbW/WvMN/ojIXqXYrEpISoojznj46w64w==\n-----END EC PRIVATE KEY-----`\n\tcaPool = 
`-----BEGIN CERTIFICATE-----\nMIIBhTCCASwCCQC8b53yGlcQazAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABJxneTUqhbtgEIwpKUUzwz3h92SqcOdIw3mfQkMjg3Vobvr6JKlpXYe9xhsN\nrygJmLhMAN9gjF9qM9ybdbe+m3owCgYIKoZIzj0EAwIDRwAwRAIgC1fVMqdBy/o3\njNUje/Hx0fZF9VDyUK4ld+K/wF3QdK4CID1ONj/Kqinrq2OpjYdkgIjEPuXoOoR1\ntCym8dnq4wtH\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIB3jCCAYOgAwIBAgIJALsW7pyC2ERQMAoGCCqGSM49BAMCMEsxCzAJBgNVBAYT\nAlVTMQswCQYDVQQIDAJDQTEMMAoGA1UEBwwDU0pDMRAwDgYDVQQKDAdUcmlyZW1l\nMQ8wDQYDVQQDDAZ1YnVudHUwHhcNMTYwOTI3MjI0OTAwWhcNMjYwOTI1MjI0OTAw\nWjBLMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4G\nA1UECgwHVHJpcmVtZTEPMA0GA1UEAwwGdWJ1bnR1MFkwEwYHKoZIzj0CAQYIKoZI\nzj0DAQcDQgAE4c2Fd7XeIB1Vfs51fWwREfLLDa55J+NBalV12CH7YEAnEXjl47aV\ncmNqcAtdMUpf2oz9nFVI81bgO+OSudr3CqNQME4wHQYDVR0OBBYEFOBftuI09mmu\nrXjqDyIta1gT8lqvMB8GA1UdIwQYMBaAFOBftuI09mmurXjqDyIta1gT8lqvMAwG\nA1UdEwQFMAMBAf8wCgYIKoZIzj0EAwIDSQAwRgIhAMylAHhbFA0KqhXIFiXNpEbH\nJKaELL6UXXdeQ5yup8q+AiEAh5laB9rbgTymjaANcZ2YzEZH4VFS3CKoSdVqgnwC\ndW4=\n-----END CERTIFICATE-----`\n\n\tcertPEM = `-----BEGIN CERTIFICATE-----\nMIIBhjCCASwCCQCPCdgp39gHJTAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABAHwC/gHz4/w58a1OrVJBMsxMgsZ35TlsYaLstW3c4Se78TG9b0N9fGW1v1r\nzDf6IyF6l2KxKSEqKI854+OsOuMwCgYIKoZIzj0EAwIDSAAwRQIgQwQn0jnK/XvD\nKxgQd/0pW5FOAaB41cMcw4/XVlphO1oCIQDlGie+WlOMjCzrV0Xz+XqIIi1pIgPT\nIG7Nv+YlTVp5qA==\n-----END CERTIFICATE-----`\n)\n\nfunc createCompactPKISecrets(tags []string) 
(ephemeralkeys.KeyAccessor, *x509.Certificate, secrets.Secrets, error) { //nolint\n\ttxtKey, cert, _, err := crypto.LoadAndVerifyECSecrets([]byte(keyPEM), []byte(certPEM), []byte(caPool))\n\tif err != nil {\n\t\treturn nil, nil, nil, err\n\t}\n\n\tissuer := pkiverifier.NewPKIIssuer(txtKey)\n\ttxtToken, err := issuer.CreateTokenFromCertificate(cert, tags)\n\tif err != nil {\n\t\treturn nil, nil, nil, err\n\t}\n\n\tControllerInfo := &secrets.ControllerInfo{\n\t\tPublicKey: []byte(certPEM),\n\t}\n\n\ttokenKeyPEMs := []*secrets.ControllerInfo{ControllerInfo}\n\n\tscrts, err := compactpki.NewCompactPKIWithTokenCA([]byte(keyPEM), []byte(certPEM), []byte(caPool), tokenKeyPEMs, txtToken, claimsheader.CompressionTypeV1)\n\tif err != nil {\n\t\treturn nil, nil, nil, err\n\t}\n\tkeyaccessor, _ := ephemeralkeys.New()\n\treturn keyaccessor, cert, scrts, nil\n}\n\nfunc Test_PUsTokenExchanges(t *testing.T) {\n\t_, _, scrts, _ := createCompactPKISecrets([]string{\"kDMRXWckV9k6mGuJ\", \"xyz\", \"eJ1s03u72o6i\"})\n\tephemeralkeys.UpdateDatapathSecrets(scrts)\n\n\tsetup := func() (*PUContext, *PUContext) {\n\n\t\tfp := &policy.PUInfo{\n\t\t\tRuntime: policy.NewPURuntimeWithDefaults(),\n\t\t\tPolicy:  policy.NewPUPolicy(\"\", \"/xyz\", policy.AllowAll, nil, nil, nil, nil, nil, nil, nil, nil, nil, 0, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject, policy.Reject),\n\t\t}\n\t\tta1, _ := tokenaccessor.New(\"pu1\", 1*time.Hour, nil)\n\t\tpu1, _ := NewPU(\"pu1\", fp, ta1, 24*time.Hour)\n\t\tpu1.managementID = \"1newPU1\"\n\t\ttags1 := policy.NewTagStore()\n\t\ttags1.AppendKeyValue(\"pu\", \"pu1\")\n\t\tpu1.compressedTags = tags1\n\n\t\tta2, _ := tokenaccessor.New(\"pu2\", 1*time.Hour, nil)\n\t\tpu2, _ := NewPU(\"pu2\", fp, ta2, 24*time.Hour)\n\t\tpu2.managementID = \"1newPU2\"\n\t\ttags2 := policy.NewTagStore()\n\t\ttags2.AppendKeyValue(\"pu\", \"pu2\")\n\t\tpu2.compressedTags = tags2\n\n\t\treturn pu1, pu2\n\t}\n\n\tpu1, pu2 := setup()\n\ttoken := pu1.createSynToken(nil, 
claimsheader.NewClaimsHeader())\n\n\tclaimsOnSynRcvd := &tokens.ConnectionClaims{}\n\tka, _ := ephemeralkeys.New()\n\tsecretKey, _, _, remoteNonce, remoteContextID, _, err := pu2.tokenAccessor.ParsePacketToken(ka.PrivateKey(), token.token, scrts, claimsOnSynRcvd, false)\n\n\tassert.Equal(t, err, nil, \"ParsePacketToken should return nil\")\n\tassert.Equal(t, remoteContextID, pu1.managementID, \"ParsePacketToken should get the correct remote contextID\")\n\ttags := claimsOnSynRcvd.CT.GetSlice()\n\tassert.Equal(t, tags[0], \"pu=pu1\", \"Receiver should receive correct tags\")\n\n\tephKeySignV1, _ := pu2.tokenAccessor.Sign(ka.DecodingKeyV1(), scrts.EncodingKey().(*ecdsa.PrivateKey)) //nolint\n\tephKeySignV2, _ := pu2.tokenAccessor.Sign(ka.DecodingKeyV2(), scrts.EncodingKey().(*ecdsa.PrivateKey)) //nolint\n\n\tclaimsOnSynAckSend := &tokens.ConnectionClaims{\n\t\tCT:       pu2.CompressedTags(),\n\t\tLCL:      remoteNonce,\n\t\tRMT:      remoteNonce,\n\t\tDEKV1:    ka.DecodingKeyV1(),\n\t\tDEKV2:    ka.DecodingKeyV2(),\n\t\tSDEKV1:   ephKeySignV1,\n\t\tSDEKV2:   ephKeySignV2,\n\t\tID:       pu2.ManagementID(),\n\t\tRemoteID: pu1.managementID,\n\t}\n\n\tvar encodedBuf [tokens.ClaimsEncodedBufSize]byte\n\ttokenFromSynAck, _ := pu2.tokenAccessor.CreateSynAckPacketToken(false, claimsOnSynAckSend, encodedBuf[:], remoteNonce, claimsheader.NewClaimsHeader(), scrts, secretKey) //nolint\n\n\tclaimsOnSynAckRcvd := &tokens.ConnectionClaims{}\n\tsecretKey, _, _, remoteNonce, remoteContextID, _, err = pu1.tokenAccessor.ParsePacketToken(token.privateKey, tokenFromSynAck, scrts, claimsOnSynAckRcvd, true)\n\n\tassert.Equal(t, err, nil, \"ParsePacketToken should return nil\")\n\tassert.Equal(t, remoteContextID, pu2.managementID, \"ParsePacketToken should get the correct remote contextID\")\n\ttags = claimsOnSynAckRcvd.CT.GetSlice()\n\tassert.Equal(t, tags[0], \"pu=pu2\", \"Receiver should receive correct tags\")\n\n\tclaimsOnAckSend := &tokens.ConnectionClaims{\n\t\tID:       
pu1.ManagementID(),\n\t\tRMT:      remoteNonce,\n\t\tRemoteID: remoteContextID,\n\t}\n\n\tackToken, _ := pu1.tokenAccessor.CreateAckPacketToken(false, secretKey, claimsOnAckSend, encodedBuf[:])\n\tclaimsOnAckRcvd := &tokens.ConnectionClaims{}\n\terr = pu2.tokenAccessor.ParseAckToken(false, secretKey, remoteNonce, ackToken, claimsOnAckRcvd)\n\tassert.Equal(t, err, nil, \"error should be nil\")\n}\n\nfunc create314SynToken(p *PUContext, claimsHeader *claimsheader.ClaimsHeader) *synTokenInfo {\n\n\tvar datapathKeyPair ephemeralkeys.KeyAccessor\n\tvar err error\n\tvar nonce []byte\n\n\tfor {\n\t\tdatapathKeyPair, err = ephemeralkeys.New()\n\n\t\tif err != nil {\n\t\t\t// can generate errors only when the urandom io read buffer is full. retry till we succeed.\n\t\t\ttime.Sleep(10 * time.Millisecond)\n\t\t\tcontinue\n\t\t}\n\n\t\tbreak\n\t}\n\n\tfor {\n\t\t// can generate errors only when the urandom io read buffer is full. retry till we succeed.\n\t\tnonce, err = crypto.GenerateRandomBytes(16)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tbreak\n\t}\n\n\tclaims := &tokens.ConnectionClaims{\n\t\tLCL: nonce,\n\t\tCT:  p.CompressedTags(),\n\t\tID:  p.ManagementID(),\n\t}\n\n\tdatapathSecret := ephemeralkeys.GetDatapathSecret()\n\tvar encodedBuf [tokens.ClaimsEncodedBufSize]byte\n\n\ttoken, err := p.tokenAccessor.CreateSynPacketToken(claims, encodedBuf[:], nonce, claimsHeader, datapathSecret)\n\tif err != nil {\n\t\tzap.L().Error(\"Can not create syn packet token\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\tephKeySignV1, err := p.tokenAccessor.Sign(datapathKeyPair.DecodingKeyV1(), datapathSecret.EncodingKey().(*ecdsa.PrivateKey))\n\tif err != nil {\n\t\tzap.L().Error(\"Can not sign the ephemeral public key\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\tephKeySignV2, err := p.tokenAccessor.Sign(datapathKeyPair.DecodingKeyV2(), datapathSecret.EncodingKey().(*ecdsa.PrivateKey))\n\n\tif err != nil {\n\t\tzap.L().Error(\"Can not sign the ephemeral public key\", 
zap.Error(err))\n\t\treturn nil\n\t}\n\n\tprivateKey := datapathKeyPair.PrivateKey()\n\treturn &synTokenInfo{datapathSecret: datapathSecret,\n\t\tprivateKey:      privateKey,\n\t\tpublicKeyV1:     datapathKeyPair.DecodingKeyV1(),\n\t\tpublicKeyV2:     datapathKeyPair.DecodingKeyV2(),\n\t\tpublicKeySignV1: ephKeySignV1,\n\t\tpublicKeySignV2: ephKeySignV2,\n\t\ttoken:           token}\n}\n\nfunc createV1SynToken(p *PUContext, claimsHeader *claimsheader.ClaimsHeader) *synTokenInfo {\n\tvar datapathKeyPair ephemeralkeys.KeyAccessor\n\tvar err error\n\tvar nonce []byte\n\n\tfor {\n\t\tdatapathKeyPair, err = ephemeralkeys.New()\n\n\t\tif err != nil {\n\t\t\t// can generate errors only when the urandom io read buffer is full. retry till we succeed.\n\t\t\ttime.Sleep(10 * time.Millisecond)\n\t\t\tcontinue\n\t\t}\n\n\t\tbreak\n\t}\n\n\tfor {\n\t\t// can generate errors only when the urandom io read buffer is full. retry till we succeed.\n\t\tnonce, err = crypto.GenerateRandomBytes(16)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tbreak\n\t}\n\n\tclaims := &tokens.ConnectionClaims{\n\t\tLCL:   nonce,\n\t\tDEKV1: datapathKeyPair.DecodingKeyV1(),\n\t\tCT:    p.CompressedTags(),\n\t\tID:    p.ManagementID(),\n\t}\n\n\tdatapathSecret := ephemeralkeys.GetDatapathSecret()\n\tvar encodedBuf [tokens.ClaimsEncodedBufSize]byte\n\n\ttoken, err := p.tokenAccessor.CreateSynPacketToken(claims, encodedBuf[:], nonce, claimsHeader, datapathSecret)\n\tif err != nil {\n\t\tzap.L().Error(\"Can not create syn packet token\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\tephKeySignV1, err := p.tokenAccessor.Sign(datapathKeyPair.DecodingKeyV1(), datapathSecret.EncodingKey().(*ecdsa.PrivateKey))\n\tif err != nil {\n\t\tzap.L().Error(\"Can not sign the ephemeral public key\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\tprivateKey := datapathKeyPair.PrivateKey()\n\treturn 
&synTokenInfo{datapathSecret: datapathSecret,\n\t\tprivateKey:      privateKey,\n\t\tpublicKeyV1:     datapathKeyPair.DecodingKeyV1(),\n\t\tpublicKeySignV1: ephKeySignV1,\n\t\ttoken:           token}\n}\n\nfunc Test_PUsFrom314To500(t *testing.T) {\n\t_, _, scrts, _ := createCompactPKISecrets([]string{\"kDMRXWckV9k6mGuJ\", \"xyz\", \"eJ1s03u72o6i\"})\n\tephemeralkeys.UpdateDatapathSecrets(scrts)\n\n\tsetup := func() (*PUContext, *PUContext) {\n\n\t\tfp := &policy.PUInfo{\n\t\t\tRuntime: policy.NewPURuntimeWithDefaults(),\n\t\t\tPolicy:  policy.NewPUPolicy(\"\", \"/xyz\", policy.AllowAll, nil, nil, nil, nil, nil, nil, nil, nil, nil, 0, 0, nil, nil, []string{}, policy.EnforcerMapping, policy.Reject, policy.Reject),\n\t\t}\n\t\tta1, _ := tokenaccessor.New(\"pu1\", 1*time.Hour, nil)\n\t\tpu1, _ := NewPU(\"pu1\", fp, ta1, 24*time.Hour)\n\t\tpu1.managementID = \"2newPU1\"\n\t\ttags1 := policy.NewTagStore()\n\t\ttags1.AppendKeyValue(\"pu\", \"pu1\")\n\t\tpu1.compressedTags = tags1\n\n\t\tta2, _ := tokenaccessor.New(\"pu2\", 1*time.Hour, nil)\n\t\tpu2, _ := NewPU(\"pu2\", fp, ta2, 24*time.Hour)\n\t\tpu2.managementID = \"2newPU2\"\n\t\ttags2 := policy.NewTagStore()\n\t\ttags2.AppendKeyValue(\"pu\", \"pu2\")\n\t\tpu2.compressedTags = tags2\n\n\t\treturn pu1, pu2\n\t}\n\n\tpu1, pu2 := setup()\n\ttoken := create314SynToken(pu1, claimsheader.NewClaimsHeader())\n\n\tclaimsOnSynRcvd := &tokens.ConnectionClaims{}\n\tka, _ := ephemeralkeys.New()\n\tsecretKey, _, _, remoteNonce, remoteContextID, proto314, err := pu2.tokenAccessor.ParsePacketToken(ka.PrivateKey(), token.token, scrts, claimsOnSynRcvd, false)\n\n\tassert.Equal(t, proto314, true, \"protocol should be 314\")\n\tassert.Equal(t, err, nil, \"ParsePacketToken should return nil\")\n\tassert.Equal(t, remoteContextID, pu1.managementID, \"ParsePacketToken should get the correct remote contextID\")\n\ttags := claimsOnSynRcvd.CT.GetSlice()\n\tassert.Equal(t, tags[0], \"pu=pu1\", \"Receiver should receive correct 
tags\")\n\n\tclaimsOnSynAckSend := &tokens.ConnectionClaims{\n\t\tCT:       pu2.CompressedTags(),\n\t\tLCL:      remoteNonce,\n\t\tRMT:      remoteNonce,\n\t\tID:       pu2.ManagementID(),\n\t\tRemoteID: pu1.managementID,\n\t}\n\n\tvar encodedBuf [tokens.ClaimsEncodedBufSize]byte\n\ttokenFromSynAck, _ := pu2.tokenAccessor.CreateSynAckPacketToken(true, claimsOnSynAckSend, encodedBuf[:], remoteNonce, claimsheader.NewClaimsHeader(), scrts, secretKey) //nolint\n\n\tclaimsOnSynAckRcvd := &tokens.ConnectionClaims{}\n\tsecretKey, _, _, remoteNonce, remoteContextID, proto314, err = pu1.tokenAccessor.ParsePacketToken(token.privateKey, tokenFromSynAck, scrts, claimsOnSynAckRcvd, true)\n\n\tassert.Equal(t, proto314, true, \"protocol should be 314\")\n\tassert.Equal(t, err, nil, \"ParsePacketToken should return nil\")\n\tassert.Equal(t, remoteContextID, pu2.managementID, \"ParsePacketToken should get the correct remote contextID\")\n\ttags = claimsOnSynAckRcvd.CT.GetSlice()\n\tassert.Equal(t, tags[0], \"pu=pu2\", \"Receiver should receive correct tags\")\n\n\tclaimsOnAckSend := &tokens.ConnectionClaims{\n\t\tID:       pu1.ManagementID(),\n\t\tRMT:      remoteNonce,\n\t\tRemoteID: remoteContextID,\n\t}\n\n\tackToken, _ := pu1.tokenAccessor.CreateAckPacketToken(true, secretKey, claimsOnAckSend, encodedBuf[:])\n\tclaimsOnAckRcvd := &tokens.ConnectionClaims{}\n\terr = pu2.tokenAccessor.ParseAckToken(true, secretKey, remoteNonce, ackToken, claimsOnAckRcvd)\n\tassert.Equal(t, err, nil, \"error should be nil\")\n}\n\nfunc Test_PUsFromV1ToV2(t *testing.T) {\n\n\t_, _, scrts, _ := createCompactPKISecrets([]string{\"kDMRXWckV9k6mGuJ\", \"xyz\", \"eJ1s03u72o6i\"})\n\tephemeralkeys.UpdateDatapathSecrets(scrts)\n\n\tsetup := func() (*PUContext, *PUContext) {\n\n\t\tfp := &policy.PUInfo{\n\t\t\tRuntime: policy.NewPURuntimeWithDefaults(),\n\t\t\tPolicy:  policy.NewPUPolicy(\"\", \"/xyz\", policy.AllowAll, nil, nil, nil, nil, nil, nil, nil, nil, nil, 0, 0, nil, nil, []string{}, 
policy.EnforcerMapping, policy.Reject, policy.Reject),\n\t\t}\n\t\tta1, _ := tokenaccessor.New(\"pu1\", 1*time.Hour, nil)\n\t\tpu1, _ := NewPU(\"pu1\", fp, ta1, 24*time.Hour)\n\t\tpu1.managementID = \"newPU1\"\n\t\ttags1 := policy.NewTagStore()\n\t\ttags1.AppendKeyValue(\"pu\", \"pu1\")\n\t\tpu1.compressedTags = tags1\n\n\t\tta2, _ := tokenaccessor.New(\"pu2\", 1*time.Hour, nil)\n\t\tpu2, _ := NewPU(\"pu2\", fp, ta2, 24*time.Hour)\n\t\tpu2.managementID = \"newPU2\"\n\t\ttags2 := policy.NewTagStore()\n\t\ttags2.AppendKeyValue(\"pu\", \"pu2\")\n\t\tpu2.compressedTags = tags2\n\n\t\treturn pu1, pu2\n\t}\n\n\tpu1, pu2 := setup()\n\ttoken := createV1SynToken(pu1, claimsheader.NewClaimsHeader())\n\n\tclaimsOnSynRcvd := &tokens.ConnectionClaims{}\n\tka, _ := ephemeralkeys.New()\n\tsecretKey, _, _, remoteNonce, remoteContextID, _, err := pu2.tokenAccessor.ParsePacketToken(ka.PrivateKey(), token.token, scrts, claimsOnSynRcvd, false)\n\n\tassert.Equal(t, err, nil, \"ParsePacketToken should return nil\")\n\tassert.Equal(t, remoteContextID, pu1.managementID, \"ParsePacketToken should get the correct remote contextID\")\n\ttags := claimsOnSynRcvd.CT.GetSlice()\n\tassert.Equal(t, tags[0], \"pu=pu1\", \"Receiver should receive correct tags\")\n\n\tephKeySignV1, _ := pu2.tokenAccessor.Sign(ka.DecodingKeyV1(), scrts.EncodingKey().(*ecdsa.PrivateKey)) //nolint\n\n\tclaimsOnSynAckSend := &tokens.ConnectionClaims{\n\t\tCT:       pu2.CompressedTags(),\n\t\tLCL:      remoteNonce,\n\t\tRMT:      remoteNonce,\n\t\tDEKV1:    ka.DecodingKeyV1(),\n\t\tSDEKV1:   ephKeySignV1,\n\t\tID:       pu2.ManagementID(),\n\t\tRemoteID: pu1.managementID,\n\t}\n\n\tvar encodedBuf [tokens.ClaimsEncodedBufSize]byte\n\ttokenFromSynAck, _ := pu2.tokenAccessor.CreateSynAckPacketToken(false, claimsOnSynAckSend, encodedBuf[:], remoteNonce, claimsheader.NewClaimsHeader(), scrts, secretKey) //nolint\n\n\tclaimsOnSynAckRcvd := &tokens.ConnectionClaims{}\n\tsecretKey, _, _, remoteNonce, remoteContextID, _, err = 
pu1.tokenAccessor.ParsePacketToken(token.privateKey, tokenFromSynAck, scrts, claimsOnSynAckRcvd, true)\n\n\tassert.Equal(t, err, nil, \"ParsePacketToken should return nil\")\n\tassert.Equal(t, remoteContextID, pu2.managementID, \"ParsePacketToken should get the correct remote contextID\")\n\ttags = claimsOnSynAckRcvd.CT.GetSlice()\n\tassert.Equal(t, tags[0], \"pu=pu2\", \"Receiver should receive correct tags\")\n\n\tclaimsOnAckSend := &tokens.ConnectionClaims{\n\t\tID:       pu1.ManagementID(),\n\t\tRMT:      remoteNonce,\n\t\tRemoteID: remoteContextID,\n\t}\n\n\tackToken, _ := pu1.tokenAccessor.CreateAckPacketToken(false, secretKey, claimsOnAckSend, encodedBuf[:])\n\tclaimsOnAckRcvd := &tokens.ConnectionClaims{}\n\terr = pu2.tokenAccessor.ParseAckToken(false, secretKey, remoteNonce, ackToken, claimsOnAckRcvd)\n\tassert.Equal(t, err, nil, \"error should be nil\")\n}\n\nfunc Test_PUSearch(t *testing.T) {\n\n\tConvey(\"When I call PU Search\", t, func() {\n\n\t\tportRange80, _ := portspec.NewPortSpec(80, 85, nil)\n\t\tportRange90, _ := portspec.NewPortSpec(90, 100, nil)\n\n\t\ttagSelectorList := policy.TagSelectorList{\n\t\t\tpolicy.TagSelector{\n\t\t\t\tClause: []policy.KeyValueOperator{\n\t\t\t\t\t{\n\t\t\t\t\t\tKey:      \"app\",\n\t\t\t\t\t\tValue:    []string{\"web\"},\n\t\t\t\t\t\tID:       \"asfasfasdasd\",\n\t\t\t\t\t\tOperator: policy.Equal,\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tKey:       \"@sys:port\",\n\t\t\t\t\t\tValue:     []string{\"TCP\"},\n\t\t\t\t\t\tID:        \"\",\n\t\t\t\t\t\tOperator:  policy.Equal,\n\t\t\t\t\t\tPortRange: portRange80,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tPolicyID: \"2\",\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t},\n\t\t\t},\n\t\t\tpolicy.TagSelector{\n\t\t\t\tClause: []policy.KeyValueOperator{\n\t\t\t\t\t{\n\t\t\t\t\t\tKey:      \"app\",\n\t\t\t\t\t\tValue:    []string{\"web\"},\n\t\t\t\t\t\tID:       \"asfasfasdasd\",\n\t\t\t\t\t\tOperator: 
policy.Equal,\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tKey:       \"@sys:port\",\n\t\t\t\t\t\tValue:     []string{\"TCP\"},\n\t\t\t\t\t\tID:        \"\",\n\t\t\t\t\t\tOperator:  policy.Equal,\n\t\t\t\t\t\tPortRange: portRange90,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tPolicy: &policy.FlowPolicy{\n\t\t\t\t\tPolicyID: \"2\",\n\t\t\t\t\tAction:   policy.Accept,\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\n\t\td := policy.NewPUPolicy(\n\t\t\t\"id\",\n\t\t\t\"/abc\",\n\t\t\tpolicy.AllowAll,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\ttagSelectorList,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\t0,\n\t\t\t0,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\t[]string{},\n\t\t\tpolicy.EnforcerMapping,\n\t\t\tpolicy.Reject|policy.Log,\n\t\t\tpolicy.Reject|policy.Log,\n\t\t)\n\n\t\tfp := &policy.PUInfo{\n\t\t\tRuntime: policy.NewPURuntimeWithDefaults(),\n\t\t\tPolicy:  d,\n\t\t}\n\n\t\tpu, _ := NewPU(\"pu1\", fp, nil, 24*time.Hour)\n\n\t\ttags := policy.NewTagStore()\n\t\ttags.AppendKeyValue(\"app\", \"web\")\n\t\ttags.AppendKeyValue(constants.PortNumberLabelString, \"TCP/85\")\n\n\t\treport, flow := pu.SearchRcvRules(tags)\n\n\t\tConvey(\"The action should be Accept when port is 85\", func() {\n\t\t\tSo(flow, ShouldNotBeNil)\n\t\t\tSo(report, ShouldNotBeNil)\n\t\t\tSo(flow.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(report.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(flow, ShouldNotBeNil)\n\t\t})\n\n\t\ttags = policy.NewTagStore()\n\t\ttags.AppendKeyValue(\"app\", \"web\")\n\t\ttags.AppendKeyValue(constants.PortNumberLabelString, \"TCP/98\")\n\n\t\treport, flow = pu.SearchRcvRules(tags)\n\n\t\tConvey(\"The action should be Accept when port is 98\", func() {\n\t\t\tSo(flow, ShouldNotBeNil)\n\t\t\tSo(report, ShouldNotBeNil)\n\t\t\tSo(flow.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(report.Action, ShouldEqual, policy.Accept)\n\t\t\tSo(flow, ShouldNotBeNil)\n\t\t})\n\n\t\ttags = policy.NewTagStore()\n\t\ttags.AppendKeyValue(\"app\", 
\"web\")\n\t\ttags.AppendKeyValue(constants.PortNumberLabelString, \"TCP/101\")\n\n\t\treport, flow = pu.SearchRcvRules(tags)\n\n\t\tConvey(\"The action should be Reject when port is 101\", func() {\n\t\t\tSo(flow, ShouldNotBeNil)\n\t\t\tSo(report, ShouldNotBeNil)\n\t\t\tSo(flow.Action, ShouldEqual, policy.Reject|policy.Log)\n\t\t\tSo(report.Action, ShouldEqual, policy.Reject|policy.Log)\n\t\t\tSo(flow, ShouldNotBeNil)\n\t\t})\n\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/interfaces.go",
    "content": "package remoteenforcer\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n)\n\nconst (\n\t// InitEnforcer is string for invoking RPC\n\tInitEnforcer = \"RemoteEnforcer.InitEnforcer\"\n\t// Unenforce is string for invoking RPC\n\tUnenforce = \"RemoteEnforcer.Unenforce\"\n\t// Enforce is string for invoking RPC\n\tEnforce = \"RemoteEnforcer.Enforce\"\n\t// EnforcerExit is string for invoking RPC\n\tEnforcerExit = \"RemoteEnforcer.EnforcerExit\"\n\t// UpdateSecrets is string for invoking updatesecrets RPC\n\tUpdateSecrets = \"RemoteEnforcer.UpdateSecrets\"\n\t// SetTargetNetworks is string for invoking SetTargetNetworks RPC\n\tSetTargetNetworks = \"RemoteEnforcer.SetTargetNetworks\"\n\t// EnableIPTablesPacketTracing enables iptables trace mode\n\tEnableIPTablesPacketTracing = \"RemoteEnforcer.EnableIPTablesPacketTracing\"\n\t// EnableDatapathPacketTracing enables datapath packet tracing\n\tEnableDatapathPacketTracing = \"RemoteEnforcer.EnableDatapathPacketTracing\"\n\t// SetLogLevel is string for invoking set log level RPC\n\tSetLogLevel = \"RemoteEnforcer.SetLogLevel\"\n\t// Ping is the string for invoking ping RPC\n\tPing = \"RemoteEnforcer.Ping\"\n\t// DebugCollect is the string for invoking DebugCollect RPC\n\tDebugCollect = \"RemoteEnforcer.DebugCollect\"\n)\n\n// RemoteIntf is the interface implemented by the remote enforcer\ntype RemoteIntf interface {\n\t// InitEnforcer is a function called from the controller using RPC.\n\t// It initializes the data structures required by the remote enforcer\n\tInitEnforcer(req rpcwrapper.Request, resp *rpcwrapper.Response) error\n\n\t// Unenforce calls the unenforce method on the enforcer created from initenforcer\n\tUnenforce(req rpcwrapper.Request, resp *rpcwrapper.Response) error\n\n\t// Enforce calls the enforce method on the enforcer created during initenforcer\n\tEnforce(req rpcwrapper.Request, resp *rpcwrapper.Response) error\n\n\t// 
EnforcerExit is called when we receive a killprocess message from the controller\n\t// This allows a graceful exit of the enforcer\n\tEnforcerExit(req rpcwrapper.Request, resp *rpcwrapper.Response) error\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/internal/client/interfaces.go",
    "content": "package client\n\nimport \"context\"\n\n// Reporter interface provides functions to start/stop a remote client\n// A remote client is an active component responsible for collecting\n// events reported by the datapath and shipping them to the master enforcer.\ntype Reporter interface {\n\tRun(ctx context.Context) error\n\tSend() error\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/internal/client/mockclient/mockclient.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/pkg/remoteenforcer/internal/client/interfaces.go\n\n// Package mockclient is a generated GoMock package.\npackage mockclient\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n)\n\n// MockReporter is a mock of Reporter interface\n// nolint\ntype MockReporter struct {\n\tctrl     *gomock.Controller\n\trecorder *MockReporterMockRecorder\n}\n\n// MockReporterMockRecorder is the mock recorder for MockReporter\n// nolint\ntype MockReporterMockRecorder struct {\n\tmock *MockReporter\n}\n\n// NewMockReporter creates a new mock instance\n// nolint\nfunc NewMockReporter(ctrl *gomock.Controller) *MockReporter {\n\tmock := &MockReporter{ctrl: ctrl}\n\tmock.recorder = &MockReporterMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockReporter) EXPECT() *MockReporterMockRecorder {\n\treturn m.recorder\n}\n\n// Run mocks base method\n// nolint\nfunc (m *MockReporter) Run(ctx context.Context) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Run\", ctx)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Run indicates an expected call of Run\n// nolint\nfunc (mr *MockReporterMockRecorder) Run(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Run\", reflect.TypeOf((*MockReporter)(nil).Run), ctx)\n}\n\n// Send mocks base method\n// nolint\nfunc (m *MockReporter) Send() error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Send\")\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Send indicates an expected call of Send\n// nolint\nfunc (mr *MockReporterMockRecorder) Send() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Send\", reflect.TypeOf((*MockReporter)(nil).Send))\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/internal/client/reportsclient/client.go",
    "content": "package reports\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"os\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/client\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/statscollector\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\treportContextID  = \"UNUSED\"\n\treportRPCCommand = \"ProxyRPCServer.PostReportEvent\"\n)\n\n// reportsClient is the struct storing state for the rpc client\n// which reports events back to the controller process\ntype reportsClient struct {\n\tcollector     statscollector.Collector\n\trpchdl        *rpcwrapper.RPCWrapper\n\tsecret        string\n\treportChannel string\n\tstop          chan bool\n}\n\n// NewClient initializes a new reports client\nfunc NewClient(cr statscollector.Collector) (client.Reporter, error) {\n\n\tp := &reportsClient{\n\t\tcollector:     cr,\n\t\trpchdl:        rpcwrapper.NewRPCWrapper(),\n\t\tsecret:        os.Getenv(constants.EnvStatsSecret),\n\t\treportChannel: os.Getenv(constants.EnvStatsChannel),\n\t\tstop:          make(chan bool),\n\t}\n\n\tif p.reportChannel == \"\" {\n\t\treturn nil, errors.New(\"no path to stats socket provided\")\n\t}\n\n\tif p.secret == \"\" {\n\t\treturn nil, errors.New(\"no secret provided for stats channel\")\n\t}\n\n\treturn p, nil\n}\n\nfunc (p *reportsClient) sendStats(ctx context.Context) {\n\n\tfor {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn\n\t\tcase r := <-p.collector.GetReports():\n\t\t\tp.sendRequest(r)\n\t\t}\n\t}\n}\n\nfunc (p *reportsClient) sendRequest(report *statscollector.Report) {\n\n\trequest := rpcwrapper.Request{\n\t\tPayloadType: reportTypeToPayloadType(report.Type),\n\t\tPayload:     report.Payload,\n\t}\n\n\tif err := 
p.rpchdl.RemoteCall(\n\t\treportContextID,\n\t\treportRPCCommand,\n\t\t&request,\n\t\t&rpcwrapper.Response{},\n\t); err != nil {\n\t\tzap.L().Error(\"unable to execute rpc\", zap.Error(err))\n\t}\n}\n\n// Run is called by the remoteenforcer to connect back\n// to the controller over the stats channel\nfunc (p *reportsClient) Run(ctx context.Context) error {\n\tif err := p.rpchdl.NewRPCClient(reportContextID, p.reportChannel, p.secret); err != nil {\n\t\tzap.L().Error(\"unable to create new rpc client\", zap.Error(err))\n\t\treturn err\n\t}\n\n\tgo p.sendStats(ctx)\n\n\treturn nil\n}\n\n// Send is unimplemented.\nfunc (p *reportsClient) Send() error {\n\treturn nil\n}\n\nfunc reportTypeToPayloadType(rtype statscollector.ReportType) (ptype rpcwrapper.PayloadType) {\n\n\tswitch rtype {\n\tcase statscollector.PacketReport:\n\t\tptype = rpcwrapper.PacketReport\n\tcase statscollector.CounterReport:\n\t\tptype = rpcwrapper.CounterReport\n\tcase statscollector.DNSReport:\n\t\tptype = rpcwrapper.DNSReport\n\tcase statscollector.PingReport:\n\t\tptype = rpcwrapper.PingReport\n\tcase statscollector.ConnectionExceptionReport:\n\t\tptype = rpcwrapper.ConnectionExceptionReport\n\tdefault:\n\t\treturn\n\t}\n\n\treturn ptype\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/internal/client/statsclient/client.go",
    "content": "package statsclient\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"os\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/client\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/statscollector\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\tdefaultStatsIntervalMilliseconds = 1000\n\tdefaultUserRetention             = 10\n\tstatsContextID                   = \"UNUSED\"\n\tstatsRPCCommand                  = \"ProxyRPCServer.PostStats\"\n)\n\n// statsClient is the struct storing state for the rpc client\n// which reports flow stats back to the controller process\ntype statsClient struct {\n\tcollector     statscollector.Collector\n\trpchdl        *rpcwrapper.RPCWrapper\n\tsecret        string\n\tstatsChannel  string\n\tstatsInterval time.Duration\n\tuserRetention time.Duration\n\tstop          chan bool\n}\n\n// NewStatsClient initializes a new stats client\nfunc NewStatsClient(cr statscollector.Collector) (client.Reporter, error) {\n\n\tsc := &statsClient{\n\t\tcollector:     cr,\n\t\trpchdl:        rpcwrapper.NewRPCWrapper(),\n\t\tsecret:        os.Getenv(constants.EnvStatsSecret),\n\t\tstatsChannel:  os.Getenv(constants.EnvStatsChannel),\n\t\tstatsInterval: defaultStatsIntervalMilliseconds * time.Millisecond,\n\t\tuserRetention: defaultUserRetention * time.Minute,\n\t\tstop:          make(chan bool),\n\t}\n\n\tif sc.statsChannel == \"\" {\n\t\treturn nil, errors.New(\"no path to stats socket provided\")\n\t}\n\n\tif sc.secret == \"\" {\n\t\treturn nil, errors.New(\"no secret provided for stats channel\")\n\t}\n\n\treturn sc, nil\n}\n\n// sendStats is an async function which makes an rpc call to send stats every statsInterval\nfunc (s *statsClient) sendStats(ctx context.Context) 
{\n\n\tticker := time.NewTicker(s.statsInterval)\n\tuserTicker := time.NewTicker(s.userRetention)\n\t// nolint: gosimple\n\tfor {\n\t\tselect {\n\t\tcase <-ticker.C:\n\n\t\t\tflows := s.collector.GetFlowRecords()\n\t\t\tusers := s.collector.GetUserRecords()\n\t\t\tif flows == nil && users == nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\ts.sendRequest(flows, users)\n\t\tcase <-userTicker.C:\n\t\t\ts.collector.FlushUserCache()\n\t\tcase <-ctx.Done():\n\t\t\treturn\n\t\t}\n\t}\n\n}\n\nfunc (s *statsClient) sendRequest(flows map[uint64]*collector.FlowRecord, users map[string]*collector.UserRecord) {\n\n\trequest := rpcwrapper.Request{\n\t\tPayload: &rpcwrapper.StatsPayload{\n\t\t\tFlows: flows,\n\t\t\tUsers: users,\n\t\t},\n\t}\n\n\tif err := s.rpchdl.RemoteCall(\n\t\tstatsContextID,\n\t\tstatsRPCCommand,\n\t\t&request,\n\t\t&rpcwrapper.Response{},\n\t); err != nil {\n\t\tzap.L().Error(\"RPC failure in sending statistics: Unable to send flows\", zap.Error(err))\n\t}\n}\n\n// Send sends all the stats from the cache\nfunc (s *statsClient) Send() error {\n\n\tflows := s.collector.GetFlowRecords()\n\tusers := s.collector.GetUserRecords()\n\tif flows == nil && users == nil {\n\t\tzap.L().Debug(\"Flows and UserRecords are nil while sending stats to collector\")\n\t\treturn nil\n\t}\n\n\ts.sendRequest(flows, users)\n\treturn nil\n}\n\n// Run is called by the remoteenforcer to connect back\n// to the controller over the stats channel\nfunc (s *statsClient) Run(ctx context.Context) error {\n\tif err := s.rpchdl.NewRPCClient(statsContextID, s.statsChannel, s.secret); err != nil {\n\t\tzap.L().Error(\"Stats RPC client cannot connect\", zap.Error(err))\n\t\treturn err\n\t}\n\n\tgo s.sendStats(ctx)\n\n\treturn nil\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/internal/statscollector/collector.go",
    "content": "package statscollector\n\nimport (\n\t\"sync\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n)\n\n// NewCollector provides a new collector interface\nfunc NewCollector() Collector {\n\treturn &collectorImpl{\n\t\tFlows:          map[uint64]*collector.FlowRecord{},\n\t\tUsers:          map[string]*collector.UserRecord{},\n\t\tProcessedUsers: map[string]bool{},\n\t\tReports:        make(chan *Report, 1000),\n\t}\n}\n\n// collectorImpl : This object is a stash that implements two interfaces.\n//\n//  collector.EventCollector - so datapath can report flow events\n//  CollectorReader - so components can extract information out of this stash\n//\n// It has a flow entries cache which contains unique flows that are reported\n// back to the controller/launcher process\ntype collectorImpl struct {\n\tFlows          map[uint64]*collector.FlowRecord\n\tProcessedUsers map[string]bool\n\tUsers          map[string]*collector.UserRecord\n\tReports        chan *Report\n\n\tsync.Mutex\n}\n\n// ReportType is the type of report.\ntype ReportType uint8\n\n// ReportTypes.\nconst (\n\tFlowRecord ReportType = iota\n\tUserRecord\n\tPacketReport\n\tCounterReport\n\tDNSReport\n\tPingReport\n\tConnectionExceptionReport\n)\n\n// Report holds the report type and the payload.\ntype Report struct {\n\tType    ReportType\n\tPayload interface{}\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/internal/statscollector/collector_reader.go",
    "content": "package statscollector\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n)\n\n// Count returns the current number of records collected.\nfunc (c *collectorImpl) Count() int {\n\tc.Lock()\n\tdefer c.Unlock()\n\n\treturn len(c.Flows)\n}\n\n// GetFlowRecords should return all flow records stashed so far.\nfunc (c *collectorImpl) GetFlowRecords() map[uint64]*collector.FlowRecord {\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tif len(c.Flows) == 0 {\n\t\treturn nil\n\t}\n\n\tretval := c.Flows\n\tc.Flows = make(map[uint64]*collector.FlowRecord)\n\treturn retval\n}\n\n// GetUserRecords retrieves all the user records.\nfunc (c *collectorImpl) GetUserRecords() map[string]*collector.UserRecord {\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tif len(c.Users) == 0 {\n\t\treturn nil\n\t}\n\n\tretval := c.Users\n\tc.Users = map[string]*collector.UserRecord{}\n\treturn retval\n}\n\n// FlushUserCache flushes the user cache.\nfunc (c *collectorImpl) FlushUserCache() {\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tc.ProcessedUsers = map[string]bool{}\n}\n\n// GetReports returns reports channel.\nfunc (c *collectorImpl) GetReports() chan *Report {\n\tc.Lock()\n\tdefer c.Unlock()\n\n\treturn c.Reports\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/internal/statscollector/collector_test.go",
    "content": "package statscollector\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packet\"\n)\n\nfunc TestNewCollector(t *testing.T) {\n\tConvey(\"When I create a new collector\", t, func() {\n\t\tc := NewCollector()\n\t\tConvey(\"The collector should not be nil \", func() {\n\t\t\tSo(c, ShouldNotBeNil)\n\t\t\tSo(c.GetFlowRecords(), ShouldBeNil)\n\t\t})\n\t})\n}\n\nfunc TestCollectFlowEvent(t *testing.T) {\n\tConvey(\"Given a stats collector\", t, func() {\n\t\tc := &collectorImpl{\n\t\t\tFlows: map[uint64]*collector.FlowRecord{},\n\t\t}\n\n\t\tConvey(\"When I add a flow event\", func() {\n\t\t\tr := &collector.FlowRecord{\n\t\t\t\tContextID: \"1\",\n\t\t\t\tSource: collector.EndPoint{\n\t\t\t\t\tID:   \"A\",\n\t\t\t\t\tIP:   \"1.1.1.1\",\n\t\t\t\t\tType: collector.EndPointTypePU,\n\t\t\t\t},\n\t\t\t\tDestination: collector.EndPoint{\n\t\t\t\t\tID:   \"B\",\n\t\t\t\t\tIP:   \"2.2.2.2\",\n\t\t\t\t\tType: collector.EndPointTypePU,\n\t\t\t\t\tPort: 80,\n\t\t\t\t},\n\t\t\t\tCount:      0,\n\t\t\t\tTags:       []string{},\n\t\t\t\tL4Protocol: packet.IPProtocolTCP,\n\t\t\t}\n\t\t\tc.CollectFlowEvent(r)\n\n\t\t\tConvey(\"The flow should be in the cache\", func() {\n\t\t\t\tSo(len(c.Flows), ShouldEqual, 1)\n\t\t\t\tSo(c.Flows[collector.StatsFlowContentHash(r)], ShouldNotBeNil)\n\t\t\t\tSo(c.Flows[collector.StatsFlowContentHash(r)].Count, ShouldEqual, 1)\n\t\t\t})\n\n\t\t\tConvey(\"When I add a second flow that matches\", func() {\n\t\t\t\tr := &collector.FlowRecord{\n\t\t\t\t\tContextID: \"1\",\n\t\t\t\t\tSource: collector.EndPoint{\n\t\t\t\t\t\tID:   \"A\",\n\t\t\t\t\t\tIP:   \"1.1.1.1\",\n\t\t\t\t\t\tType: collector.EndPointTypePU,\n\t\t\t\t\t},\n\t\t\t\t\tDestination: collector.EndPoint{\n\t\t\t\t\t\tID:   \"B\",\n\t\t\t\t\t\tIP:   \"2.2.2.2\",\n\t\t\t\t\t\tType: collector.EndPointTypePU,\n\t\t\t\t\t\tPort: 
80,\n\t\t\t\t\t},\n\t\t\t\t\tCount:      10,\n\t\t\t\t\tTags:       []string{},\n\t\t\t\t\tL4Protocol: packet.IPProtocolTCP,\n\t\t\t\t}\n\t\t\t\tc.CollectFlowEvent(r)\n\t\t\t\tConvey(\"The flow should be in the cache\", func() {\n\t\t\t\t\tSo(len(c.Flows), ShouldEqual, 1)\n\t\t\t\t\tSo(c.Flows[collector.StatsFlowContentHash(r)], ShouldNotBeNil)\n\t\t\t\t\tSo(c.Flows[collector.StatsFlowContentHash(r)].Count, ShouldEqual, 11)\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I add a third flow that doesn't match the previous flows\", func() {\n\t\t\t\tr := &collector.FlowRecord{\n\t\t\t\t\tContextID: \"1\",\n\t\t\t\t\tSource: collector.EndPoint{\n\t\t\t\t\t\tID:   \"C\",\n\t\t\t\t\t\tIP:   \"3.3.3.3\",\n\t\t\t\t\t\tType: collector.EndPointTypePU,\n\t\t\t\t\t},\n\t\t\t\t\tDestination: collector.EndPoint{\n\t\t\t\t\t\tID:   \"D\",\n\t\t\t\t\t\tIP:   \"4.4.4.4\",\n\t\t\t\t\t\tType: collector.EndPointTypePU,\n\t\t\t\t\t\tPort: 80,\n\t\t\t\t\t},\n\t\t\t\t\tCount:      33,\n\t\t\t\t\tTags:       []string{},\n\t\t\t\t\tL4Protocol: packet.IPProtocolTCP,\n\t\t\t\t}\n\t\t\t\tc.CollectFlowEvent(r)\n\t\t\t\tConvey(\"The flow should be in the cache\", func() {\n\t\t\t\t\tSo(len(c.Flows), ShouldEqual, 2)\n\t\t\t\t\tSo(c.Flows[collector.StatsFlowContentHash(r)], ShouldNotBeNil)\n\t\t\t\t\tSo(c.Flows[collector.StatsFlowContentHash(r)].Count, ShouldEqual, 33)\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestGetAllDataPathPacketRecords(t *testing.T) {\n\tConvey(\"Given I create a new collector\", t, func() {\n\t\tc := NewCollector()\n\t\tConvey(\"I trace a single packet\", func() {\n\t\t\tc.CollectPacketEvent(&collector.PacketReport{\n\t\t\t\tDestinationIP: \"1.2.3.4\",\n\t\t\t})\n\t\t\trecords := c.GetReports()\n\t\t\tSo(len(records), ShouldEqual, 1)\n\t\t})\n\t})\n}\n\nfunc TestAllCounterReports(t *testing.T) {\n\tConvey(\"Given I create a new collector\", t, func() {\n\t\tc := NewCollector()\n\t\tc.(*collectorImpl).Reports = make(chan *Report, 1)\n\t\tConvey(\"I trace a single 
packet\", func() {\n\t\t\tc.CollectCounterEvent(&collector.CounterReport{})\n\t\t\trecords := c.GetReports()\n\t\t\tSo(len(records), ShouldEqual, 1)\n\t\t\tc.CollectCounterEvent(&collector.CounterReport{})\n\t\t\trecords = c.GetReports()\n\t\t\tSo(len(records), ShouldEqual, 1)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/internal/statscollector/collector_trireme.go",
    "content": "package statscollector\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.uber.org/zap\"\n)\n\n// CollectFlowEvent collects a new flow event and adds it to a local list it shares with SendStats\nfunc (c *collectorImpl) CollectFlowEvent(record *collector.FlowRecord) {\n\n\thash := collector.StatsFlowContentHash(record)\n\n\t// If flow event doesn't have a count make it equal to 1. At least one flow is collected\n\tif record.Count == 0 {\n\t\trecord.Count = 1\n\t}\n\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tif r, ok := c.Flows[hash]; ok {\n\t\tr.Count = r.Count + record.Count\n\t\treturn\n\t}\n\n\tc.Flows[hash] = record\n\n\tc.Flows[hash].Tags = record.Tags\n}\n\n// CollectContainerEvent is called when container events are received\nfunc (c *collectorImpl) CollectContainerEvent(record *collector.ContainerRecord) {\n\tzap.L().Error(\"Unexpected call for collecting container event\")\n}\n\n// CollectUserEvent collects a new user event and adds it to a local cache.\nfunc (c *collectorImpl) CollectUserEvent(record *collector.UserRecord) {\n\n\tif err := collector.StatsUserHash(record); err != nil {\n\t\tzap.L().Error(\"Cannot store user record\", zap.Error(err))\n\t\treturn\n\t}\n\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tif _, ok := c.ProcessedUsers[record.ID]; !ok {\n\t\tc.Users[record.ID] = record\n\t\tc.ProcessedUsers[record.ID] = true\n\t}\n}\n\n// CollectTraceEvent collect trace events\nfunc (c *collectorImpl) CollectTraceEvent(records []string) {\n\t//We will leave this unimplemented\n\t// trace event collection in done from the main enforcer\n}\n\n// CollectTraceEvent collect trace events\nfunc (c *collectorImpl) CollectPacketEvent(report *collector.PacketReport) {\n\tc.send(PacketReport, report)\n}\n\n// CollectCounterEvent collect counters from the datapath\nfunc (c *collectorImpl) CollectCounterEvent(report *collector.CounterReport) {\n\tc.send(CounterReport, report)\n}\n\n// CollectCounterEvent collect counters from the 
datapath\nfunc (c *collectorImpl) CollectDNSRequests(report *collector.DNSRequestReport) {\n\tc.send(DNSReport, report)\n}\n\n// CollectPingEvent collect ping events from the datapath\nfunc (c *collectorImpl) CollectPingEvent(report *collector.PingReport) {\n\tc.send(PingReport, report)\n}\n\n// CollectConnectionExceptionReport collect collect connection exception reports from the datapath\nfunc (c *collectorImpl) CollectConnectionExceptionReport(report *collector.ConnectionExceptionReport) {\n\tc.send(ConnectionExceptionReport, report)\n}\n\nfunc (c *collectorImpl) send(rtype ReportType, report interface{}) {\n\n\tselect {\n\tcase c.Reports <- &Report{rtype, report}:\n\tdefault:\n\t\tzap.L().Warn(\"Reports queue full, dropped\",\n\t\t\tzap.Reflect(\"reportType\", rtype),\n\t\t\tzap.Reflect(\"report\", report),\n\t\t)\n\t}\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/internal/statscollector/interfaces.go",
    "content": "package statscollector\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n)\n\n// CollectorReader interface which provides functions to query pending stats.\ntype CollectorReader interface {\n\tCount() int\n\tFlushUserCache()\n\tGetFlowRecords() map[uint64]*collector.FlowRecord\n\tGetUserRecords() map[string]*collector.UserRecord\n\tGetReports() chan *Report\n}\n\n// Collector interface implements event collector.\ntype Collector interface {\n\tCollectorReader\n\tcollector.EventCollector\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/internal/statscollector/mockstatscollector/mockstatscollector.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/pkg/remoteenforcer/internal/statscollector/interfaces.go\n\n// Package mockstatscollector is a generated GoMock package.\npackage mockstatscollector\n\nimport (\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tcollector \"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\tstatscollector \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/statscollector\"\n)\n\n// MockCollectorReader is a mock of CollectorReader interface\n// nolint\ntype MockCollectorReader struct {\n\tctrl     *gomock.Controller\n\trecorder *MockCollectorReaderMockRecorder\n}\n\n// MockCollectorReaderMockRecorder is the mock recorder for MockCollectorReader\n// nolint\ntype MockCollectorReaderMockRecorder struct {\n\tmock *MockCollectorReader\n}\n\n// NewMockCollectorReader creates a new mock instance\n// nolint\nfunc NewMockCollectorReader(ctrl *gomock.Controller) *MockCollectorReader {\n\tmock := &MockCollectorReader{ctrl: ctrl}\n\tmock.recorder = &MockCollectorReaderMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockCollectorReader) EXPECT() *MockCollectorReaderMockRecorder {\n\treturn m.recorder\n}\n\n// Count mocks base method\n// nolint\nfunc (m *MockCollectorReader) Count() int {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Count\")\n\tret0, _ := ret[0].(int)\n\treturn ret0\n}\n\n// Count indicates an expected call of Count\n// nolint\nfunc (mr *MockCollectorReaderMockRecorder) Count() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Count\", reflect.TypeOf((*MockCollectorReader)(nil).Count))\n}\n\n// FlushUserCache mocks base method\n// nolint\nfunc (m *MockCollectorReader) FlushUserCache() {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"FlushUserCache\")\n}\n\n// FlushUserCache indicates an expected call of 
FlushUserCache\n// nolint\nfunc (mr *MockCollectorReaderMockRecorder) FlushUserCache() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"FlushUserCache\", reflect.TypeOf((*MockCollectorReader)(nil).FlushUserCache))\n}\n\n// GetFlowRecords mocks base method\n// nolint\nfunc (m *MockCollectorReader) GetFlowRecords() map[uint64]*collector.FlowRecord {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetFlowRecords\")\n\tret0, _ := ret[0].(map[uint64]*collector.FlowRecord)\n\treturn ret0\n}\n\n// GetFlowRecords indicates an expected call of GetFlowRecords\n// nolint\nfunc (mr *MockCollectorReaderMockRecorder) GetFlowRecords() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetFlowRecords\", reflect.TypeOf((*MockCollectorReader)(nil).GetFlowRecords))\n}\n\n// GetUserRecords mocks base method\n// nolint\nfunc (m *MockCollectorReader) GetUserRecords() map[string]*collector.UserRecord {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetUserRecords\")\n\tret0, _ := ret[0].(map[string]*collector.UserRecord)\n\treturn ret0\n}\n\n// GetUserRecords indicates an expected call of GetUserRecords\n// nolint\nfunc (mr *MockCollectorReaderMockRecorder) GetUserRecords() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetUserRecords\", reflect.TypeOf((*MockCollectorReader)(nil).GetUserRecords))\n}\n\n// GetReports mocks base method\n// nolint\nfunc (m *MockCollectorReader) GetReports() chan *statscollector.Report {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetReports\")\n\tret0, _ := ret[0].(chan *statscollector.Report)\n\treturn ret0\n}\n\n// GetReports indicates an expected call of GetReports\n// nolint\nfunc (mr *MockCollectorReaderMockRecorder) GetReports() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetReports\", 
reflect.TypeOf((*MockCollectorReader)(nil).GetReports))\n}\n\n// MockCollector is a mock of Collector interface\n// nolint\ntype MockCollector struct {\n\tctrl     *gomock.Controller\n\trecorder *MockCollectorMockRecorder\n}\n\n// MockCollectorMockRecorder is the mock recorder for MockCollector\n// nolint\ntype MockCollectorMockRecorder struct {\n\tmock *MockCollector\n}\n\n// NewMockCollector creates a new mock instance\n// nolint\nfunc NewMockCollector(ctrl *gomock.Controller) *MockCollector {\n\tmock := &MockCollector{ctrl: ctrl}\n\tmock.recorder = &MockCollectorMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockCollector) EXPECT() *MockCollectorMockRecorder {\n\treturn m.recorder\n}\n\n// Count mocks base method\n// nolint\nfunc (m *MockCollector) Count() int {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Count\")\n\tret0, _ := ret[0].(int)\n\treturn ret0\n}\n\n// Count indicates an expected call of Count\n// nolint\nfunc (mr *MockCollectorMockRecorder) Count() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Count\", reflect.TypeOf((*MockCollector)(nil).Count))\n}\n\n// FlushUserCache mocks base method\n// nolint\nfunc (m *MockCollector) FlushUserCache() {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"FlushUserCache\")\n}\n\n// FlushUserCache indicates an expected call of FlushUserCache\n// nolint\nfunc (mr *MockCollectorMockRecorder) FlushUserCache() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"FlushUserCache\", reflect.TypeOf((*MockCollector)(nil).FlushUserCache))\n}\n\n// GetFlowRecords mocks base method\n// nolint\nfunc (m *MockCollector) GetFlowRecords() map[uint64]*collector.FlowRecord {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetFlowRecords\")\n\tret0, _ := ret[0].(map[uint64]*collector.FlowRecord)\n\treturn ret0\n}\n\n// GetFlowRecords indicates an 
expected call of GetFlowRecords\n// nolint\nfunc (mr *MockCollectorMockRecorder) GetFlowRecords() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetFlowRecords\", reflect.TypeOf((*MockCollector)(nil).GetFlowRecords))\n}\n\n// GetUserRecords mocks base method\n// nolint\nfunc (m *MockCollector) GetUserRecords() map[string]*collector.UserRecord {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetUserRecords\")\n\tret0, _ := ret[0].(map[string]*collector.UserRecord)\n\treturn ret0\n}\n\n// GetUserRecords indicates an expected call of GetUserRecords\n// nolint\nfunc (mr *MockCollectorMockRecorder) GetUserRecords() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetUserRecords\", reflect.TypeOf((*MockCollector)(nil).GetUserRecords))\n}\n\n// GetReports mocks base method\n// nolint\nfunc (m *MockCollector) GetReports() chan *statscollector.Report {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetReports\")\n\tret0, _ := ret[0].(chan *statscollector.Report)\n\treturn ret0\n}\n\n// GetReports indicates an expected call of GetReports\n// nolint\nfunc (mr *MockCollectorMockRecorder) GetReports() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetReports\", reflect.TypeOf((*MockCollector)(nil).GetReports))\n}\n\n// CollectFlowEvent mocks base method\n// nolint\nfunc (m *MockCollector) CollectFlowEvent(record *collector.FlowRecord) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectFlowEvent\", record)\n}\n\n// CollectFlowEvent indicates an expected call of CollectFlowEvent\n// nolint\nfunc (mr *MockCollectorMockRecorder) CollectFlowEvent(record interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectFlowEvent\", reflect.TypeOf((*MockCollector)(nil).CollectFlowEvent), record)\n}\n\n// CollectContainerEvent mocks base method\n// nolint\nfunc (m 
*MockCollector) CollectContainerEvent(record *collector.ContainerRecord) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectContainerEvent\", record)\n}\n\n// CollectContainerEvent indicates an expected call of CollectContainerEvent\n// nolint\nfunc (mr *MockCollectorMockRecorder) CollectContainerEvent(record interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectContainerEvent\", reflect.TypeOf((*MockCollector)(nil).CollectContainerEvent), record)\n}\n\n// CollectUserEvent mocks base method\n// nolint\nfunc (m *MockCollector) CollectUserEvent(record *collector.UserRecord) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectUserEvent\", record)\n}\n\n// CollectUserEvent indicates an expected call of CollectUserEvent\n// nolint\nfunc (mr *MockCollectorMockRecorder) CollectUserEvent(record interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectUserEvent\", reflect.TypeOf((*MockCollector)(nil).CollectUserEvent), record)\n}\n\n// CollectTraceEvent mocks base method\n// nolint\nfunc (m *MockCollector) CollectTraceEvent(records []string) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectTraceEvent\", records)\n}\n\n// CollectTraceEvent indicates an expected call of CollectTraceEvent\n// nolint\nfunc (mr *MockCollectorMockRecorder) CollectTraceEvent(records interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectTraceEvent\", reflect.TypeOf((*MockCollector)(nil).CollectTraceEvent), records)\n}\n\n// CollectPacketEvent mocks base method\n// nolint\nfunc (m *MockCollector) CollectPacketEvent(report *collector.PacketReport) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectPacketEvent\", report)\n}\n\n// CollectPacketEvent indicates an expected call of CollectPacketEvent\n// nolint\nfunc (mr *MockCollectorMockRecorder) CollectPacketEvent(report interface{}) *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectPacketEvent\", reflect.TypeOf((*MockCollector)(nil).CollectPacketEvent), report)\n}\n\n// CollectCounterEvent mocks base method\n// nolint\nfunc (m *MockCollector) CollectCounterEvent(counterReport *collector.CounterReport) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectCounterEvent\", counterReport)\n}\n\n// CollectCounterEvent indicates an expected call of CollectCounterEvent\n// nolint\nfunc (mr *MockCollectorMockRecorder) CollectCounterEvent(counterReport interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectCounterEvent\", reflect.TypeOf((*MockCollector)(nil).CollectCounterEvent), counterReport)\n}\n\n// CollectDNSRequests mocks base method\n// nolint\nfunc (m *MockCollector) CollectDNSRequests(request *collector.DNSRequestReport) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectDNSRequests\", request)\n}\n\n// CollectDNSRequests indicates an expected call of CollectDNSRequests\n// nolint\nfunc (mr *MockCollectorMockRecorder) CollectDNSRequests(request interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectDNSRequests\", reflect.TypeOf((*MockCollector)(nil).CollectDNSRequests), request)\n}\n\n// CollectPingEvent mocks base method\n// nolint\nfunc (m *MockCollector) CollectPingEvent(report *collector.PingReport) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectPingEvent\", report)\n}\n\n// CollectPingEvent indicates an expected call of CollectPingEvent\n// nolint\nfunc (mr *MockCollectorMockRecorder) CollectPingEvent(report interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectPingEvent\", reflect.TypeOf((*MockCollector)(nil).CollectPingEvent), report)\n}\n\n// CollectConnectionExceptionReport mocks base method\n// nolint\nfunc (m *MockCollector) 
CollectConnectionExceptionReport(report *collector.ConnectionExceptionReport) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"CollectConnectionExceptionReport\", report)\n}\n\n// CollectConnectionExceptionReport indicates an expected call of CollectConnectionExceptionReport\n// nolint\nfunc (mr *MockCollectorMockRecorder) CollectConnectionExceptionReport(report interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CollectConnectionExceptionReport\", reflect.TypeOf((*MockCollector)(nil).CollectConnectionExceptionReport), report)\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/internal/tokenissuer/mocktokenclient/mocktokenclient.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/pkg/remoteenforcer/internal/tokenissuer/tokenissuer.go\n\n// Package mocktokenclient is a generated GoMock package.\npackage mocktokenclient\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\ttime \"time\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tcommon \"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\n// MockTokenClient is a mock of TokenClient interface\n// nolint\ntype MockTokenClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockTokenClientMockRecorder\n}\n\n// MockTokenClientMockRecorder is the mock recorder for MockTokenClient\n// nolint\ntype MockTokenClientMockRecorder struct {\n\tmock *MockTokenClient\n}\n\n// NewMockTokenClient creates a new mock instance\n// nolint\nfunc NewMockTokenClient(ctrl *gomock.Controller) *MockTokenClient {\n\tmock := &MockTokenClient{ctrl: ctrl}\n\tmock.recorder = &MockTokenClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockTokenClient) EXPECT() *MockTokenClientMockRecorder {\n\treturn m.recorder\n}\n\n// Run mocks base method\n// nolint\nfunc (m *MockTokenClient) Run(ctx context.Context) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Run\", ctx)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Run indicates an expected call of Run\n// nolint\nfunc (mr *MockTokenClientMockRecorder) Run(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Run\", reflect.TypeOf((*MockTokenClient)(nil).Run), ctx)\n}\n\n// Issue mocks base method\n// nolint\nfunc (m *MockTokenClient) Issue(ctx context.Context, contextID string, stype common.ServiceTokenType, audience string, validity time.Duration) (string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Issue\", ctx, contextID, stype, audience, validity)\n\tret0, _ := ret[0].(string)\n\tret1, _ := 
ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Issue indicates an expected call of Issue\n// nolint\nfunc (mr *MockTokenClientMockRecorder) Issue(ctx, contextID, stype, audience, validity interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Issue\", reflect.TypeOf((*MockTokenClient)(nil).Issue), ctx, contextID, stype, audience, validity)\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/internal/tokenissuer/tokenissuer.go",
    "content": "package tokenissuer\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"reflect\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.uber.org/zap\"\n)\n\n// TokenClient interface provides a start function. the client is used to\n// request tokens.\ntype TokenClient interface {\n\tRun(ctx context.Context) error\n\tIssue(ctx context.Context, contextID string, stype common.ServiceTokenType, audience string, validity time.Duration) (string, error)\n}\n\nconst (\n\ttokenIssuerContextID = \"UNUSED\"\n\tretrieveTokenCommand = \"ProxyRPCServer.RetrieveToken\"\n)\n\n// Client represents the remote API client.\ntype Client struct {\n\trpchdl     rpcwrapper.RPCClient\n\tsecret     string\n\tsocketPath string\n\tstop       chan bool\n}\n\n// NewClient returns a remote API client that can be used for\n// issuing API calls to the master enforcer.\nfunc NewClient() (*Client, error) {\n\tc := &Client{\n\t\trpchdl:     rpcwrapper.NewRPCWrapper(),\n\t\tsecret:     os.Getenv(constants.EnvStatsSecret),\n\t\tsocketPath: os.Getenv(constants.EnvStatsChannel),\n\t\tstop:       make(chan bool),\n\t}\n\tif c.socketPath == \"\" {\n\t\treturn nil, errors.New(\"no path to socket provided\")\n\t}\n\tif c.secret == \"\" {\n\t\treturn nil, errors.New(\"no secret provided for  channel\")\n\t}\n\n\treturn c, nil\n}\n\n// RetrieveToken will issue a token request to the main over the RPC channnel.\nfunc (c *Client) RetrieveToken(contextID string, stype common.ServiceTokenType, audience string, validity time.Duration) (string, error) {\n\n\trequest := &rpcwrapper.Request{\n\t\tPayload: &rpcwrapper.TokenRequestPayload{\n\t\t\tContextID:        contextID,\n\t\t\tAudience:         audience,\n\t\t\tValidity:         validity,\n\t\t\tServiceTokenType: stype,\n\t\t},\n\t}\n\n\tresponse := 
&rpcwrapper.Response{}\n\n\tif err := c.rpchdl.RemoteCall(tokenIssuerContextID, retrieveTokenCommand, request, response); err != nil {\n\t\treturn \"\", err\n\t}\n\n\tpayload, ok := response.Payload.(rpcwrapper.TokenResponsePayload)\n\tif !ok {\n\t\treturn \"\", fmt.Errorf(\"unrecognized response payload. Received payload is %s\", reflect.TypeOf(response.Payload))\n\t}\n\n\treturn payload.Token, nil\n}\n\n// Issue implements the ServiceTokenIssuer interface.\nfunc (c *Client) Issue(ctx context.Context, contextID string, stype common.ServiceTokenType, audience string, validity time.Duration) (string, error) {\n\treturn c.RetrieveToken(contextID, stype, audience, validity)\n}\n\n// Run will initialize the client.\nfunc (c *Client) Run(ctx context.Context) error {\n\tif err := c.rpchdl.NewRPCClient(tokenIssuerContextID, c.socketPath, c.secret); err != nil {\n\t\tzap.L().Error(\"CounterClient RPC client cannot connect\", zap.Error(err))\n\t\treturn err\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/mockremoteenforcer/mockremoteenforcer.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/pkg/remoteenforcer/interfaces.go\n\n// Package mockremoteenforcer is a generated GoMock package.\npackage mockremoteenforcer\n\nimport (\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\trpcwrapper \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n)\n\n// MockRemoteIntf is a mock of RemoteIntf interface\n// nolint\ntype MockRemoteIntf struct {\n\tctrl     *gomock.Controller\n\trecorder *MockRemoteIntfMockRecorder\n}\n\n// MockRemoteIntfMockRecorder is the mock recorder for MockRemoteIntf\n// nolint\ntype MockRemoteIntfMockRecorder struct {\n\tmock *MockRemoteIntf\n}\n\n// NewMockRemoteIntf creates a new mock instance\n// nolint\nfunc NewMockRemoteIntf(ctrl *gomock.Controller) *MockRemoteIntf {\n\tmock := &MockRemoteIntf{ctrl: ctrl}\n\tmock.recorder = &MockRemoteIntfMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockRemoteIntf) EXPECT() *MockRemoteIntfMockRecorder {\n\treturn m.recorder\n}\n\n// InitEnforcer mocks base method\n// nolint\nfunc (m *MockRemoteIntf) InitEnforcer(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"InitEnforcer\", req, resp)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// InitEnforcer indicates an expected call of InitEnforcer\n// nolint\nfunc (mr *MockRemoteIntfMockRecorder) InitEnforcer(req, resp interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"InitEnforcer\", reflect.TypeOf((*MockRemoteIntf)(nil).InitEnforcer), req, resp)\n}\n\n// Unenforce mocks base method\n// nolint\nfunc (m *MockRemoteIntf) Unenforce(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Unenforce\", req, resp)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// 
Unenforce indicates an expected call of Unenforce\n// nolint\nfunc (mr *MockRemoteIntfMockRecorder) Unenforce(req, resp interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Unenforce\", reflect.TypeOf((*MockRemoteIntf)(nil).Unenforce), req, resp)\n}\n\n// Enforce mocks base method\n// nolint\nfunc (m *MockRemoteIntf) Enforce(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Enforce\", req, resp)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Enforce indicates an expected call of Enforce\n// nolint\nfunc (mr *MockRemoteIntfMockRecorder) Enforce(req, resp interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Enforce\", reflect.TypeOf((*MockRemoteIntf)(nil).Enforce), req, resp)\n}\n\n// EnforcerExit mocks base method\n// nolint\nfunc (m *MockRemoteIntf) EnforcerExit(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"EnforcerExit\", req, resp)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// EnforcerExit indicates an expected call of EnforcerExit\n// nolint\nfunc (mr *MockRemoteIntfMockRecorder) EnforcerExit(req, resp interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"EnforcerExit\", reflect.TypeOf((*MockRemoteIntf)(nil).EnforcerExit), req, resp)\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/remoteenforcer_linux.go",
    "content": "// +build linux\n\npackage remoteenforcer\n\n/*\n#cgo CFLAGS: -Wall\n#include <stdlib.h>\n*/\nimport \"C\"\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/exec\"\n\t\"os/signal\"\n\t\"strings\"\n\t\"sync\"\n\t\"syscall\"\n\n\t\"github.com/blang/semver\"\n\t\"go.aporeto.io/enforcerd/internal/diagnostics\"\n\t\"go.aporeto.io/enforcerd/internal/logging/remotelog\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer\"\n\t_ \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/nsenter\" // nolint\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/client\"\n\treports \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/client/reportsclient\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/client/statsclient\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/statscollector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/tokenissuer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/rpc\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n\t\"golang.org/x/sys/unix\"\n)\n\n// Initialization functions as variables in order to enable testing.\nvar (\n\tcreateEnforcer = enforcer.New\n\n\tcreateSupervisor = supervisor.NewSupervisor\n)\n\nvar cmdLock sync.Mutex\n\n// newRemoteEnforcer starts a new server\nfunc newRemoteEnforcer(\n\tctx context.Context,\n\trpcHandle rpcwrapper.RPCServer,\n\tsecret string,\n\tstatsClient client.Reporter,\n\tcollector statscollector.Collector,\n\treportsClient client.Reporter,\n\ttokenIssuer 
tokenissuer.TokenClient,\n\tlogLevel string,\n\tlogFormat string,\n\tlogID string,\n\tnumQueues int,\n\tenforcerType policy.EnforcerType,\n\tagentVersion semver.Version,\n) (*RemoteEnforcer, error) {\n\n\tvar err error\n\n\tif collector == nil {\n\t\tcollector = statscollector.NewCollector()\n\t}\n\n\tif statsClient == nil {\n\t\tstatsClient, err = statsclient.NewStatsClient(collector)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tif reportsClient == nil {\n\t\treportsClient, err = reports.NewClient(collector)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tif tokenIssuer == nil {\n\t\ttokenIssuer, err = tokenissuer.NewClient()\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t}\n\n\tprocMountPoint := os.Getenv(constants.EnvMountPoint)\n\tif procMountPoint == \"\" {\n\t\tprocMountPoint = constants.DefaultProcMountPoint\n\t}\n\n\tfqConfig := fqconfig.NewFilterQueue(\n\t\tnumQueues,\n\t\t[]string{\"0.0.0.0/0\"},\n\t)\n\n\treturn &RemoteEnforcer{\n\t\tcollector:      collector,\n\t\trpcSecret:      secret,\n\t\trpcHandle:      rpcHandle,\n\t\tprocMountPoint: procMountPoint,\n\t\tstatsClient:    statsClient,\n\t\treportsClient:  reportsClient,\n\t\tctx:            ctx,\n\t\texit:           make(chan bool),\n\t\ttokenIssuer:    tokenIssuer,\n\t\tenforcerType:   enforcerType,\n\t\tagentVersion:   agentVersion,\n\t\tconfig:         logConfig{logLevel: logLevel, logFormat: logFormat, logID: logID},\n\t\tfqConfig:       fqConfig,\n\t}, nil\n}\n\n// InitEnforcer is a function called from the controller using RPC. 
It initializes\n// data structures required by the remote enforcer\nfunc (s *RemoteEnforcer) InitEnforcer(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\tzap.L().Debug(\"Configuring remote enforcer\")\n\n\tif !s.rpcHandle.CheckValidity(&req, s.rpcSecret) {\n\t\tresp.Status = fmt.Sprintf(\"init message authentication failed\") // nolint:gosimple\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\tcmdLock.Lock()\n\tdefer cmdLock.Unlock()\n\n\tpayload, ok := req.Payload.(rpcwrapper.InitRequestPayload)\n\tif !ok {\n\t\tresp.Status = fmt.Sprintf(\"invalid request payload\") // nolint:gosimple\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tif s.supervisor != nil || s.enforcer != nil {\n\t\tresp.Status = fmt.Sprintf(\"remote enforcer is already initialized\") // nolint:gosimple\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tvar err error\n\n\tdefer func() {\n\t\tif err != nil {\n\t\t\ts.cleanup()\n\t\t}\n\t}()\n\n\tif err = s.setupEnforcer(&payload); err != nil {\n\t\tresp.Status = err.Error()\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tif err = s.setupSupervisor(&payload); err != nil {\n\t\tresp.Status = err.Error()\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tif err = s.enforcer.Run(s.ctx); err != nil {\n\t\tresp.Status = err.Error()\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tif err = s.statsClient.Run(s.ctx); err != nil {\n\t\tresp.Status = err.Error()\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tif err = s.supervisor.Run(s.ctx); err != nil {\n\t\tresp.Status = err.Error()\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tif err = s.reportsClient.Run(s.ctx); err != nil {\n\t\tresp.Status = \"ReportsClient\" + err.Error()\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tif err = s.tokenIssuer.Run(s.ctx); err != nil {\n\t\tresp.Status = \"TokenIssuer\" + err.Error()\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tresp.Status = \"\"\n\n\treturn nil\n}\n\n// Enforce calls the enforce method on the enforcer created during InitEnforcer\nfunc (s *RemoteEnforcer) 
Enforce(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\tif !s.rpcHandle.CheckValidity(&req, s.rpcSecret) {\n\t\tresp.Status = \"enforce message auth failed\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tcmdLock.Lock()\n\tdefer cmdLock.Unlock()\n\n\tpayload, ok := req.Payload.(rpcwrapper.EnforcePayload)\n\tif !ok {\n\t\tresp.Status = \"invalid enforcer payload\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tplc, err := payload.Policy.ToPrivatePolicy(s.ctx, true)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\truntime := policy.NewPURuntimeWithDefaults()\n\n\tpuInfo := &policy.PUInfo{\n\t\tContextID: payload.ContextID,\n\t\tPolicy:    plc,\n\t\tRuntime:   runtime,\n\t}\n\n\tif s.enforcer == nil || s.supervisor == nil {\n\t\tresp.Status = \"enforcer not initialized - cannot enforce\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\t// If any error happens, clean up everything on exit so that we can recover\n\t// by launching a new remote.\n\tdefer func() {\n\t\tif err != nil {\n\t\t\ts.cleanup()\n\t\t}\n\t}()\n\n\tif err = s.supervisor.Supervise(payload.ContextID, puInfo); err != nil {\n\t\tresp.Status = err.Error()\n\t\treturn err\n\t}\n\n\tif err = s.enforcer.Enforce(s.ctx, payload.ContextID, puInfo); err != nil {\n\t\tresp.Status = err.Error()\n\t\treturn err\n\t}\n\n\tresp.Status = \"\"\n\n\treturn nil\n}\n\n// Unenforce calls the unenforce method on the enforcer created during InitEnforcer\nfunc (s *RemoteEnforcer) Unenforce(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\tif !s.rpcHandle.CheckValidity(&req, s.rpcSecret) {\n\t\tresp.Status = \"unenforce message auth failed\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tcmdLock.Lock()\n\tdefer cmdLock.Unlock()\n\n\ts.statsClient.Send() // nolint: errcheck\n\n\tpayload, ok := req.Payload.(rpcwrapper.UnEnforcePayload)\n\tif !ok {\n\t\tresp.Status = \"invalid unenforcer payload\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tvar err error\n\n\t// If any error happens, clean up 
everything on exit so that we can recover\n\t// by launching a new remote.\n\tdefer func() {\n\t\tif err != nil {\n\t\t\ts.cleanup()\n\t\t}\n\t}()\n\n\tif err = s.supervisor.Unsupervise(payload.ContextID); err != nil {\n\t\tresp.Status = err.Error()\n\t\treturn fmt.Errorf(\"unable to clean supervisor: %s\", err)\n\t}\n\n\tif err = s.enforcer.Unenforce(s.ctx, payload.ContextID); err != nil {\n\t\tresp.Status = err.Error()\n\t\treturn fmt.Errorf(\"unable to stop enforcer: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// SetTargetNetworks calls the same method on the actual enforcer\nfunc (s *RemoteEnforcer) SetTargetNetworks(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\tvar err error\n\tif !s.rpcHandle.CheckValidity(&req, s.rpcSecret) {\n\t\tresp.Status = \"SetTargetNetworks message auth failed\" // nolint\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tcmdLock.Lock()\n\tdefer cmdLock.Unlock()\n\n\tif s.enforcer == nil || s.supervisor == nil {\n\t\tresp.Status = \"enforcer not initialized\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tpayload := req.Payload.(rpcwrapper.SetTargetNetworksPayload)\n\n\t// If any error happens, clean up everything on exit so that we can recover\n\t// by launching a new remote.\n\tdefer func() {\n\t\tif err != nil {\n\t\t\ts.cleanup()\n\t\t}\n\t}()\n\n\tif err = s.enforcer.SetTargetNetworks(payload.Configuration); err != nil {\n\t\treturn err\n\t}\n\n\terr = s.supervisor.SetTargetNetworks(payload.Configuration)\n\n\treturn err\n}\n\n// EnforcerExit processes messages from the controller that request an exit. 
In this\n// case we simply cancel the context.\nfunc (s *RemoteEnforcer) EnforcerExit(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\ts.cleanup()\n\n\ts.exit <- true\n\n\treturn nil\n}\n\n// UpdateSecrets updates the secrets used by the remote enforcer\nfunc (s *RemoteEnforcer) UpdateSecrets(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\tvar err error\n\tif !s.rpcHandle.CheckValidity(&req, s.rpcSecret) {\n\t\tresp.Status = \"updatesecrets auth failed\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tcmdLock.Lock()\n\tdefer cmdLock.Unlock()\n\tif s.enforcer == nil {\n\t\tresp.Status = \"enforcer not initialized\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\t// If any error happens, clean up everything on exit so that we can recover\n\t// by launching a new remote.\n\tdefer func() {\n\t\tif err != nil {\n\t\t\ts.cleanup()\n\t\t}\n\t}()\n\n\tpayload := req.Payload.(rpcwrapper.UpdateSecretsPayload)\n\ts.secrets, err = rpc.NewSecrets(payload.Secrets)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = s.enforcer.UpdateSecrets(s.secrets)\n\n\treturn err\n}\n\n// EnableDatapathPacketTracing enables nfq datapath packet tracing\nfunc (s *RemoteEnforcer) EnableDatapathPacketTracing(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\tif !s.rpcHandle.CheckValidity(&req, s.rpcSecret) {\n\t\tresp.Status = \"enable datapath packet tracing auth failed\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tcmdLock.Lock()\n\tdefer cmdLock.Unlock()\n\n\tpayload := req.Payload.(rpcwrapper.EnableDatapathPacketTracingPayLoad)\n\n\tif err := s.enforcer.EnableDatapathPacketTracing(s.ctx, payload.ContextID, payload.Direction, payload.Interval); err != nil {\n\t\tresp.Status = err.Error()\n\t\treturn err\n\t}\n\n\tresp.Status = \"\"\n\treturn nil\n}\n\n// EnableIPTablesPacketTracing enables iptables trace packet tracing\nfunc (s *RemoteEnforcer) EnableIPTablesPacketTracing(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\tif !s.rpcHandle.CheckValidity(&req, s.rpcSecret) {\n\t\tresp.Status = 
\"enable iptable packet tracing auth failed\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tcmdLock.Lock()\n\tdefer cmdLock.Unlock()\n\n\tpayload := req.Payload.(rpcwrapper.EnableIPTablesPacketTracingPayLoad)\n\n\tif err := s.supervisor.EnableIPTablesPacketTracing(s.ctx, payload.ContextID, payload.Interval); err != nil {\n\t\tresp.Status = err.Error()\n\t\treturn err\n\t}\n\n\tresp.Status = \"\"\n\treturn nil\n}\n\n// Ping runs ping to the given config\nfunc (s *RemoteEnforcer) Ping(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\tif !s.rpcHandle.CheckValidity(&req, s.rpcSecret) {\n\t\tresp.Status = \"ping auth failed\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tcmdLock.Lock()\n\tdefer cmdLock.Unlock()\n\n\tpayload := req.Payload.(rpcwrapper.PingPayload)\n\n\tif err := s.enforcer.Ping(s.ctx, payload.ContextID, payload.PingConfig); err != nil {\n\t\tresp.Status = err.Error()\n\t\treturn err\n\t}\n\n\tresp.Status = \"\"\n\treturn nil\n}\n\n// DebugCollect collects the desired debug information\nfunc (s *RemoteEnforcer) DebugCollect(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\tif !s.rpcHandle.CheckValidity(&req, s.rpcSecret) {\n\t\tresp.Status = \"debug collect auth failed\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tcmdLock.Lock()\n\tdefer cmdLock.Unlock()\n\n\tpayload := req.Payload.(rpcwrapper.DebugCollectPayload)\n\n\tvar commandOutput string\n\tvar pid int\n\n\tif payload.CommandExec != \"\" {\n\t\tif values := strings.Split(payload.CommandExec, \" \"); len(values) >= 1 {\n\t\t\tcmd := exec.CommandContext(s.ctx, values[0], values[1:]...)\n\t\t\toutput, err := cmd.CombinedOutput()\n\t\t\tif err != nil {\n\t\t\t\tresp.Status = err.Error()\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcommandOutput = string(output)\n\t\t}\n\t} else if payload.PcapFilePath != \"\" {\n\t\tcmd, err := diagnostics.StartTcpdump(s.ctx, payload.PcapFilePath, payload.PcapFilter)\n\t\tif err != nil {\n\t\t\tresp.Status = err.Error()\n\t\t\treturn 
err\n\t\t}\n\n\t\t// spawn goroutine to call Wait() so we don't have a defunct child process\n\t\tgo func() {\n\t\t\tif err := cmd.Wait(); err != nil {\n\t\t\t\tzap.L().Warn(\"DebugCollect Wait failed on tcpdump process\", zap.Error(err))\n\t\t\t}\n\t\t}()\n\n\t\tpid = cmd.Process.Pid\n\t} else {\n\t\t// otherwise, return pid of remote enforcer\n\t\tpid = os.Getpid()\n\t}\n\n\tresp.Status = \"\"\n\tresp.Payload = rpcwrapper.DebugCollectResponsePayload{\n\t\tContextID:     payload.ContextID,\n\t\tPID:           pid,\n\t\tCommandOutput: commandOutput,\n\t}\n\treturn nil\n}\n\n// SetLogLevel sets log level.\nfunc (s *RemoteEnforcer) SetLogLevel(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\n\tif !s.rpcHandle.CheckValidity(&req, s.rpcSecret) {\n\t\tresp.Status = \"set log level auth failed\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tcmdLock.Lock()\n\tdefer cmdLock.Unlock()\n\tif s.enforcer == nil {\n\t\tresp.Status = \"enforcer not initialized\"\n\t\treturn fmt.Errorf(resp.Status)\n\t}\n\n\tpayload := req.Payload.(rpcwrapper.SetLogLevelPayload)\n\n\tnewLevel := triremeLogLevelToString(payload.Level)\n\tif payload.Level != \"\" && s.config.logLevel != newLevel {\n\t\tremotelog.SetupRemoteLogger(newLevel, s.config.logFormat, s.config.logID) // nolint: errcheck\n\t\ts.config.logLevel = newLevel\n\n\t\tif err := s.enforcer.SetLogLevel(payload.Level); err != nil {\n\t\t\tresp.Status = err.Error()\n\t\t\treturn err\n\t\t}\n\t}\n\n\tresp.Status = \"\"\n\treturn nil\n}\n\nfunc triremeLogLevelToString(level constants.LogLevel) string {\n\tswitch level {\n\tcase constants.Debug:\n\t\treturn \"debug\"\n\tcase constants.Trace:\n\t\treturn \"trace\"\n\tcase constants.Error:\n\t\treturn \"error\"\n\tcase constants.Info:\n\t\treturn \"info\"\n\tcase constants.Warn:\n\t\treturn \"warn\"\n\tdefault:\n\t\treturn \"info\"\n\t}\n}\n\n// setupEnforcer sets up the enforcer\nfunc (s *RemoteEnforcer) setupEnforcer(payload *rpcwrapper.InitRequestPayload) error {\n\n\tvar err error\n\n\ts.secrets, err = rpc.NewSecrets(payload.Secrets)\n\tif 
err != nil {\n\t\treturn err\n\t}\n\n\t// we usually start RemoteContainer enforcers,\n\t// however, if Envoy is requested, we change the mode to RemoteContainerEnvoyAuthorizer\n\tmode := constants.RemoteContainer\n\tif s.enforcerType == policy.EnvoyAuthorizerEnforcer {\n\t\tmode = constants.RemoteContainerEnvoyAuthorizer\n\t}\n\n\tif s.enforcer, err = createEnforcer(\n\t\tpayload.MutualAuth,\n\t\ts.fqConfig,\n\t\ts.collector,\n\t\ts.secrets,\n\t\tpayload.ServerID,\n\t\tpayload.Validity,\n\t\tmode,\n\t\ts.procMountPoint,\n\t\tpayload.ExternalIPCacheTimeout,\n\t\tpayload.PacketLogs,\n\t\tpayload.Configuration,\n\t\ts.tokenIssuer,\n\t\tpayload.IsBPFEnabled,\n\t\ts.agentVersion,\n\t\tpayload.ServiceMeshType,\n\t); err != nil || s.enforcer == nil {\n\t\treturn fmt.Errorf(\"Error while initializing remote enforcer, %s\", err)\n\t}\n\n\treturn nil\n}\n\nfunc (s *RemoteEnforcer) setupSupervisor(payload *rpcwrapper.InitRequestPayload) error {\n\n\t// we usually start RemoteContainer enforcers,\n\t// however, if Envoy is requested, we change the mode to RemoteContainerEnvoyAuthorizer\n\tmode := constants.RemoteContainer\n\tif s.enforcerType == policy.EnvoyAuthorizerEnforcer {\n\t\tmode = constants.RemoteContainerEnvoyAuthorizer\n\t}\n\n\th, err := createSupervisor(\n\t\ts.collector,\n\t\ts.enforcer,\n\t\tmode,\n\t\tpayload.Configuration,\n\t\tpayload.IPv6Enabled,\n\t\tpayload.IPTablesLockfile,\n\t)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to setup supervisor: %s\", err)\n\t}\n\ts.supervisor = h\n\n\treturn nil\n}\n\n// cleanup cleans up all the ACLs and any state of the local enforcer.\nfunc (s *RemoteEnforcer) cleanup() {\n\n\tif s.supervisor != nil {\n\t\tif err := s.supervisor.CleanUp(); err != nil {\n\t\t\tzap.L().Error(\"unable to clean supervisor state\", zap.Error(err))\n\t\t}\n\t}\n\n\tif s.enforcer != nil {\n\t\tif err := s.enforcer.CleanUp(); err != nil {\n\t\t\tzap.L().Error(\"unable to clean enforcer state\", 
zap.Error(err))\n\t\t}\n\t}\n\n\tif s.service != nil {\n\t\tif err := s.service.Stop(); err != nil {\n\t\t\tzap.L().Error(\"unable to clean service state\", zap.Error(err))\n\t\t}\n\t}\n}\n\n// LaunchRemoteEnforcer launches a remote enforcer\nfunc LaunchRemoteEnforcer(ctx context.Context, logLevel, logFormat, logID string, numQueues int, agentVersion semver.Version) error {\n\n\t// Before doing anything validate that we are in the right namespace.\n\tif err := validateNamespace(); err != nil {\n\t\treturn err\n\t}\n\n\tnamedPipe := os.Getenv(constants.EnvContextSocket)\n\tsecret := os.Getenv(constants.EnvRPCClientSecret)\n\tif secret == \"\" {\n\t\tzap.L().Fatal(\"No secret found\")\n\t}\n\tos.Setenv(constants.EnvRPCClientSecret, \"\") // nolint: errcheck\n\n\tflag := unix.SIGHUP\n\tif err := unix.Prctl(unix.PR_SET_PDEATHSIG, uintptr(flag), 0, 0, 0); err != nil {\n\t\treturn err\n\t}\n\n\tenforcerType, err := policy.EnforcerTypeFromString(os.Getenv(constants.EnvEnforcerType))\n\tif err != nil {\n\t\treturn err\n\t}\n\n\trpcHandle := rpcwrapper.NewRPCServer()\n\tre, err := newRemoteEnforcer(ctx, rpcHandle, secret, nil, nil, nil, nil, logLevel, logFormat, logID, numQueues, enforcerType, agentVersion)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tgo func() {\n\t\tif err := rpcHandle.StartServer(ctx, \"unix\", namedPipe, re); err != nil {\n\t\t\tzap.L().Fatal(\"Failed to start the RPC server\", zap.Error(err))\n\t\t}\n\t}()\n\n\tc := make(chan os.Signal, 1)\n\tsignal.Notify(c, syscall.SIGTERM, syscall.SIGINT, syscall.SIGQUIT)\n\n\tselect {\n\n\tcase <-re.exit:\n\t\tzap.L().Info(\"Remote enforcer exiting ...\")\n\n\tcase sig := <-c:\n\t\tzap.L().Warn(\"Remote enforcer received a signal. 
exiting ...\", zap.Any(\"signal\", sig))\n\t\tre.cleanup()\n\t\t// TODO would be useful to set and return an exit code (instead of nil) to indicate the signal received\n\n\tcase <-ctx.Done():\n\t\tre.cleanup()\n\t}\n\n\treturn nil\n}\n\n// getCEnvVariable returns an environment variable set in the c context\nfunc getCEnvVariable(name string) string {\n\n\tval := C.getenv(C.CString(name))\n\tif val == nil {\n\t\treturn \"\"\n\t}\n\n\treturn C.GoString(val)\n}\n\nfunc validateNamespace() error {\n\t// Check if successfully switched namespace\n\tnsEnterState := getCEnvVariable(constants.EnvNsenterErrorState)\n\tnsEnterLogMsg := getCEnvVariable(constants.EnvNsenterLogs)\n\tif nsEnterState != \"\" {\n\t\treturn fmt.Errorf(\"nsErr: %s nsLogs: %s\", nsEnterState, nsEnterLogMsg)\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/remoteenforcer_stub.go",
    "content": "// +build !linux\n\npackage remoteenforcer\n\nimport (\n\t\"context\"\n\n\t\"github.com/blang/semver\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/client\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/tokenissuer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/statscollector\"\n)\n\n// nolint:unused // seem to be used by the test\nvar (\n\tcreateEnforcer   = enforcer.New\n\tcreateSupervisor = supervisor.NewSupervisor\n)\n\n// newRemoteEnforcer starts a new server\nfunc newRemoteEnforcer(\n\tctx context.Context,\n\trpcHandle rpcwrapper.RPCServer,\n\tsecret string,\n\tstatsClient client.Reporter,\n\tcollector statscollector.Collector,\n\treportsClient client.Reporter,\n\ttokenIssuer tokenissuer.TokenClient,\n\tlogLevel string,\n\tlogFormat string,\n\tlogID string,\n\tnumQueues int,\n\tenforcerType policy.EnforcerType,\n\tagentVersion semver.Version,\n) (*RemoteEnforcer, error) {\n\treturn nil, nil\n}\n\n// LaunchRemoteEnforcer is a fake implementation for building on darwin.\nfunc LaunchRemoteEnforcer(ctx context.Context, logLevel, logFormat, logID string, numQueues int, agentVersion semver.Version) error {\n\treturn nil\n}\n\n// InitEnforcer is a function called from the controller using RPC. 
It initializes data structures required by the\n// remote enforcer\nfunc (s *RemoteEnforcer) InitEnforcer(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\treturn nil\n}\n\n// Enforce calls the enforce method on the enforcer created during InitEnforcer\nfunc (s *RemoteEnforcer) Enforce(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\treturn nil\n}\n\n// Unenforce calls the unenforce method on the enforcer created during InitEnforcer\nfunc (s *RemoteEnforcer) Unenforce(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\treturn nil\n}\n\n// EnforcerExit is called when we receive a kill-process message from the controller.\n// This allows a graceful exit of the enforcer\nfunc (s *RemoteEnforcer) EnforcerExit(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\treturn nil\n}\n\n// EnableDatapathPacketTracing enables nfq datapath packet tracing\nfunc (s *RemoteEnforcer) EnableDatapathPacketTracing(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\treturn nil\n}\n\n// EnableIPTablesPacketTracing enables iptables trace packet tracing\nfunc (s *RemoteEnforcer) EnableIPTablesPacketTracing(req rpcwrapper.Request, resp *rpcwrapper.Response) error {\n\treturn nil\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/remoteenforcer_test.go",
    "content": "package remoteenforcer\n\nimport (\n\t\"context\"\n\t\"crypto/hmac\"\n\t\"crypto/sha256\"\n\t\"encoding/binary\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/blang/semver\"\n\t\"github.com/golang/mock/gomock\"\n\t\"github.com/mitchellh/hashstructure\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/mockenforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper/mockrpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor/mocksupervisor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/client/mockclient\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/statscollector/mockstatscollector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/tokenissuer/mocktokenclient\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/testhelper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/runtime\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nconst (\n\tpcchan = \"/tmp/test.sock\"\n)\n\nfunc initTestEnfReqPayload() rpcwrapper.InitRequestPayload {\n\tvar initEnfPayload rpcwrapper.InitRequestPayload\n\n\tinitEnfPayload.Validity = constants.SynTokenRefreshTime\n\tinitEnfPayload.MutualAuth = true\n\tinitEnfPayload.ServerID = 
\"598236b81c252c000102665d\"\n\n\t_, s, err := testhelper.NewTestCompactPKISecrets()\n\tif err != nil {\n\t\tfmt.Println(\"CompactPKI creation failed with:\", err)\n\t}\n\tinitEnfPayload.Secrets = s.RPCSecrets()\n\tinitEnfPayload.Configuration = &runtime.Configuration{}\n\treturn initEnfPayload\n}\n\nfunc initIdentity(id string) *policy.TagStore {\n\tinitID := policy.NewTagStore()\n\tinitID.AppendKeyValue(id, \"\")\n\treturn initID\n}\n\nfunc initAnnotations(an string) *policy.TagStore {\n\tinitAnno := policy.NewTagStore()\n\tinitAnno.AppendKeyValue(an, \"\")\n\treturn initAnno\n}\n\nfunc initTrans() policy.TagSelectorList {\n\n\tvar tags policy.TagSelectorList\n\tvar tag policy.TagSelector\n\tvar keyval policy.KeyValueOperator\n\tvar action policy.FlowPolicy\n\tvar accept policy.ActionType\n\n\tkeyval.Key = \"@usr:role\"\n\tkeyval.Value = []string{\"server\"}\n\tkeyval.Operator = \"=\"\n\taccept = policy.Accept\n\taction.Action = accept\n\ttag.Clause = []policy.KeyValueOperator{keyval}\n\ttag.Policy = &action\n\ttags = []policy.TagSelector{tag}\n\n\treturn tags\n}\n\nfunc getHash(payload interface{}) []byte {\n\thash, err := hashstructure.Hash(payload, nil)\n\tif err != nil {\n\t\treturn []byte{}\n\t}\n\n\tbuf := make([]byte, 8)\n\tbinary.BigEndian.PutUint64(buf, hash)\n\treturn buf\n}\n\nfunc initTestEnfPayload() rpcwrapper.EnforcePayload {\n\n\tvar initPayload rpcwrapper.EnforcePayload\n\tidString := \"@usr:role=client $namespace=/sibicentos AporetoContextID=5983bc8c923caa0001337b11\"\n\tanoString := \"@app:name=/inspiring_roentgen $namespace=/sibicentos @usr:build-date=20170801 @usr:license=GPLv2 @usr:name=CentOS Base Image @usr:role=client @usr:vendor=CentOS $id=5983bc8c923caa0001337b11 $namespace=/sibicentos $operationalstatus=Running $protected=false $type=Docker $description=centos $enforcerid=5983bba4923caa0001337a19 $name=centos $nativecontextid=b06f47830f64 @app:image=centos @usr:role=client role=client $id=5983bc8c923caa0001337b11 
$identity=processingunit $id=5983bc8c923caa0001337b11 $namespace=/sibicentos\"\n\n\tinitPayload.ContextID = \"b06f47830f64\"\n\tinitPayload.Policy = &policy.PUPolicyPublic{\n\t\tManagementID:     \"5983bc8c923caa0001337b11\",\n\t\tTriremeAction:    2,\n\t\tIPs:              policy.ExtendedMap{\"bridge\": \"172.17.0.2\"},\n\t\tIdentity:         initIdentity(idString).GetSlice(),\n\t\tAnnotations:      initAnnotations(anoString).GetSlice(),\n\t\tCompressedTags:   policy.NewTagStore().GetSlice(),\n\t\tTransmitterRules: initTrans(),\n\t}\n\n\treturn initPayload\n}\n\nfunc initTestUnEnfPayload() rpcwrapper.UnEnforcePayload {\n\n\tvar initPayload rpcwrapper.UnEnforcePayload\n\n\tinitPayload.ContextID = \"b06f47830f64\"\n\n\treturn initPayload\n}\n\nfunc Test_NewRemoteEnforcer(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to retrieve rpc server handle\", t, func() {\n\n\t\trpcHdl := mockrpcwrapper.NewMockRPCServer(ctrl)\n\t\tstatsClient := mockclient.NewMockReporter(ctrl)\n\t\treportsClient := mockclient.NewMockReporter(ctrl)\n\t\tcollector := mockstatscollector.NewMockCollector(ctrl)\n\t\ttokenclient := mocktokenclient.NewMockTokenClient(ctrl)\n\t\tConvey(\"When I try to create new server with no env set\", func() {\n\t\t\tctx := context.Background()\n\n\t\t\trpcHdl.EXPECT().StartServer(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(0)\n\t\t\tserver, err := newRemoteEnforcer(ctx, rpcHdl, \"mysecret\", statsClient, collector, reportsClient, tokenclient, \"\", \"\", \"\", 1, policy.EnforcerMapping, semver.Version{})\n\n\t\t\tConvey(\"Then I should get a server and no error\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(server, ShouldNotBeNil)\n\t\t\t\tSo(server.service, ShouldBeNil)\n\t\t\t\tSo(server.rpcHandle, ShouldEqual, rpcHdl)\n\t\t\t\tSo(server.procMountPoint, ShouldResemble, constants.DefaultProcMountPoint)\n\t\t\t\tSo(server.statsClient, ShouldEqual, 
statsClient)\n\t\t\t\tSo(server.reportsClient, ShouldEqual, reportsClient)\n\t\t\t\tSo(server.ctx, ShouldEqual, ctx)\n\t\t\t\tSo(server.exit, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestInitEnforcer(t *testing.T) {\n\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to retrieve rpc server handle\", t, func() {\n\t\trpcHdl := mockrpcwrapper.NewMockRPCServer(ctrl)\n\t\tmockEnf := mockenforcer.NewMockEnforcer(ctrl)\n\t\tmockStats := mockclient.NewMockReporter(ctrl)\n\t\tmockReports := mockclient.NewMockReporter(ctrl)\n\t\tmockCollector := mockstatscollector.NewMockCollector(ctrl)\n\t\tmockSupevisor := mocksupervisor.NewMockSupervisor(ctrl)\n\t\tmockTokenClient := mocktokenclient.NewMockTokenClient(ctrl)\n\n\t\t// Mock the global functions.\n\t\tcreateEnforcer = func(\n\t\t\tmutualAuthorization bool,\n\t\t\tfqConfig fqconfig.FilterQueue,\n\t\t\tcollector collector.EventCollector,\n\t\t\tsecrets secrets.Secrets,\n\t\t\tserverID string,\n\t\t\tvalidity time.Duration,\n\t\t\tmode constants.ModeType,\n\t\t\tprocMountPoint string,\n\t\t\texternalIPCacheTimeout time.Duration,\n\t\t\tpacketLogs bool,\n\t\t\tcfg *runtime.Configuration,\n\t\t\ttokenIssuer common.ServiceTokenIssuer,\n\t\t\tisBPFEnabled bool,\n\t\t\tagentVersion semver.Version,\n\t\t\tserviceMeshType policy.ServiceMesh,\n\t\t) (enforcer.Enforcer, error) {\n\t\t\treturn mockEnf, nil\n\t\t}\n\n\t\tcreateSupervisor = func(\n\t\t\tcollector collector.EventCollector,\n\t\t\tenforcerInstance enforcer.Enforcer,\n\t\t\tmode constants.ModeType,\n\t\t\tcfg *runtime.Configuration,\n\t\t\tipv6Enabled bool,\n\t\t\tiptablesLockfile string,\n\t\t) (supervisor.Supervisor, error) {\n\t\t\treturn mockSupevisor, nil\n\t\t}\n\t\tdefer func() {\n\t\t\tcreateSupervisor = supervisor.NewSupervisor\n\t\t\tcreateEnforcer = enforcer.New\n\t\t}()\n\n\t\tConvey(\"When I try to create new server with env set\", func() {\n\t\t\tserr := os.Setenv(constants.EnvStatsChannel, pcchan)\n\t\t\tSo(serr, 
ShouldBeNil)\n\t\t\tserr = os.Setenv(constants.EnvStatsSecret, \"T6UYZGcKW-aum_vi-XakafF3vHV7F6x8wdofZs7akGU=\")\n\t\t\tSo(serr, ShouldBeNil)\n\n\t\t\tsecret := \"T6UYZGcKW-aum_vi-XakafF3vHV7F6x8wdofZs7akGU=\"\n\t\t\tserver, err := newRemoteEnforcer(context.Background(), rpcHdl, secret, mockStats, mockCollector, mockReports, mockTokenClient, \"\", \"\", \"\", 1, policy.EnforcerMapping, semver.Version{})\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tConvey(\"When I try to initiate an enforcer with invalid secret\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), os.Getenv(constants.EnvStatsSecret)).Times(1).Return(false)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfReqPayload()\n\n\t\t\t\terr := server.InitEnforcer(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"init message authentication failed\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to instantiate the enforcer with a bad payload, it should error \", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), os.Getenv(constants.EnvStatsSecret)).Times(1).Return(true)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfPayload()\n\n\t\t\t\terr := server.InitEnforcer(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"invalid request payload\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to instantiate the enforcer and the enforcer is initialized, it should fail \", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), os.Getenv(constants.EnvStatsSecret)).Times(1).Return(true)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfReqPayload()\n\n\t\t\t\tserver.enforcer = 
mockEnf\n\n\t\t\t\terr := server.InitEnforcer(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"remote enforcer is already initialized\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to instantiate the enforcer and the enforcer fails, it should fail and cleanup\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), os.Getenv(constants.EnvStatsSecret)).Times(1).Return(true)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfReqPayload()\n\n\t\t\t\tcreateEnforcer = func(\n\t\t\t\t\tmutualAuthorization bool,\n\t\t\t\t\tfqConfig fqconfig.FilterQueue,\n\t\t\t\t\tcollector collector.EventCollector,\n\t\t\t\t\tsecrets secrets.Secrets,\n\t\t\t\t\tserverID string,\n\t\t\t\t\tvalidity time.Duration,\n\t\t\t\t\tmode constants.ModeType,\n\t\t\t\t\tprocMountPoint string,\n\t\t\t\t\texternalIPCacheTimeout time.Duration,\n\t\t\t\t\tpacketLogs bool,\n\t\t\t\t\tcfg *runtime.Configuration,\n\t\t\t\t\ttokenIssuer common.ServiceTokenIssuer,\n\t\t\t\t\tisBPFEnabled bool,\n\t\t\t\t\tagentVersion semver.Version,\n\t\t\t\t\tserviceMeshType policy.ServiceMesh,\n\t\t\t\t) (enforcer.Enforcer, error) {\n\t\t\t\t\treturn nil, fmt.Errorf(\"failed enforcer\")\n\t\t\t\t}\n\n\t\t\t\terr := server.InitEnforcer(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"Error while initializing remote enforcer, failed enforcer\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to instantiate the enforcer and the supervisor fails, it should fail\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), os.Getenv(constants.EnvStatsSecret)).Times(1).Return(true)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = 
initTestEnfReqPayload()\n\n\t\t\t\tcreateSupervisor = func(\n\t\t\t\t\tcollector collector.EventCollector,\n\t\t\t\t\tenforcerInstance enforcer.Enforcer,\n\t\t\t\t\tmode constants.ModeType,\n\t\t\t\t\tcfg *runtime.Configuration,\n\t\t\t\t\tipv6Enabled bool,\n\t\t\t\t\tiptablesLockfile string,\n\t\t\t\t) (supervisor.Supervisor, error) {\n\t\t\t\t\treturn nil, fmt.Errorf(\"failed supervisor\")\n\t\t\t\t}\n\n\t\t\t\tmockEnf.EXPECT().CleanUp()\n\n\t\t\t\terr := server.InitEnforcer(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"unable to setup supervisor: failed supervisor\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to instantiate the enforcer and the controller fails to run, it should clean up\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), os.Getenv(constants.EnvStatsSecret)).Times(1).Return(true)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfReqPayload()\n\n\t\t\t\tmockEnf.EXPECT().Run(server.ctx).Return(fmt.Errorf(\"enforcer run error\"))\n\t\t\t\tmockSupevisor.EXPECT().CleanUp()\n\t\t\t\tmockEnf.EXPECT().CleanUp()\n\n\t\t\t\terr := server.InitEnforcer(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"enforcer run error\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to instantiate the enforcer and the stats client fails to run, it should clean up\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), os.Getenv(constants.EnvStatsSecret)).Times(1).Return(true)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = 
initTestEnfReqPayload()\n\n\t\t\t\tmockEnf.EXPECT().Run(server.ctx).Return(nil)\n\t\t\t\tmockStats.EXPECT().Run(server.ctx).Return(fmt.Errorf(\"stats error\"))\n\t\t\t\tmockSupevisor.EXPECT().CleanUp()\n\t\t\t\tmockEnf.EXPECT().CleanUp()\n\n\t\t\t\terr := server.InitEnforcer(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"stats error\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to instantiate the enforcer and the supervisor fails to run, it should clean up\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), os.Getenv(constants.EnvStatsSecret)).Times(1).Return(true)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfReqPayload()\n\n\t\t\t\tmockEnf.EXPECT().Run(server.ctx).Return(nil)\n\t\t\t\tmockStats.EXPECT().Run(server.ctx).Return(nil)\n\t\t\t\tmockSupevisor.EXPECT().Run(server.ctx).Return(fmt.Errorf(\"supervisor run\"))\n\t\t\t\tmockSupevisor.EXPECT().CleanUp()\n\t\t\t\tmockEnf.EXPECT().CleanUp()\n\n\t\t\t\terr := server.InitEnforcer(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"supervisor run\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to instantiate the enforcer and the reports client fails to run, it should clean up\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), os.Getenv(constants.EnvStatsSecret)).Times(1).Return(true)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = 
initTestEnfReqPayload()\n\n\t\t\t\tmockEnf.EXPECT().Run(server.ctx).Return(nil)\n\t\t\t\tmockStats.EXPECT().Run(server.ctx).Return(nil)\n\t\t\t\tmockSupevisor.EXPECT().Run(server.ctx).Return(nil)\n\t\t\t\tmockReports.EXPECT().Run(server.ctx).Return(errors.New(\"failed to run counterclient\"))\n\t\t\t\tmockSupevisor.EXPECT().CleanUp()\n\t\t\t\tmockEnf.EXPECT().CleanUp()\n\n\t\t\t\terr := server.InitEnforcer(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"ReportsClientfailed to run counterclient\"))\n\t\t\t\t})\n\n\t\t\t})\n\t\t\tConvey(\"When I try to instantiate the enforcer and it succeeds it should not error\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), os.Getenv(constants.EnvStatsSecret)).Times(1).Return(true)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfReqPayload()\n\n\t\t\t\tmockEnf.EXPECT().Run(server.ctx).Return(nil)\n\t\t\t\tmockStats.EXPECT().Run(server.ctx).Return(nil)\n\t\t\t\tmockSupevisor.EXPECT().Run(server.ctx).Return(nil)\n\t\t\t\tmockReports.EXPECT().Run(server.ctx).Return(nil)\n\t\t\t\tmockTokenClient.EXPECT().Run(server.ctx).Return(nil)\n\t\t\t\terr := server.InitEnforcer(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should not get error\", func() {\n\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t})\n\t\t\t})\n\n\t\t})\n\t})\n}\n\nfunc TestEnforce(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to retrieve rpc server handle\", t, func() {\n\t\trpcHdl := mockrpcwrapper.NewMockRPCServer(ctrl)\n\t\tmockEnf := mockenforcer.NewMockEnforcer(ctrl)\n\t\tmockSup := mocksupervisor.NewMockSupervisor(ctrl)\n\t\tctx, cancel := context.WithCancel(context.TODO())\n\n\t\tConvey(\"When I try to create new server with env set\", func() {\n\n\t\t\tserver := 
&RemoteEnforcer{\n\t\t\t\trpcHandle:  rpcHdl,\n\t\t\t\tsupervisor: mockSup,\n\t\t\t\tenforcer:   mockEnf,\n\t\t\t\tctx:        ctx,\n\t\t\t\tcancel:     cancel,\n\t\t\t}\n\n\t\t\tConvey(\"When I try to send enforce command with invalid secret\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(false)\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.HashAuth = []byte{0xDE, 0xBD, 0x1C, 0x6A, 0x2A, 0x51, 0xC0, 0x02, 0x4B, 0xD7, 0xD1, 0x82, 0x78, 0x8A, 0xC4, 0xF1, 0xBE, 0xBF, 0x00, 0x89, 0x47, 0x0F, 0x13, 0x71, 0xAB, 0x4C, 0x0D, 0xD9, 0x9D, 0x85, 0x45, 0x04}\n\t\t\t\trpcwrperreq.Payload = initTestEnfPayload()\n\t\t\t\trpcwrperres.Status = \"\"\n\n\t\t\t\tdigest := hmac.New(sha256.New, []byte(\"InvalidSecret\"))\n\t\t\t\tif _, err := digest.Write(getHash(rpcwrperreq.Payload)); err != nil {\n\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t}\n\t\t\t\trpcwrperreq.HashAuth = digest.Sum(nil)\n\t\t\t\tserver.enforcer = mockEnf\n\n\t\t\t\terr := server.Enforce(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"enforce message auth failed\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to send enforce command with wrong payload it should fail\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfReqPayload()\n\n\t\t\t\terr := server.Enforce(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"invalid enforcer payload\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to send enforce command and the supervisor is nil, it should fail\", func() 
{\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfPayload()\n\t\t\t\tserver.supervisor = nil\n\n\t\t\t\terr := server.Enforce(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"enforcer not initialized - cannot enforce\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to send enforce command and the supervisor fails, it should fail and cleanup\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\t\t\tmockSup.EXPECT().Supervise(gomock.Any(), gomock.Any()).Return(fmt.Errorf(\"supervisor error\"))\n\t\t\t\tmockSup.EXPECT().CleanUp()\n\t\t\t\tmockEnf.EXPECT().CleanUp()\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfPayload()\n\n\t\t\t\terr := server.Enforce(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"supervisor error\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to send enforce command and the enforcer fails, it should fail and cleanup\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\t\t\tmockSup.EXPECT().Supervise(gomock.Any(), gomock.Any()).Return(nil)\n\t\t\t\tmockEnf.EXPECT().Enforce(gomock.Any(), gomock.Any(), gomock.Any()).Return(fmt.Errorf(\"enforcer error\"))\n\t\t\t\tmockSup.EXPECT().CleanUp()\n\t\t\t\tmockEnf.EXPECT().CleanUp()\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfPayload()\n\n\t\t\t\terr := server.Enforce(rpcwrperreq, 
&rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"enforcer error\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When the enforce command succeeds, I should get no errors\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\t\t\tmockSup.EXPECT().Supervise(gomock.Any(), gomock.Any()).Return(nil)\n\t\t\t\tmockEnf.EXPECT().Enforce(gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfPayload()\n\n\t\t\t\terr := server.Enforce(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should not get an error \", func() {\n\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc Test_UnEnforce(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a new server\", t, func() {\n\t\trpcHdl := mockrpcwrapper.NewMockRPCServer(ctrl)\n\t\tmockEnf := mockenforcer.NewMockEnforcer(ctrl)\n\t\tmockSup := mocksupervisor.NewMockSupervisor(ctrl)\n\t\tmockStats := mockclient.NewMockReporter(ctrl)\n\t\tctx, cancel := context.WithCancel(context.TODO())\n\n\t\tConvey(\"With proper initialization\", func() {\n\n\t\t\tserver := &RemoteEnforcer{\n\t\t\t\trpcHandle:   rpcHdl,\n\t\t\t\tsupervisor:  mockSup,\n\t\t\t\tenforcer:    mockEnf,\n\t\t\t\tctx:         ctx,\n\t\t\t\tcancel:      cancel,\n\t\t\t\tstatsClient: mockStats,\n\t\t\t}\n\n\t\t\tConvey(\"When I try to send unenforce command with invalid secret\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(false)\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestUnEnfPayload()\n\t\t\t\trpcwrperres.Status = \"\"\n\n\t\t\t\terr := server.Unenforce(rpcwrperreq, 
&rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"unenforce message auth failed\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to send unenforce command with wrong payload it should fail\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\t\t\tmockStats.EXPECT().Send()\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestEnfReqPayload()\n\n\t\t\t\terr := server.Unenforce(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"invalid unenforcer payload\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to send unenforce command and the supervisor fails, it should fail and cleanup\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\t\t\tmockStats.EXPECT().Send()\n\t\t\t\tmockSup.EXPECT().Unsupervise(gomock.Any()).Return(fmt.Errorf(\"supervisor error\"))\n\t\t\t\tmockSup.EXPECT().CleanUp()\n\t\t\t\tmockEnf.EXPECT().CleanUp()\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestUnEnfPayload()\n\n\t\t\t\terr := server.Unenforce(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"unable to clean supervisor: supervisor error\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to send unenforce command and the enforcer fails, it should fail and cleanup\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), 
gomock.Any()).Times(1).Return(true)\n\t\t\t\tmockStats.EXPECT().Send()\n\t\t\t\tmockSup.EXPECT().Unsupervise(gomock.Any()).Return(nil)\n\t\t\t\tmockEnf.EXPECT().Unenforce(gomock.Any(), gomock.Any()).Return(fmt.Errorf(\"enforcer error\"))\n\t\t\t\tmockSup.EXPECT().CleanUp()\n\t\t\t\tmockEnf.EXPECT().CleanUp()\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestUnEnfPayload()\n\n\t\t\t\terr := server.Unenforce(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"unable to stop enforcer: enforcer error\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When the enforce command succeeds, I should get no errors\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\t\t\tmockStats.EXPECT().Send()\n\t\t\t\tmockSup.EXPECT().Unsupervise(gomock.Any()).Return(nil)\n\t\t\t\tmockEnf.EXPECT().Unenforce(gomock.Any(), gomock.Any()).Return(nil)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = initTestUnEnfPayload()\n\n\t\t\t\terr := server.Unenforce(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should not get an error \", func() {\n\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc Test_EnableDatapathPacketTracing(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a new server\", t, func() {\n\t\trpcHdl := mockrpcwrapper.NewMockRPCServer(ctrl)\n\t\tmockEnf := mockenforcer.NewMockEnforcer(ctrl)\n\t\tctx, cancel := context.WithCancel(context.TODO())\n\n\t\tConvey(\"With proper initialization\", func() {\n\n\t\t\tserver := &RemoteEnforcer{\n\t\t\t\trpcHandle: rpcHdl,\n\t\t\t\tenforcer:  mockEnf,\n\t\t\t\tctx:       ctx,\n\t\t\t\tcancel:    cancel,\n\t\t\t}\n\n\t\t\tConvey(\"When I try to enable datapath 
tracing and the validity fails, it should fail\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(false)\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = rpcwrapper.EnableDatapathPacketTracingPayLoad{}\n\t\t\t\trpcwrperres.Status = \"\"\n\n\t\t\t\terr := server.EnableDatapathPacketTracing(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"enable datapath packet tracing auth failed\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to enable datapath tracing  and the enforcer fails, it should fail and cleanup\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\t\t\tmockEnf.EXPECT().EnableDatapathPacketTracing(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(fmt.Errorf(\"error\"))\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = rpcwrapper.EnableDatapathPacketTracingPayLoad{}\n\n\t\t\t\terr := server.EnableDatapathPacketTracing(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"error\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When the enforce command succeeds, I should get no errors\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\t\t\tmockEnf.EXPECT().EnableDatapathPacketTracing(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = rpcwrapper.EnableDatapathPacketTracingPayLoad{}\n\n\t\t\t\terr := server.EnableDatapathPacketTracing(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should 
not get an error \", func() {\n\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc Test_EnableIPTablesPacketTracing(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a new server\", t, func() {\n\t\trpcHdl := mockrpcwrapper.NewMockRPCServer(ctrl)\n\t\tmockSup := mocksupervisor.NewMockSupervisor(ctrl)\n\t\tctx, cancel := context.WithCancel(context.TODO())\n\n\t\tConvey(\"With proper initialization\", func() {\n\n\t\t\tserver := &RemoteEnforcer{\n\t\t\t\trpcHandle:  rpcHdl,\n\t\t\t\tsupervisor: mockSup,\n\t\t\t\tctx:        ctx,\n\t\t\t\tcancel:     cancel,\n\t\t\t}\n\n\t\t\tConvey(\"When I try to enable datapath tracing and the validity fails, it should fail\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(false)\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = rpcwrapper.EnableIPTablesPacketTracingPayLoad{}\n\t\t\t\trpcwrperres.Status = \"\"\n\n\t\t\t\terr := server.EnableIPTablesPacketTracing(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"enable iptable packet tracing auth failed\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When I try to enable datapath tracing  and the enforcer fails, it should fail and cleanup\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\t\t\tmockSup.EXPECT().EnableIPTablesPacketTracing(gomock.Any(), gomock.Any(), gomock.Any()).Return(fmt.Errorf(\"error\"))\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = rpcwrapper.EnableIPTablesPacketTracingPayLoad{}\n\n\t\t\t\terr := server.EnableIPTablesPacketTracing(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\t\tSo(err, 
ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"error\"))\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"When the enforce command succeeds, I should get no errors\", func() {\n\t\t\t\trpcHdl.EXPECT().CheckValidity(gomock.Any(), gomock.Any()).Times(1).Return(true)\n\t\t\t\tmockSup.EXPECT().EnableIPTablesPacketTracing(gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\n\t\t\t\tvar rpcwrperreq rpcwrapper.Request\n\t\t\t\tvar rpcwrperres rpcwrapper.Response\n\n\t\t\t\trpcwrperreq.Payload = rpcwrapper.EnableIPTablesPacketTracingPayLoad{}\n\n\t\t\t\terr := server.EnableIPTablesPacketTracing(rpcwrperreq, &rpcwrperres)\n\n\t\t\t\tConvey(\"Then I should not get an error \", func() {\n\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/remoteenforcer/type.go",
    "content": "package remoteenforcer\n\nimport (\n\t\"context\"\n\n\t\"github.com/blang/semver\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/rpcwrapper\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/supervisor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/fqconfig\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/packetprocessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/client\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/statscollector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/tokenissuer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n)\n\ntype logConfig struct {\n\tlogLevel  string // nolint:structcheck,unused\n\tlogFormat string // nolint:structcheck,unused\n\tlogID     string // nolint:structcheck,unused\n}\n\n// RemoteEnforcer : This is the structure for maintaining state required by the\n// remote enforcer.\n// It is a cache of variables passed by the controller to the remote enforcer and\n// other handles required by the remote enforcer to talk to the external processes\n//\n// Why is this public when all members are private ? 
For golang RPC server requirements\n// The lint directives below are for non-linux compiles that use remoteenforcer_stub.go\ntype RemoteEnforcer struct {\n\trpcSecret      string // nolint:structcheck,unused\n\trpcHandle      rpcwrapper.RPCServer\n\tcollector      statscollector.Collector // nolint:structcheck,unused\n\tstatsClient    client.Reporter\n\treportsClient  client.Reporter\n\tprocMountPoint string\n\tenforcer       enforcer.Enforcer\n\tsupervisor     supervisor.Supervisor\n\tservice        packetprocessor.PacketProcessor\n\tsecrets        secrets.Secrets // nolint:structcheck,unused\n\tctx            context.Context\n\tcancel         context.CancelFunc\n\texit           chan bool\n\tconfig         logConfig               // nolint:structcheck,unused\n\ttokenIssuer    tokenissuer.TokenClient // nolint:structcheck,unused\n\tenforcerType   policy.EnforcerType     // nolint:structcheck,unused\n\tagentVersion   semver.Version          // nolint:structcheck,unused\n\tfqConfig       fqconfig.FilterQueue    // nolint:structcheck,unused\n}\n"
  },
  {
    "path": "controller/pkg/secrets/compactpki/compactpki.go",
    "content": "package compactpki\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"crypto/x509\"\n\t\"errors\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n)\n\nconst (\n\tcompactPKIAckSize = 300\n)\n\n// CompactPKI holds all PKI information\ntype CompactPKI struct {\n\tprivateKeyPEM      []byte\n\tpublicKeyPEM       []byte\n\tauthorityPEM       []byte\n\ttrustedControllers []*secrets.ControllerInfo\n\tcompressed         claimsheader.CompressionType\n\tprivateKey         *ecdsa.PrivateKey\n\tpublicKey          *x509.Certificate\n\ttxKey              []byte\n\tverifier           pkiverifier.PKITokenVerifier\n}\n\n// NewCompactPKIWithTokenCA creates new secrets for PKI implementation based on compact encoding.\n//    keyPEM: is the private key that will be used for signing tokens formated as a PEM file.\n//    certPEM: is the public key that will be used formated as a PEM file.\n//    trustedControllers: is a list of trusted controllers.\n//    txKey: is the public key that is send over the wire.\n//    compressionType: is packed with the secrets to indicate compression.\nfunc NewCompactPKIWithTokenCA(keyPEM []byte, certPEM []byte, caPEM []byte, trustedControllers []*secrets.ControllerInfo, txKey []byte, compress claimsheader.CompressionType) (*CompactPKI, error) {\n\n\tkey, cert, _, err := crypto.LoadAndVerifyECSecrets(keyPEM, certPEM, caPEM)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\ttokenKeys := make([]*pkiverifier.PKIPublicKey, len(trustedControllers))\n\tfor _, tokenKey := range trustedControllers {\n\t\tcaCert, err := crypto.LoadCertificate(tokenKey.PublicKey)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tnamespaceKey := &pkiverifier.PKIPublicKey{\n\t\t\tPublicKey:  caCert.PublicKey.(*ecdsa.PublicKey),\n\t\t\tController: 
tokenKey.Controller,\n\t\t}\n\n\t\ttokenKeys = append(tokenKeys, namespaceKey)\n\t}\n\n\tif len(txKey) == 0 {\n\t\treturn nil, errors.New(\"transmit token missing\")\n\t}\n\n\tp := &CompactPKI{\n\t\tprivateKeyPEM:      keyPEM,\n\t\tpublicKeyPEM:       certPEM,\n\t\tauthorityPEM:       caPEM,\n\t\ttrustedControllers: trustedControllers,\n\t\tcompressed:         compress,\n\t\tprivateKey:         key,\n\t\tpublicKey:          cert,\n\t\ttxKey:              txKey,\n\t\tverifier:           pkiverifier.NewPKIVerifier(tokenKeys, 5*time.Minute),\n\t}\n\n\treturn p, nil\n}\n\n// EncodingKey returns the private key\nfunc (p *CompactPKI) EncodingKey() interface{} {\n\treturn p.privateKey\n}\n\n// PublicKey returns the public key\nfunc (p *CompactPKI) PublicKey() interface{} {\n\treturn p.publicKey\n}\n\n// CertAuthority returns the cert authority\nfunc (p *CompactPKI) CertAuthority() []byte {\n\treturn p.authorityPEM\n}\n\n//KeyAndClaims returns both the key and any attributes associated with the public key.\nfunc (p *CompactPKI) KeyAndClaims(pkey []byte) (interface{}, []string, time.Time, *pkiverifier.PKIControllerInfo, error) {\n\tkc, err := p.verifier.Verify(pkey)\n\tif err != nil {\n\t\treturn nil, nil, time.Unix(0, 0), nil, err\n\t}\n\treturn kc.PublicKey, kc.Tags, kc.Expiration, kc.Controller, nil\n}\n\n// TransmittedKey returns the PEM of the public key in the case of PKI\n// if there is no certificate cache configured\nfunc (p *CompactPKI) TransmittedKey() []byte {\n\treturn p.txKey\n}\n\n// AckSize returns the default size of an ACK packet\nfunc (p *CompactPKI) AckSize() uint32 {\n\treturn uint32(compactPKIAckSize)\n}\n\n// RPCSecrets returns the secrets that are marshallable over the RPC interface.\nfunc (p *CompactPKI) RPCSecrets() secrets.RPCSecrets {\n\treturn secrets.RPCSecrets{\n\t\tKey:                p.privateKeyPEM,\n\t\tCertificate:        p.publicKeyPEM,\n\t\tCA:                 p.authorityPEM,\n\t\tToken:              p.txKey,\n\t\tTrustedControllers: 
p.trustedControllers,\n\t\tCompressed:         p.compressed,\n\t}\n}\n"
  },
  {
    "path": "controller/pkg/secrets/compactpki/compactpki_test.go",
    "content": "// +build !windows\n\npackage compactpki\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"crypto/x509\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n)\n\n// Certs\nvar (\n\tcaPEM = `-----BEGIN CERTIFICATE-----\nMIIBmzCCAUCgAwIBAgIRAIbf7tsXeg6vUJ2pe3WXzgwwCgYIKoZIzj0EAwIwPDEQ\nMA4GA1UEChMHQXBvcmV0bzEPMA0GA1UECxMGYXBvbXV4MRcwFQYDVQQDEw5BcG9t\ndXggUm9vdCBDQTAeFw0xODA1MDExODM3MjNaFw0yODAzMDkxODM3MjNaMDwxEDAO\nBgNVBAoTB0Fwb3JldG8xDzANBgNVBAsTBmFwb211eDEXMBUGA1UEAxMOQXBvbXV4\nIFJvb3QgQ0EwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQcpOm4VAWyNcI4/WZP\nqj9EBu5XWQppyG2LoXVYNv1YCfJBFYuVERxVaZEcUJ0ceE/doFyphS1Ohw3QjqDQ\nxakeoyMwITAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAKBggqhkjO\nPQQDAgNJADBGAiEA+OL+qkSyXwLu6P/75kXBPo8fFGvXyX2vYis0hUAyHJcCIQCn\n86EFqkJDkeAguDEKvVtORcnxl+rAP924/PJAHLMh6Q==\n-----END CERTIFICATE-----`\n\tcaKeyPEM = `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEILpUWKqL6Sr+HrKDKLHt/vN6EYi22rJKV2q9xgKmiCqioAoGCCqGSM49\nAwEHoUQDQgAEHKTpuFQFsjXCOP1mT6o/RAbuV1kKachti6F1WDb9WAnyQRWLlREc\nVWmRHFCdHHhP3aBcqYUtTocN0I6g0MWpHg==\n-----END EC PRIVATE KEY-----`\n\tprivateKeyPEM = `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIGx017ukBSUSddLXefL/5nxxaRXuM1H/tUxQAYxWBrQtoAoGCCqGSM49\nAwEHoUQDQgAEZKBbcTmg0hGyVcgsUH7xijvaNOJ3EPM3Oq08VdCBsPNAojAR9wfX\nKLO/w0SRKj1DL03a9dl1Gwk0r7F0VnPQyw==\n-----END EC PRIVATE KEY-----`\n\tpublicPEM = `-----BEGIN 
CERTIFICATE-----\nMIIBsDCCAVagAwIBAgIRAOmitRugFU+nAhiGsp6fYOwwCgYIKoZIzj0EAwIwPDEQ\nMA4GA1UEChMHQXBvcmV0bzEPMA0GA1UECxMGYXBvbXV4MRcwFQYDVQQDEw5BcG9t\ndXggUm9vdCBDQTAeFw0xODA1MDExODQwMzFaFw0yODAzMDkxODQwMzFaMDYxETAP\nBgNVBAoTCHNvbWUgb3JnMRIwEAYDVQQLEwlzb21lLXVuaXQxDTALBgNVBAMTBHRl\nc3QwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARkoFtxOaDSEbJVyCxQfvGKO9o0\n4ncQ8zc6rTxV0IGw80CiMBH3B9cos7/DRJEqPUMvTdr12XUbCTSvsXRWc9DLoz8w\nPTAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMB\nMAwGA1UdEwEB/wQCMAAwCgYIKoZIzj0EAwIDSAAwRQIgBNYmLdmHI2gKy2NqfSXn\nMEDF56xWq7son2mcSePvLU8CIQCUxgYfDZDf067Y7vqLw1mWMlSnqECELnq7zel1\nfXtpyA==\n-----END CERTIFICATE-----`\n)\n\n// createTxtToken creates a transmitter token\nfunc createTxtToken() []byte {\n\tcaKey, err := crypto.LoadEllipticCurveKey([]byte(caKeyPEM))\n\tif err != nil {\n\t\tpanic(\"bad ca key \")\n\t}\n\n\tclientCert, err := crypto.LoadCertificate([]byte(publicPEM))\n\tif err != nil {\n\t\tpanic(\"bad client cert \")\n\t}\n\n\tp := pkiverifier.NewPKIIssuer(caKey)\n\ttoken, err := p.CreateTokenFromCertificate(clientCert, []string{})\n\tif err != nil {\n\t\tpanic(\"can't create token\")\n\t}\n\treturn token\n}\n\nfunc TestNewCompactPKIWithTokenCA(t *testing.T) {\n\ttxKey := createTxtToken()\n\t// txkey is a token that has the client public key signed by the CA\n\tConvey(\"When I create a new compact PKI, it should succeed \", t, func() {\n\t\ttokenKey := &secrets.ControllerInfo{\n\t\t\tPublicKey: []byte(caPEM),\n\t\t}\n\t\tcontrollerInfo := []*secrets.ControllerInfo{tokenKey}\n\t\tp, err := NewCompactPKIWithTokenCA([]byte(privateKeyPEM), []byte(publicPEM), []byte(caPEM), controllerInfo, txKey, claimsheader.CompressionTypeV1)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(p, ShouldNotBeNil)\n\t\tSo(p.authorityPEM, ShouldResemble, []byte(caPEM))\n\t\tSo(p.privateKeyPEM, ShouldResemble, []byte(privateKeyPEM))\n\t\tSo(p.publicKeyPEM, ShouldResemble, []byte(publicPEM))\n\t})\n\n\tConvey(\"When I create a new compact PKI with invalid certs, it should 
fail\", t, func() {\n\t\ttokenKey := &secrets.ControllerInfo{\n\t\t\tPublicKey: []byte(caPEM),\n\t\t}\n\t\tcontrollerInfo := []*secrets.ControllerInfo{tokenKey}\n\t\tp, err := NewCompactPKIWithTokenCA([]byte(privateKeyPEM)[:20], []byte(publicPEM)[:30], []byte(caPEM), controllerInfo, txKey, claimsheader.CompressionTypeV1)\n\t\tSo(err, ShouldNotBeNil)\n\t\tSo(p, ShouldBeNil)\n\t})\n\n\tConvey(\"When I create a new compact PKI with invalid CA, it should fail\", t, func() {\n\t\ttokenKey := &secrets.ControllerInfo{\n\t\t\tPublicKey: []byte(caPEM),\n\t\t}\n\t\tcontrollerInfo := []*secrets.ControllerInfo{tokenKey}\n\t\tp, err := NewCompactPKIWithTokenCA([]byte(privateKeyPEM), []byte(publicPEM), []byte(caPEM)[:10], controllerInfo, txKey, claimsheader.CompressionTypeV1)\n\t\tSo(err, ShouldNotBeNil)\n\t\tSo(p, ShouldBeNil)\n\t})\n\n}\n\nfunc TestBasicInterfaceFunctions(t *testing.T) {\n\ttxKey := createTxtToken()\n\tConvey(\"Given a valid CompactPKI \", t, func() {\n\t\ttokenKey := &secrets.ControllerInfo{\n\t\t\tPublicKey: []byte(caPEM),\n\t\t}\n\t\tcontrollerInfo := []*secrets.ControllerInfo{tokenKey}\n\t\tp, err := NewCompactPKIWithTokenCA([]byte(privateKeyPEM), []byte(publicPEM), []byte(caPEM), controllerInfo, txKey, claimsheader.CompressionTypeV1)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(p, ShouldNotBeNil)\n\n\t\tkey, cert, _, _ := crypto.LoadAndVerifyECSecrets([]byte(privateKeyPEM), []byte(publicPEM), []byte(caPEM))\n\t\tConvey(\"I should get the right encoding key\", func() {\n\t\t\tSo(*(p.EncodingKey().(*ecdsa.PrivateKey)), ShouldResemble, *key)\n\t\t})\n\n\t\tConvey(\"I should get the right transmitter key\", func() {\n\t\t\tSo(p.TransmittedKey(), ShouldResemble, txKey)\n\t\t})\n\n\t\tConvey(\"I should ge the right ack size\", func() {\n\t\t\tSo(p.AckSize(), ShouldEqual, compactPKIAckSize)\n\t\t})\n\n\t\tConvey(\"I should get the right public key, \", func() {\n\t\t\tSo(p.PublicKey().(*x509.Certificate), ShouldResemble, cert)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/secrets/compactpki.go",
    "content": "package secrets\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"crypto/x509\"\n\t\"errors\"\n\t\"time\"\n\n\t\"go.aporeto.io/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/trireme-lib/utils/crypto\"\n\t\"go.uber.org/zap\"\n)\n\nconst (\n\tcompactPKIAckSize = 300\n)\n\n// CompactPKI holds all PKI information\ntype CompactPKI struct {\n\tPrivateKeyPEM []byte\n\tPublicKeyPEM  []byte\n\tAuthorityPEM  []byte\n\tTokenKeyPEMs  [][]byte\n\tCompressed    claimsheader.CompressionType\n\tprivateKey    *ecdsa.PrivateKey\n\tpublicKey     *x509.Certificate\n\ttxKey         []byte\n\tverifier      pkiverifier.PKITokenVerifier\n}\n\n// NewCompactPKI creates new secrets for PKI implementation based on compact encoding\nfunc NewCompactPKI(keyPEM []byte, certPEM []byte, caPEM []byte, txKey []byte, compress claimsheader.CompressionType) (*CompactPKI, error) {\n\n\tzap.L().Warn(\"DEPRECATED. secrets.NewCompactPKI is deprecated in favor of secrets.NewCompactPKIWithTokenCA\")\n\treturn NewCompactPKIWithTokenCA(keyPEM, certPEM, caPEM, [][]byte{caPEM}, txKey, compress)\n}\n\n// NewCompactPKIWithTokenCA creates new secrets for PKI implementation based on compact encoding.\n//    keyPEM: is the private key that will be used for signing tokens formated as a PEM file.\n//    certPEM: is the public key that will be used formated as a PEM file.\n//    tokenKeyPEMs: is a list of public keys that can be used to verify the public token that\n//                  that is transmitted over the wire. 
These are essentially the public CA PEMs\n//                  that were used to sign the txKey\n//    txKey: is the public key that is sent over the wire.\n//    compressionType: is packed with the secrets to indicate compression.\nfunc NewCompactPKIWithTokenCA(keyPEM []byte, certPEM []byte, caPEM []byte, tokenKeyPEMs [][]byte, txKey []byte, compress claimsheader.CompressionType) (*CompactPKI, error) {\n\n\tkey, cert, _, err := crypto.LoadAndVerifyECSecrets(keyPEM, certPEM, caPEM)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Use zero length with capacity: make with a non-zero length followed by\n\t// append would leave nil keys at the front of the slice.\n\ttokenKeys := make([]*ecdsa.PublicKey, 0, len(tokenKeyPEMs))\n\tfor _, ca := range tokenKeyPEMs {\n\t\tcaCert, err := crypto.LoadCertificate(ca)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\ttokenKeys = append(tokenKeys, caCert.PublicKey.(*ecdsa.PublicKey))\n\t}\n\n\tif len(txKey) == 0 {\n\t\treturn nil, errors.New(\"transmit token missing\")\n\t}\n\n\tp := &CompactPKI{\n\t\tPrivateKeyPEM: keyPEM,\n\t\tPublicKeyPEM:  certPEM,\n\t\tAuthorityPEM:  caPEM,\n\t\tTokenKeyPEMs:  tokenKeyPEMs,\n\t\tCompressed:    compress,\n\t\tprivateKey:    key,\n\t\tpublicKey:     cert,\n\t\ttxKey:         txKey,\n\t\tverifier:      pkiverifier.NewPKIVerifier(tokenKeys, 5*time.Minute),\n\t}\n\n\treturn p, nil\n}\n\n// Type implements the interface Secrets\nfunc (p *CompactPKI) Type() PrivateSecretsType {\n\treturn PKICompactType\n}\n\n// EncodingKey returns the private key\nfunc (p *CompactPKI) EncodingKey() interface{} {\n\treturn p.privateKey\n}\n\n// PublicKey returns the public key\nfunc (p *CompactPKI) PublicKey() interface{} {\n\treturn p.publicKey\n}\n\n// KeyAndClaims returns both the key and any attributes associated with the public key.\nfunc (p *CompactPKI) KeyAndClaims(pkey []byte) (interface{}, []string, time.Time, error) {\n\tkc, err := p.verifier.Verify(pkey)\n\tif err != nil {\n\t\treturn nil, nil, time.Unix(0, 0), err\n\t}\n\treturn kc.PublicKey, kc.Tags, kc.Expiration, nil\n}\n\n// TransmittedKey returns the PEM of the public key in the case of PKI\n// if there is no certificate cache configured\nfunc (p *CompactPKI) TransmittedKey() []byte {\n\treturn p.txKey\n}\n\n// AckSize returns the default size of an ACK packet\nfunc (p *CompactPKI) AckSize() uint32 {\n\treturn uint32(compactPKIAckSize)\n}\n\n// PublicSecrets returns the secrets that are marshallable over the RPC interface.\nfunc (p *CompactPKI) PublicSecrets() PublicSecrets {\n\treturn &CompactPKIPublicSecrets{\n\t\tType:        PKICompactType,\n\t\tKey:         p.PrivateKeyPEM,\n\t\tCertificate: p.PublicKeyPEM,\n\t\tCA:          p.AuthorityPEM,\n\t\tToken:       p.txKey,\n\t\tTokenCAs:    p.TokenKeyPEMs,\n\t\tCompressed:  p.Compressed,\n\t}\n}\n\n// CompactPKIPublicSecrets includes all the secrets that can be transmitted over\n// the RPC interface.\ntype CompactPKIPublicSecrets struct {\n\tType        PrivateSecretsType\n\tKey         []byte\n\tCertificate []byte\n\tCA          []byte\n\tTokenCAs    [][]byte\n\tToken       []byte\n\tCompressed  claimsheader.CompressionType\n}\n\n// SecretsType returns the type of secrets.\nfunc (p *CompactPKIPublicSecrets) SecretsType() PrivateSecretsType {\n\treturn p.Type\n}\n\n// CertAuthority returns the cert authority\nfunc (p *CompactPKIPublicSecrets) CertAuthority() []byte {\n\treturn p.CA\n}\n"
  },
  {
    "path": "controller/pkg/secrets/compactpki_test.go",
    "content": "// +build !windows\n\npackage secrets\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"crypto/x509\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/trireme-lib/utils/crypto\"\n)\n\nfunc TestNewCompactPKI(t *testing.T) {\n\ttxKey := CreateTxtToken()\n\t// txkey is a token that has the client public key signed by the CA\n\tConvey(\"When I create a new compact PKI, it should succeed \", t, func() {\n\n\t\tp, err := NewCompactPKI([]byte(PrivateKeyPEM), []byte(PublicPEM), []byte(CAPEM), txKey, claimsheader.CompressionTypeNone)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(p, ShouldNotBeNil)\n\t\tSo(p.AuthorityPEM, ShouldResemble, []byte(CAPEM))\n\t\tSo(p.PrivateKeyPEM, ShouldResemble, []byte(PrivateKeyPEM))\n\t\tSo(p.PublicKeyPEM, ShouldResemble, []byte(PublicPEM))\n\t})\n\n\tConvey(\"When I create a new compact PKI with invalid certs, it should fail\", t, func() {\n\t\tp, err := NewCompactPKI([]byte(PrivateKeyPEM)[:20], []byte(PublicPEM)[:30], []byte(CAPEM), txKey, claimsheader.CompressionTypeNone)\n\t\tSo(err, ShouldNotBeNil)\n\t\tSo(p, ShouldBeNil)\n\t})\n\n\tConvey(\"When I create a new compact PKI with invalid CA, it should fail\", t, func() {\n\t\tp, err := NewCompactPKI([]byte(PrivateKeyPEM), []byte(PublicPEM), []byte(CAPEM)[:10], txKey, claimsheader.CompressionTypeNone)\n\t\tSo(err, ShouldNotBeNil)\n\t\tSo(p, ShouldBeNil)\n\t})\n\n}\n\nfunc TestBasicInterfaceFunctions(t *testing.T) {\n\ttxKey := CreateTxtToken()\n\tConvey(\"Given a valid CompactPKI \", t, func() {\n\t\tp, err := NewCompactPKI([]byte(PrivateKeyPEM), []byte(PublicPEM), []byte(CAPEM), txKey, claimsheader.CompressionTypeNone)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(p, ShouldNotBeNil)\n\n\t\tkey, cert, _, _ := crypto.LoadAndVerifyECSecrets([]byte(PrivateKeyPEM), []byte(PublicPEM), []byte(CAPEM))\n\t\tConvey(\"I should get the right secrets type \", func() {\n\t\t\tSo(p.Type(), ShouldResemble, 
PKICompactType)\n\t\t})\n\n\t\tConvey(\"I should get the right encoding key\", func() {\n\t\t\tSo(*(p.EncodingKey().(*ecdsa.PrivateKey)), ShouldResemble, *key)\n\t\t})\n\n\t\tConvey(\"I should get the right transmitter key\", func() {\n\t\t\tSo(p.TransmittedKey(), ShouldResemble, txKey)\n\t\t})\n\n\t\tConvey(\"I should get the right ack size\", func() {\n\t\t\tSo(p.AckSize(), ShouldEqual, compactPKIAckSize)\n\t\t})\n\n\t\tConvey(\"I should get the right public key\", func() {\n\t\t\tSo(p.PublicKey().(*x509.Certificate), ShouldResemble, cert)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/secrets/interfaces.go",
"content": "package secrets\n\n// PublicKeyAdder registers a publicKey for a Node.\ntype PublicKeyAdder interface {\n\n\t// PublicKeyAdd adds the given cert for the given host.\n\tPublicKeyAdd(host string, cert []byte) error\n}\n"
  },
  {
    "path": "controller/pkg/secrets/mocksecrets/mocksecrets.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: trireme-lib/controller/pkg/secrets/secrets.go\n\n// Package mocksecrets is a generated GoMock package.\npackage mocksecrets\n\nimport (\n\tgomock \"github.com/golang/mock/gomock\"\n\tpkiverifier \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\tsecrets \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\treflect \"reflect\"\n\ttime \"time\"\n)\n\n// MockLockedSecrets is a mock of LockedSecrets interface\ntype MockLockedSecrets struct {\n\tctrl     *gomock.Controller\n\trecorder *MockLockedSecretsMockRecorder\n}\n\n// MockLockedSecretsMockRecorder is the mock recorder for MockLockedSecrets\ntype MockLockedSecretsMockRecorder struct {\n\tmock *MockLockedSecrets\n}\n\n// NewMockLockedSecrets creates a new mock instance\nfunc NewMockLockedSecrets(ctrl *gomock.Controller) *MockLockedSecrets {\n\tmock := &MockLockedSecrets{ctrl: ctrl}\n\tmock.recorder = &MockLockedSecretsMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockLockedSecrets) EXPECT() *MockLockedSecretsMockRecorder {\n\treturn m.recorder\n}\n\n// Secrets mocks base method\nfunc (m *MockLockedSecrets) Secrets() (secrets.Secrets, func()) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Secrets\")\n\tret0, _ := ret[0].(secrets.Secrets)\n\tret1, _ := ret[1].(func())\n\treturn ret0, ret1\n}\n\n// Secrets indicates an expected call of Secrets\nfunc (mr *MockLockedSecretsMockRecorder) Secrets() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Secrets\", reflect.TypeOf((*MockLockedSecrets)(nil).Secrets))\n}\n\n// MockSecrets is a mock of Secrets interface\ntype MockSecrets struct {\n\tctrl     *gomock.Controller\n\trecorder *MockSecretsMockRecorder\n}\n\n// MockSecretsMockRecorder is the mock recorder for MockSecrets\ntype MockSecretsMockRecorder struct {\n\tmock *MockSecrets\n}\n\n// 
NewMockSecrets creates a new mock instance\nfunc NewMockSecrets(ctrl *gomock.Controller) *MockSecrets {\n\tmock := &MockSecrets{ctrl: ctrl}\n\tmock.recorder = &MockSecretsMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockSecrets) EXPECT() *MockSecretsMockRecorder {\n\treturn m.recorder\n}\n\n// EncodingKey mocks base method\nfunc (m *MockSecrets) EncodingKey() interface{} {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"EncodingKey\")\n\tret0, _ := ret[0].(interface{})\n\treturn ret0\n}\n\n// EncodingKey indicates an expected call of EncodingKey\nfunc (mr *MockSecretsMockRecorder) EncodingKey() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"EncodingKey\", reflect.TypeOf((*MockSecrets)(nil).EncodingKey))\n}\n\n// PublicKey mocks base method\nfunc (m *MockSecrets) PublicKey() interface{} {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PublicKey\")\n\tret0, _ := ret[0].(interface{})\n\treturn ret0\n}\n\n// PublicKey indicates an expected call of PublicKey\nfunc (mr *MockSecretsMockRecorder) PublicKey() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PublicKey\", reflect.TypeOf((*MockSecrets)(nil).PublicKey))\n}\n\n// CertAuthority mocks base method\nfunc (m *MockSecrets) CertAuthority() []byte {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CertAuthority\")\n\tret0, _ := ret[0].([]byte)\n\treturn ret0\n}\n\n// CertAuthority indicates an expected call of CertAuthority\nfunc (mr *MockSecretsMockRecorder) CertAuthority() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CertAuthority\", reflect.TypeOf((*MockSecrets)(nil).CertAuthority))\n}\n\n// TransmittedKey mocks base method\nfunc (m *MockSecrets) TransmittedKey() []byte {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"TransmittedKey\")\n\tret0, _ := ret[0].([]byte)\n\treturn 
ret0\n}\n\n// TransmittedKey indicates an expected call of TransmittedKey\nfunc (mr *MockSecretsMockRecorder) TransmittedKey() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"TransmittedKey\", reflect.TypeOf((*MockSecrets)(nil).TransmittedKey))\n}\n\n// KeyAndClaims mocks base method\nfunc (m *MockSecrets) KeyAndClaims(pkey []byte) (interface{}, []string, time.Time, *pkiverifier.PKIControllerInfo, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"KeyAndClaims\", pkey)\n\tret0, _ := ret[0].(interface{})\n\tret1, _ := ret[1].([]string)\n\tret2, _ := ret[2].(time.Time)\n\tret3, _ := ret[3].(*pkiverifier.PKIControllerInfo)\n\tret4, _ := ret[4].(error)\n\treturn ret0, ret1, ret2, ret3, ret4\n}\n\n// KeyAndClaims indicates an expected call of KeyAndClaims\nfunc (mr *MockSecretsMockRecorder) KeyAndClaims(pkey interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"KeyAndClaims\", reflect.TypeOf((*MockSecrets)(nil).KeyAndClaims), pkey)\n}\n\n// AckSize mocks base method\nfunc (m *MockSecrets) AckSize() uint32 {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"AckSize\")\n\tret0, _ := ret[0].(uint32)\n\treturn ret0\n}\n\n// AckSize indicates an expected call of AckSize\nfunc (mr *MockSecretsMockRecorder) AckSize() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AckSize\", reflect.TypeOf((*MockSecrets)(nil).AckSize))\n}\n\n// RPCSecrets mocks base method\nfunc (m *MockSecrets) RPCSecrets() secrets.RPCSecrets {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"RPCSecrets\")\n\tret0, _ := ret[0].(secrets.RPCSecrets)\n\treturn ret0\n}\n\n// RPCSecrets indicates an expected call of RPCSecrets\nfunc (mr *MockSecretsMockRecorder) RPCSecrets() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RPCSecrets\", reflect.TypeOf((*MockSecrets)(nil).RPCSecrets))\n}\n"
  },
  {
    "path": "controller/pkg/secrets/null.go",
"content": "package secrets\n\nimport (\n\t\"time\"\n\n\tjwt \"github.com/dgrijalva/jwt-go\"\n)\n\n// This is a NULL secrets implementation only for performance testing\n// ATTENTION *** ONLY FOR TESTING\n// DO NOT USE FOR ANY REAL CODE\n\n// NullPKI holds all PKI information\ntype NullPKI struct {\n\tPrivateKeyPEM []byte\n\tPublicKeyPEM  []byte\n\tAuthorityPEM  []byte\n}\n\n// NewNullPKI creates new null secrets to be used only for performance testing\nfunc NewNullPKI(keyPEM, certPEM, caPEM []byte) (*NullPKI, error) {\n\n\tp := &NullPKI{}\n\treturn p, nil\n}\n\n// Type implements the interface Secrets\nfunc (p *NullPKI) Type() PrivateSecretsType {\n\treturn PKINull\n}\n\n// EncodingKey returns the private key\nfunc (p *NullPKI) EncodingKey() interface{} {\n\treturn jwt.UnsafeAllowNoneSignatureType\n}\n\n// PublicKey returns nil in this case\nfunc (p *NullPKI) PublicKey() interface{} {\n\treturn nil\n}\n\n// KeyAndClaims returns both the key and any attributes associated with the public key.\nfunc (p *NullPKI) KeyAndClaims(pkey []byte) (interface{}, []string, time.Time, error) {\n\treturn jwt.UnsafeAllowNoneSignatureType, []string{}, time.Now(), nil\n}\n\n// TransmittedKey returns the PEM of the public key in the case of PKI\n// if there is no certificate cache configured\nfunc (p *NullPKI) TransmittedKey() []byte {\n\treturn []byte(\"none\")\n}\n\n// AckSize returns the default size of an ACK packet\nfunc (p *NullPKI) AckSize() uint32 {\n\treturn uint32(235)\n}\n\n// PublicSecrets returns the secrets that are marshallable over the RPC interface.\nfunc (p *NullPKI) PublicSecrets() PublicSecrets {\n\treturn &NullPublicSecrets{\n\t\tType: PKINull,\n\t}\n}\n\n// NullPublicSecrets includes all the secrets that can be transmitted over\n// the RPC interface.\ntype NullPublicSecrets struct {\n\tType PrivateSecretsType\n}\n\n// SecretsType returns the type of secrets.\nfunc (p *NullPublicSecrets) SecretsType() PrivateSecretsType {\n\treturn p.Type\n}\n\n// CertAuthority returns the cert authority - not applicable for null secrets\nfunc (p *NullPublicSecrets) CertAuthority() []byte {\n\treturn []byte{}\n}\n"
  },
  {
    "path": "controller/pkg/secrets/rpc/rpc.go",
    "content": "package rpc\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/compactpki\"\n)\n\n// NewSecrets creates a new set of secrets based on the RPCSecrets.\n// We support only one type for now, CompactPKI.\nfunc NewSecrets(r secrets.RPCSecrets) (secrets.Secrets, error) {\n\treturn compactpki.NewCompactPKIWithTokenCA(r.Key, r.Certificate, r.CA, r.TrustedControllers, r.Token, r.Compressed)\n}\n"
  },
  {
    "path": "controller/pkg/secrets/secrets.go",
"content": "package secrets\n\nimport (\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n)\n\n// LockedSecrets provides a way to use secrets where shared read access is required. The user becomes\n// responsible for unlocking when done using them. The implementation should lock the access to secrets\n// for reading, and pass down the function for unlocking.\ntype LockedSecrets interface {\n\tSecrets() (Secrets, func())\n}\n\n// Secrets is an interface implementing secrets\ntype Secrets interface {\n\t// EncodingKey returns the key used to encode the tokens.\n\tEncodingKey() interface{}\n\t// PublicKey returns the public key of the secrets.\n\tPublicKey() interface{}\n\t// CertAuthority returns the CA\n\tCertAuthority() []byte\n\t// TransmittedKey returns the public key as a byte slice and as it is transmitted\n\t// on the wire.\n\tTransmittedKey() []byte\n\t// KeyAndClaims will verify the public key and return any claims that are part of the key.\n\tKeyAndClaims(pkey []byte) (interface{}, []string, time.Time, *pkiverifier.PKIControllerInfo, error)\n\t// AckSize calculates the size of the ACK packet based on the keys.\n\tAckSize() uint32\n\t// RPCSecrets returns the PEM-formatted secrets to be transmitted over the RPC interface.\n\tRPCSecrets() RPCSecrets\n}\n\n// ControllerInfo holds information about public keys\ntype ControllerInfo struct {\n\t// PublicKey is the public key for a controller which is used to verify the public token\n\t// that is transmitted over the wire. These were used to sign the txKey.\n\tPublicKey []byte\n\t// Controller is information for a given controller.\n\tController *pkiverifier.PKIControllerInfo\n}\n\n// RPCSecrets includes all the secrets that can be transmitted over\n// the RPC interface.\ntype RPCSecrets struct {\n\tKey                []byte\n\tCertificate        []byte\n\tCA                 []byte\n\tTrustedControllers []*ControllerInfo\n\tToken              []byte\n\tCompressed         claimsheader.CompressionType\n}\n"
  },
  {
    "path": "controller/pkg/secrets/test_utils.go",
    "content": "package secrets\n\nimport (\n\t\"crypto/x509\"\n\n\t\"go.aporeto.io/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/trireme-lib/utils/crypto\"\n)\n\n// Certs\nvar (\n\tCAPEM = `-----BEGIN CERTIFICATE-----\nMIIBmzCCAUCgAwIBAgIRAIbf7tsXeg6vUJ2pe3WXzgwwCgYIKoZIzj0EAwIwPDEQ\nMA4GA1UEChMHQXBvcmV0bzEPMA0GA1UECxMGYXBvbXV4MRcwFQYDVQQDEw5BcG9t\ndXggUm9vdCBDQTAeFw0xODA1MDExODM3MjNaFw0yODAzMDkxODM3MjNaMDwxEDAO\nBgNVBAoTB0Fwb3JldG8xDzANBgNVBAsTBmFwb211eDEXMBUGA1UEAxMOQXBvbXV4\nIFJvb3QgQ0EwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQcpOm4VAWyNcI4/WZP\nqj9EBu5XWQppyG2LoXVYNv1YCfJBFYuVERxVaZEcUJ0ceE/doFyphS1Ohw3QjqDQ\nxakeoyMwITAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAKBggqhkjO\nPQQDAgNJADBGAiEA+OL+qkSyXwLu6P/75kXBPo8fFGvXyX2vYis0hUAyHJcCIQCn\n86EFqkJDkeAguDEKvVtORcnxl+rAP924/PJAHLMh6Q==\n-----END CERTIFICATE-----`\n\tCAKeyPEM = `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEILpUWKqL6Sr+HrKDKLHt/vN6EYi22rJKV2q9xgKmiCqioAoGCCqGSM49\nAwEHoUQDQgAEHKTpuFQFsjXCOP1mT6o/RAbuV1kKachti6F1WDb9WAnyQRWLlREc\nVWmRHFCdHHhP3aBcqYUtTocN0I6g0MWpHg==\n-----END EC PRIVATE KEY-----`\n\tPrivateKeyPEM = `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIGx017ukBSUSddLXefL/5nxxaRXuM1H/tUxQAYxWBrQtoAoGCCqGSM49\nAwEHoUQDQgAEZKBbcTmg0hGyVcgsUH7xijvaNOJ3EPM3Oq08VdCBsPNAojAR9wfX\nKLO/w0SRKj1DL03a9dl1Gwk0r7F0VnPQyw==\n-----END EC PRIVATE KEY-----`\n\tPublicPEM = `-----BEGIN 
CERTIFICATE-----\nMIIBsDCCAVagAwIBAgIRAOmitRugFU+nAhiGsp6fYOwwCgYIKoZIzj0EAwIwPDEQ\nMA4GA1UEChMHQXBvcmV0bzEPMA0GA1UECxMGYXBvbXV4MRcwFQYDVQQDEw5BcG9t\ndXggUm9vdCBDQTAeFw0xODA1MDExODQwMzFaFw0yODAzMDkxODQwMzFaMDYxETAP\nBgNVBAoTCHNvbWUgb3JnMRIwEAYDVQQLEwlzb21lLXVuaXQxDTALBgNVBAMTBHRl\nc3QwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARkoFtxOaDSEbJVyCxQfvGKO9o0\n4ncQ8zc6rTxV0IGw80CiMBH3B9cos7/DRJEqPUMvTdr12XUbCTSvsXRWc9DLoz8w\nPTAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMB\nMAwGA1UdEwEB/wQCMAAwCgYIKoZIzj0EAwIDSAAwRQIgBNYmLdmHI2gKy2NqfSXn\nMEDF56xWq7son2mcSePvLU8CIQCUxgYfDZDf067Y7vqLw1mWMlSnqECELnq7zel1\nfXtpyA==\n-----END CERTIFICATE-----`\n)\n\n// CreateTxtToken creates a transmitter token\nfunc CreateTxtToken() []byte {\n\tcaKey, err := crypto.LoadEllipticCurveKey([]byte(CAKeyPEM))\n\tif err != nil {\n\t\tpanic(\"bad ca key \")\n\t}\n\n\tclientCert, err := crypto.LoadCertificate([]byte(PublicPEM))\n\tif err != nil {\n\t\tpanic(\"bad client cert \")\n\t}\n\n\tp := pkiverifier.NewPKIIssuer(caKey)\n\ttoken, err := p.CreateTokenFromCertificate(clientCert, []string{})\n\tif err != nil {\n\t\tpanic(\"can't create token\")\n\t}\n\treturn token\n}\n\n// CreateCompactPKITestSecrets creates test secrets\nfunc CreateCompactPKITestSecrets() (*x509.Certificate, Secrets, error) {\n\ttxtKey, err := crypto.LoadEllipticCurveKey([]byte(PrivateKeyPEM))\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tcert, err := crypto.LoadCertificate([]byte(PublicPEM))\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tissuer := pkiverifier.NewPKIIssuer(txtKey)\n\ttxtToken, err := issuer.CreateTokenFromCertificate(cert, []string{})\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tscrts, err := NewCompactPKIWithTokenCA([]byte(PrivateKeyPEM), []byte(PublicPEM), []byte(CAPEM), [][]byte{[]byte(PublicPEM)}, txtToken, claimsheader.CompressionTypeNone)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\treturn cert, scrts, nil\n}\n"
  },
  {
    "path": "controller/pkg/secrets/testhelper/testhelper.go",
    "content": "package testhelper\n\nimport (\n\t\"crypto/x509\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/compactpki\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n)\n\n// **** ATTENTION ****\n// This package is only to help other packages to do unit tests.\n// It's a very valid question, why arent they using a mock !\n\n// Certs\nvar (\n\tcaPEM = `-----BEGIN CERTIFICATE-----\nMIIBmzCCAUCgAwIBAgIRAIbf7tsXeg6vUJ2pe3WXzgwwCgYIKoZIzj0EAwIwPDEQ\nMA4GA1UEChMHQXBvcmV0bzEPMA0GA1UECxMGYXBvbXV4MRcwFQYDVQQDEw5BcG9t\ndXggUm9vdCBDQTAeFw0xODA1MDExODM3MjNaFw0yODAzMDkxODM3MjNaMDwxEDAO\nBgNVBAoTB0Fwb3JldG8xDzANBgNVBAsTBmFwb211eDEXMBUGA1UEAxMOQXBvbXV4\nIFJvb3QgQ0EwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQcpOm4VAWyNcI4/WZP\nqj9EBu5XWQppyG2LoXVYNv1YCfJBFYuVERxVaZEcUJ0ceE/doFyphS1Ohw3QjqDQ\nxakeoyMwITAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAKBggqhkjO\nPQQDAgNJADBGAiEA+OL+qkSyXwLu6P/75kXBPo8fFGvXyX2vYis0hUAyHJcCIQCn\n86EFqkJDkeAguDEKvVtORcnxl+rAP924/PJAHLMh6Q==\n-----END CERTIFICATE-----`\n\tcaKeyPEM = `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEILpUWKqL6Sr+HrKDKLHt/vN6EYi22rJKV2q9xgKmiCqioAoGCCqGSM49\nAwEHoUQDQgAEHKTpuFQFsjXCOP1mT6o/RAbuV1kKachti6F1WDb9WAnyQRWLlREc\nVWmRHFCdHHhP3aBcqYUtTocN0I6g0MWpHg==\n-----END EC PRIVATE KEY-----`\n\tprivateKeyPEM = `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIGx017ukBSUSddLXefL/5nxxaRXuM1H/tUxQAYxWBrQtoAoGCCqGSM49\nAwEHoUQDQgAEZKBbcTmg0hGyVcgsUH7xijvaNOJ3EPM3Oq08VdCBsPNAojAR9wfX\nKLO/w0SRKj1DL03a9dl1Gwk0r7F0VnPQyw==\n-----END EC PRIVATE KEY-----`\n\tpublicPEM = `-----BEGIN 
CERTIFICATE-----\nMIIBsDCCAVagAwIBAgIRAOmitRugFU+nAhiGsp6fYOwwCgYIKoZIzj0EAwIwPDEQ\nMA4GA1UEChMHQXBvcmV0bzEPMA0GA1UECxMGYXBvbXV4MRcwFQYDVQQDEw5BcG9t\ndXggUm9vdCBDQTAeFw0xODA1MDExODQwMzFaFw0yODAzMDkxODQwMzFaMDYxETAP\nBgNVBAoTCHNvbWUgb3JnMRIwEAYDVQQLEwlzb21lLXVuaXQxDTALBgNVBAMTBHRl\nc3QwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARkoFtxOaDSEbJVyCxQfvGKO9o0\n4ncQ8zc6rTxV0IGw80CiMBH3B9cos7/DRJEqPUMvTdr12XUbCTSvsXRWc9DLoz8w\nPTAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMB\nMAwGA1UdEwEB/wQCMAAwCgYIKoZIzj0EAwIDSAAwRQIgBNYmLdmHI2gKy2NqfSXn\nMEDF56xWq7son2mcSePvLU8CIQCUxgYfDZDf067Y7vqLw1mWMlSnqECELnq7zel1\nfXtpyA==\n-----END CERTIFICATE-----`\n)\n\n// createTxtToken creates a transmitter token\nfunc createTxtToken() []byte {\n\tcaKey, err := crypto.LoadEllipticCurveKey([]byte(caKeyPEM))\n\tif err != nil {\n\t\tpanic(\"bad ca key \")\n\t}\n\n\tclientCert, err := crypto.LoadCertificate([]byte(publicPEM))\n\tif err != nil {\n\t\tpanic(\"bad client cert \")\n\t}\n\n\tp := pkiverifier.NewPKIIssuer(caKey)\n\ttoken, err := p.CreateTokenFromCertificate(clientCert, []string{})\n\tif err != nil {\n\t\tpanic(\"can't create token\")\n\t}\n\treturn token\n}\n\n// NewTestCompactPKISecrets creates test secrets\nfunc NewTestCompactPKISecrets() (*x509.Certificate, secrets.Secrets, error) {\n\ttxtKey, err := crypto.LoadEllipticCurveKey([]byte(privateKeyPEM))\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tcert, err := crypto.LoadCertificate([]byte(publicPEM))\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tissuer := pkiverifier.NewPKIIssuer(txtKey)\n\ttxtToken, err := issuer.CreateTokenFromCertificate(cert, []string{})\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\ttokenKey := &secrets.ControllerInfo{\n\t\tPublicKey: []byte(publicPEM),\n\t}\n\n\tscrts, err := compactpki.NewCompactPKIWithTokenCA([]byte(privateKeyPEM), []byte(publicPEM), []byte(caPEM), []*secrets.ControllerInfo{tokenKey}, txtToken, claimsheader.CompressionTypeV1)\n\tif err != nil {\n\t\treturn nil, 
nil, err\n\t}\n\n\treturn cert, scrts, nil\n}\n"
  },
  {
    "path": "controller/pkg/servicetokens/servicetokens.go",
    "content": "package servicetokens\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"crypto/x509\"\n\t\"fmt\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/bluele/gcache\"\n\tjwt \"github.com/dgrijalva/jwt-go\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.uber.org/zap\"\n)\n\nvar (\n\tlocalCache = cache.NewCacheWithExpiration(\"tokens\", time.Second*10)\n)\n\n// JWTClaims is the structure of the claims we are sending on the wire.\ntype JWTClaims struct {\n\tjwt.StandardClaims\n\tScopes      []string\n\tProfile     []string\n\tData        map[string]string\n\tPingPayload *policy.PingPayload `json:\",omitempty\"`\n}\n\n// Verifier keeps all the structures for processing tokens.\ntype Verifier struct {\n\tsecrets    secrets.Secrets\n\tglobalCert *x509.Certificate\n\ttokenCache gcache.Cache\n\tsync.RWMutex\n}\n\n// NewVerifier creates a new Aporeto JWT Verifier. The globalCertificate is optional\n// and is needed for configurations that do not transmit the token over the wire.\nfunc NewVerifier(s secrets.Secrets, globalCertificate *x509.Certificate) *Verifier {\n\treturn &Verifier{\n\t\tsecrets:    s,\n\t\tglobalCert: globalCertificate,\n\t\t// tokenCache will cache the token results to accelerate performance\n\t\ttokenCache: gcache.New(2048).LRU().Expiration(20 * time.Second).Build(),\n\t}\n}\n\n// ParseToken parses and validates the JWT token, give the publicKey. It returns the scopes\n// the identity and the subject of the provided token. 
These tokens are strictly\n// signed with EC.\n// TODO: We can be more flexible with the algorithm selection here.\nfunc (p *Verifier) ParseToken(token string, publicKey string) (string, []string, []string, *policy.PingPayload, error) {\n\tp.RLock()\n\tdefer p.RUnlock()\n\n\tif data, _ := p.tokenCache.Get(token); data != nil {\n\t\tclaims := data.(*JWTClaims)\n\t\treturn claims.Subject, claims.Scopes, claims.Profile, claims.PingPayload, nil\n\t}\n\n\t// if a public key is transmitted in the wire, we need to verify its validity and use it.\n\t// Otherwise we use the public key of the stored secrets.\n\tvar key *ecdsa.PublicKey\n\tvar ok bool\n\tvar enforcerclaims []string\n\n\tif len(publicKey) > 0 {\n\t\t// Public keys are cached and verified and they are the compact keys\n\t\t// that we transmit in all other requests signed by the CA. These keys\n\t\t// are not full certificates.\n\t\tgKey, rootClaims, _, _, err := p.secrets.KeyAndClaims([]byte(publicKey))\n\t\tif err != nil {\n\t\t\treturn \"\", nil, nil, nil, fmt.Errorf(\"Cannot validate public key: %s\", err)\n\t\t}\n\t\tenforcerclaims = rootClaims\n\t\tkey, ok = gKey.(*ecdsa.PublicKey)\n\t\tif !ok {\n\t\t\treturn \"\", nil, nil, nil, fmt.Errorf(\"Provided public key not supported\")\n\t\t}\n\t} else {\n\t\tif p.globalCert == nil {\n\t\t\treturn \"\", nil, nil, nil, fmt.Errorf(\"Cannot validate global public key\")\n\t\t}\n\t\tkey, ok = p.globalCert.PublicKey.(*ecdsa.PublicKey)\n\t\tif !ok {\n\t\t\treturn \"\", nil, nil, nil, fmt.Errorf(\"Global public key is not supported\")\n\t\t}\n\t}\n\n\tclaims := &JWTClaims{}\n\tif _, err := jwt.ParseWithClaims(token, claims, func(*jwt.Token) (interface{}, error) { // nolint\n\t\treturn key, nil\n\t}); err != nil {\n\t\treturn \"\", nil, nil, nil, err\n\t}\n\tclaims.Profile = append(claims.Profile, enforcerclaims...)\n\tif err := p.tokenCache.Set(token, claims); err != nil {\n\t\tzap.L().Error(\"Failed to cache token\", zap.Error(err))\n\t}\n\n\tfor k, v := range 
claims.Data {\n\t\tclaims.Scopes = append(claims.Scopes, \"data:\"+k+\"=\"+v)\n\t}\n\treturn claims.Subject, claims.Scopes, claims.Profile, claims.PingPayload, nil\n}\n\n// UpdateSecrets updates the secrets of the token Verifier.\nfunc (p *Verifier) UpdateSecrets(s secrets.Secrets, globalCert *x509.Certificate) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.secrets = s\n\tp.globalCert = globalCert\n}\n\n// CreateAndSign creates a new JWT token based on the Aporeto identities.\nfunc CreateAndSign(server string, profile, scopes []string, id string, validity time.Duration, gkey interface{}, pingPayload *policy.PingPayload) (string, error) {\n\tkey, ok := gkey.(*ecdsa.PrivateKey)\n\tif !ok {\n\t\treturn \"\", fmt.Errorf(\"Not a valid private key format\")\n\t}\n\tif token, err := localCache.Get(id); err == nil {\n\t\treturn token.(string), nil\n\t}\n\tclaims := &JWTClaims{\n\t\tStandardClaims: jwt.StandardClaims{\n\t\t\tIssuer:    server,\n\t\t\tExpiresAt: time.Now().Add(validity).Unix(),\n\t\t\tSubject:   id,\n\t\t},\n\t\tProfile:     profile,\n\t\tScopes:      scopes,\n\t\tPingPayload: pingPayload,\n\t}\n\n\ttoken, err := jwt.NewWithClaims(jwt.SigningMethodES256, claims).SignedString(key)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\t// pingPayload should be nil for non-ping requests. If pingPayload is not nil,\n\t// we disable the token caching.\n\tif pingPayload == nil {\n\t\tlocalCache.AddOrUpdate(id, token)\n\t}\n\treturn token, nil\n}\n"
  },
  {
    "path": "controller/pkg/tokens/binarycodec.go",
    "content": "// +build go1.6\n\n// Code generated by codecgen - DO NOT EDIT.\n\npackage tokens\n\nimport (\n\t\"errors\"\n\tpkg3_jwt_go \"github.com/dgrijalva/jwt-go\"\n\tcodec1978 \"github.com/ugorji/go/codec\"\n\tpkg2_claimsheader \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\tpkg1_policy \"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"runtime\"\n\t\"strconv\"\n)\n\nconst (\n\t// ----- content types ----\n\tcodecSelferCcUTF88267 = 1\n\tcodecSelferCcRAW8267  = 255\n\t// ----- value types used ----\n\tcodecSelferValueTypeArray8267     = 10\n\tcodecSelferValueTypeMap8267       = 9\n\tcodecSelferValueTypeString8267    = 6\n\tcodecSelferValueTypeInt8267       = 2\n\tcodecSelferValueTypeUint8267      = 3\n\tcodecSelferValueTypeFloat8267     = 4\n\tcodecSelferValueTypeNil8267       = 1\n\tcodecSelferBitsize8267            = uint8(32 << (^uint(0) >> 63))\n\tcodecSelferDecContainerLenNil8267 = -2147483648\n)\n\nvar (\n\terrCodecSelferOnlyMapOrArrayEncodeToStruct8267 = errors.New(`only encoded map or array can be decoded into a struct`)\n)\n\ntype codecSelfer8267 struct{}\n\nfunc codecSelfer8267False() bool { return false }\nfunc codecSelfer8267True() bool  { return true }\n\nfunc init() {\n\tif codec1978.GenVersion != 21 {\n\t\t_, file, _, _ := runtime.Caller(0)\n\t\tver := strconv.FormatInt(int64(codec1978.GenVersion), 10)\n\t\tpanic(errors.New(\"codecgen version mismatch: current: 21, need \" + ver + \". 
Re-generate file: \" + file))\n\t}\n\tif false { // reference the types, but skip this branch at build/run time\n\t\tvar _ pkg3_jwt_go.StandardClaims\n\t\tvar _ pkg2_claimsheader.HeaderBytes\n\t\tvar _ pkg1_policy.PingPayload\n\t}\n}\n\nfunc (x *BinaryJWTClaims) CodecEncodeSelf(e *codec1978.Encoder) {\n\tvar h codecSelfer8267\n\tz, r := codec1978.GenHelper().Encoder(e)\n\t_, _, _ = h, z, r\n\tif x == nil {\n\t\tr.EncodeNil()\n\t} else {\n\t\tyy2arr2 := z.EncBasicHandle().StructToArray\n\t\t_ = yy2arr2\n\t\tconst yyr2 bool = false // struct tag has 'toArray'\n\t\tvar yyn12 bool = x.P == nil\n\t\tvar yyq2 = [12]bool{ // should field at this index be written?\n\t\t\tlen(x.T) != 0,         // T\n\t\t\tlen(x.CT) != 0,        // CT\n\t\t\tlen(x.RMT) != 0,       // RMT\n\t\t\tlen(x.LCL) != 0,       // LCL\n\t\t\tlen(x.DEK) != 0,       // DEK\n\t\t\tlen(x.SDEK) != 0,      // SDEK\n\t\t\tx.ID != \"\",            // ID\n\t\t\tx.ExpiresAt != 0,      // ExpiresAt\n\t\t\tlen(x.SignerKey) != 0, // SignerKey\n\t\t\tx.P != nil,            // P\n\t\t\tlen(x.DEKV2) != 0,     // DEKV2\n\t\t\tlen(x.SDEKV2) != 0,    // SDEKV2\n\t\t}\n\t\t_ = yyq2\n\t\tif yyr2 || yy2arr2 {\n\t\t\tz.EncWriteArrayStart(12)\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[0] {\n\t\t\t\tif x.T == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tz.F.EncSliceStringV(x.T, e)\n\t\t\t\t} // end block: if x.T slice == nil\n\t\t\t} else {\n\t\t\t\tr.EncodeNil()\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[1] {\n\t\t\t\tif x.CT == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tz.F.EncSliceStringV(x.CT, e)\n\t\t\t\t} // end block: if x.CT slice == nil\n\t\t\t} else {\n\t\t\t\tr.EncodeNil()\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[2] {\n\t\t\t\tif x.RMT == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.RMT))\n\t\t\t\t} // end block: if x.RMT slice == nil\n\t\t\t} else 
{\n\t\t\t\tr.EncodeNil()\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[3] {\n\t\t\t\tif x.LCL == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.LCL))\n\t\t\t\t} // end block: if x.LCL slice == nil\n\t\t\t} else {\n\t\t\t\tr.EncodeNil()\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[4] {\n\t\t\t\tif x.DEK == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.DEK))\n\t\t\t\t} // end block: if x.DEK slice == nil\n\t\t\t} else {\n\t\t\t\tr.EncodeNil()\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[5] {\n\t\t\t\tif x.SDEK == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.SDEK))\n\t\t\t\t} // end block: if x.SDEK slice == nil\n\t\t\t} else {\n\t\t\t\tr.EncodeNil()\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[6] {\n\t\t\t\tr.EncodeString(string(x.ID))\n\t\t\t} else {\n\t\t\t\tr.EncodeString(\"\")\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[7] {\n\t\t\t\tr.EncodeInt(int64(x.ExpiresAt))\n\t\t\t} else {\n\t\t\t\tr.EncodeInt(0)\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[8] {\n\t\t\t\tif x.SignerKey == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.SignerKey))\n\t\t\t\t} // end block: if x.SignerKey slice == nil\n\t\t\t} else {\n\t\t\t\tr.EncodeNil()\n\t\t\t}\n\t\t\tif yyn12 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[9] {\n\t\t\t\t\tif yyxt24 := z.Extension(x.P); yyxt24 != nil {\n\t\t\t\t\t\tz.EncExtension(x.P, yyxt24)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tz.EncFallback(x.P)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t}\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[10] {\n\t\t\t\tif x.DEKV2 == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.DEKV2))\n\t\t\t\t} // end block: if x.DEKV2 slice == nil\n\t\t\t} else 
{\n\t\t\t\tr.EncodeNil()\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[11] {\n\t\t\t\tif x.SDEKV2 == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.SDEKV2))\n\t\t\t\t} // end block: if x.SDEKV2 slice == nil\n\t\t\t} else {\n\t\t\t\tr.EncodeNil()\n\t\t\t}\n\t\t\tz.EncWriteArrayEnd()\n\t\t} else {\n\t\t\tvar yynn2 int\n\t\t\tfor _, b := range yyq2 {\n\t\t\t\tif b {\n\t\t\t\t\tyynn2++\n\t\t\t\t}\n\t\t\t}\n\t\t\tz.EncWriteMapStart(yynn2)\n\t\t\tyynn2 = 0\n\t\t\tif yyq2[0] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"T\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`T`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif x.T == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tz.F.EncSliceStringV(x.T, e)\n\t\t\t\t} // end block: if x.T slice == nil\n\t\t\t}\n\t\t\tif yyq2[1] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"CT\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`CT`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif x.CT == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tz.F.EncSliceStringV(x.CT, e)\n\t\t\t\t} // end block: if x.CT slice == nil\n\t\t\t}\n\t\t\tif yyq2[2] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"RMT\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`RMT`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif x.RMT == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.RMT))\n\t\t\t\t} // end block: if x.RMT slice == nil\n\t\t\t}\n\t\t\tif yyq2[3] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"LCL\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`LCL`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif x.LCL == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else 
{\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.LCL))\n\t\t\t\t} // end block: if x.LCL slice == nil\n\t\t\t}\n\t\t\tif yyq2[4] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"DEK\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`DEK`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif x.DEK == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.DEK))\n\t\t\t\t} // end block: if x.DEK slice == nil\n\t\t\t}\n\t\t\tif yyq2[5] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"SDEK\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`SDEK`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif x.SDEK == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.SDEK))\n\t\t\t\t} // end block: if x.SDEK slice == nil\n\t\t\t}\n\t\t\tif yyq2[6] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"ID\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`ID`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tr.EncodeString(string(x.ID))\n\t\t\t}\n\t\t\tif yyq2[7] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"ExpiresAt\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`ExpiresAt`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tr.EncodeInt(int64(x.ExpiresAt))\n\t\t\t}\n\t\t\tif yyq2[8] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"SignerKey\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`SignerKey`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif x.SignerKey == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.SignerKey))\n\t\t\t\t} // end block: if x.SignerKey slice == nil\n\t\t\t}\n\t\t\tif yyq2[9] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"P\\\"\")\n\t\t\t\t} else 
{\n\t\t\t\t\tr.EncodeString(`P`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn12 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tif yyxt36 := z.Extension(x.P); yyxt36 != nil {\n\t\t\t\t\t\tz.EncExtension(x.P, yyxt36)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tz.EncFallback(x.P)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[10] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"DEKV2\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`DEKV2`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif x.DEKV2 == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.DEKV2))\n\t\t\t\t} // end block: if x.DEKV2 slice == nil\n\t\t\t}\n\t\t\tif yyq2[11] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"SDEKV2\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`SDEKV2`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif x.SDEKV2 == nil {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.SDEKV2))\n\t\t\t\t} // end block: if x.SDEKV2 slice == nil\n\t\t\t}\n\t\t\tz.EncWriteMapEnd()\n\t\t}\n\t}\n}\n\nfunc (x *BinaryJWTClaims) CodecDecodeSelf(d *codec1978.Decoder) {\n\tvar h codecSelfer8267\n\tz, r := codec1978.GenHelper().Decoder(d)\n\t_, _, _ = h, z, r\n\tyyct2 := r.ContainerType()\n\tif yyct2 == codecSelferValueTypeNil8267 {\n\t\t*(x) = BinaryJWTClaims{}\n\t} else if yyct2 == codecSelferValueTypeMap8267 {\n\t\tyyl2 := z.DecReadMapStart()\n\t\tif yyl2 == 0 {\n\t\t} else {\n\t\t\tx.codecDecodeSelfFromMap(yyl2, d)\n\t\t}\n\t\tz.DecReadMapEnd()\n\t} else if yyct2 == codecSelferValueTypeArray8267 {\n\t\tyyl2 := z.DecReadArrayStart()\n\t\tif yyl2 != 0 {\n\t\t\tx.codecDecodeSelfFromArray(yyl2, d)\n\t\t}\n\t\tz.DecReadArrayEnd()\n\t} else {\n\t\tpanic(errCodecSelferOnlyMapOrArrayEncodeToStruct8267)\n\t}\n}\n\nfunc (x *BinaryJWTClaims) codecDecodeSelfFromMap(l int, d *codec1978.Decoder) {\n\tvar h 
codecSelfer8267\n\tz, r := codec1978.GenHelper().Decoder(d)\n\t_, _, _ = h, z, r\n\tvar yyhl3 bool = l >= 0\n\tfor yyj3 := 0; ; yyj3++ {\n\t\tif yyhl3 {\n\t\t\tif yyj3 >= l {\n\t\t\t\tbreak\n\t\t\t}\n\t\t} else {\n\t\t\tif z.DecCheckBreak() {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tz.DecReadMapElemKey()\n\t\tyys3 := r.DecodeStringAsBytes()\n\t\tz.DecReadMapElemValue()\n\t\tswitch string(yys3) {\n\t\tcase \"T\":\n\t\t\tz.F.DecSliceStringX(&x.T, d)\n\t\tcase \"CT\":\n\t\t\tz.F.DecSliceStringX(&x.CT, d)\n\t\tcase \"RMT\":\n\t\t\tx.RMT = z.DecodeBytesInto(([]byte)(x.RMT))\n\t\tcase \"LCL\":\n\t\t\tx.LCL = z.DecodeBytesInto(([]byte)(x.LCL))\n\t\tcase \"DEK\":\n\t\t\tx.DEK = z.DecodeBytesInto(([]byte)(x.DEK))\n\t\tcase \"SDEK\":\n\t\t\tx.SDEK = z.DecodeBytesInto(([]byte)(x.SDEK))\n\t\tcase \"ID\":\n\t\t\tx.ID = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\t\tcase \"ExpiresAt\":\n\t\t\tx.ExpiresAt = (int64)(r.DecodeInt64())\n\t\tcase \"SignerKey\":\n\t\t\tx.SignerKey = z.DecodeBytesInto(([]byte)(x.SignerKey))\n\t\tcase \"P\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif x.P != nil { // remove the if-true\n\t\t\t\t\tx.P = nil\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif x.P == nil {\n\t\t\t\t\tx.P = new(pkg1_policy.PingPayload)\n\t\t\t\t}\n\t\t\t\tif yyxt21 := z.Extension(x.P); yyxt21 != nil {\n\t\t\t\t\tz.DecExtension(x.P, yyxt21)\n\t\t\t\t} else {\n\t\t\t\t\tz.DecFallback(x.P, false)\n\t\t\t\t}\n\t\t\t}\n\t\tcase \"DEKV2\":\n\t\t\tx.DEKV2 = z.DecodeBytesInto(([]byte)(x.DEKV2))\n\t\tcase \"SDEKV2\":\n\t\t\tx.SDEKV2 = z.DecodeBytesInto(([]byte)(x.SDEKV2))\n\t\tdefault:\n\t\t\tz.DecStructFieldNotFound(-1, string(yys3))\n\t\t} // end switch yys3\n\t} // end for yyj3\n}\n\nfunc (x *BinaryJWTClaims) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {\n\tvar h codecSelfer8267\n\tz, r := codec1978.GenHelper().Decoder(d)\n\t_, _, _ = h, z, r\n\tvar yyj26 int\n\tvar yyb26 bool\n\tvar yyhl26 bool = l >= 0\n\tyyj26++\n\tif yyhl26 {\n\t\tyyb26 = yyj26 > l\n\t} else {\n\t\tyyb26 = 
z.DecCheckBreak()\n\t}\n\tif yyb26 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tz.F.DecSliceStringX(&x.T, d)\n\tyyj26++\n\tif yyhl26 {\n\t\tyyb26 = yyj26 > l\n\t} else {\n\t\tyyb26 = z.DecCheckBreak()\n\t}\n\tif yyb26 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tz.F.DecSliceStringX(&x.CT, d)\n\tyyj26++\n\tif yyhl26 {\n\t\tyyb26 = yyj26 > l\n\t} else {\n\t\tyyb26 = z.DecCheckBreak()\n\t}\n\tif yyb26 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.RMT = z.DecodeBytesInto(([]byte)(x.RMT))\n\tyyj26++\n\tif yyhl26 {\n\t\tyyb26 = yyj26 > l\n\t} else {\n\t\tyyb26 = z.DecCheckBreak()\n\t}\n\tif yyb26 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.LCL = z.DecodeBytesInto(([]byte)(x.LCL))\n\tyyj26++\n\tif yyhl26 {\n\t\tyyb26 = yyj26 > l\n\t} else {\n\t\tyyb26 = z.DecCheckBreak()\n\t}\n\tif yyb26 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.DEK = z.DecodeBytesInto(([]byte)(x.DEK))\n\tyyj26++\n\tif yyhl26 {\n\t\tyyb26 = yyj26 > l\n\t} else {\n\t\tyyb26 = z.DecCheckBreak()\n\t}\n\tif yyb26 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.SDEK = z.DecodeBytesInto(([]byte)(x.SDEK))\n\tyyj26++\n\tif yyhl26 {\n\t\tyyb26 = yyj26 > l\n\t} else {\n\t\tyyb26 = z.DecCheckBreak()\n\t}\n\tif yyb26 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.ID = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\tyyj26++\n\tif yyhl26 {\n\t\tyyb26 = yyj26 > l\n\t} else {\n\t\tyyb26 = z.DecCheckBreak()\n\t}\n\tif yyb26 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.ExpiresAt = (int64)(r.DecodeInt64())\n\tyyj26++\n\tif yyhl26 {\n\t\tyyb26 = yyj26 > l\n\t} else {\n\t\tyyb26 = z.DecCheckBreak()\n\t}\n\tif yyb26 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.SignerKey = z.DecodeBytesInto(([]byte)(x.SignerKey))\n\tyyj26++\n\tif yyhl26 {\n\t\tyyb26 = yyj26 > l\n\t} else 
{\n\t\tyyb26 = z.DecCheckBreak()\n\t}\n\tif yyb26 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.P != nil { // remove the if-true\n\t\t\tx.P = nil\n\t\t}\n\t} else {\n\t\tif x.P == nil {\n\t\t\tx.P = new(pkg1_policy.PingPayload)\n\t\t}\n\t\tif yyxt44 := z.Extension(x.P); yyxt44 != nil {\n\t\t\tz.DecExtension(x.P, yyxt44)\n\t\t} else {\n\t\t\tz.DecFallback(x.P, false)\n\t\t}\n\t}\n\tyyj26++\n\tif yyhl26 {\n\t\tyyb26 = yyj26 > l\n\t} else {\n\t\tyyb26 = z.DecCheckBreak()\n\t}\n\tif yyb26 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.DEKV2 = z.DecodeBytesInto(([]byte)(x.DEKV2))\n\tyyj26++\n\tif yyhl26 {\n\t\tyyb26 = yyj26 > l\n\t} else {\n\t\tyyb26 = z.DecCheckBreak()\n\t}\n\tif yyb26 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.SDEKV2 = z.DecodeBytesInto(([]byte)(x.SDEKV2))\n\tfor {\n\t\tyyj26++\n\t\tif yyhl26 {\n\t\t\tyyb26 = yyj26 > l\n\t\t} else {\n\t\t\tyyb26 = z.DecCheckBreak()\n\t\t}\n\t\tif yyb26 {\n\t\t\tbreak\n\t\t}\n\t\tz.DecReadArrayElem()\n\t\tz.DecStructFieldNotFound(yyj26-1, \"\")\n\t}\n}\n\nfunc (x *BinaryJWTClaims) IsCodecEmpty() bool {\n\treturn !(len(x.T) != 0 || len(x.CT) != 0 || len(x.RMT) != 0 || len(x.LCL) != 0 || len(x.DEK) != 0 || len(x.SDEK) != 0 || x.ID != \"\" || x.ExpiresAt != 0 || len(x.SignerKey) != 0 || len(x.DEKV2) != 0 || len(x.SDEKV2) != 0 || false)\n}\n\nfunc (x *JWTClaims) CodecEncodeSelf(e *codec1978.Encoder) {\n\tvar h codecSelfer8267\n\tz, r := codec1978.GenHelper().Encoder(e)\n\t_, _, _ = h, z, r\n\tif x == nil {\n\t\tr.EncodeNil()\n\t} else {\n\t\tyy2arr2 := z.EncBasicHandle().StructToArray\n\t\t_ = yy2arr2\n\t\tconst yyr2 bool = false // struct tag has 'toArray'\n\t\tvar yyn3 bool = x.ConnectionClaims == nil || x.ConnectionClaims.T == nil\n\t\tvar yyn4 bool = x.ConnectionClaims == nil\n\t\tvar yyn5 bool = x.ConnectionClaims == nil\n\t\tvar yyn6 bool = x.ConnectionClaims == nil\n\t\tvar yyn7 bool = x.ConnectionClaims == 
nil\n\t\tvar yyn8 bool = x.ConnectionClaims == nil || x.ConnectionClaims.CT == nil\n\t\tvar yyn9 bool = x.ConnectionClaims == nil\n\t\tvar yyn10 bool = x.ConnectionClaims == nil\n\t\tvar yyn11 bool = x.ConnectionClaims == nil\n\t\tvar yyn12 bool = x.ConnectionClaims == nil || x.ConnectionClaims.P == nil\n\t\tvar yyn13 bool = x.ConnectionClaims == nil\n\t\tvar yyn14 bool = x.ConnectionClaims == nil\n\t\tvar yyq2 = [19]bool{ // should field at this index be written?\n\t\t\tx.ConnectionClaims != nil && x.T != nil,         // T\n\t\t\tx.ConnectionClaims != nil && len(x.RMT) != 0,    // RMT\n\t\t\tx.ConnectionClaims != nil && len(x.LCL) != 0,    // LCL\n\t\t\tx.ConnectionClaims != nil && len(x.DEKV1) != 0,  // DEKV1\n\t\t\tx.ConnectionClaims != nil && len(x.SDEKV1) != 0, // SDEKV1\n\t\t\tx.ConnectionClaims != nil && x.CT != nil,        // CT\n\t\t\tx.ConnectionClaims != nil && x.ID != \"\",         // ID\n\t\t\tx.ConnectionClaims != nil && x.RemoteID != \"\",   // RemoteID\n\t\t\tx.ConnectionClaims != nil && len(x.H) != 0,      // H\n\t\t\tx.ConnectionClaims != nil && x.P != nil,         // P\n\t\t\tx.ConnectionClaims != nil && len(x.DEKV2) != 0,  // DEKV2\n\t\t\tx.ConnectionClaims != nil && len(x.SDEKV2) != 0, // SDEKV2\n\t\t\tx.Audience != \"\", // aud\n\t\t\tx.ExpiresAt != 0, // exp\n\t\t\tx.Id != \"\",       // jti\n\t\t\tx.IssuedAt != 0,  // iat\n\t\t\tx.Issuer != \"\",   // iss\n\t\t\tx.NotBefore != 0, // nbf\n\t\t\tx.Subject != \"\",  // sub\n\t\t}\n\t\t_ = yyq2\n\t\tif yyr2 || yy2arr2 {\n\t\t\tz.EncWriteArrayStart(19)\n\t\t\tif yyn3 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[0] {\n\t\t\t\t\tif yyxt22 := z.Extension(x.ConnectionClaims.T); yyxt22 != nil {\n\t\t\t\t\t\tz.EncExtension(x.ConnectionClaims.T, yyxt22)\n\t\t\t\t\t} else if !z.EncBinary() && z.IsJSONHandle() {\n\t\t\t\t\t\tz.EncJSONMarshal(x.ConnectionClaims.T)\n\t\t\t\t\t} else 
{\n\t\t\t\t\t\tz.EncFallback(x.ConnectionClaims.T)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyn4 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[1] {\n\t\t\t\t\tif x.ConnectionClaims.RMT == nil {\n\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t} else {\n\t\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.ConnectionClaims.RMT))\n\t\t\t\t\t} // end block: if x.ConnectionClaims.RMT slice == nil\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyn5 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[2] {\n\t\t\t\t\tif x.ConnectionClaims.LCL == nil {\n\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t} else {\n\t\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.ConnectionClaims.LCL))\n\t\t\t\t\t} // end block: if x.ConnectionClaims.LCL slice == nil\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyn6 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[3] {\n\t\t\t\t\tif x.ConnectionClaims.DEKV1 == nil {\n\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t} else {\n\t\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.ConnectionClaims.DEKV1))\n\t\t\t\t\t} // end block: if x.ConnectionClaims.DEKV1 slice == nil\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyn7 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[4] {\n\t\t\t\t\tif x.ConnectionClaims.SDEKV1 == nil {\n\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t} else {\n\t\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.ConnectionClaims.SDEKV1))\n\t\t\t\t\t} // end block: if x.ConnectionClaims.SDEKV1 slice == nil\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyn8 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[5] 
{\n\t\t\t\t\tif yyxt27 := z.Extension(x.ConnectionClaims.CT); yyxt27 != nil {\n\t\t\t\t\t\tz.EncExtension(x.ConnectionClaims.CT, yyxt27)\n\t\t\t\t\t} else if !z.EncBinary() && z.IsJSONHandle() {\n\t\t\t\t\t\tz.EncJSONMarshal(x.ConnectionClaims.CT)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tz.EncFallback(x.ConnectionClaims.CT)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyn9 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[6] {\n\t\t\t\t\tr.EncodeString(string(x.ConnectionClaims.ID))\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(\"\")\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyn10 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[7] {\n\t\t\t\t\tr.EncodeString(string(x.ConnectionClaims.RemoteID))\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(\"\")\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyn11 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[8] {\n\t\t\t\t\tif yyxt30 := z.Extension(x.ConnectionClaims.H); yyxt30 != nil {\n\t\t\t\t\t\tz.EncExtension(x.ConnectionClaims.H, yyxt30)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif x.ConnectionClaims.H == nil {\n\t\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\th.encclaimsheader_HeaderBytes((pkg2_claimsheader.HeaderBytes)(x.ConnectionClaims.H), e)\n\t\t\t\t\t\t} // end block: if x.ConnectionClaims.H slice == nil\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyn12 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[9] {\n\t\t\t\t\tif yyxt31 := z.Extension(x.ConnectionClaims.P); yyxt31 != nil {\n\t\t\t\t\t\tz.EncExtension(x.ConnectionClaims.P, yyxt31)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tz.EncFallback(x.ConnectionClaims.P)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t}\n\t\t\t}\n\t\t\tif 
yyn13 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[10] {\n\t\t\t\t\tif x.ConnectionClaims.DEKV2 == nil {\n\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t} else {\n\t\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.ConnectionClaims.DEKV2))\n\t\t\t\t\t} // end block: if x.ConnectionClaims.DEKV2 slice == nil\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyn14 {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tr.EncodeNil()\n\t\t\t} else {\n\t\t\t\tz.EncWriteArrayElem()\n\t\t\t\tif yyq2[11] {\n\t\t\t\t\tif x.ConnectionClaims.SDEKV2 == nil {\n\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t} else {\n\t\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.ConnectionClaims.SDEKV2))\n\t\t\t\t\t} // end block: if x.ConnectionClaims.SDEKV2 slice == nil\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t}\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[12] {\n\t\t\t\tr.EncodeString(string(x.StandardClaims.Audience))\n\t\t\t} else {\n\t\t\t\tr.EncodeString(\"\")\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[13] {\n\t\t\t\tr.EncodeInt(int64(x.StandardClaims.ExpiresAt))\n\t\t\t} else {\n\t\t\t\tr.EncodeInt(0)\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[14] {\n\t\t\t\tr.EncodeString(string(x.StandardClaims.Id))\n\t\t\t} else {\n\t\t\t\tr.EncodeString(\"\")\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[15] {\n\t\t\t\tr.EncodeInt(int64(x.StandardClaims.IssuedAt))\n\t\t\t} else {\n\t\t\t\tr.EncodeInt(0)\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[16] {\n\t\t\t\tr.EncodeString(string(x.StandardClaims.Issuer))\n\t\t\t} else {\n\t\t\t\tr.EncodeString(\"\")\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[17] {\n\t\t\t\tr.EncodeInt(int64(x.StandardClaims.NotBefore))\n\t\t\t} else {\n\t\t\t\tr.EncodeInt(0)\n\t\t\t}\n\t\t\tz.EncWriteArrayElem()\n\t\t\tif yyq2[18] {\n\t\t\t\tr.EncodeString(string(x.StandardClaims.Subject))\n\t\t\t} else 
{\n\t\t\t\tr.EncodeString(\"\")\n\t\t\t}\n\t\t\tz.EncWriteArrayEnd()\n\t\t} else {\n\t\t\tvar yynn2 int\n\t\t\tfor _, b := range yyq2 {\n\t\t\t\tif b {\n\t\t\t\t\tyynn2++\n\t\t\t\t}\n\t\t\t}\n\t\t\tz.EncWriteMapStart(yynn2)\n\t\t\tyynn2 = 0\n\t\t\tif yyq2[0] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"T\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`T`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn3 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tif yyxt41 := z.Extension(x.ConnectionClaims.T); yyxt41 != nil {\n\t\t\t\t\t\tz.EncExtension(x.ConnectionClaims.T, yyxt41)\n\t\t\t\t\t} else if !z.EncBinary() && z.IsJSONHandle() {\n\t\t\t\t\t\tz.EncJSONMarshal(x.ConnectionClaims.T)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tz.EncFallback(x.ConnectionClaims.T)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[1] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"RMT\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`RMT`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn4 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tif x.ConnectionClaims.RMT == nil {\n\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t} else {\n\t\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.ConnectionClaims.RMT))\n\t\t\t\t\t} // end block: if x.ConnectionClaims.RMT slice == nil\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[2] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"LCL\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`LCL`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn5 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tif x.ConnectionClaims.LCL == nil {\n\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t} else {\n\t\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.ConnectionClaims.LCL))\n\t\t\t\t\t} // end block: if x.ConnectionClaims.LCL slice == nil\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[3] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() 
{\n\t\t\t\t\tz.WriteStr(\"\\\"DEKV1\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`DEKV1`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn6 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tif x.ConnectionClaims.DEKV1 == nil {\n\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t} else {\n\t\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.ConnectionClaims.DEKV1))\n\t\t\t\t\t} // end block: if x.ConnectionClaims.DEKV1 slice == nil\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[4] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"SDEKV1\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`SDEKV1`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn7 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tif x.ConnectionClaims.SDEKV1 == nil {\n\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t} else {\n\t\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.ConnectionClaims.SDEKV1))\n\t\t\t\t\t} // end block: if x.ConnectionClaims.SDEKV1 slice == nil\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[5] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"CT\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`CT`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn8 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tif yyxt46 := z.Extension(x.ConnectionClaims.CT); yyxt46 != nil {\n\t\t\t\t\t\tz.EncExtension(x.ConnectionClaims.CT, yyxt46)\n\t\t\t\t\t} else if !z.EncBinary() && z.IsJSONHandle() {\n\t\t\t\t\t\tz.EncJSONMarshal(x.ConnectionClaims.CT)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tz.EncFallback(x.ConnectionClaims.CT)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[6] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"ID\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`ID`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn9 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(string(x.ConnectionClaims.ID))\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[7] 
{\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"RemoteID\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`RemoteID`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn10 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(string(x.ConnectionClaims.RemoteID))\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[8] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"H\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`H`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn11 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tif yyxt49 := z.Extension(x.ConnectionClaims.H); yyxt49 != nil {\n\t\t\t\t\t\tz.EncExtension(x.ConnectionClaims.H, yyxt49)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif x.ConnectionClaims.H == nil {\n\t\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\th.encclaimsheader_HeaderBytes((pkg2_claimsheader.HeaderBytes)(x.ConnectionClaims.H), e)\n\t\t\t\t\t\t} // end block: if x.ConnectionClaims.H slice == nil\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[9] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"P\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`P`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn12 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tif yyxt50 := z.Extension(x.ConnectionClaims.P); yyxt50 != nil {\n\t\t\t\t\t\tz.EncExtension(x.ConnectionClaims.P, yyxt50)\n\t\t\t\t\t} else {\n\t\t\t\t\t\tz.EncFallback(x.ConnectionClaims.P)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[10] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"DEKV2\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`DEKV2`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn13 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tif x.ConnectionClaims.DEKV2 == nil {\n\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t} else 
{\n\t\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.ConnectionClaims.DEKV2))\n\t\t\t\t\t} // end block: if x.ConnectionClaims.DEKV2 slice == nil\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[11] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"SDEKV2\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`SDEKV2`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tif yyn14 {\n\t\t\t\t\tr.EncodeNil()\n\t\t\t\t} else {\n\t\t\t\t\tif x.ConnectionClaims.SDEKV2 == nil {\n\t\t\t\t\t\tr.EncodeNil()\n\t\t\t\t\t} else {\n\t\t\t\t\t\tr.EncodeStringBytesRaw([]byte(x.ConnectionClaims.SDEKV2))\n\t\t\t\t\t} // end block: if x.ConnectionClaims.SDEKV2 slice == nil\n\t\t\t\t}\n\t\t\t}\n\t\t\tif yyq2[12] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"aud\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`aud`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tr.EncodeString(string(x.StandardClaims.Audience))\n\t\t\t}\n\t\t\tif yyq2[13] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"exp\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`exp`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tr.EncodeInt(int64(x.StandardClaims.ExpiresAt))\n\t\t\t}\n\t\t\tif yyq2[14] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"jti\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`jti`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tr.EncodeString(string(x.StandardClaims.Id))\n\t\t\t}\n\t\t\tif yyq2[15] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"iat\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`iat`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tr.EncodeInt(int64(x.StandardClaims.IssuedAt))\n\t\t\t}\n\t\t\tif yyq2[16] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"iss\\\"\")\n\t\t\t\t} else 
{\n\t\t\t\t\tr.EncodeString(`iss`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tr.EncodeString(string(x.StandardClaims.Issuer))\n\t\t\t}\n\t\t\tif yyq2[17] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"nbf\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`nbf`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tr.EncodeInt(int64(x.StandardClaims.NotBefore))\n\t\t\t}\n\t\t\tif yyq2[18] {\n\t\t\t\tz.EncWriteMapElemKey()\n\t\t\t\tif z.IsJSONHandle() {\n\t\t\t\t\tz.WriteStr(\"\\\"sub\\\"\")\n\t\t\t\t} else {\n\t\t\t\t\tr.EncodeString(`sub`)\n\t\t\t\t}\n\t\t\t\tz.EncWriteMapElemValue()\n\t\t\t\tr.EncodeString(string(x.StandardClaims.Subject))\n\t\t\t}\n\t\t\tz.EncWriteMapEnd()\n\t\t}\n\t}\n}\n\nfunc (x *JWTClaims) CodecDecodeSelf(d *codec1978.Decoder) {\n\tvar h codecSelfer8267\n\tz, r := codec1978.GenHelper().Decoder(d)\n\t_, _, _ = h, z, r\n\tyyct2 := r.ContainerType()\n\tif yyct2 == codecSelferValueTypeNil8267 {\n\t\t*(x) = JWTClaims{}\n\t} else if yyct2 == codecSelferValueTypeMap8267 {\n\t\tyyl2 := z.DecReadMapStart()\n\t\tif yyl2 == 0 {\n\t\t} else {\n\t\t\tx.codecDecodeSelfFromMap(yyl2, d)\n\t\t}\n\t\tz.DecReadMapEnd()\n\t} else if yyct2 == codecSelferValueTypeArray8267 {\n\t\tyyl2 := z.DecReadArrayStart()\n\t\tif yyl2 != 0 {\n\t\t\tx.codecDecodeSelfFromArray(yyl2, d)\n\t\t}\n\t\tz.DecReadArrayEnd()\n\t} else {\n\t\tpanic(errCodecSelferOnlyMapOrArrayEncodeToStruct8267)\n\t}\n}\n\nfunc (x *JWTClaims) codecDecodeSelfFromMap(l int, d *codec1978.Decoder) {\n\tvar h codecSelfer8267\n\tz, r := codec1978.GenHelper().Decoder(d)\n\t_, _, _ = h, z, r\n\tvar yyhl3 bool = l >= 0\n\tfor yyj3 := 0; ; yyj3++ {\n\t\tif yyhl3 {\n\t\t\tif yyj3 >= l {\n\t\t\t\tbreak\n\t\t\t}\n\t\t} else {\n\t\t\tif z.DecCheckBreak() {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tz.DecReadMapElemKey()\n\t\tyys3 := r.DecodeStringAsBytes()\n\t\tz.DecReadMapElemValue()\n\t\tswitch string(yys3) {\n\t\tcase \"T\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif 
x.ConnectionClaims != nil && x.ConnectionClaims.T != nil { // remove the if-true\n\t\t\t\t\tx.ConnectionClaims.T = nil\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif x.ConnectionClaims == nil {\n\t\t\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t\t\t}\n\t\t\t\tif x.ConnectionClaims.T == nil {\n\t\t\t\t\tx.ConnectionClaims.T = new(pkg1_policy.TagStore)\n\t\t\t\t}\n\t\t\t\tif yyxt5 := z.Extension(x.ConnectionClaims.T); yyxt5 != nil {\n\t\t\t\t\tz.DecExtension(x.ConnectionClaims.T, yyxt5)\n\t\t\t\t} else if !z.DecBinary() && z.IsJSONHandle() {\n\t\t\t\t\tz.DecJSONUnmarshal(x.ConnectionClaims.T)\n\t\t\t\t} else {\n\t\t\t\t\tz.DecFallback(x.ConnectionClaims.T, false)\n\t\t\t\t}\n\t\t\t}\n\t\tcase \"RMT\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\t\t\tx.ConnectionClaims.RMT = nil\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif x.ConnectionClaims == nil {\n\t\t\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t\t\t}\n\t\t\t\tx.ConnectionClaims.RMT = z.DecodeBytesInto(([]byte)(x.ConnectionClaims.RMT))\n\t\t\t}\n\t\tcase \"LCL\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\t\t\tx.ConnectionClaims.LCL = nil\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif x.ConnectionClaims == nil {\n\t\t\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t\t\t}\n\t\t\t\tx.ConnectionClaims.LCL = z.DecodeBytesInto(([]byte)(x.ConnectionClaims.LCL))\n\t\t\t}\n\t\tcase \"DEKV1\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\t\t\tx.ConnectionClaims.DEKV1 = nil\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif x.ConnectionClaims == nil {\n\t\t\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t\t\t}\n\t\t\t\tx.ConnectionClaims.DEKV1 = z.DecodeBytesInto(([]byte)(x.ConnectionClaims.DEKV1))\n\t\t\t}\n\t\tcase \"SDEKV1\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\t\t\tx.ConnectionClaims.SDEKV1 = nil\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif 
x.ConnectionClaims == nil {\n\t\t\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t\t\t}\n\t\t\t\tx.ConnectionClaims.SDEKV1 = z.DecodeBytesInto(([]byte)(x.ConnectionClaims.SDEKV1))\n\t\t\t}\n\t\tcase \"CT\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif x.ConnectionClaims != nil && x.ConnectionClaims.CT != nil { // remove the if-true\n\t\t\t\t\tx.ConnectionClaims.CT = nil\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif x.ConnectionClaims == nil {\n\t\t\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t\t\t}\n\t\t\t\tif x.ConnectionClaims.CT == nil {\n\t\t\t\t\tx.ConnectionClaims.CT = new(pkg1_policy.TagStore)\n\t\t\t\t}\n\t\t\t\tif yyxt15 := z.Extension(x.ConnectionClaims.CT); yyxt15 != nil {\n\t\t\t\t\tz.DecExtension(x.ConnectionClaims.CT, yyxt15)\n\t\t\t\t} else if !z.DecBinary() && z.IsJSONHandle() {\n\t\t\t\t\tz.DecJSONUnmarshal(x.ConnectionClaims.CT)\n\t\t\t\t} else {\n\t\t\t\t\tz.DecFallback(x.ConnectionClaims.CT, false)\n\t\t\t\t}\n\t\t\t}\n\t\tcase \"ID\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\t\t\tx.ConnectionClaims.ID = \"\"\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif x.ConnectionClaims == nil {\n\t\t\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t\t\t}\n\t\t\t\tx.ConnectionClaims.ID = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\t\t\t}\n\t\tcase \"RemoteID\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\t\t\tx.ConnectionClaims.RemoteID = \"\"\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif x.ConnectionClaims == nil {\n\t\t\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t\t\t}\n\t\t\t\tx.ConnectionClaims.RemoteID = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\t\t\t}\n\t\tcase \"H\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\t\t\tx.ConnectionClaims.H = nil\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif x.ConnectionClaims == nil {\n\t\t\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t\t\t}\n\t\t\t\tif yyxt19 := 
z.Extension(x.ConnectionClaims.H); yyxt19 != nil {\n\t\t\t\t\tz.DecExtension(&x.ConnectionClaims.H, yyxt19)\n\t\t\t\t} else {\n\t\t\t\t\th.decclaimsheader_HeaderBytes((*pkg2_claimsheader.HeaderBytes)(&x.ConnectionClaims.H), d)\n\t\t\t\t}\n\t\t\t}\n\t\tcase \"P\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif x.ConnectionClaims != nil && x.ConnectionClaims.P != nil { // remove the if-true\n\t\t\t\t\tx.ConnectionClaims.P = nil\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif x.ConnectionClaims == nil {\n\t\t\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t\t\t}\n\t\t\t\tif x.ConnectionClaims.P == nil {\n\t\t\t\t\tx.ConnectionClaims.P = new(pkg1_policy.PingPayload)\n\t\t\t\t}\n\t\t\t\tif yyxt21 := z.Extension(x.ConnectionClaims.P); yyxt21 != nil {\n\t\t\t\t\tz.DecExtension(x.ConnectionClaims.P, yyxt21)\n\t\t\t\t} else {\n\t\t\t\t\tz.DecFallback(x.ConnectionClaims.P, false)\n\t\t\t\t}\n\t\t\t}\n\t\tcase \"DEKV2\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\t\t\tx.ConnectionClaims.DEKV2 = nil\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif x.ConnectionClaims == nil {\n\t\t\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t\t\t}\n\t\t\t\tx.ConnectionClaims.DEKV2 = z.DecodeBytesInto(([]byte)(x.ConnectionClaims.DEKV2))\n\t\t\t}\n\t\tcase \"SDEKV2\":\n\t\t\tif r.TryNil() {\n\t\t\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\t\t\tx.ConnectionClaims.SDEKV2 = nil\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif x.ConnectionClaims == nil {\n\t\t\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t\t\t}\n\t\t\t\tx.ConnectionClaims.SDEKV2 = z.DecodeBytesInto(([]byte)(x.ConnectionClaims.SDEKV2))\n\t\t\t}\n\t\tcase \"aud\":\n\t\t\tx.StandardClaims.Audience = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\t\tcase \"exp\":\n\t\t\tx.StandardClaims.ExpiresAt = (int64)(r.DecodeInt64())\n\t\tcase \"jti\":\n\t\t\tx.StandardClaims.Id = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\t\tcase \"iat\":\n\t\t\tx.StandardClaims.IssuedAt = 
(int64)(r.DecodeInt64())\n\t\tcase \"iss\":\n\t\t\tx.StandardClaims.Issuer = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\t\tcase \"nbf\":\n\t\t\tx.StandardClaims.NotBefore = (int64)(r.DecodeInt64())\n\t\tcase \"sub\":\n\t\t\tx.StandardClaims.Subject = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\t\tdefault:\n\t\t\tz.DecStructFieldNotFound(-1, string(yys3))\n\t\t} // end switch yys3\n\t} // end for yyj3\n}\n\nfunc (x *JWTClaims) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {\n\tvar h codecSelfer8267\n\tz, r := codec1978.GenHelper().Decoder(d)\n\t_, _, _ = h, z, r\n\tvar yyj33 int\n\tvar yyb33 bool\n\tvar yyhl33 bool = l >= 0\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.ConnectionClaims != nil && x.ConnectionClaims.T != nil { // remove the if-true\n\t\t\tx.ConnectionClaims.T = nil\n\t\t}\n\t} else {\n\t\tif x.ConnectionClaims == nil {\n\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t}\n\t\tif x.ConnectionClaims.T == nil {\n\t\t\tx.ConnectionClaims.T = new(pkg1_policy.TagStore)\n\t\t}\n\t\tif yyxt35 := z.Extension(x.ConnectionClaims.T); yyxt35 != nil {\n\t\t\tz.DecExtension(x.ConnectionClaims.T, yyxt35)\n\t\t} else if !z.DecBinary() && z.IsJSONHandle() {\n\t\t\tz.DecJSONUnmarshal(x.ConnectionClaims.T)\n\t\t} else {\n\t\t\tz.DecFallback(x.ConnectionClaims.T, false)\n\t\t}\n\t}\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\tx.ConnectionClaims.RMT = nil\n\t\t}\n\t} else {\n\t\tif x.ConnectionClaims == nil {\n\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t}\n\t\tx.ConnectionClaims.RMT = z.DecodeBytesInto(([]byte)(x.ConnectionClaims.RMT))\n\t}\n\tyyj33++\n\tif yyhl33 
{\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\tx.ConnectionClaims.LCL = nil\n\t\t}\n\t} else {\n\t\tif x.ConnectionClaims == nil {\n\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t}\n\t\tx.ConnectionClaims.LCL = z.DecodeBytesInto(([]byte)(x.ConnectionClaims.LCL))\n\t}\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\tx.ConnectionClaims.DEKV1 = nil\n\t\t}\n\t} else {\n\t\tif x.ConnectionClaims == nil {\n\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t}\n\t\tx.ConnectionClaims.DEKV1 = z.DecodeBytesInto(([]byte)(x.ConnectionClaims.DEKV1))\n\t}\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\tx.ConnectionClaims.SDEKV1 = nil\n\t\t}\n\t} else {\n\t\tif x.ConnectionClaims == nil {\n\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t}\n\t\tx.ConnectionClaims.SDEKV1 = z.DecodeBytesInto(([]byte)(x.ConnectionClaims.SDEKV1))\n\t}\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.ConnectionClaims != nil && x.ConnectionClaims.CT != nil { // remove the if-true\n\t\t\tx.ConnectionClaims.CT = nil\n\t\t}\n\t} else {\n\t\tif x.ConnectionClaims == nil {\n\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t}\n\t\tif x.ConnectionClaims.CT == nil {\n\t\t\tx.ConnectionClaims.CT = 
new(pkg1_policy.TagStore)\n\t\t}\n\t\tif yyxt45 := z.Extension(x.ConnectionClaims.CT); yyxt45 != nil {\n\t\t\tz.DecExtension(x.ConnectionClaims.CT, yyxt45)\n\t\t} else if !z.DecBinary() && z.IsJSONHandle() {\n\t\t\tz.DecJSONUnmarshal(x.ConnectionClaims.CT)\n\t\t} else {\n\t\t\tz.DecFallback(x.ConnectionClaims.CT, false)\n\t\t}\n\t}\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\tx.ConnectionClaims.ID = \"\"\n\t\t}\n\t} else {\n\t\tif x.ConnectionClaims == nil {\n\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t}\n\t\tx.ConnectionClaims.ID = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\t}\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\tx.ConnectionClaims.RemoteID = \"\"\n\t\t}\n\t} else {\n\t\tif x.ConnectionClaims == nil {\n\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t}\n\t\tx.ConnectionClaims.RemoteID = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\t}\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\tx.ConnectionClaims.H = nil\n\t\t}\n\t} else {\n\t\tif x.ConnectionClaims == nil {\n\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t}\n\t\tif yyxt49 := z.Extension(x.ConnectionClaims.H); yyxt49 != nil {\n\t\t\tz.DecExtension(&x.ConnectionClaims.H, yyxt49)\n\t\t} else {\n\t\t\th.decclaimsheader_HeaderBytes((*pkg2_claimsheader.HeaderBytes)(&x.ConnectionClaims.H), d)\n\t\t}\n\t}\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = 
yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.ConnectionClaims != nil && x.ConnectionClaims.P != nil { // remove the if-true\n\t\t\tx.ConnectionClaims.P = nil\n\t\t}\n\t} else {\n\t\tif x.ConnectionClaims == nil {\n\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t}\n\t\tif x.ConnectionClaims.P == nil {\n\t\t\tx.ConnectionClaims.P = new(pkg1_policy.PingPayload)\n\t\t}\n\t\tif yyxt51 := z.Extension(x.ConnectionClaims.P); yyxt51 != nil {\n\t\t\tz.DecExtension(x.ConnectionClaims.P, yyxt51)\n\t\t} else {\n\t\t\tz.DecFallback(x.ConnectionClaims.P, false)\n\t\t}\n\t}\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\tx.ConnectionClaims.DEKV2 = nil\n\t\t}\n\t} else {\n\t\tif x.ConnectionClaims == nil {\n\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t}\n\t\tx.ConnectionClaims.DEKV2 = z.DecodeBytesInto(([]byte)(x.ConnectionClaims.DEKV2))\n\t}\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tif r.TryNil() {\n\t\tif x.ConnectionClaims != nil { // remove the if-true\n\t\t\tx.ConnectionClaims.SDEKV2 = nil\n\t\t}\n\t} else {\n\t\tif x.ConnectionClaims == nil {\n\t\t\tx.ConnectionClaims = new(ConnectionClaims)\n\t\t}\n\t\tx.ConnectionClaims.SDEKV2 = z.DecodeBytesInto(([]byte)(x.ConnectionClaims.SDEKV2))\n\t}\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.StandardClaims.Audience = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} 
else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.StandardClaims.ExpiresAt = (int64)(r.DecodeInt64())\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.StandardClaims.Id = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.StandardClaims.IssuedAt = (int64)(r.DecodeInt64())\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.StandardClaims.Issuer = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.StandardClaims.NotBefore = (int64)(r.DecodeInt64())\n\tyyj33++\n\tif yyhl33 {\n\t\tyyb33 = yyj33 > l\n\t} else {\n\t\tyyb33 = z.DecCheckBreak()\n\t}\n\tif yyb33 {\n\t\tz.DecReadArrayEnd()\n\t\treturn\n\t}\n\tz.DecReadArrayElem()\n\tx.StandardClaims.Subject = (string)(z.DecStringZC(r.DecodeStringAsBytes()))\n\tfor {\n\t\tyyj33++\n\t\tif yyhl33 {\n\t\t\tyyb33 = yyj33 > l\n\t\t} else {\n\t\t\tyyb33 = z.DecCheckBreak()\n\t\t}\n\t\tif yyb33 {\n\t\t\tbreak\n\t\t}\n\t\tz.DecReadArrayElem()\n\t\tz.DecStructFieldNotFound(yyj33-1, \"\")\n\t}\n}\n\nfunc (x *JWTClaims) IsCodecEmpty() bool {\n\treturn !(x.ConnectionClaims != nil && x.StandardClaims != pkg3_jwt_go.StandardClaims{} || false)\n}\n\nfunc (x codecSelfer8267) encclaimsheader_HeaderBytes(v pkg2_claimsheader.HeaderBytes, e *codec1978.Encoder) {\n\tvar h codecSelfer8267\n\tz, r := codec1978.GenHelper().Encoder(e)\n\t_, _, _ = h, z, r\n\tif 
v == nil {\n\t\tr.EncodeNil()\n\t\treturn\n\t}\n\tr.EncodeStringBytesRaw([]byte(v))\n}\n\nfunc (x codecSelfer8267) decclaimsheader_HeaderBytes(v *pkg2_claimsheader.HeaderBytes, d *codec1978.Decoder) {\n\tvar h codecSelfer8267\n\tz, r := codec1978.GenHelper().Decoder(d)\n\t_, _, _ = h, z, r\n\t*v = z.DecodeBytesInto(*((*[]byte)(v)))\n}\n"
  },
  {
    "path": "controller/pkg/tokens/binaryjwt.go",
    "content": "package tokens\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"crypto/elliptic\"\n\t\"crypto/rand\"\n\t\"encoding/binary\"\n\t\"fmt\"\n\t\"math/big\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/ugorji/go/codec\"\n\tenforcerconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\tlocalcrypto \"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n)\n\n// To generate the codecs,\n// codecgen -o binarycodec.go binaryjwtclaimtypes.go\n\n// Format of Binary Tokens\n//    0             1              2               3               4\n//  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1\n//  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n//  |     D     |CT|E| Encoding |    R (reserved)                   |\n//  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n//  | Signature Position           |    nonce                       |\n//  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n//  |   ...                                                         |\n//  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n//  |   token                                                       |\n//  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n//  |   ...                                                         |\n//  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n//  | Signature                                                     |\n//  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n//  |   ...                                                         
|\n//  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n//  D  [0:6]   - Datapath version\n//  CT [6:8]   - Compressed tag type\n//  E  [8:9]   - Encryption enabled\n//  C  [9:12]  - Codec selector\n//  R  [12:32] - Reserved\n//  L  [32:48] - Token Length\n//  Token bytes (equal to token length)\n//  Signature bytes\n\nconst (\n\tbinaryNoncePosition   = 6\n\tlengthPosition        = 4\n\theaderLength          = 4\n\tsharedKeyCacheTimeout = 5 * time.Minute\n)\n\n// ClaimsEncodedBufSize is the maximum buffer size required to\n// serialize the claims into.\nconst ClaimsEncodedBufSize = 1400\n\n// AckPattern is added in SYN and ACK tokens.\nvar AckPattern = []byte(\"PANWIDENTITY\")\nvar sha256KeyLength int = 32\n\ntype sharedKeyStruct struct {\n\tsharedKeys map[string][]byte\n\tsync.RWMutex\n}\n\nfunc (s *sharedKeyStruct) Get(key string) []byte {\n\n\ts.RLock()\n\n\tif val, ok := s.sharedKeys[key]; ok {\n\t\ts.RUnlock()\n\t\treturn val\n\t}\n\n\ts.RUnlock()\n\treturn nil\n}\n\nfunc (s *sharedKeyStruct) Put(key string, val []byte) {\n\n\ts.Lock()\n\ts.sharedKeys[key] = val\n\ts.Unlock()\n\n\ttime.AfterFunc(sharedKeyCacheTimeout, func() {\n\t\ts.Lock()\n\t\tdelete(s.sharedKeys, key)\n\t\ts.Unlock()\n\t})\n}\n\n// BinaryJWTConfig configures the JWT token generator with the standard parameters. 
One\n// configuration is assigned to each server\ntype BinaryJWTConfig struct {\n\t// ValidityPeriod is the validity period of the JWT\n\tValidityPeriod time.Duration\n\t// Issuer is the server that issues the JWT\n\tIssuer string\n\t// tokenCache caches decoded tokens\n\ttokenCache cache.DataStore\n\t// sharedKey is a cache of pre-shared keys.\n\tsharedKeys *sharedKeyStruct\n}\n\n// NewBinaryJWT creates a new JWT token processor\nfunc NewBinaryJWT(validity time.Duration, issuer string) (*BinaryJWTConfig, error) {\n\n\treturn &BinaryJWTConfig{\n\t\tValidityPeriod: validity,\n\t\tIssuer:         issuer,\n\t\ttokenCache:     cache.NewCacheWithExpiration(\"JWTTokenCache\", validity),\n\t\tsharedKeys:     &sharedKeyStruct{sharedKeys: map[string][]byte{}},\n\t}, nil\n}\n\n// DecodeSyn takes as argument the JWT token and the certificate of the issuer.\n// First it verifies the certificate with the local CA pool, and then decodes\n// the JWT if the certificate is trusted.\nfunc (c *BinaryJWTConfig) DecodeSyn(isSynAck bool, data []byte, privateKey *ephemeralkeys.PrivateKey, secrets secrets.Secrets, connClaims *ConnectionClaims) ([]byte, *claimsheader.ClaimsHeader, []byte, *pkiverifier.PKIControllerInfo, bool, error) {\n\theader, nonce, token, sig, err := unpackToken(false, data)\n\tif err != nil {\n\t\treturn nil, nil, nil, nil, false, err\n\t}\n\t// Parse the claims header.\n\tclaimsHeader := claimsheader.HeaderBytes(header).ToClaimsHeader()\n\n\t// Validate the header version.\n\tif err := c.verifyClaimsHeader(claimsHeader); err != nil {\n\t\treturn nil, nil, nil, nil, false, err\n\t}\n\n\t// Decode the claims to a data structure.\n\tbinaryClaims, err := decode(token)\n\tif err != nil {\n\t\treturn nil, nil, nil, nil, false, err\n\t}\n\n\t//Process 314 Protocol\n\tif len(binaryClaims.DEK) == 0 {\n\t\tsecretKey, controller, err := c.process314Protocol(isSynAck, token, secrets, connClaims, binaryClaims, sig)\n\t\treturn secretKey, claimsHeader, nonce, controller, true, err\n\t}\n\n\t//Process 500 
Protocol\n\tsecretKey, controller, err := c.process500Protocol(isSynAck, token, privateKey, secrets, connClaims, binaryClaims, sig)\n\n\treturn secretKey, claimsHeader, nonce, controller, false, err\n}\n\n// DecodeAck decodes the ack packet token\nfunc (c *BinaryJWTConfig) DecodeAck(proto314 bool, secretKey []byte, data []byte, connClaims *ConnectionClaims) error {\n\t// Unpack the token first.\n\theader, _, token, sig, err := unpackToken(true, data)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Parse the claims header.\n\tclaimsHeader := claimsheader.HeaderBytes(header).ToClaimsHeader()\n\n\t// Validate the header.\n\tif err := c.verifyClaimsHeader(claimsHeader); err != nil {\n\t\treturn err\n\t}\n\n\t// Decode the claims to a data structure.\n\tbinaryClaims, err := decode(token)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif proto314 {\n\t\t// Calculate the signature on the token and compare it with the incoming\n\t\t// signature. Since this is simple symmetric hashing, the comparison is straightforward.\n\t\tif err := c.verifyWithSharedKey314(token, secretKey, sig); err != nil {\n\t\t\treturn err\n\t\t}\n\t} else {\n\t\tif err := c.verifyWithSharedKey500(token, secretKey, sig[0:sha256KeyLength]); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tCopyToConnectionClaims(binaryClaims, connClaims)\n\treturn nil\n}\n\n// CreateSynToken creates the token that is attached to the TCP SYN packet.\nfunc (c *BinaryJWTConfig) CreateSynToken(claims *ConnectionClaims, encodedBuf []byte, nonce []byte, header *claimsheader.ClaimsHeader, secrets secrets.Secrets) ([]byte, error) {\n\t// Set the appropriate claims header\n\theader.SetCompressionType(claimsheader.CompressionTypeV1)\n\theader.SetDatapathVersion(claimsheader.DatapathVersion1)\n\n\t// Combine the application claims with the standard claims.\n\t// In all cases for Syn/SynAck packets we also transmit our\n\t// public key.\n\tallclaims := ConvertToBinaryClaims(claims, c.ValidityPeriod)\n\tallclaims.SignerKey = 
secrets.TransmittedKey()\n\n\t// Encode the claims in a buffer.\n\terr := encode(allclaims, &encodedBuf)\n\tif err != nil {\n\t\treturn nil, logError(ErrTokenEncodeFailed, err.Error())\n\t}\n\n\tvar sig []byte\n\n\tencodedBuf = append(encodedBuf, AckPattern...)\n\n\tsig, err = c.sign(encodedBuf, secrets.EncodingKey().(*ecdsa.PrivateKey))\n\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Pack and return the token.\n\treturn packToken(header.ToBytes(), nonce, encodedBuf, sig), nil\n}\n\n//CreateSynAckToken creates syn/ack token which is attached to the syn/ack packet.\nfunc (c *BinaryJWTConfig) CreateSynAckToken(proto314 bool, claims *ConnectionClaims, encodedBuf []byte, nonce []byte, header *claimsheader.ClaimsHeader, secrets secrets.Secrets, secretKey []byte) ([]byte, error) {\n\n\t// Set the appropriate claims header\n\theader.SetCompressionType(claimsheader.CompressionTypeV1)\n\theader.SetDatapathVersion(claimsheader.DatapathVersion1)\n\n\t// Combine the application claims with the standard claims.\n\t// In all cases for Syn/SynAck packets we also transmit our\n\t// public key.\n\tallclaims := ConvertToBinaryClaims(claims, c.ValidityPeriod)\n\tallclaims.SignerKey = secrets.TransmittedKey()\n\n\t// Encode the claims in a buffer.\n\terr := encode(allclaims, &encodedBuf)\n\tif err != nil {\n\t\treturn nil, logError(ErrTokenEncodeFailed, err.Error())\n\t}\n\n\tvar sig []byte\n\n\tencodedBuf = append(encodedBuf, AckPattern...)\n\n\tif proto314 {\n\t\tsig, err = hash314(encodedBuf, secretKey)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t} else {\n\t\tsig, err = hash500(encodedBuf, secretKey)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\t// Pack and return the token.\n\treturn packToken(header.ToBytes(), nonce, encodedBuf, sig), nil\n}\n\n// Randomize puts the random nonce in the syn token\nfunc (c *BinaryJWTConfig) Randomize(token []byte, nonce []byte) error {\n\n\tif len(token) < 6+NonceLength {\n\t\treturn logError(ErrTokenTooSmall, 
\"token is too small\")\n\t}\n\n\tcopy(token[6:], nonce)\n\n\treturn nil\n}\n\n//CreateAckToken creates ack token which is attached to the ack packet.\nfunc (c *BinaryJWTConfig) CreateAckToken(proto314 bool, secretKey []byte, claims *ConnectionClaims, encodedBuf []byte, header *claimsheader.ClaimsHeader) ([]byte, error) {\n\n\tvar pad []byte\n\t// Combine the application claims with the standard claims\n\tallclaims := ConvertToBinaryClaims(claims, c.ValidityPeriod)\n\n\t// Encode the claims in a buffer.\n\terr := encode(allclaims, &encodedBuf)\n\tif err != nil {\n\t\treturn nil, logError(ErrTokenEncodeFailed, err.Error())\n\t}\n\tencodedBuf = append(encodedBuf, AckPattern...)\n\n\tvar sig []byte\n\t// Sign the buffer with the pre-shared key.\n\tif proto314 {\n\t\tsig, err = hash314(encodedBuf, secretKey)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tpad = sig\n\t} else {\n\t\tpad = make([]byte, 64)\n\t\tsig, err = hash500(encodedBuf, secretKey)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tcopy(pad, sig)\n\t}\n\n\t// Pack and return the token.\n\treturn packToken(header.ToBytes(), nil, encodedBuf, pad), nil\n}\n\nfunc (c *BinaryJWTConfig) verifyClaimsHeader(h *claimsheader.ClaimsHeader) error {\n\n\tif h.CompressionType() != claimsheader.CompressionTypeV1 {\n\t\treturn ErrCompressedTagMismatch\n\n\t}\n\n\tif h.DatapathVersion() != claimsheader.DatapathVersion1 {\n\t\treturn ErrDatapathVersionMismatch\n\t}\n\n\treturn nil\n}\n\n// Sign takes in a slice of bytes and a private key, and returns a ecdsa signature.\nfunc (c *BinaryJWTConfig) Sign(buf []byte, key *ecdsa.PrivateKey) ([]byte, error) {\n\treturn c.sign(buf, key)\n}\n\nfunc (c *BinaryJWTConfig) sign(buf []byte, key *ecdsa.PrivateKey) ([]byte, error) {\n\n\t// Create the hash and use this for the signature. 
This is a SHA256 hash\n\t// of the token.\n\th, err := hash500(buf, nil)\n\tif err != nil {\n\t\treturn nil, logError(ErrTokenHashFailed, err.Error())\n\t}\n\n\t// Sign the hash with the private key using the ECDSA algorithm\n\t// and properly format the resulting signature.\n\tr, s, err := ecdsa.Sign(rand.Reader, key, h)\n\tif err != nil {\n\t\treturn nil, logError(ErrTokenSignFailed, err.Error())\n\t}\n\n\tcurveBits := key.Curve.Params().BitSize\n\tkeyBytes := curveBits / 8\n\tif curveBits%8 > 0 {\n\t\tkeyBytes++\n\t}\n\n\t// We serialize the outputs (r and s) into big-endian byte arrays and pad\n\t// them with zeros on the left to make sure the sizes work out. Both arrays\n\t// must be keyBytes long, and the output must be 2*keyBytes long.\n\ttokenBytes := make([]byte, 2*keyBytes)\n\n\trBytes := r.Bytes()\n\tcopy(tokenBytes[keyBytes-len(rBytes):], rBytes)\n\n\tsBytes := s.Bytes()\n\tcopy(tokenBytes[2*keyBytes-len(sBytes):], sBytes)\n\n\treturn tokenBytes, nil\n}\n\nfunc (c *BinaryJWTConfig) verify(buf []byte, sig []byte, key *ecdsa.PublicKey) error {\n\n\tif len(sig) != 64 {\n\t\treturn ErrInvalidSignature\n\t}\n\n\tr := big.NewInt(0).SetBytes(sig[:32])\n\ts := big.NewInt(0).SetBytes(sig[32:])\n\n\t// Create the hash and use this for the signature. 
This is a SHA256 hash\n\t// of the token.\n\th, err := hash500(buf, nil)\n\tif err != nil {\n\t\treturn logError(ErrTokenHashFailed, err.Error())\n\t}\n\n\tif verifyStatus := ecdsa.Verify(key, h, r, s); verifyStatus {\n\t\treturn nil\n\t}\n\n\treturn ErrInvalidSignature\n}\n\nfunc (c *BinaryJWTConfig) getSecretKey(privateKey *ephemeralkeys.PrivateKey, remotePublicKeyString string, isV1Proto bool) ([]byte, error) {\n\n\tvar remotePublicKey *ecdsa.PublicKey\n\tvar err error\n\n\thashKey := privateKey.PrivateKeyString + remotePublicKeyString\n\n\tsecretKey := c.sharedKeys.Get(hashKey)\n\n\tif secretKey != nil {\n\t\treturn secretKey, nil\n\t}\n\n\tif isV1Proto {\n\t\tremotePublicKey, err = localcrypto.DecodePublicKeyV1([]byte(remotePublicKeyString))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t} else {\n\t\tremotePublicKey, err = localcrypto.DecodePublicKeyV2([]byte(remotePublicKeyString))\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tif secretKey, err = symmetricKey(privateKey.PrivateKey, remotePublicKey); err != nil {\n\t\treturn nil, err\n\t}\n\n\tc.sharedKeys.Put(hashKey, secretKey)\n\n\treturn secretKey, nil\n}\n\nfunc encode(c *BinaryJWTClaims, buf *[]byte) error {\n\t// Encode and sign the token\n\tif cap(*buf) != ClaimsEncodedBufSize {\n\t\treturn fmt.Errorf(\"Not enough space in byte slice\")\n\t}\n\n\tvar h codec.Handle = new(codec.CborHandle)\n\tenc := codec.NewEncoderBytes(buf, h)\n\tif err := enc.Encode(c); err != nil {\n\t\treturn fmt.Errorf(\"unable to encode message: %s\", err)\n\t}\n\n\treturn nil\n}\n\nfunc decode(buf []byte) (*BinaryJWTClaims, error) {\n\t// Decode the token into a structure.\n\tbinaryClaims := &BinaryJWTClaims{}\n\tvar h codec.Handle = new(codec.CborHandle)\n\n\tdec := codec.NewDecoderBytes(buf, h)\n\n\tif err := dec.Decode(binaryClaims); err != nil {\n\t\treturn nil, logError(ErrTokenDecodeFailed, err.Error())\n\t}\n\n\tif binaryClaims.ExpiresAt < time.Now().Unix() {\n\t\treturn nil, 
logError(ErrTokenExpired, fmt.Sprintf(\"token is expired since: %s\", time.Unix(binaryClaims.ExpiresAt, 0)))\n\t}\n\n\treturn binaryClaims, nil\n}\n\nfunc packToken(header, nonce, token, sig []byte) []byte {\n\n\tbinaryTokenPosition := binaryNoncePosition + len(nonce)\n\tsigPosition := binaryTokenPosition + len(token)\n\n\t// Token is the concatenation of\n\t// [Position of Signature] [nonce] [token] [signature]\n\tdata := make([]byte, sigPosition+len(sig))\n\n\t// Header bytes\n\tcopy(data[0:headerLength], header)\n\t// Length of token\n\tbinary.BigEndian.PutUint16(data[lengthPosition:], uint16(sigPosition))\n\n\t// nonce not required for ack packets\n\tif len(nonce) > 0 {\n\t\tcopy(data[binaryNoncePosition:], nonce)\n\t}\n\n\t// token\n\tcopy(data[binaryTokenPosition:], token)\n\n\t// signature\n\tcopy(data[sigPosition:], sig)\n\n\treturn data\n}\n\n// unpackToken returns nonce, token, signature or error if something fails\nfunc unpackToken(isAck bool, data []byte) ([]byte, []byte, []byte, []byte, error) {\n\n\t// We must have enough data to read the length.\n\tif len(data) < binaryNoncePosition {\n\t\treturn nil, nil, nil, nil, ErrInvalidTokenLength\n\t}\n\n\theader := make([]byte, headerLength)\n\tcopy(header, data[:lengthPosition])\n\n\tsigPosition := int(binary.BigEndian.Uint16(data[lengthPosition : lengthPosition+2]))\n\t// The token must be long enough to have at least 1 byte of signature.\n\tif len(data) < sigPosition+1 || sigPosition == 0 {\n\t\treturn nil, nil, nil, nil, ErrMissingSignature\n\t}\n\n\tvar nonce []byte\n\n\tif !isAck {\n\t\tnonce = make([]byte, 16)\n\t\tcopy(nonce, data[binaryNoncePosition:binaryNoncePosition+NonceLength])\n\t}\n\n\t// Only if nonce is found do we need to advance. 
So, use the\n\t// actual length of the nonce and not just a constant here.\n\ttoken := data[binaryNoncePosition+len(nonce) : sigPosition]\n\n\tsig := data[sigPosition:]\n\treturn header, nonce, token, sig, nil\n}\n\n// symmetricKey returns a symmetric key for encryption\nfunc symmetricKey(privateKey *ecdsa.PrivateKey, remotePublic *ecdsa.PublicKey) ([]byte, error) {\n\n\tc := elliptic.P256()\n\n\tx, _ := c.ScalarMult(remotePublic.X, remotePublic.Y, privateKey.D.Bytes())\n\n\treturn hash500(x.Bytes(), nil)\n}\n\nfunc uncompressTags(binaryClaims *BinaryJWTClaims, publicKeyClaims []string) {\n\n\tbinaryClaims.T = append(binaryClaims.CT, enforcerconstants.TransmitterLabel+\"=\"+binaryClaims.ID)\n\n\tfor _, pc := range publicKeyClaims {\n\n\t\tif len(pc) <= claimsheader.CompressedTagLengthV1 {\n\t\t\tbinaryClaims.T = append(binaryClaims.T, pc)\n\t\t\tcontinue\n\t\t}\n\n\t\tbinaryClaims.T = append(binaryClaims.T, pc[:claimsheader.CompressedTagLengthV1])\n\t}\n}\n"
  },
  {
    "path": "controller/pkg/tokens/binaryjwt314.go",
    "content": "package tokens\n\nimport (\n\t\"bytes\"\n\t\"crypto\"\n\t\"crypto/ecdsa\"\n\t\"fmt\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\tlocalcrypto \"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n)\n\nfunc (c *BinaryJWTConfig) getSharedKey314(pub interface{}, priv interface{}) ([]byte, error) {\n\n\tpublicKey := pub.(*ecdsa.PublicKey)\n\tprivateKey := priv.(*ecdsa.PrivateKey)\n\n\thashKey := string(localcrypto.EncodePublicKeyV2(publicKey)) + string(localcrypto.EncodePrivateKey(privateKey))\n\n\tsecretKey := c.sharedKeys.Get(hashKey)\n\tif secretKey != nil {\n\t\treturn secretKey, nil\n\t}\n\n\tsecretKey, err := symmetricKey(privateKey, publicKey)\n\tif err != nil {\n\t\treturn nil, logError(ErrSharedKeyHashFailed, err.Error())\n\t}\n\n\t// Add it in the cache\n\tc.sharedKeys.Put(hashKey, secretKey)\n\n\treturn secretKey, nil\n}\n\nfunc hash314(buf []byte, key []byte) ([]byte, error) {\n\n\thasher := crypto.SHA256.New()\n\tif _, err := hasher.Write(buf); err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to hash data structure: %s\", err)\n\t}\n\n\treturn hasher.Sum(key), nil\n}\n\nfunc (c *BinaryJWTConfig) verifyWithSharedKey314(buf []byte, key []byte, sig []byte) error {\n\n\tps, err := hash314(buf, key)\n\tif err != nil {\n\t\treturn logError(ErrTokenHashFailed, err.Error())\n\t}\n\n\tif !bytes.Equal(ps, sig) {\n\t\treturn logError(ErrSignatureMismatch, fmt.Sprintf(\"unable to verify token with shared secret: they don't match %d %d \", len(ps), len(sig)))\n\t}\n\n\treturn nil\n}\n\nfunc (c *BinaryJWTConfig) process314Protocol(isSynAck bool, token []byte, secrets secrets.Secrets, connClaims *ConnectionClaims, binaryClaims *BinaryJWTClaims, sig []byte) ([]byte, *pkiverifier.PKIControllerInfo, error) {\n\n\tvar secretKey []byte\n\tpublicKey, publicKeyClaims, _, controller, err := secrets.KeyAndClaims(binaryClaims.SignerKey)\n\tif err != nil || publicKey == 
nil {\n\t\treturn nil, nil, ErrPublicKeyFailed\n\t}\n\n\t// Since we know that the signature is valid, we check if the token is already in\n\t// the cache and accept it. We do that after the verification, in case the\n\t// public key has expired and we still have it in the cache. This is true for syn only\n\tif !isSynAck {\n\t\tif cachedClaims, cerr := c.tokenCache.Get(string(token)); cerr == nil {\n\t\t\t*connClaims = *cachedClaims.(*ConnectionClaims)\n\n\t\t\tsecretKey, err = c.getSharedKey314(publicKey, secrets.EncodingKey())\n\t\t\tif err != nil {\n\t\t\t\treturn nil, nil, err\n\t\t\t}\n\t\t\treturn secretKey, controller, nil\n\t\t}\n\t}\n\n\t// Uncompress the tags and add the public key claims to the tags that\n\t// we return.\n\tuncompressTags(binaryClaims, publicKeyClaims)\n\tCopyToConnectionClaims(binaryClaims, connClaims)\n\n\tif isSynAck {\n\t\tbinaryClaims.RMT = nil\n\t\tsecretKey, err = c.getSharedKey314(publicKey, secrets.EncodingKey())\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\n\t\tif err := c.verifyWithSharedKey314(token, secretKey, sig); err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\n\t} else {\n\t\t// If the token is not in the cache, we validate the token with the\n\t\t// provided and validated public key. 
We will then add it in the\n\t\t// cache for future reference.\n\n\t\tif err := c.verify(token, sig, publicKey.(*ecdsa.PublicKey)); err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\n\t\tsecretKey, err = c.getSharedKey314(publicKey, secrets.EncodingKey())\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t}\n\n\t// create a copy of the connClaims as this conn claims belongs to the original Connection.\n\t// It would have been fine to store the connclaims in the cache here, but the gc will not be able to\n\t// reclaim memory of the entire connection.\n\tif !isSynAck {\n\t\tconnClaimsCopy := new(ConnectionClaims)\n\t\t*connClaimsCopy = *connClaims\n\t\tc.tokenCache.AddOrUpdate(string(token), connClaimsCopy)\n\t}\n\n\treturn secretKey, controller, nil\n}\n"
  },
  {
    "path": "controller/pkg/tokens/binaryjwt500.go",
    "content": "package tokens\n\nimport (\n\t\"bytes\"\n\t\"crypto\"\n\t\"crypto/ecdsa\"\n\t\"fmt\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.uber.org/zap\"\n)\n\nfunc (c *BinaryJWTConfig) process500Protocol(isSynAck bool, token []byte, privateKey *ephemeralkeys.PrivateKey, secrets secrets.Secrets, connClaims *ConnectionClaims, binaryClaims *BinaryJWTClaims, sig []byte) ([]byte, *pkiverifier.PKIControllerInfo, error) {\n\n\tpublicKey, publicKeyClaims, _, controller, err := secrets.KeyAndClaims(binaryClaims.SignerKey)\n\tif err != nil || publicKey == nil {\n\t\treturn nil, nil, ErrPublicKeyFailed\n\t}\n\n\tvar remotePublicKeyString, remotePublicKeySig string\n\tvar isV1Proto bool\n\n\t// Since we know that the signature is valid, we check if the token is already in\n\t// the cache and accept it. We do that after the verification, in case the\n\t// public key has expired and we still have it in the cache. 
This is true for syn only\n\tif !isSynAck {\n\t\tif cachedClaims, cerr := c.tokenCache.Get(string(token)); cerr == nil {\n\t\t\t*connClaims = *cachedClaims.(*ConnectionClaims)\n\t\t\tif len(connClaims.DEKV2) == 0 {\n\t\t\t\tremotePublicKeyString = string(connClaims.DEKV1)\n\t\t\t\tisV1Proto = true\n\t\t\t} else {\n\t\t\t\tremotePublicKeyString = string(connClaims.DEKV2)\n\t\t\t\tisV1Proto = false\n\t\t\t}\n\n\t\t\tsecretKey, err := c.getSecretKey(privateKey, remotePublicKeyString, isV1Proto)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, nil, err\n\t\t\t}\n\t\t\treturn secretKey, controller, nil\n\t\t}\n\t}\n\n\t// Uncompress the tags and add the public key claims to the tags that\n\t// we return.\n\tuncompressTags(binaryClaims, publicKeyClaims)\n\tCopyToConnectionClaims(binaryClaims, connClaims)\n\n\tif len(connClaims.DEKV2) == 0 {\n\t\tremotePublicKeyString = string(connClaims.DEKV1)\n\t\tremotePublicKeySig = string(connClaims.SDEKV1)\n\t\tisV1Proto = true\n\t} else {\n\t\tremotePublicKeyString = string(connClaims.DEKV2)\n\t\tremotePublicKeySig = string(connClaims.SDEKV2)\n\t\tisV1Proto = false\n\t}\n\n\t// We haven't seen this token before, so we will validate it with the\n\t// public key and cache it for future calls.\n\n\t// First we check if the RMT attribute is set. This will indicate\n\t// that this is a SynAck packet that carries the remote nonce, and we\n\t// can use the shared key approach. 
In the protocol we mandate\n\t// that RMT in the SynAck is populated since it carries the nonce\n\t// of the remote.\n\tif isSynAck {\n\t\tbinaryClaims.RMT = nil\n\n\t\t// We don't need to verify the ephemeral key if we have done it already.\n\t\tif _, cerr := c.tokenCache.Get(remotePublicKeyString); cerr != nil {\n\t\t\tif err := c.verify([]byte(remotePublicKeyString), []byte(remotePublicKeySig), publicKey.(*ecdsa.PublicKey)); err != nil {\n\t\t\t\tzap.L().Error(\"Ephemeral key can not be verified\", zap.Error(err))\n\t\t\t\treturn nil, nil, err\n\t\t\t}\n\n\t\t\tc.tokenCache.AddOrUpdate(remotePublicKeyString, \"\")\n\t\t}\n\t} else {\n\t\t// If the token is not in the cache, we validate the token with the\n\t\t// provided and validated public key. We will then add it in the\n\t\t// cache for future reference.\n\n\t\tif err := c.verify(token, sig, publicKey.(*ecdsa.PublicKey)); err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t}\n\n\tsecretKey, err := c.getSecretKey(privateKey, remotePublicKeyString, isV1Proto)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\t// create a copy of the connClaims as this conn claims belongs to the original Connection.\n\t// It would have been fine to store the connclaims in the cache here, but the gc will not be able to\n\t// reclaim memory of the entire connection.\n\tif !isSynAck {\n\t\tconnClaimsCopy := new(ConnectionClaims)\n\t\t*connClaimsCopy = *connClaims\n\t\tc.tokenCache.AddOrUpdate(string(token), connClaimsCopy)\n\t}\n\n\treturn secretKey, controller, nil\n}\n\nfunc hash500(buf []byte, key []byte) ([]byte, error) {\n\n\tnewBuf := make([]byte, 0, len(buf)+len(key))\n\tnewBuf = append(newBuf, buf...)\n\tnewBuf = append(newBuf, key...)\n\n\thasher := crypto.SHA256.New()\n\tif _, err := hasher.Write(newBuf); err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to hash data structure: %s\", err)\n\t}\n\n\treturn hasher.Sum(nil), nil\n}\n\nfunc (c *BinaryJWTConfig) verifyWithSharedKey500(buf []byte, key []byte, sig []byte) 
error {\n\n\tps, err := hash500(buf, key)\n\tif err != nil {\n\t\treturn logError(ErrTokenHashFailed, err.Error())\n\t}\n\n\tif !bytes.Equal(ps, sig) {\n\t\treturn logError(ErrSignatureMismatch, fmt.Sprintf(\"unable to verify token with shared secret: they don't match %d %d \", len(ps), len(sig)))\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "controller/pkg/tokens/binaryjwt_test.go",
    "content": "// +build !windows\n\npackage tokens\n\nimport (\n\t\"bytes\"\n\t\"crypto/ecdsa\"\n\t\"crypto/elliptic\"\n\t\"crypto/rand\"\n\t\"encoding/gob\"\n\t\"math/big\"\n\t\"reflect\"\n\t\"strconv\"\n\t\"testing\"\n\t\"time\"\n\n\tenforcerconstants \"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets/compactpki\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/crypto\"\n\t\"gotest.tools/assert\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nvar (\n\trmt       = \"1234567890123456\"\n\tlcl       = \"098765432109876\"\n\tbvalidity = time.Second * 10\n\n\theader = claimsheader.NewClaimsHeader(\n\t\tclaimsheader.OptionCompressionType(claimsheader.CompressionTypeV1),\n\t)\n)\nvar (\n\tkeyPEM = `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIPkiHqtH372JJdAG/IxJlE1gv03cdwa8Lhg2b3m/HmbyoAoGCCqGSM49\nAwEHoUQDQgAEAfAL+AfPj/DnxrU6tUkEyzEyCxnflOWxhouy1bdzhJ7vxMb1vQ31\n8ZbW/WvMN/ojIXqXYrEpISoojznj46w64w==\n-----END EC PRIVATE KEY-----`\n\tcaPool = `-----BEGIN CERTIFICATE-----\nMIIBhTCCASwCCQC8b53yGlcQazAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABJxneTUqhbtgEIwpKUUzwz3h92SqcOdIw3mfQkMjg3Vobvr6JKlpXYe9xhsN\nrygJmLhMAN9gjF9qM9ybdbe+m3owCgYIKoZIzj0EAwIDRwAwRAIgC1fVMqdBy/o3\njNUje/Hx0fZF9VDyUK4ld+K/wF3QdK4CID1ONj/Kqinrq2OpjYdkgIjEPuXoOoR1\ntCym8dnq4wtH\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIB3jCCAYOgAwIBAgIJALsW7pyC2ERQMAoGCCqGSM49BAMCMEsxCzAJBgNVBAYT\nAlVTMQswCQYDVQQIDAJDQTEMMAoGA1UEBwwDU0pDMRAwDgYDVQQKDAdUcmlyZW1l\nMQ8wDQYDVQQDDAZ1YnVudHUwHhcNMTYwOTI3MjI0OTAwWhcNMjYwOTI1MjI0OTAw\nWjBLMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4G\nA1UECgwHVHJpcmVtZTEPMA0GA1UEAwwGdWJ1bnR1MFkwEwYHKoZIzj0CAQYIKoZI\nzj0DAQcDQgAE4c2Fd7XeIB1Vfs51fWwREfLLDa55J+NBalV12CH7YEAnEXjl47aV\ncmNqcAtdMUpf2oz9nFVI81bgO+OSudr3CqNQME4wHQYDVR0OBBYEFOBftuI09mmu\nrXjqDyIta1gT8lqvMB8GA1UdIwQYMBaAFOBftuI09mmurXjqDyIta1gT8lqvMAwG\nA1UdEwQFMAMBAf8wCgYIKoZIzj0EAwIDSQAwRgIhAMylAHhbFA0KqhXIFiXNpEbH\nJKaELL6UXXdeQ5yup8q+AiEAh5laB9rbgTymjaANcZ2YzEZH4VFS3CKoSdVqgnwC\ndW4=\n-----END CERTIFICATE-----`\n\n\tcertPEM = `-----BEGIN CERTIFICATE-----\nMIIBhjCCASwCCQCPCdgp39gHJTAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABAHwC/gHz4/w58a1OrVJBMsxMgsZ35TlsYaLstW3c4Se78TG9b0N9fGW1v1r\nzDf6IyF6l2KxKSEqKI854+OsOuMwCgYIKoZIzj0EAwIDSAAwRQIgQwQn0jnK/XvD\nKxgQd/0pW5FOAaB41cMcw4/XVlphO1oCIQDlGie+WlOMjCzrV0Xz+XqIIi1pIgPT\nIG7Nv+YlTVp5qA==\n-----END CERTIFICATE-----`\n)\n\nfunc createCompactPKISecrets(tags []string) (secrets.Secrets, error) {\n\ttxtKey, cert, _, err := crypto.LoadAndVerifyECSecrets([]byte(keyPEM), []byte(certPEM), []byte(caPool))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tissuer := pkiverifier.NewPKIIssuer(txtKey)\n\ttxtToken, err := issuer.CreateTokenFromCertificate(cert, tags)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tControllerInfo := &secrets.ControllerInfo{\n\t\tPublicKey: []byte(certPEM),\n\t}\n\n\ttokenKeyPEMs := []*secrets.ControllerInfo{ControllerInfo}\n\n\tscrts, err := compactpki.NewCompactPKIWithTokenCA([]byte(keyPEM), []byte(certPEM), []byte(caPool), tokenKeyPEMs, txtToken, 
claimsheader.CompressionTypeV1)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn scrts, nil\n}\n\nfunc createUncompressedTags(pu string) *policy.TagStore {\n\ttags := []string{\n\t\tenforcerconstants.TransmitterLabel + \"=\" + pu,\n\t}\n\n\treturn policy.NewTagStoreFromSlice(tags)\n}\n\nfunc createCompressedTagArray() *policy.TagStore {\n\treturn policy.NewTagStoreFromSlice([]string{\n\t\t\"vpdCmPoRCx7k\",\n\t\t\"8wLk0bOXS9w0\",\n\t\t\"GUmf49pmErzC\",\n\t\t\"7J+9IX0dRGog\",\n\t\t\"3BQgvLnKvSUj\",\n\t})\n}\n\nfunc Test_NewBinaryJWT(t *testing.T) {\n\tConvey(\"When I try to instantiate a new binary JWT, it should succeed\", t, func() {\n\t\tb, err := NewBinaryJWT(bvalidity, \"0123456789012345678901234567890123456789\")\n\t\tSo(err, ShouldBeNil)\n\t\tSo(b, ShouldNotBeNil)\n\t\tSo(b.ValidityPeriod, ShouldEqual, bvalidity)\n\t\tSo(b.Issuer, ShouldEqual, \"0123456789012345678901234567890123456789\")\n\t\tSo(b.tokenCache, ShouldNotBeNil)\n\t\tSo(b.sharedKeys, ShouldNotBeNil)\n\t})\n}\n\nfunc Test_EncodeDecode(t *testing.T) {\n\tConvey(\"Given a valid binary JWT issuer\", t, func() {\n\t\tscrts, err := createCompactPKISecrets([]string{\"kDMRXWckV9k6mGuJ\", \"xyz\", \"eJ1s03u72o6i\"})\n\t\tSo(err, ShouldBeNil)\n\n\t\tb, err := NewBinaryJWT(bvalidity, \"0123456789012345678901234567890123456789\")\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"When I encode and decode a bad Syn Packet\", func() {\n\n\t\t\ttoken := make([]byte, 400)\n\t\t\ttoken = append(token, []byte(\"abcdefghijklmnopqrstuvwxyz\")...)\n\n\t\t\tConvey(\"When I decode the token, it should throw an error\", func() {\n\t\t\t\t_, _, _, _, _, err := b.DecodeSyn(false, token, nil, scrts, nil)\n\t\t\t\tSo(err, ShouldResemble, ErrMissingSignature)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I encode and decode a nil Syn Packet\", func() {\n\n\t\t\tConvey(\"When I decode the token, it should return an error\", func() {\n\t\t\t\t_, _, _, _, _, err := b.DecodeSyn(false, nil, nil, scrts, nil)\n\t\t\t\tSo(err, 
ShouldResemble, ErrInvalidTokenLength)\n\t\t\t})\n\t\t})\n\t})\n}\n\ntype TestPublicKey struct {\n\tX *big.Int\n\tY *big.Int\n}\n\nfunc testencodeKey() ([]byte, error) {\n\tvar data bytes.Buffer\n\tremotePrivateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\tif err != nil {\n\t\treturn []byte{}, err\n\t}\n\tp := &TestPublicKey{\n\t\tX: remotePrivateKey.PublicKey.X,\n\t\tY: remotePrivateKey.PublicKey.Y,\n\t}\n\tenc := gob.NewEncoder(&data)\n\tif err := enc.Encode(p); err != nil {\n\t\treturn nil, err\n\t}\n\treturn data.Bytes(), nil\n\n}\n\ntype PublicKeys struct {\n\tX *big.Int\n\tY *big.Int\n}\n\nfunc Test_BinaryTokenLengths(t *testing.T) {\n\tConvey(\"Given a JWT valid engine with a valid Compact PKI key \", t, func() {\n\t\tscrts, err := createCompactPKISecrets(nil)\n\t\tSo(err, ShouldBeNil)\n\n\t\tt, err := NewBinaryJWT(bvalidity, \"01234567890123456789012345678901234567\")\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"When I try with 64 12-byte tags, the max length must not be exceeded\", func() {\n\n\t\t\tprivatekey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tpublickey := &PublicKeys{\n\t\t\t\tX: privatekey.PublicKey.X,\n\t\t\t\tY: privatekey.PublicKey.Y,\n\t\t\t}\n\n\t\t\tvar data bytes.Buffer\n\n\t\t\tenc := gob.NewEncoder(&data)\n\t\t\terr = enc.Encode(publickey)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tmsg := \"Length of bytes \" + strconv.Itoa(len(data.Bytes()))\n\t\t\tConvey(msg, func() {})\n\n\t\t\tvar compressedTags []string\n\t\t\tb := make([]byte, 12)\n\t\t\tfor i := 0; i < 56; i++ {\n\t\t\t\t_, err := rand.Read(b)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tcompressedTags = append(compressedTags, string(b))\n\t\t\t}\n\n\t\t\tclaims := &ConnectionClaims{\n\t\t\t\tID:  \"5c5baa93d5f54a3019bede4e\",\n\t\t\t\tRMT: []byte(rmt),\n\t\t\t\tLCL: []byte(lcl),\n\t\t\t\tCT:  policy.NewTagStoreFromSlice(compressedTags),\n\t\t\t}\n\n\t\t\tvar encodedBuf [ClaimsEncodedBufSize]byte\n\n\t\t\ttoken, err := 
t.CreateSynToken(claims, encodedBuf[:], []byte(lcl), claimsheader.NewClaimsHeader(), scrts)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(token), ShouldBeLessThan, 1420)\n\t\t})\n\n\t})\n\n\tConvey(\"Given a JWT valid engine with a valid Compact PKI key \", t, func() {\n\t\tscrts, err := createCompactPKISecrets(nil)\n\t\tSo(err, ShouldBeNil)\n\n\t\tt, err := NewBinaryJWT(bvalidity, \"0123456789012345678901234567890123456789\")\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"When I try with 64 12-byte tags, the max length must not be exceeded\", func() {\n\n\t\t\tprivatekey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tpublickey := &PublicKeys{\n\t\t\t\tX: privatekey.PublicKey.X,\n\t\t\t\tY: privatekey.PublicKey.Y,\n\t\t\t}\n\n\t\t\tvar data bytes.Buffer\n\n\t\t\tenc := gob.NewEncoder(&data)\n\t\t\terr = enc.Encode(publickey)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tmsg := \"Length of bytes \" + strconv.Itoa(len(data.Bytes()))\n\t\t\tConvey(msg, func() {})\n\n\t\t\tvar compressedTags []string\n\t\t\tb := make([]byte, 8)\n\t\t\tfor i := 0; i < 80; i++ {\n\t\t\t\t_, err := rand.Read(b)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tcompressedTags = append(compressedTags, string(b))\n\t\t\t}\n\n\t\t\tclaims := &ConnectionClaims{\n\t\t\t\tID:  \"5c5baa93d5f54a3019bede4e\",\n\t\t\t\tRMT: []byte(rmt),\n\t\t\t\tLCL: []byte(lcl),\n\t\t\t\tCT:  policy.NewTagStoreFromSlice(compressedTags),\n\t\t\t}\n\t\t\tvar encodedBuf [ClaimsEncodedBufSize]byte\n\t\t\ttoken, err := t.CreateSynToken(claims, encodedBuf[:], []byte(lcl), claimsheader.NewClaimsHeader(), scrts)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(token), ShouldBeLessThan, 1420)\n\t\t})\n\n\t})\n}\n\nfunc Test_PANWIdentitySynToken(t *testing.T) {\n\tConvey(\"Given a JWT valid engine with a valid Compact PKI key \", t, func() {\n\t\tscrts, err := createCompactPKISecrets(nil)\n\t\tSo(err, ShouldBeNil)\n\n\t\tt1, err := NewBinaryJWT(bvalidity, \"01234567890123456789012345678901234567\")\n\t\tSo(err, 
ShouldBeNil)\n\n\t\tConvey(\"PANWIDENTITY string should be there at 64 byte boundary from the end of the token\", func() {\n\n\t\t\tprivatekey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tpublickey := &PublicKeys{\n\t\t\t\tX: privatekey.PublicKey.X,\n\t\t\t\tY: privatekey.PublicKey.Y,\n\t\t\t}\n\n\t\t\tvar data bytes.Buffer\n\n\t\t\tenc := gob.NewEncoder(&data)\n\t\t\terr = enc.Encode(publickey)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tmsg := \"Length of bytes \" + strconv.Itoa(len(data.Bytes()))\n\t\t\tConvey(msg, func() {})\n\n\t\t\tvar compressedTags []string\n\t\t\tb := make([]byte, 12)\n\t\t\tfor i := 0; i < 56; i++ {\n\t\t\t\t_, err := rand.Read(b)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tcompressedTags = append(compressedTags, string(b))\n\t\t\t}\n\n\t\t\tclaims := &ConnectionClaims{\n\t\t\t\tID:  \"5c5baa93d5f54a3019bede4e\",\n\t\t\t\tRMT: []byte(rmt),\n\t\t\t\tLCL: []byte(lcl),\n\t\t\t\tCT:  policy.NewTagStoreFromSlice(compressedTags),\n\t\t\t}\n\n\t\t\tvar encodedBuf [ClaimsEncodedBufSize]byte\n\n\t\t\tsynToken, err := t1.CreateSynToken(claims, encodedBuf[:], []byte(lcl), claimsheader.NewClaimsHeader(), scrts)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tstartOffset := len(synToken) - 64 - len([]byte(\"PANWIDENTITY\"))\n\t\t\tendOffset := len(synToken) - 64\n\t\t\tassert.Equal(t, string(synToken[startOffset:endOffset]), \"PANWIDENTITY\", \"string should match PANWIDENTITY\")\n\t\t})\n\n\t})\n}\n\nfunc Test_PANWIdentityAckToken(t *testing.T) {\n\n\tConvey(\"Given a JWT valid engine with a valid Compact PKI key \", t, func() {\n\t\tt1, err := NewBinaryJWT(bvalidity, \"0123456789012345678901234567890123456789\")\n\t\tSo(err, ShouldBeNil)\n\n\t\tvar compressedTags []string\n\t\tb := make([]byte, 12)\n\t\tfor i := 0; i < 56; i++ {\n\t\t\t_, err := rand.Read(b)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tcompressedTags = append(compressedTags, string(b))\n\t\t}\n\n\t\tclaims := &ConnectionClaims{\n\t\t\tID:  
\"5c5baa93d5f54a3019bede4e\",\n\t\t\tRMT: []byte(rmt),\n\t\t\tLCL: []byte(lcl),\n\t\t\tCT:  policy.NewTagStoreFromSlice(compressedTags),\n\t\t}\n\n\t\tvar encodedBuf [ClaimsEncodedBufSize]byte\n\n\t\tackToken, err := t1.CreateAckToken(false, []byte(\"hello\"), claims, encodedBuf[:], claimsheader.NewClaimsHeader())\n\t\tSo(err, ShouldBeNil)\n\t\tstartOffset := len(ackToken) - 64 - len([]byte(\"PANWIDENTITY\"))\n\t\tendOffset := len(ackToken) - 64\n\t\tassert.Equal(t, string(ackToken[startOffset:endOffset]), \"PANWIDENTITY\", \"string should match PANWIDENTITY\")\n\t})\n}\n\nfunc Test_EncDecClaims(t *testing.T) {\n\tvar compressedTags []string\n\tb := make([]byte, 12)\n\tfor i := 0; i < 56; i++ {\n\t\trand.Read(b) //nolint\n\t\tcompressedTags = append(compressedTags, string(b))\n\t}\n\n\t// Encode the claims in a buffer.\n\n\tclaims := &ConnectionClaims{\n\t\tID:  \"5c5baa93d5f54a3019bede4e\",\n\t\tRMT: []byte(rmt),\n\t\tLCL: []byte(lcl),\n\t\tCT:  policy.NewTagStoreFromSlice(compressedTags),\n\t}\n\tallclaims := ConvertToBinaryClaims(claims, 1*time.Minute)\n\n\tvar encodedBuf [ClaimsEncodedBufSize]byte\n\tencBuf := encodedBuf[:]\n\tencode(allclaims, &encBuf) //nolint\n\n\tdecClaims, _ := decode(encBuf)\n\teq := reflect.DeepEqual(allclaims, decClaims)\n\n\tassert.Equal(t, eq, true, \"decoded claims should be equal to original claims which was encoded\")\n}\n"
  },
  {
    "path": "controller/pkg/tokens/binaryjwtclaimtypes.go",
    "content": "package tokens\n\nimport (\n\t\"time\"\n\n\tjwt \"github.com/dgrijalva/jwt-go\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// BinaryJWTClaims captures all the custom claims\ntype BinaryJWTClaims struct {\n\t// Tags\n\tT []string `codec:\",omitempty\"`\n\t// Compressed tags\n\tCT []string `codec:\",omitempty\"`\n\t// RMT is the nonce of the remote that has to be signed in the JWT\n\tRMT []byte `codec:\",omitempty\"`\n\t// LCL is the nonce of the local node that has to be signed\n\tLCL []byte `codec:\",omitempty\"`\n\t// DEK is the datapath ephemeral key used to derive shared keys during the handshake\n\tDEK []byte `codec:\",omitempty\"`\n\t// SDEK is the signature of the ephemeral key\n\tSDEK []byte `codec:\",omitempty\"`\n\t// ID is the source PU ID\n\tID string `codec:\",omitempty\"`\n\t// ExpiresAt is the expiration time\n\tExpiresAt int64 `codec:\",omitempty\"`\n\t// SignerKey\n\tSignerKey []byte `codec:\",omitempty\"`\n\t// P holds the ping payload\n\tP *policy.PingPayload `codec:\",omitempty\"`\n\t// DEKV2 is the datapath ephemeral key V2 used to derive shared keys during the handshake\n\tDEKV2 []byte `codec:\",omitempty\"`\n\t// SDEKV2 is the signature of the ephemeral key V2\n\tSDEKV2 []byte `codec:\",omitempty\"`\n}\n\n// JWTClaims captures all the custom claims\ntype JWTClaims struct {\n\t*ConnectionClaims\n\tjwt.StandardClaims\n}\n\n// CopyToConnectionClaims copies the binary jwt claims to connection claims\nfunc CopyToConnectionClaims(b *BinaryJWTClaims, connClaims *ConnectionClaims) {\n\t*connClaims = ConnectionClaims{\n\t\tT:      policy.NewTagStoreFromSlice(b.T),\n\t\tCT:     policy.NewTagStoreFromSlice(b.CT),\n\t\tRMT:    b.RMT,\n\t\tLCL:    b.LCL,\n\t\tSDEKV1: b.SDEK,\n\t\tDEKV1:  b.DEK,\n\t\tSDEKV2: b.SDEKV2,\n\t\tDEKV2:  b.DEKV2,\n\t\tID:     b.ID,\n\t\tP:      b.P,\n\t}\n}\n\n// ConvertToJWTClaims converts to old claims\nfunc ConvertToJWTClaims(b *BinaryJWTClaims) *JWTClaims {\n\treturn &JWTClaims{\n\t\tConnectionClaims: 
&ConnectionClaims{\n\t\t\tT:      policy.NewTagStoreFromSlice(b.T),\n\t\t\tCT:     policy.NewTagStoreFromSlice(b.CT),\n\t\t\tRMT:    b.RMT,\n\t\t\tLCL:    b.LCL,\n\t\t\tSDEKV1: b.SDEK,\n\t\t\tDEKV1:  b.DEK,\n\t\t\tSDEKV2: b.SDEKV2,\n\t\t\tDEKV2:  b.DEKV2,\n\t\t\tID:     b.ID,\n\t\t\tP:      b.P,\n\t\t},\n\t\tStandardClaims: jwt.StandardClaims{\n\t\t\tExpiresAt: b.ExpiresAt,\n\t\t},\n\t}\n}\n\n// ConvertToBinaryClaims converts connection claims back to binary claims.\nfunc ConvertToBinaryClaims(j *ConnectionClaims, validity time.Duration) *BinaryJWTClaims {\n\tb := &BinaryJWTClaims{\n\t\tRMT:       j.RMT,\n\t\tLCL:       j.LCL,\n\t\tSDEK:      j.SDEKV1,\n\t\tDEK:       j.DEKV1,\n\t\tSDEKV2:    j.SDEKV2,\n\t\tDEKV2:     j.DEKV2,\n\t\tID:        j.ID,\n\t\tExpiresAt: time.Now().Add(validity).Unix(),\n\t\tP:         j.P,\n\t}\n\tif j.T != nil {\n\t\tb.T = j.T.GetSlice()\n\t}\n\tif j.CT != nil {\n\t\tb.CT = j.CT.GetSlice()\n\t}\n\n\treturn b\n}\n"
  },
  {
    "path": "controller/pkg/tokens/errors.go",
    "content": "package tokens\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n)\n\n// Custom errors used by this package.\nvar (\n\tErrTokenTooSmall           = errors.New(\"randomize: token is small\")\n\tErrTokenEncodeFailed       = errors.New(\"unable to encode token\")\n\tErrTokenHashFailed         = errors.New(\"unable to hash token\")\n\tErrTokenSignFailed         = errors.New(\"unable to sign token\")\n\tErrSharedSecretMissing     = errors.New(\"secret not found\")\n\tErrInvalidSecret           = errors.New(\"invalid secret\")\n\tErrInvalidTokenLength      = errors.New(\"not enough data\")\n\tErrMissingSignature        = errors.New(\"signature is missing\")\n\tErrInvalidSignature        = errors.New(\"invalid signature\")\n\tErrCompressedTagMismatch   = errors.New(\"Compressed tag mismatch\")\n\tErrDatapathVersionMismatch = errors.New(\"Datapath version mismatch\")\n\tErrTokenDecodeFailed       = errors.New(\"unable to decode token\")\n\tErrTokenExpired            = errors.New(\"token expired\")\n\tErrSignatureMismatch       = errors.New(\"signature mismatch\")\n\tErrSharedKeyHashFailed     = errors.New(\"unable to hash shared key\")\n\tErrPublicKeyFailed         = errors.New(\"unable to verify public key\")\n)\n\n// logError is a convenience function which wraps err and msg and returns the resulting error.\nfunc logError(err error, msg string) error {\n\n\tif err == nil {\n\t\treturn nil\n\t}\n\n\treturn fmt.Errorf(\"err = %s, msg = %s\", err.Error(), msg)\n}\n"
  },
  {
    "path": "controller/pkg/tokens/jwt.go",
    "content": "package tokens\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"encoding/binary\"\n\t\"errors\"\n\t\"fmt\"\n\t\"time\"\n\n\tjwt \"github.com/dgrijalva/jwt-go\"\n\tenforcerconstants \"go.aporeto.io/trireme-lib/controller/internal/enforcer/constants\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.aporeto.io/trireme-lib/utils/cache\"\n)\n\nvar (\n\tnoncePosition = 2\n\ttokenPosition = 2 + NonceLength\n)\n\n// JWTConfig configures the JWT token generator with the standard parameters. One\n// configuration is assigned to each server\ntype JWTConfig struct {\n\t// ValidityPeriod is the validity period of the JWT\n\tValidityPeriod time.Duration\n\t// Issuer is the server that issues the JWT\n\tIssuer string\n\t// signMethod is the method used to sign the JWT\n\tsignMethod jwt.SigningMethod\n\t// tokenCache caches decoded tokens\n\ttokenCache cache.DataStore\n\t// compressionType determines whether compression should be used when creating tokens\n\tcompressionType claimsheader.CompressionType\n\t// compressionTagLength is the length of tags based on compressionType\n\tcompressionTagLength int\n\t// datapathVersion is the current version of the datapath\n\tdatapathVersion claimsheader.DatapathVersion\n}\n\n// JWTClaims captures all the custom claims\ntype JWTClaims struct {\n\t*ConnectionClaims\n\tjwt.StandardClaims\n}\n\n// NewJWT creates a new JWT token processor\nfunc NewJWT(validity time.Duration, issuer string, s secrets.Secrets) (*JWTConfig, error) {\n\n\tif len(issuer) > MaxServerName {\n\t\treturn nil, fmt.Errorf(\"server id should be max %d chars. 
got %s\", MaxServerName, issuer)\n\t}\n\n\tfor i := len(issuer); i < MaxServerName; i++ {\n\t\tissuer = issuer + \" \"\n\t}\n\n\tvar signMethod jwt.SigningMethod\n\tcompressionType := claimsheader.CompressionTypeNone\n\n\tif s == nil {\n\t\treturn nil, errors.New(\"secrets can not be nil\")\n\t}\n\n\tswitch s.Type() {\n\tcase secrets.PKICompactType:\n\t\tsignMethod = jwt.SigningMethodES256\n\t\tcompressionType = s.(*secrets.CompactPKI).Compressed\n\tdefault:\n\t\tsignMethod = jwt.SigningMethodNone\n\t}\n\n\treturn &JWTConfig{\n\t\tValidityPeriod:       validity,\n\t\tIssuer:               issuer,\n\t\tsignMethod:           signMethod,\n\t\ttokenCache:           cache.NewCacheWithExpiration(\"JWTTokenCache\", time.Millisecond*500),\n\t\tcompressionType:      compressionType,\n\t\tcompressionTagLength: claimsheader.CompressionTypeToTagLength(compressionType),\n\t\tdatapathVersion:      claimsheader.DatapathVersion1,\n\t}, nil\n}\n\n// CreateAndSign  creates a new token, attaches an ephemeral key pair and signs with the issuer\n// key. It also randomizes the source nonce of the token. 
It returns back the token and the private key.\nfunc (c *JWTConfig) CreateAndSign(isAck bool, claims *ConnectionClaims, nonce []byte, claimsHeader *claimsheader.ClaimsHeader, secrets secrets.Secrets) (token []byte, err error) {\n\n\t// Set the appropriate claims header\n\tclaimsHeader.SetCompressionType(c.compressionType)\n\tclaimsHeader.SetDatapathVersion(c.datapathVersion)\n\n\t// Combine the application claims with the standard claims\n\tallclaims := &JWTClaims{\n\t\t&ConnectionClaims{\n\t\t\tT:   claims.T,\n\t\t\tEK:  claims.EK,\n\t\t\tRMT: claims.RMT,\n\t\t\tH:   claimsHeader.ToBytes(),\n\t\t\tID:  claims.ID,\n\t\t\tCT:  claims.CT,\n\t\t},\n\t\tjwt.StandardClaims{\n\t\t\tExpiresAt: time.Now().Add(c.ValidityPeriod).Unix(),\n\t\t},\n\t}\n\n\t// For backward compatibility, keep the issuer in Ack packets.\n\tif isAck {\n\t\tallclaims.Issuer = c.Issuer\n\t\tallclaims.LCL = claims.LCL\n\t}\n\n\t// Create the token and sign with our key\n\tstrtoken, err := jwt.NewWithClaims(c.signMethod, allclaims).SignedString(secrets.EncodingKey())\n\tif err != nil {\n\t\treturn []byte{}, err\n\t}\n\n\t// Copy the certificate if needed. 
Note that we don't send the certificate\n\t// again for Ack packets to reduce overhead\n\tif !isAck {\n\n\t\ttxKey := secrets.TransmittedKey()\n\n\t\ttotalLength := len(strtoken) + len(txKey) + noncePosition + NonceLength + 1\n\n\t\ttoken := make([]byte, totalLength)\n\n\t\t// Write the token length, which determines the offset of the public key\n\t\tbinary.BigEndian.PutUint16(token[0:noncePosition], uint16(len(strtoken)))\n\n\t\t// Attach the nonce\n\t\tcopy(token[noncePosition:], nonce)\n\n\t\t// Copy the JWT token\n\t\tcopy(token[tokenPosition:], []byte(strtoken))\n\n\t\ttoken[tokenPosition+len(strtoken)] = '%'\n\t\t// Copy the public key\n\t\tif len(txKey) > 0 {\n\t\t\tcopy(token[tokenPosition+len(strtoken)+1:], txKey)\n\t\t}\n\n\t\treturn token, nil\n\t}\n\n\treturn []byte(strtoken), nil\n\n}\n\n// Decode takes as argument the JWT token and the certificate of the issuer.\n// First it verifies the certificate with the local CA pool, and then decodes\n// the JWT if the certificate is trusted\nfunc (c *JWTConfig) Decode(isAck bool, data []byte, previousCert interface{}, secrets secrets.Secrets) (claims *ConnectionClaims, nonce []byte, publicKey interface{}, err error) {\n\n\tvar ackCert interface{}\n\tvar certClaims []string\n\n\ttoken := data\n\n\tjwtClaims := &JWTClaims{}\n\n\tnonce = make([]byte, NonceLength)\n\n\t// Get the token and data from the buffer and validate the certificate\n\t// Ack packets don't have a certificate and it must be provided in the\n\t// Decode function. 
If certificates are distributed out of band we\n\t// will look in the certPool for the certificate\n\tif !isAck {\n\t\t// We must have at least enough data to get the length\n\t\tif len(data) < tokenPosition {\n\t\t\treturn nil, nil, nil, errors.New(\"not enough data\")\n\t\t}\n\n\t\ttokenLength := int(binary.BigEndian.Uint16(data[0:noncePosition]))\n\t\t// Data must be enough to accommodate the token\n\t\tif len(data) < tokenPosition+tokenLength+1 {\n\t\t\treturn nil, nil, nil, errors.New(\"invalid token length\")\n\t\t}\n\n\t\tcopy(nonce, data[noncePosition:tokenPosition])\n\n\t\ttoken = data[tokenPosition : tokenPosition+tokenLength]\n\n\t\tcertBytes := data[tokenPosition+tokenLength+1:]\n\n\t\tackCert, certClaims, _, err = secrets.KeyAndClaims(certBytes)\n\t\tif err != nil {\n\t\t\treturn nil, nil, nil, fmt.Errorf(\"invalid public key: %s\", err)\n\t\t}\n\n\t\tif cachedClaims, cerr := c.tokenCache.Get(string(token)); cerr == nil {\n\t\t\treturn cachedClaims.(*ConnectionClaims), nonce, ackCert, nil\n\t\t}\n\t}\n\n\t// Parse the JWT token with the public key recovered. 
If it is an Ack packet\n\t// use the previous cert.\n\tjwttoken, err := jwt.ParseWithClaims(string(token), jwtClaims, func(token *jwt.Token) (interface{}, error) { // nolint\n\t\tif ackCert != nil {\n\t\t\treturn ackCert.(*ecdsa.PublicKey), nil\n\t\t}\n\t\tif previousCert != nil {\n\t\t\treturn previousCert.(*ecdsa.PublicKey), nil\n\t\t}\n\t\treturn nil, fmt.Errorf(\"Unable to find certificate\")\n\t})\n\n\t// If error is returned or the token is not valid, reject it\n\tif err != nil {\n\t\treturn nil, nil, nil, fmt.Errorf(\"unable to parse token: %s\", err)\n\t}\n\tif !jwttoken.Valid {\n\t\treturn nil, nil, nil, errors.New(\"invalid token\")\n\t}\n\n\tif !isAck {\n\t\ttags := []string{enforcerconstants.TransmitterLabel + \"=\" + jwtClaims.ConnectionClaims.ID}\n\t\tif jwtClaims.ConnectionClaims.T != nil {\n\t\t\ttags = jwtClaims.ConnectionClaims.T.Tags\n\t\t}\n\n\t\tif certClaims != nil {\n\t\t\ttags = append(tags, certClaims...)\n\t\t}\n\n\t\tjwtClaims.ConnectionClaims.T = policy.NewTagStoreFromSlice(tags)\n\t}\n\n\tif jwtClaims.ConnectionClaims.H != nil {\n\t\tif err := c.verifyClaimsHeader(jwtClaims.ConnectionClaims.H.ToClaimsHeader()); err != nil {\n\t\t\treturn nil, nil, nil, err\n\t\t}\n\t}\n\n\tc.tokenCache.AddOrUpdate(string(token), jwtClaims.ConnectionClaims)\n\n\treturn jwtClaims.ConnectionClaims, nonce, ackCert, nil\n}\n\n// Randomize adds a nonce to an existing token. Returns the nonce\nfunc (c *JWTConfig) Randomize(token []byte, nonce []byte) (err error) {\n\n\tif len(token) < tokenPosition {\n\t\treturn errors.New(\"token is too small\")\n\t}\n\n\tcopy(token[noncePosition:], nonce)\n\n\treturn nil\n}\n\nfunc (c *JWTConfig) verifyClaimsHeader(claimsHeader *claimsheader.ClaimsHeader) error {\n\n\tswitch {\n\tcase claimsHeader.CompressionType() != c.compressionType:\n\t\treturn ErrCompressedTagMismatch\n\tcase claimsHeader.DatapathVersion() != c.datapathVersion:\n\t\treturn ErrDatapathVersionMismatch\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "controller/pkg/tokens/jwt_test.go",
    "content": "// +build !windows\n\npackage tokens\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"crypto/x509\"\n\t\"testing\"\n\t\"time\"\n\n\tjwt \"github.com/dgrijalva/jwt-go\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.aporeto.io/trireme-lib/utils/crypto\"\n)\n\nvar (\n\ttags = policy.NewTagStoreFromMap(map[string]string{\n\t\t\"label1\": \"value1\",\n\t\t\"label2\": \"value2\",\n\t})\n\n\trmt           = \"1234567890123456\"\n\tlcl           = \"098765432109876\"\n\tdefaultClaims = ConnectionClaims{\n\t\tT:   tags,\n\t\tRMT: []byte(rmt),\n\t\tEK:  []byte{},\n\t}\n\n\tackClaims = ConnectionClaims{\n\t\tT:   nil,\n\t\tRMT: []byte(rmt),\n\t\tLCL: []byte(lcl),\n\t\tEK:  []byte{},\n\t}\n\tvalidity = time.Second * 10\n\n\tkeyPEM = `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIPkiHqtH372JJdAG/IxJlE1gv03cdwa8Lhg2b3m/HmbyoAoGCCqGSM49\nAwEHoUQDQgAEAfAL+AfPj/DnxrU6tUkEyzEyCxnflOWxhouy1bdzhJ7vxMb1vQ31\n8ZbW/WvMN/ojIXqXYrEpISoojznj46w64w==\n-----END EC PRIVATE KEY-----`\n\tcaPool = `-----BEGIN CERTIFICATE-----\nMIIBhTCCASwCCQC8b53yGlcQazAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABJxneTUqhbtgEIwpKUUzwz3h92SqcOdIw3mfQkMjg3Vobvr6JKlpXYe9xhsN\nrygJmLhMAN9gjF9qM9ybdbe+m3owCgYIKoZIzj0EAwIDRwAwRAIgC1fVMqdBy/o3\njNUje/Hx0fZF9VDyUK4ld+K/wF3QdK4CID1ONj/Kqinrq2OpjYdkgIjEPuXoOoR1\ntCym8dnq4wtH\n-----END CERTIFICATE-----\n-----BEGIN 
CERTIFICATE-----\nMIIB3jCCAYOgAwIBAgIJALsW7pyC2ERQMAoGCCqGSM49BAMCMEsxCzAJBgNVBAYT\nAlVTMQswCQYDVQQIDAJDQTEMMAoGA1UEBwwDU0pDMRAwDgYDVQQKDAdUcmlyZW1l\nMQ8wDQYDVQQDDAZ1YnVudHUwHhcNMTYwOTI3MjI0OTAwWhcNMjYwOTI1MjI0OTAw\nWjBLMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4G\nA1UECgwHVHJpcmVtZTEPMA0GA1UEAwwGdWJ1bnR1MFkwEwYHKoZIzj0CAQYIKoZI\nzj0DAQcDQgAE4c2Fd7XeIB1Vfs51fWwREfLLDa55J+NBalV12CH7YEAnEXjl47aV\ncmNqcAtdMUpf2oz9nFVI81bgO+OSudr3CqNQME4wHQYDVR0OBBYEFOBftuI09mmu\nrXjqDyIta1gT8lqvMB8GA1UdIwQYMBaAFOBftuI09mmurXjqDyIta1gT8lqvMAwG\nA1UdEwQFMAMBAf8wCgYIKoZIzj0EAwIDSQAwRgIhAMylAHhbFA0KqhXIFiXNpEbH\nJKaELL6UXXdeQ5yup8q+AiEAh5laB9rbgTymjaANcZ2YzEZH4VFS3CKoSdVqgnwC\ndW4=\n-----END CERTIFICATE-----`\n\n\tcertPEM = `-----BEGIN CERTIFICATE-----\nMIIBhjCCASwCCQCPCdgp39gHJTAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABAHwC/gHz4/w58a1OrVJBMsxMgsZ35TlsYaLstW3c4Se78TG9b0N9fGW1v1r\nzDf6IyF6l2KxKSEqKI854+OsOuMwCgYIKoZIzj0EAwIDSAAwRQIgQwQn0jnK/XvD\nKxgQd/0pW5FOAaB41cMcw4/XVlphO1oCIQDlGie+WlOMjCzrV0Xz+XqIIi1pIgPT\nIG7Nv+YlTVp5qA==\n-----END CERTIFICATE-----`\n)\n\nfunc createCompactPKISecrets() (*x509.Certificate, secrets.Secrets, error) {\n\ttxtKey, cert, _, err := crypto.LoadAndVerifyECSecrets([]byte(keyPEM), []byte(certPEM), []byte(caPool))\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tissuer := pkiverifier.NewPKIIssuer(txtKey)\n\ttxtToken, err := issuer.CreateTokenFromCertificate(cert, []string{})\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tscrts, err := secrets.NewCompactPKIWithTokenCA([]byte(keyPEM), []byte(certPEM), []byte(caPool), [][]byte{[]byte(certPEM)}, txtToken, claimsheader.CompressionTypeNone)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\treturn cert, scrts, nil\n}\n\n// 
TestConstructorNewJWT tests the NewJWT constructor\nfunc TestConstructorNewJWT(t *testing.T) {\n\tConvey(\"Given that I instantiate a new JWT Engine with max server name that violates requirements, it should fail\", t, func() {\n\t\tscrts, err := secrets.NewNullPKI([]byte(keyPEM), []byte(certPEM), []byte(caPool))\n\t\tSo(err, ShouldBeNil)\n\t\t_, err = NewJWT(validity, \"0123456789012345678901234567890123456789\", scrts)\n\t\tSo(err, ShouldNotBeNil)\n\t})\n\n\tConvey(\"Given that I instantiate a new JWT Engine with nil secrets, it should fail\", t, func() {\n\t\t_, err := NewJWT(validity, \"TEST\", nil)\n\t\tSo(err, ShouldNotBeNil)\n\t})\n\n\tConvey(\"Given that I instantiate a new JWT Engine with PKI secrets, it should succeed\", t, func() {\n\n\t\tj := &JWTConfig{}\n\n\t\t_, scrts, err := createCompactPKISecrets()\n\t\tSo(err, ShouldBeNil)\n\n\t\tjwtConfig, _ := NewJWT(validity, \"TRIREME\", scrts)\n\n\t\tSo(jwtConfig, ShouldHaveSameTypeAs, j)\n\t\tSo(jwtConfig.Issuer, ShouldResemble, \"TRIREME                 \")\n\t\tSo(jwtConfig.ValidityPeriod.Seconds(), ShouldEqual, validity.Seconds())\n\t\tSo(jwtConfig.signMethod, ShouldEqual, jwt.SigningMethodES256)\n\t})\n\n\tConvey(\"Given that I instantiate a new JWT null encryption, it should succeed\", t, func() {\n\n\t\tj := &JWTConfig{}\n\n\t\tscrts, err := secrets.NewNullPKI([]byte(keyPEM), []byte(certPEM), []byte(caPool))\n\t\tSo(err, ShouldBeNil)\n\n\t\tjwtConfig, _ := NewJWT(validity, \"TRIREME\", scrts)\n\n\t\tSo(jwtConfig, ShouldHaveSameTypeAs, j)\n\t\tSo(jwtConfig.Issuer, ShouldResemble, \"TRIREME                 \")\n\t\tSo(jwtConfig.ValidityPeriod.Seconds(), ShouldEqual, validity.Seconds())\n\t\tSo(jwtConfig.signMethod, ShouldEqual, jwt.SigningMethodNone)\n\t})\n\n}\n\nfunc TestCreateAndVerifyPKI(t *testing.T) {\n\tConvey(\"Given a JWT valid engine with a valid Compact PKI key \", t, func() {\n\t\tcert, scrts, err := createCompactPKISecrets()\n\t\tSo(err, ShouldBeNil)\n\n\t\tjwtConfig, _ := 
NewJWT(validity, \"TRIREME\", scrts)\n\n\t\tnonce := []byte(\"1234567890123456\")\n\t\tConvey(\"Given a signature request for a normal packet\", func() {\n\t\t\ttoken, err1 := jwtConfig.CreateAndSign(false, &defaultClaims, nonce, claimsheader.NewClaimsHeader(), scrts)\n\t\t\trecoveredClaims, recoveredNonce, publicKey, err2 := jwtConfig.Decode(false, token, nil, scrts)\n\n\t\t\tSo(err2, ShouldBeNil)\n\t\t\tSo(err1, ShouldBeNil)\n\t\t\tSo(recoveredClaims, ShouldNotBeNil)\n\t\t\tlclaims, ok1 := recoveredClaims.T.Get(\"label1\")\n\t\t\tdclaims, ok2 := recoveredClaims.T.Get(\"label1\")\n\t\t\tSo(ok1, ShouldBeTrue)\n\t\t\tSo(ok2, ShouldBeTrue)\n\t\t\tSo(lclaims, ShouldResemble, dclaims)\n\t\t\tSo(string(recoveredClaims.RMT), ShouldEqual, rmt)\n\t\t\tSo(string(recoveredClaims.LCL), ShouldEqual, \"\")\n\t\t\tSo(nonce, ShouldResemble, recoveredNonce)\n\t\t\tSo(cert.PublicKey, ShouldResemble, publicKey)\n\t\t})\n\n\t\tConvey(\"Given a signature request that hits the cache \", func() {\n\t\t\ttoken1, err1 := jwtConfig.CreateAndSign(false, &defaultClaims, nonce, claimsheader.NewClaimsHeader(), scrts)\n\t\t\trecoveredClaims1, recoveredNonce1, key1, err2 := jwtConfig.Decode(false, token1, nil, scrts)\n\t\t\tnonce2 := []byte(\"9876543210123456\")\n\t\t\terr3 := jwtConfig.Randomize(token1, nonce2)\n\t\t\trecoveredClaims2, recoveredNonce2, key2, err4 := jwtConfig.Decode(false, token1, nil, scrts)\n\n\t\t\tSo(err1, ShouldBeNil)\n\t\t\tSo(err2, ShouldBeNil)\n\t\t\tSo(err3, ShouldBeNil)\n\t\t\tSo(err4, ShouldBeNil)\n\t\t\tSo(recoveredClaims1, ShouldNotBeNil)\n\t\t\tSo(recoveredClaims2, ShouldNotBeNil)\n\t\t\tlclaims1, ok1 := recoveredClaims1.T.Get(\"label1\")\n\t\t\tdclaims1, ok2 := recoveredClaims1.T.Get(\"label1\")\n\t\t\tSo(ok1, ShouldBeTrue)\n\t\t\tSo(ok2, ShouldBeTrue)\n\t\t\tSo(lclaims1, ShouldResemble, dclaims1)\n\t\t\tlclaims2, ok3 := recoveredClaims2.T.Get(\"label1\")\n\t\t\tdclaims2, ok4 := recoveredClaims2.T.Get(\"label1\")\n\t\t\tSo(ok3, ShouldBeTrue)\n\t\t\tSo(ok4, 
ShouldBeTrue)\n\t\t\tSo(lclaims2, ShouldResemble, dclaims2)\n\t\t\tSo(string(recoveredClaims1.RMT), ShouldEqual, rmt)\n\t\t\tSo(string(recoveredClaims1.LCL), ShouldEqual, \"\")\n\t\t\tSo(string(recoveredClaims2.RMT), ShouldEqual, rmt)\n\t\t\tSo(string(recoveredClaims2.LCL), ShouldEqual, \"\")\n\t\t\tSo(nonce, ShouldResemble, recoveredNonce1)\n\t\t\tSo(nonce2, ShouldResemble, recoveredNonce2)\n\t\t\tSo(cert.PublicKey, ShouldResemble, key1)\n\t\t\tSo(cert.PublicKey, ShouldResemble, key2)\n\t\t})\n\n\t\tConvey(\"Given a signature request for an ACK packet\", func() {\n\t\t\ttoken, err1 := jwtConfig.CreateAndSign(true, &ackClaims, nonce, claimsheader.NewClaimsHeader(), scrts)\n\t\t\trecoveredClaims, _, _, err2 := jwtConfig.Decode(true, token, cert.PublicKey.(*ecdsa.PublicKey), scrts)\n\n\t\t\tSo(err1, ShouldBeNil)\n\t\t\tSo(err2, ShouldBeNil)\n\t\t\tSo(recoveredClaims, ShouldNotBeNil)\n\t\t\tSo(string(recoveredClaims.RMT), ShouldEqual, rmt)\n\t\t\tSo(string(recoveredClaims.LCL), ShouldEqual, lcl)\n\t\t\tSo(recoveredClaims.T, ShouldBeNil)\n\t\t})\n\t})\n}\n\nfunc TestNegativeConditions(t *testing.T) {\n\tConvey(\"Given a JWT valid engine with a PKI  key \", t, func() {\n\t\t_, scrts, err := createCompactPKISecrets()\n\t\tSo(err, ShouldBeNil)\n\n\t\tjwtConfig, _ := NewJWT(validity, \"TRIREME\", scrts)\n\t\tnonce := []byte(\"012456789123456\")\n\n\t\tConvey(\"Test a token with a bad length \", func() {\n\t\t\ttoken, err1 := jwtConfig.CreateAndSign(false, &defaultClaims, nonce, claimsheader.NewClaimsHeader(), scrts)\n\t\t\t_, _, _, err2 := jwtConfig.Decode(false, token[:len(token)-len(certPEM)-1], nil, scrts)\n\t\t\tSo(err2, ShouldNotBeNil)\n\t\t\tSo(err1, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"Test a token with a bad public key\", func() {\n\t\t\ttoken, err1 := jwtConfig.CreateAndSign(false, &defaultClaims, nonce, claimsheader.NewClaimsHeader(), scrts)\n\t\t\tSo(err1, ShouldBeNil)\n\t\t\ttoken[len(token)-1] = 0\n\t\t\ttoken[len(token)-2] = 0\n\t\t\ttoken[len(token)-3] = 
0\n\t\t\ttoken[len(token)-4] = 0\n\t\t\t_, _, _, err2 := jwtConfig.Decode(false, token, nil, scrts)\n\t\t\tSo(err2, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Test an ack token with a bad key\", func() {\n\t\t\ttoken, err1 := jwtConfig.CreateAndSign(false, &ackClaims, nonce, claimsheader.NewClaimsHeader(), scrts)\n\n\t\t\t_, _, _, err2 := jwtConfig.Decode(true, token, certPEM[:10], scrts)\n\t\t\tSo(err2, ShouldNotBeNil)\n\t\t\tSo(err1, ShouldBeNil)\n\t\t})\n\n\t})\n}\n\nfunc TestRandomize(t *testing.T) {\n\tConvey(\"Given a token engine with PKI key and a good token\", t, func() {\n\t\tnonce := []byte(\"012456789123456\")\n\n\t\t_, scrts, err := createCompactPKISecrets()\n\t\tSo(err, ShouldBeNil)\n\n\t\tjwtConfig, _ := NewJWT(validity, \"TRIREME\", scrts)\n\t\ttoken, err := jwtConfig.CreateAndSign(false, &defaultClaims, nonce, claimsheader.NewClaimsHeader(), scrts)\n\t\tSo(err, ShouldBeNil)\n\n\t\tnewNonce := []byte(\"9876543219123456\")\n\t\tConvey(\"I should get a new random nonce\", func() {\n\t\t\terr := jwtConfig.Randomize(token, newNonce)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"I should get an error if the token is short \", func() {\n\t\t\terr := jwtConfig.Randomize(token[:noncePosition+NonceLength-1], nonce)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/tokens/mocktokens/mocktokens.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/pkg/tokens/tokens.go\n\n// Package mocktokens is a generated GoMock package.\npackage mocktokens\n\nimport (\n\tecdsa \"crypto/ecdsa\"\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tclaimsheader \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\tpkiverifier \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\tsecrets \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\ttokens \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/tokens\"\n)\n\n// MockTokenEngine is a mock of TokenEngine interface\n// nolint\ntype MockTokenEngine struct {\n\tctrl     *gomock.Controller\n\trecorder *MockTokenEngineMockRecorder\n}\n\n// MockTokenEngineMockRecorder is the mock recorder for MockTokenEngine\n// nolint\ntype MockTokenEngineMockRecorder struct {\n\tmock *MockTokenEngine\n}\n\n// NewMockTokenEngine creates a new mock instance\n// nolint\nfunc NewMockTokenEngine(ctrl *gomock.Controller) *MockTokenEngine {\n\tmock := &MockTokenEngine{ctrl: ctrl}\n\tmock.recorder = &MockTokenEngineMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockTokenEngine) EXPECT() *MockTokenEngineMockRecorder {\n\treturn m.recorder\n}\n\n// CreateAndSign mocks base method\n// nolint\nfunc (m *MockTokenEngine) CreateAndSign(isAck bool, claims *tokens.ConnectionClaims, encodedBuf, nonce []byte, claimsHeader *claimsheader.ClaimsHeader, secrets secrets.Secrets) ([]byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CreateAndSign\", isAck, claims, encodedBuf, nonce, claimsHeader, secrets)\n\tret0, _ := ret[0].([]byte)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// CreateAndSign indicates an expected call of CreateAndSign\n// nolint\nfunc (mr *MockTokenEngineMockRecorder) CreateAndSign(isAck, claims, encodedBuf, nonce, claimsHeader, secrets 
interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateAndSign\", reflect.TypeOf((*MockTokenEngine)(nil).CreateAndSign), isAck, claims, encodedBuf, nonce, claimsHeader, secrets)\n}\n\n// DecodeSyn mocks base method\n// nolint\nfunc (m *MockTokenEngine) DecodeSyn(isSynAck bool, data []byte, privateKey *ecdsa.PrivateKey, secrets secrets.Secrets, connClaims *tokens.ConnectionClaims) (*claimsheader.ClaimsHeader, []byte, interface{}, *pkiverifier.PKIControllerInfo, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DecodeSyn\", isSynAck, data, privateKey, secrets, connClaims)\n\tret0, _ := ret[0].(*claimsheader.ClaimsHeader)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(interface{})\n\tret3, _ := ret[3].(*pkiverifier.PKIControllerInfo)\n\tret4, _ := ret[4].(error)\n\treturn ret0, ret1, ret2, ret3, ret4\n}\n\n// DecodeSyn indicates an expected call of DecodeSyn\n// nolint\nfunc (mr *MockTokenEngineMockRecorder) DecodeSyn(isSynAck, data, privateKey, secrets, connClaims interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DecodeSyn\", reflect.TypeOf((*MockTokenEngine)(nil).DecodeSyn), isSynAck, data, privateKey, secrets, connClaims)\n}\n\n// DecodeAck mocks base method\n// nolint\nfunc (m *MockTokenEngine) DecodeAck(data []byte, connClaims *tokens.ConnectionClaims) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DecodeAck\", data, connClaims)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// DecodeAck indicates an expected call of DecodeAck\n// nolint\nfunc (mr *MockTokenEngineMockRecorder) DecodeAck(data, connClaims interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DecodeAck\", reflect.TypeOf((*MockTokenEngine)(nil).DecodeAck), data, connClaims)\n}\n\n// Randomize mocks base method\n// nolint\nfunc (m *MockTokenEngine) Randomize(arg0, arg1 []byte) error 
{\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Randomize\", arg0, arg1)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Randomize indicates an expected call of Randomize\n// nolint\nfunc (mr *MockTokenEngineMockRecorder) Randomize(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Randomize\", reflect.TypeOf((*MockTokenEngine)(nil).Randomize), arg0, arg1)\n}\n\n// Sign mocks base method\n// nolint\nfunc (m *MockTokenEngine) Sign(arg0 []byte, arg1 *ecdsa.PrivateKey) ([]byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Sign\", arg0, arg1)\n\tret0, _ := ret[0].([]byte)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Sign indicates an expected call of Sign\n// nolint\nfunc (mr *MockTokenEngineMockRecorder) Sign(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Sign\", reflect.TypeOf((*MockTokenEngine)(nil).Sign), arg0, arg1)\n}\n"
  },
  {
    "path": "controller/pkg/tokens/tokens.go",
    "content": "package tokens\n\nimport (\n\t\"crypto/ecdsa\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/internal/enforcer/utils/ephemeralkeys\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/claimsheader\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/pkiverifier\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/secrets\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// ConnectionClaims captures all the claim information\ntype ConnectionClaims struct {\n\tT *policy.TagStore `json:\",omitempty\"`\n\t// RMT is the nonce of the remote that has to be signed in the JWT\n\tRMT []byte `json:\",omitempty\"`\n\t// LCL is the nonce of the local node that has to be signed\n\tLCL []byte `json:\",omitempty\"`\n\t// DEKV1 is the datapath ephemeral key used to derive shared keys during the handshake\n\tDEKV1 []byte `json:\",omitempty\"`\n\t// SDEKV1 is the signature of the ephemeral key\n\tSDEKV1 []byte `json:\",omitempty\"`\n\t// CT is the compressed tags in one string\n\tCT *policy.TagStore `json:\",omitempty\"`\n\t// ID is the source PU ID\n\tID string `json:\",omitempty\"`\n\t// RemoteID is the ID of the remote if known.\n\tRemoteID string `json:\",omitempty\"`\n\t// H is the claims header\n\tH claimsheader.HeaderBytes `json:\",omitempty\"`\n\t// P holds the ping payload\n\tP *policy.PingPayload `codec:\",omitempty\"`\n\t// DEKV2 is the datapath ephemeral key used to derive shared keys during the handshake\n\tDEKV2 []byte `json:\",omitempty\"`\n\t// SDEKV2 is the signature of the ephemeral key\n\tSDEKV2 []byte `json:\",omitempty\"`\n}\n\n// TokenEngine is the interface to the different implementations of tokens\ntype TokenEngine interface {\n\t// The Create* methods create a token, sign it and produce the final byte string\n\tCreateSynToken(claims *ConnectionClaims, encodedBuf []byte, nonce []byte, header *claimsheader.ClaimsHeader, secrets secrets.Secrets) ([]byte, error)\n\tCreateSynAckToken(proto314 bool, claims 
*ConnectionClaims, encodedBuf []byte, nonce []byte, header *claimsheader.ClaimsHeader, secrets secrets.Secrets, secretKey []byte) ([]byte, error)\n\tCreateAckToken(proto314 bool, secretKey []byte, claims *ConnectionClaims, encodedBuf []byte, header *claimsheader.ClaimsHeader) ([]byte, error)\n\n\tDecodeSyn(isSynAck bool, data []byte, privateKey *ephemeralkeys.PrivateKey, secrets secrets.Secrets, connClaims *ConnectionClaims) ([]byte, *claimsheader.ClaimsHeader, []byte, *pkiverifier.PKIControllerInfo, bool, error)\n\tDecodeAck(proto314 bool, secretKey []byte, data []byte, connClaims *ConnectionClaims) error\n\n\t// Randomize inserts a source nonce in an existing token - A new nonce will be\n\t// created every time the token is transmitted as a challenge to the other side,\n\t// even when the token is cached. There should be space in the token already.\n\t// Returns an error if there is no space\n\tRandomize([]byte, []byte) (err error)\n\tSign([]byte, *ecdsa.PrivateKey) ([]byte, error)\n}\n\nconst (\n\t// MaxServerName is the maximum server name length (the size of a UUID)\n\tMaxServerName = 24\n\t// NonceLength is the length of the Nonce to be used in the secrets\n\tNonceLength = 16\n)\n"
  },
  {
    "path": "controller/pkg/urisearch/urisearch.go",
    "content": "package urisearch\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\ntype node struct {\n\tchildren map[string]*node\n\tleaf     bool\n\tdata     interface{}\n}\n\n// APICache represents an API cache.\ntype APICache struct {\n\tmethodRoots map[string]*node\n\tID          string\n\tExternal    bool\n}\n\ntype scopeRule struct {\n\trule *policy.HTTPRule\n}\n\n// NewAPICache creates a new API cache\nfunc NewAPICache(rules []*policy.HTTPRule, id string, external bool) *APICache {\n\ta := &APICache{\n\t\tmethodRoots: map[string]*node{},\n\t\tID:          id,\n\t\tExternal:    external,\n\t}\n\n\tfor _, rule := range rules {\n\t\tsc := &scopeRule{\n\t\t\trule: rule,\n\t\t}\n\t\tfor _, method := range rule.Methods {\n\t\t\tif _, ok := a.methodRoots[method]; !ok {\n\t\t\t\ta.methodRoots[method] = &node{}\n\t\t\t}\n\t\t\tfor _, uri := range rule.URIs {\n\t\t\t\tinsert(a.methodRoots[method], uri, sc)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn a\n}\n\n// FindRule finds a rule in the APICache without validating scopes\nfunc (c *APICache) FindRule(verb, uri string) (bool, *policy.HTTPRule) {\n\tfound, rule := c.Find(verb, uri)\n\tif rule == nil {\n\t\treturn found, nil\n\t}\n\tif policyRule, ok := rule.(*scopeRule); ok {\n\t\treturn found, policyRule.rule\n\t}\n\treturn false, nil\n}\n\n// FindAndMatchScope finds the rule and returns true only if the scope matches\n// as well. 
It also returns true if this was a public rule, allowing the callers\n// to decide how to present the data or potentially what to do if authorization\n// fails.\nfunc (c *APICache) FindAndMatchScope(verb, uri string, attributes []string) (bool, bool) {\n\tfound, rule := c.Find(verb, uri)\n\tif !found || rule == nil {\n\t\treturn false, false\n\t}\n\tpolicyRule, ok := rule.(*scopeRule)\n\tif !ok {\n\t\treturn false, false\n\t}\n\tif policyRule.rule.Public {\n\t\treturn true, true\n\t}\n\treturn c.MatchClaims(policyRule.rule.ClaimMatchingRules, attributes), false\n}\n\n// MatchClaims receives a set of claim matching rules and a set of claims\n// and returns true if the claims match the rules.\nfunc (c *APICache) MatchClaims(rules [][]string, claims []string) bool {\n\n\tclaimsMap := map[string]struct{}{}\n\tfor _, claim := range claims {\n\t\tclaimsMap[claim] = struct{}{}\n\t}\n\n\tvar matched int\n\tfor _, clause := range rules {\n\t\tmatched = len(clause)\n\t\tfor _, claim := range clause {\n\t\t\tif _, ok := claimsMap[claim]; ok {\n\t\t\t\tmatched--\n\t\t\t}\n\t\t\tif matched == 0 {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\treturn false\n}\n\n// Find finds a URI in the cache and returns true and the data if found.\n// If not found it returns false.\nfunc (c *APICache) Find(verb, uri string) (bool, interface{}) {\n\troot, ok := c.methodRoots[verb]\n\tif !ok {\n\t\treturn false, nil\n\t}\n\treturn search(root, uri)\n}\n\n// parse parses a URI and splits into prefix, suffix\nfunc parse(s string) (string, string) {\n\tif s == \"/\" {\n\t\treturn s, \"\"\n\t}\n\tfor i := 1; i < len(s); i++ {\n\t\tif s[i] == '/' {\n\t\t\treturn s[0:i], s[i:]\n\t\t}\n\t}\n\n\treturn s, \"\"\n}\n\n// insert adds an api to the api cache\nfunc insert(n *node, api string, data interface{}) {\n\tif len(api) == 0 {\n\t\tn.data = data\n\t\tn.leaf = true\n\t\treturn\n\t}\n\n\tprefix, suffix := parse(api)\n\n\t// root node or terminal node\n\tif prefix == \"/\" {\n\t\tn.data = data\n\t\tn.leaf 
= true\n\t\treturn\n\t}\n\n\tif n.children == nil {\n\t\tn.children = map[string]*node{}\n\t}\n\n\t// If there is no child, add the new child.\n\tnext, ok := n.children[prefix]\n\tif !ok {\n\t\tnext = &node{}\n\t\tn.children[prefix] = next\n\t}\n\n\tinsert(next, suffix, data)\n}\n\nfunc search(n *node, api string) (found bool, data interface{}) {\n\n\tprefix, suffix := parse(api)\n\n\tif prefix == \"/\" {\n\t\tif n.leaf {\n\t\t\treturn true, n.data\n\t\t}\n\t}\n\n\tnext, foundPrefix := n.children[prefix]\n\t// We found either an exact match or a * match\n\tif foundPrefix {\n\t\tmatchedChildren, data := search(next, suffix)\n\t\tif matchedChildren {\n\t\t\treturn true, data\n\t\t}\n\t}\n\n\t// If not found, try the ignore operator.\n\tnext, foundPrefix = n.children[\"/?\"]\n\tif foundPrefix {\n\t\tmatchedChildren, data := search(next, suffix)\n\t\tif matchedChildren {\n\t\t\treturn true, data\n\t\t}\n\t}\n\n\t// If not found, try the * operator and ignore the rest of path.\n\tnext, foundPrefix = n.children[\"/*\"]\n\tif foundPrefix {\n\t\tfor len(suffix) > 0 {\n\t\t\tmatchedChildren, data := search(next, suffix)\n\t\t\tif matchedChildren {\n\t\t\t\treturn true, data\n\t\t\t}\n\t\t\tprefix, suffix = parse(suffix)\n\t\t}\n\t\tmatchedChildren, data := search(next, \"/\")\n\t\tif matchedChildren {\n\t\t\treturn true, data\n\t\t}\n\t}\n\n\tif n.leaf && len(prefix) == 0 {\n\t\treturn true, n.data\n\t}\n\n\treturn false, nil\n}\n"
  },
  {
    "path": "controller/pkg/urisearch/urisearch_test.go",
    "content": "// +build !windows\n\npackage urisearch\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nfunc initTrieRules() []*policy.HTTPRule {\n\n\treturn []*policy.HTTPRule{\n\t\t{\n\t\t\tMethods: []string{\"GET\", \"PUT\"},\n\t\t\tURIs: []string{\n\t\t\t\t\"/users/?/name\",\n\t\t\t\t\"/things/?\",\n\t\t\t},\n\t\t\tClaimMatchingRules: [][]string{{\"policy1\"}},\n\t\t},\n\t\t{\n\t\t\tMethods: []string{\"PATCH\"},\n\t\t\tURIs: []string{\n\t\t\t\t\"/users/?/name\",\n\t\t\t\t\"/things/?\",\n\t\t\t},\n\t\t\tClaimMatchingRules: [][]string{{\"policy2\"}},\n\t\t},\n\t\t{\n\t\t\tMethods: []string{\"POST\"},\n\t\t\tURIs: []string{\n\t\t\t\t\"/v1/users/?/name\",\n\t\t\t\t\"/v1/things/?\",\n\t\t\t},\n\t\t\tPublic:             true,\n\t\t\tClaimMatchingRules: [][]string{{\"policy3\"}},\n\t\t},\n\t\t{\n\t\t\tMethods:            []string{\"POST\"},\n\t\t\tURIs:               []string{\"/\"},\n\t\t\tClaimMatchingRules: [][]string{{\"policy4\"}},\n\t\t},\n\t\t{\n\t\t\tMethods:            []string{\"PATCH\"},\n\t\t\tURIs:               []string{\"/?\"},\n\t\t\tClaimMatchingRules: [][]string{{\"policy5\"}},\n\t\t},\n\t\t{\n\t\t\tMethods:            []string{\"HEAD\"},\n\t\t\tURIs:               []string{\"/*\"},\n\t\t\tClaimMatchingRules: [][]string{{\"policy7\"}},\n\t\t},\n\t\t{\n\t\t\tMethods:            []string{\"HEAD\"},\n\t\t\tURIs:               []string{\"/a/?/c/d\"},\n\t\t\tClaimMatchingRules: [][]string{{\"policy8\"}},\n\t\t},\n\t\t{\n\t\t\tMethods:            []string{\"HEAD\"},\n\t\t\tURIs:               []string{\"/a/b/?/e\"},\n\t\t\tClaimMatchingRules: [][]string{{\"policy9\"}},\n\t\t},\n\t\t{\n\t\t\tMethods:            []string{\"HEAD\"},\n\t\t\tURIs:               []string{\"/a/*/c/x\"},\n\t\t\tClaimMatchingRules: [][]string{{\"policy10\"}},\n\t\t},\n\t\t{\n\t\t\tMethods:            []string{\"HEAD\"},\n\t\t\tURIs:               []string{\"/a/b/?/w\"},\n\t\t\tClaimMatchingRules: 
[][]string{{\"policy11\"}},\n\t\t},\n\t\t{\n\t\t\tMethods:            []string{\"HEAD\"},\n\t\t\tURIs:               []string{\"/a/b/?/y/*\"},\n\t\t\tClaimMatchingRules: [][]string{{\"policy12\"}, {\"policy13\"}, {\"policy14\", \"policy15\"}},\n\t\t},\n\t}\n}\n\nfunc TestNewAPICache(t *testing.T) {\n\tConvey(\"Given a set of valid rules\", t, func() {\n\t\trules := initTrieRules()\n\n\t\tConvey(\"When I insert them in the cache, I should get a valid cache\", func() {\n\t\t\tc := NewAPICache(rules, \"id\", false)\n\t\t\tSo(c, ShouldNotBeNil)\n\t\t\tSo(c.methodRoots, ShouldNotBeNil)\n\t\t\tSo(len(c.methodRoots), ShouldEqual, 5)\n\t\t\tSo(c.methodRoots, ShouldContainKey, \"GET\")\n\t\t\tSo(c.methodRoots, ShouldContainKey, \"POST\")\n\t\t\tSo(c.methodRoots, ShouldContainKey, \"PUT\")\n\t\t\tSo(c.methodRoots, ShouldContainKey, \"PATCH\")\n\t\t\tSo(c.methodRoots, ShouldContainKey, \"HEAD\")\n\t\t\tSo(c.methodRoots[\"GET\"], ShouldNotBeNil)\n\t\t\tSo(c.methodRoots[\"POST\"], ShouldNotBeNil)\n\t\t\tSo(c.methodRoots[\"PUT\"], ShouldNotBeNil)\n\t\t\tSo(c.methodRoots[\"PATCH\"], ShouldNotBeNil)\n\t\t\tSo(c.methodRoots[\"POST\"].data, ShouldNotBeNil)\n\t\t\tSo(len(c.methodRoots[\"GET\"].children), ShouldEqual, 2)\n\t\t})\n\t})\n}\n\nfunc TestInsert(t *testing.T) {\n\n\tConvey(\"When I insert a root node, it should succeed\", t, func() {\n\t\tn := &node{}\n\t\tinsert(n, \"/\", \"data\")\n\t\tSo(n.data.(string), ShouldResemble, \"data\")\n\t\tSo(n.leaf, ShouldBeTrue)\n\t})\n\n\tConvey(\"When I insert a one level node, it should succeed\", t, func() {\n\t\tn := &node{}\n\t\tinsert(n, \"/a\", \"data\")\n\t\tSo(n.leaf, ShouldEqual, false)\n\t\tSo(len(n.children), ShouldEqual, 1)\n\t\tSo(n.children[\"/a\"], ShouldNotBeNil)\n\t\tSo(n.children[\"/a\"].leaf, ShouldBeTrue)\n\t\tSo(n.children[\"/a\"].data.(string), ShouldResemble, \"data\")\n\t})\n\n\tConvey(\"When I insert two level node, it should succeed\", t, func() {\n\t\tn := &node{}\n\t\tinsert(n, \"/a/b\", 
\"data\")\n\t\tSo(n.leaf, ShouldEqual, false)\n\t\tSo(len(n.children), ShouldEqual, 1)\n\t\tSo(n.children[\"/a\"], ShouldNotBeNil)\n\t\tSo(n.children[\"/a\"].leaf, ShouldBeFalse)\n\t\tSo(n.children[\"/a\"].children[\"/b\"], ShouldNotBeNil)\n\t\tSo(n.children[\"/a\"].children[\"/b\"].leaf, ShouldBeTrue)\n\t})\n\n\tConvey(\"When I insert two level node with a * it should succeed\", t, func() {\n\t\tn := &node{}\n\t\tinsert(n, \"/a/*\", \"data\")\n\t\tSo(n.leaf, ShouldEqual, false)\n\t\tSo(len(n.children), ShouldEqual, 1)\n\t\tSo(n.children[\"/a\"], ShouldNotBeNil)\n\t\tSo(n.children[\"/a\"].leaf, ShouldBeFalse)\n\t\tSo(n.children[\"/a\"].children[\"/*\"], ShouldNotBeNil)\n\t\tSo(n.children[\"/a\"].children[\"/*\"].leaf, ShouldBeTrue)\n\t})\n\n\tConvey(\"When I insert a two level node, where the first part is * it should succeed\", t, func() {\n\t\tn := &node{}\n\t\tinsert(n, \"/*/a\", \"data\")\n\t\tSo(n.leaf, ShouldEqual, false)\n\t\tSo(len(n.children), ShouldEqual, 1)\n\t\tSo(n.children[\"/*\"], ShouldNotBeNil)\n\t\tSo(n.children[\"/*\"].leaf, ShouldBeFalse)\n\t\tSo(n.children[\"/*\"].children[\"/a\"], ShouldNotBeNil)\n\t\tSo(n.children[\"/*\"].children[\"/a\"].leaf, ShouldBeTrue)\n\t})\n\n}\n\nfunc TestParse(t *testing.T) {\n\tConvey(\"When I parse a root URI, I should get no suffix\", t, func() {\n\t\tprefix, suffix := parse(\"/\")\n\t\tSo(prefix, ShouldEqual, \"/\")\n\t\tSo(suffix, ShouldEqual, \"\")\n\t})\n\n\tConvey(\"When I parse non root URIs with one level\", t, func() {\n\t\tprefix, suffix := parse(\"/a\")\n\t\tSo(prefix, ShouldEqual, \"/a\")\n\t\tSo(suffix, ShouldEqual, \"\")\n\t})\n\n\tConvey(\"When I parse non root URIs with two levels, I should get the right suffix\", t, func() {\n\t\tprefix, suffix := parse(\"/a/b\")\n\t\tSo(prefix, ShouldEqual, \"/a\")\n\t\tSo(suffix, ShouldEqual, \"/b\")\n\t})\n\n\tConvey(\"When I parse non root URIs with three levels, I should get the right suffix\", t, func() {\n\t\tprefix, suffix := 
parse(\"/a/b/c\")\n\t\tSo(prefix, ShouldEqual, \"/a\")\n\t\tSo(suffix, ShouldEqual, \"/b/c\")\n\t})\n\n\tConvey(\"When I parse non root URIs with a *, I should get * as suffix\", t, func() {\n\t\tprefix, suffix := parse(\"/a/*\")\n\t\tSo(prefix, ShouldEqual, \"/a\")\n\t\tSo(suffix, ShouldEqual, \"/*\")\n\t})\n}\n\nfunc TestAPICacheFind(t *testing.T) {\n\tConvey(\"Given valid API cache\", t, func() {\n\t\tc := NewAPICache(initTrieRules(), \"id\", false)\n\t\tConvey(\"When I search for correct URIs, I should get the right data\", func() {\n\n\t\t\t// GET and PUT combined rule\n\t\t\tfound, rule := c.FindRule(\"GET\", \"/users/bob/name\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy1\"})\n\n\t\t\tfound, rule = c.FindRule(\"BADVERB\", \"/users/bob/name\")\n\t\t\tSo(found, ShouldBeFalse)\n\t\t\tSo(rule, ShouldBeNil)\n\n\t\t\tfound, rule = c.FindRule(\"PUT\", \"/users/bob/name\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy1\"})\n\n\t\t\tfound, rule = c.FindRule(\"GET\", \"/things/something\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy1\"})\n\n\t\t\tfound, rule = c.FindRule(\"GET\", \"/prefix/things/something\")\n\t\t\tSo(found, ShouldBeFalse)\n\t\t\tSo(rule, ShouldBeNil)\n\n\t\t\t// PATCH rule\n\t\t\tfound, rule = c.FindRule(\"PATCH\", \"/things/something\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy2\"})\n\n\t\t\tfound, rule = c.FindRule(\"PATCH\", \"/users/bob/name\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy2\"})\n\n\t\t\t// POST rule\n\t\t\tfound, rule = c.FindRule(\"POST\", \"/v1/users/123/name\")\n\t\t\tSo(found, 
ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy3\"})\n\n\t\t\tfound, rule = c.FindRule(\"POST\", \"/v1/things/123454656\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy3\"})\n\n\t\t\tfound, rule = c.FindRule(\"POST\", \"/\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy4\"})\n\n\t\t\t// HEAD Rules\n\t\t\tfound, rule = c.FindRule(\"HEAD\", \"/users/123/name\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\n\t\t\tfound, rule = c.FindRule(\"PATCH\", \"/users/123/name\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy2\"})\n\n\t\t\tfound, rule = c.FindRule(\"HEAD\", \"/a/b/c/d\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy8\"})\n\n\t\t\tfound, rule = c.FindRule(\"HEAD\", \"/a/x/c/d\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy8\"})\n\n\t\t\tfound, rule = c.FindRule(\"HEAD\", \"/a/b/x/e\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy9\"})\n\n\t\t\tfound, rule = c.FindRule(\"HEAD\", \"/a/b/x\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\n\t\t\tfound, rule = c.FindRule(\"HEAD\", \"/a/b/c/d/e/c/x\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy10\"})\n\n\t\t\tfound, rule = c.FindRule(\"HEAD\", \"/a/b/c/w\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy11\"})\n\n\t\t\tfound, rule = 
c.FindRule(\"HEAD\", \"/a/b/c/z\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy7\"})\n\n\t\t\tfound, rule = c.FindRule(\"HEAD\", \"/a/b/c/d/e/f/w\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy7\"})\n\n\t\t\tfound, rule = c.FindRule(\"HEAD\", \"/a/b/c/y/d/e/f/g/g\")\n\t\t\tSo(found, ShouldBeTrue)\n\t\t\tSo(rule, ShouldNotBeNil)\n\t\t\tSo(rule.ClaimMatchingRules, ShouldContain, []string{\"policy12\"})\n\t\t})\n\n\t\tConvey(\"When I search for bad URIs, I should get not found\", func() {\n\t\t\tfound, _ := c.Find(\"GET\", \"/users/123/name/targets\")\n\t\t\tSo(found, ShouldBeFalse)\n\t\t\tfound, _ = c.Find(\"PUT\", \"/users/name\")\n\t\t\tSo(found, ShouldBeFalse)\n\t\t\tfound, _ = c.Find(\"GET\", \"/v1/things/123\")\n\t\t\tSo(found, ShouldBeFalse)\n\t\t\tfound, _ = c.Find(\"GET\", \"/v1/v2/v3/v54/12312312/12321312/123123\")\n\t\t\tSo(found, ShouldBeFalse)\n\t\t\tfound, _ = c.Find(\"GET\", \"/someapi\")\n\t\t\tSo(found, ShouldBeFalse)\n\t\t\tfound, _ = c.Find(\"GET\", \"/\")\n\t\t\tSo(found, ShouldBeFalse)\n\t\t})\n\n\t\tConvey(\"Test performance\", func() {\n\t\t\tfor i := 0; i < 10000; i++ {\n\t\t\t\tfound, _ := c.Find(\"GET\", \"/users/123/name\")\n\t\t\t\tSo(found, ShouldBeTrue)\n\t\t\t}\n\t\t})\n\t})\n}\n\nfunc TestFindAndMatchScope(t *testing.T) {\n\tConvey(\"Given a valid API cache\", t, func() {\n\t\tc := NewAPICache(initTrieRules(), \"id\", false)\n\n\t\tConvey(\"When I search for rules matching scopes, it should return true\", func() {\n\t\t\tfound, _ := c.FindAndMatchScope(\"GET\", \"/users/bob/name\", []string{\"policy1\"})\n\t\t\tSo(found, ShouldBeTrue)\n\t\t})\n\n\t\tConvey(\"When I search for an invalid URI, it should return false\", func() {\n\t\t\tfound, _ := c.FindAndMatchScope(\"GET\", \"/this/doesnot/exist\", []string{\"policy1\"})\n\t\t\tSo(found, 
ShouldBeFalse)\n\t\t})\n\n\t\tConvey(\"When I search for a valid URI with non-matching scopes, it should return false\", func() {\n\t\t\tfound, _ := c.FindAndMatchScope(\"GET\", \"/users/bob/name\", []string{\"policy10\"})\n\t\t\tSo(found, ShouldBeFalse)\n\t\t})\n\n\t\tConvey(\"When I search for a public rule with bad scopes, it should always return true\", func() {\n\t\t\tfound, _ := c.FindAndMatchScope(\"POST\", \"/v1/things/something\", []string{\"policy10\"})\n\t\t\tSo(found, ShouldBeTrue)\n\t\t})\n\n\t\tConvey(\"When I search for a public rule with good scopes, it should always return true\", func() {\n\t\t\tfound, _ := c.FindAndMatchScope(\"POST\", \"/v1/things/something\", []string{\"policy3\"})\n\t\t\tSo(found, ShouldBeTrue)\n\t\t})\n\n\t\tConvey(\"When I search for a public rule with multiple scopes in an array, it should always return true\", func() {\n\t\t\tfound, _ := c.FindAndMatchScope(\"HEAD\", \"/a/b/c/y/z\", []string{\"policy12\", \"policy13\"})\n\t\t\tSo(found, ShouldBeTrue)\n\t\t})\n\n\t\tConvey(\"When I search for a public rule and need to match the right AND clause, it should return true\", func() {\n\t\t\tfound, _ := c.FindAndMatchScope(\"HEAD\", \"/a/b/c/y/z\", []string{\"policy14\", \"policy15\"})\n\t\t\tSo(found, ShouldBeTrue)\n\t\t})\n\n\t\tConvey(\"When I search for a public rule and the AND clause fails, it should return false\", func() {\n\t\t\tfound, _ := c.FindAndMatchScope(\"HEAD\", \"/a/b/c/y/z\", []string{\"policy1\", \"policy15\"})\n\t\t\tSo(found, ShouldBeFalse)\n\t\t})\n\n\t\tConvey(\"When I search for a public rule with several labels and the AND clause matches, it should return true\", func() {\n\t\t\tfound, _ := c.FindAndMatchScope(\"HEAD\", \"/a/b/c/y/z\", []string{\"x\", \"y\", \"z\", \"policy14\", \"a\", \"b\", \"c\", \"policy15\", \"k\", \"l\", \"m\"})\n\t\t\tSo(found, ShouldBeTrue)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/usertokens/common/common.go",
"content": "package common\n\nimport (\n\t\"reflect\"\n\t\"strconv\"\n)\n\n// JWTType is the type of user JWTs that must be implemented.\ntype JWTType int\n\n// Values of JWTType\nconst (\n\tPKI JWTType = iota\n\tOIDC\n)\n\n// convert an unknown int type to int64\nfunc toInt64(i interface{}) int64 {\n\tswitch i := i.(type) {\n\tcase int:\n\t\treturn int64(i)\n\tcase int8:\n\t\treturn int64(i)\n\tcase int16:\n\t\treturn int64(i)\n\tcase int32:\n\t\treturn int64(i)\n\tcase int64:\n\t\treturn i\n\tdefault:\n\t\tpanic(\"toInt64(): expected a signed integer type, got \" + reflect.TypeOf(i).String() + \" instead\")\n\t}\n}\n\n// convert an unknown uint type to uint64\nfunc toUint64(i interface{}) uint64 {\n\tswitch i := i.(type) {\n\tcase uint:\n\t\treturn uint64(i)\n\tcase uint8:\n\t\treturn uint64(i)\n\tcase uint16:\n\t\treturn uint64(i)\n\tcase uint32:\n\t\treturn uint64(i)\n\tcase uint64:\n\t\treturn i\n\tdefault:\n\t\tpanic(\"toUint64(): expected an unsigned integer type, got \" + reflect.TypeOf(i).String() + \" instead\")\n\t}\n}\n\n// FlattenClaim flattens all the generic claims into a flat array of strings.\nfunc FlattenClaim(key string, claim interface{}) []string {\n\tattributes := []string{}\n\n\tswitch claim := claim.(type) {\n\tcase bool:\n\t\tattributes = append(attributes, key+\"=\"+strconv.FormatBool(claim))\n\tcase int, int8, int16, int32, int64:\n\t\tattributes = append(attributes, key+\"=\"+strconv.FormatInt(toInt64(claim), 10))\n\tcase uint, uint8, uint16, uint32, uint64:\n\t\tattributes = append(attributes, key+\"=\"+strconv.FormatUint(toUint64(claim), 10))\n\tcase float32:\n\t\tattributes = append(attributes, key+\"=\"+strconv.FormatFloat(float64(claim), 'G', -1, 32))\n\tcase float64:\n\t\tattributes = append(attributes, key+\"=\"+strconv.FormatFloat(claim, 'G', -1, 64))\n\tcase string:\n\t\tattributes = append(attributes, key+\"=\"+claim)\n\tcase []string:\n\t\tfor _, data := range claim {\n\t\t\tattributes = append(attributes, 
key+\"=\"+data)\n\t\t}\n\tcase map[string]interface{}:\n\t\tfor ikey, ivalue := range claim {\n\t\t\tfor _, v := range FlattenClaim(ikey, ivalue) {\n\t\t\t\tattributes = append(attributes, key+\":\"+v)\n\t\t\t}\n\t\t}\n\tcase []interface{}:\n\t\tfor _, value := range claim {\n\t\t\tattributes = append(attributes, FlattenClaim(key, value)...)\n\t\t}\n\tdefault:\n\t\t// do nothing, just return attributes\n\t}\n\treturn attributes\n}\n"
  },
  {
    "path": "controller/pkg/usertokens/common/common_test.go",
    "content": "package common\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/assertions\"\n)\n\nfunc TestFlattenClaim(t *testing.T) {\n\ttype args struct {\n\t\tkey   string\n\t\tclaim interface{}\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twant    []string\n\t\tnotwant []string\n\t}{\n\t\t{\n\t\t\tname: \"test-slice\",\n\t\t\targs: args{\n\t\t\t\tkey:   \"slicekey\",\n\t\t\t\tclaim: []interface{}{\"claim1\", \"claim2\"},\n\t\t\t},\n\t\t\twant: []string{\"slicekey=claim1\", \"slicekey=claim2\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-string\",\n\t\t\targs: args{key: \"testkey1\", claim: \"testclaim1\"},\n\t\t\twant: []string{\"testkey1=testclaim1\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-empty-string\",\n\t\t\targs: args{key: \"testkey1\", claim: \"\"},\n\t\t\twant: []string{\"testkey1=\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-bool-true\",\n\t\t\targs: args{key: \"key\", claim: true},\n\t\t\twant: []string{\"key=true\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-bool-false\",\n\t\t\targs: args{key: \"key\", claim: false},\n\t\t\twant: []string{\"key=false\"},\n\t\t},\n\t\t{\n\t\t\tname:    \"test-bool-not-true\",\n\t\t\targs:    args{key: \"key\", claim: true},\n\t\t\tnotwant: []string{\"key=false\"},\n\t\t},\n\t\t{\n\t\t\tname:    \"test-bool-not-false\",\n\t\t\targs:    args{key: \"key\", claim: false},\n\t\t\tnotwant: []string{\"key=true\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-string-slice-succeed\",\n\t\t\targs: args{key: \"key\", claim: []string{\"claim1\", \"claim2\"}},\n\t\t\twant: []string{\"key=claim1\", \"key=claim2\"},\n\t\t},\n\t\t{\n\t\t\tname:    \"test-string-slice-fail\",\n\t\t\targs:    args{key: \"key\", claim: []string{\"claim1\", \"claim2\"}},\n\t\t\tnotwant: []string{\"key=claim3\", \"key=claim4\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-map-string-string\",\n\t\t\targs: args{key: \"testkey\", claim: map[string]interface{}{\"key1\": \"value1\", \"key2\": \"value2\"}},\n\t\t\twant: []string{\"testkey:key1=value1\", 
\"testkey:key2=value2\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-map-string-int\",\n\t\t\targs: args{key: \"testkey\", claim: map[string]interface{}{\"key1\": 1, \"key2\": 2}},\n\t\t\twant: []string{\"testkey:key1=1\", \"testkey:key2=2\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-map-string-bool\",\n\t\t\targs: args{key: \"testkey\", claim: map[string]interface{}{\"key1\": true, \"key2\": false}},\n\t\t\twant: []string{\"testkey:key1=true\", \"testkey:key2=false\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-map-string-slice\",\n\t\t\targs: args{key: \"rootkey\", claim: map[string]interface{}{\"key1\": []string{\"val1\", \"val2\"}}},\n\t\t\twant: []string{\"rootkey:key1=val1\", \"rootkey:key1=val2\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-map-string-map\",\n\t\t\targs: args{\n\t\t\t\tkey: \"rootkey\",\n\t\t\t\tclaim: map[string]interface{}{\n\t\t\t\t\t\"level1\": map[string]interface{}{\n\t\t\t\t\t\t\"key1\": \"val1\",\n\t\t\t\t\t},\n\t\t\t\t\t\"level2\": map[string]interface{}{\n\t\t\t\t\t\t\"key2\": \"val2\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: []string{\"rootkey:level1:key1=val1\", \"rootkey:level2:key2=val2\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-positive-int\",\n\t\t\targs: args{key: \"key\", claim: int(1)},\n\t\t\twant: []string{\"key=1\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-positive-int8\",\n\t\t\targs: args{key: \"key\", claim: int8(1)},\n\t\t\twant: []string{\"key=1\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-positive-int16\",\n\t\t\targs: args{key: \"key\", claim: int16(1)},\n\t\t\twant: []string{\"key=1\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-positive-int32\",\n\t\t\targs: args{key: \"key\", claim: int32(1)},\n\t\t\twant: []string{\"key=1\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-positive-int64\",\n\t\t\targs: args{key: \"key\", claim: int64(1)},\n\t\t\twant: []string{\"key=1\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-negative-int\",\n\t\t\targs: args{key: \"key\", claim: int(-1)},\n\t\t\twant: []string{\"key=-1\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-uint\",\n\t\t\targs: 
args{key: \"key\", claim: uint(1)},\n\t\t\twant: []string{\"key=1\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-uint8\",\n\t\t\targs: args{key: \"key\", claim: uint8(1)},\n\t\t\twant: []string{\"key=1\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-uint16\",\n\t\t\targs: args{key: \"key\", claim: uint16(1)},\n\t\t\twant: []string{\"key=1\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-uint32\",\n\t\t\targs: args{key: \"key\", claim: uint32(1)},\n\t\t\twant: []string{\"key=1\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-uint64\",\n\t\t\targs: args{key: \"key\", claim: uint64(1)},\n\t\t\twant: []string{\"key=1\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-float32-small\",\n\t\t\targs: args{key: \"key\", claim: float32(3.14)},\n\t\t\twant: []string{\"key=3.14\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-negative-float32-small\",\n\t\t\targs: args{key: \"key\", claim: float32(-3.14)},\n\t\t\twant: []string{\"key=-3.14\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-float32-rounding\",\n\t\t\targs: args{key: \"key\", claim: float32(3.141592654)},\n\t\t\twant: []string{\"key=3.1415927\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-negative-float32-rounding\",\n\t\t\targs: args{key: \"key\", claim: float32(-3.141592654)},\n\t\t\twant: []string{\"key=-3.1415927\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-float32-exponent\",\n\t\t\targs: args{key: \"key\", claim: float32(3.141592654e22)},\n\t\t\twant: []string{\"key=3.1415927E+22\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-float32-negative-exponent\",\n\t\t\targs: args{key: \"key\", claim: float32(3.141592654e-22)},\n\t\t\twant: []string{\"key=3.1415927E-22\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-negative-float32-exponent\",\n\t\t\targs: args{key: \"key\", claim: float32(-3.141592654e22)},\n\t\t\twant: []string{\"key=-3.1415927E+22\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-negative-float32-negative-exponent\",\n\t\t\targs: args{key: \"key\", claim: float32(-3.141592654e-22)},\n\t\t\twant: []string{\"key=-3.1415927E-22\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-float64-small\",\n\t\t\targs: args{key: 
\"key\", claim: float64(3.14)},\n\t\t\twant: []string{\"key=3.14\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-negative-float64-small\",\n\t\t\targs: args{key: \"key\", claim: float64(-3.14)},\n\t\t\twant: []string{\"key=-3.14\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-float64-rounding\",\n\t\t\targs: args{key: \"key\", claim: float64(1.412135623730950488)},\n\t\t\twant: []string{\"key=1.4121356237309506\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-negative-float64-rounding\",\n\t\t\targs: args{key: \"key\", claim: float64(-1.412135623730950488)},\n\t\t\twant: []string{\"key=-1.4121356237309506\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-float64-exponent\",\n\t\t\targs: args{key: \"key\", claim: float64(1.41213562373095e22)},\n\t\t\twant: []string{\"key=1.41213562373095E+22\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-float64-negative-exponent\",\n\t\t\targs: args{key: \"key\", claim: float64(1.41213562373095e-22)},\n\t\t\twant: []string{\"key=1.41213562373095E-22\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-negative-float64-exponent\",\n\t\t\targs: args{key: \"key\", claim: float64(-1.41213562373095e22)},\n\t\t\twant: []string{\"key=-1.41213562373095E+22\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-negative-float64-negative-exponent\",\n\t\t\targs: args{key: \"key\", claim: float64(-1.41213562373095e-22)},\n\t\t\twant: []string{\"key=-1.41213562373095E-22\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-map-string-float\",\n\t\t\targs: args{key: \"test\", claim: map[string]interface{}{\"key1\": 3.1415, \"key2\": -1.21e+33}},\n\t\t\twant: []string{\"test:key1=3.1415\", \"test:key2=-1.21E+33\"},\n\t\t},\n\t\t{\n\t\t\tname: \"test-map-string-mixed\",\n\t\t\targs: args{key: \"test\", claim: map[string]interface{}{\"key1\": true, \"key2\": -1.21e+33, \"key3\": \"val3\"}},\n\t\t\twant: []string{\"test:key1=true\", \"test:key2=-1.21E+33\", \"test:key3=val3\"},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot := FlattenClaim(tt.args.key, tt.args.claim)\n\t\t\tt.Logf(\"test: 
%s, got: %v\\n\", tt.name, got)\n\t\t\tfor _, want := range tt.want {\n\t\t\t\tif ok, errStr := So(got, ShouldContain, want); !ok {\n\t\t\t\t\tt.Errorf(\"%s\", errStr)\n\t\t\t\t}\n\t\t\t}\n\t\t\tfor _, notwant := range tt.notwant {\n\t\t\t\tif ok, errStr := So(got, ShouldNotContain, notwant); !ok {\n\t\t\t\t\tt.Errorf(\"%s\", errStr)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_toInt64(t *testing.T) {\n\ttype args struct {\n\t\ti interface{}\n\t}\n\ttests := []struct {\n\t\tname      string\n\t\targs      args\n\t\twant      int64\n\t\twantPanic bool\n\t}{\n\t\t{\n\t\t\tname: \"test-toInt64-int\",\n\t\t\targs: args{i: int(1)},\n\t\t\twant: int64(1),\n\t\t},\n\t\t{\n\t\t\tname: \"test-toInt64-int8\",\n\t\t\targs: args{i: int8(1)},\n\t\t\twant: int64(1),\n\t\t},\n\t\t{\n\t\t\tname: \"test-toInt64-int16\",\n\t\t\targs: args{i: int16(1)},\n\t\t\twant: int64(1),\n\t\t},\n\t\t{\n\t\t\tname: \"test-toInt64-int32\",\n\t\t\targs: args{i: int32(1)},\n\t\t\twant: int64(1),\n\t\t},\n\t\t{\n\t\t\tname: \"test-toInt64-int64\",\n\t\t\targs: args{i: int64(1)},\n\t\t\twant: int64(1),\n\t\t},\n\t\t{\n\t\t\tname:      \"test-toInt64-string\",\n\t\t\targs:      args{i: \"a string\"},\n\t\t\twant:      int64(1),\n\t\t\twantPanic: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tdefer func() {\n\t\t\t\tr := recover()\n\t\t\t\tif (r != nil) != tt.wantPanic {\n\t\t\t\t\tt.Errorf(\"toInt64() recover = %v, wantPanic = %v\", r, tt.wantPanic)\n\t\t\t\t}\n\t\t\t}()\n\t\t\tif got := toInt64(tt.args.i); got != tt.want {\n\t\t\t\tt.Errorf(\"toInt64() = %v, want %v\", got, tt.want)\n\t\t\t} else {\n\t\t\t\tt.Logf(\"toInt64(): test: %s PASS, want %v, got: %v\\n\", tt.name, tt.want, got)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_toUint64(t *testing.T) {\n\ttype args struct {\n\t\ti interface{}\n\t}\n\ttests := []struct {\n\t\tname      string\n\t\targs      args\n\t\twant      uint64\n\t\twantPanic bool\n\t}{\n\t\t{\n\t\t\tname: 
\"test-toUint64-uint\",\n\t\t\targs: args{i: uint(1)},\n\t\t\twant: uint64(1),\n\t\t},\n\t\t{\n\t\t\tname: \"test-toUint64-uint8\",\n\t\t\targs: args{i: uint8(1)},\n\t\t\twant: uint64(1),\n\t\t},\n\t\t{\n\t\t\tname: \"test-toUint64-uint16\",\n\t\t\targs: args{i: uint16(1)},\n\t\t\twant: uint64(1),\n\t\t},\n\t\t{\n\t\t\tname: \"test-toUint64-uint32\",\n\t\t\targs: args{i: uint32(1)},\n\t\t\twant: uint64(1),\n\t\t},\n\t\t{\n\t\t\tname: \"test-toUint64-uint64\",\n\t\t\targs: args{i: uint64(1)},\n\t\t\twant: uint64(1),\n\t\t},\n\t\t{\n\t\t\tname:      \"test-toUint64-string\",\n\t\t\targs:      args{i: \"a string\"},\n\t\t\twant:      uint64(1),\n\t\t\twantPanic: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tdefer func() {\n\t\t\t\tr := recover()\n\t\t\t\tif (r != nil) != tt.wantPanic {\n\t\t\t\t\tt.Errorf(\"toUint64() recover = %v, wantPanic = %v\", r, tt.wantPanic)\n\t\t\t\t}\n\t\t\t}()\n\t\t\tif got := toUint64(tt.args.i); got != tt.want {\n\t\t\t\tt.Errorf(\"toUint64() = %v, want %v\", got, tt.want)\n\t\t\t} else {\n\t\t\t\tt.Logf(\"toUint64(): test: %s PASS, want %v, got: %v\\n\", tt.name, tt.want, got)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "controller/pkg/usertokens/mockusertokens/mockusertokens.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: controller/pkg/usertokens/usertokens.go\n\n// Package mockusertokens is a generated GoMock package.\npackage mockusertokens\n\nimport (\n\tcontext \"context\"\n\turl \"net/url\"\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tcommon \"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/usertokens/common\"\n)\n\n// MockVerifier is a mock of Verifier interface\n// nolint\ntype MockVerifier struct {\n\tctrl     *gomock.Controller\n\trecorder *MockVerifierMockRecorder\n}\n\n// MockVerifierMockRecorder is the mock recorder for MockVerifier\n// nolint\ntype MockVerifierMockRecorder struct {\n\tmock *MockVerifier\n}\n\n// NewMockVerifier creates a new mock instance\n// nolint\nfunc NewMockVerifier(ctrl *gomock.Controller) *MockVerifier {\n\tmock := &MockVerifier{ctrl: ctrl}\n\tmock.recorder = &MockVerifierMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockVerifier) EXPECT() *MockVerifierMockRecorder {\n\treturn m.recorder\n}\n\n// VerifierType mocks base method\n// nolint\nfunc (m *MockVerifier) VerifierType() common.JWTType {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VerifierType\")\n\tret0, _ := ret[0].(common.JWTType)\n\treturn ret0\n}\n\n// VerifierType indicates an expected call of VerifierType\n// nolint\nfunc (mr *MockVerifierMockRecorder) VerifierType() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VerifierType\", reflect.TypeOf((*MockVerifier)(nil).VerifierType))\n}\n\n// Validate mocks base method\n// nolint\nfunc (m *MockVerifier) Validate(ctx context.Context, token string) ([]string, bool, string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Validate\", ctx, token)\n\tret0, _ := ret[0].([]string)\n\tret1, _ := ret[1].(bool)\n\tret2, _ := ret[2].(string)\n\tret3, _ := ret[3].(error)\n\treturn ret0, ret1, 
ret2, ret3\n}\n\n// Validate indicates an expected call of Validate\n// nolint\nfunc (mr *MockVerifierMockRecorder) Validate(ctx, token interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Validate\", reflect.TypeOf((*MockVerifier)(nil).Validate), ctx, token)\n}\n\n// Callback mocks base method\n// nolint\nfunc (m *MockVerifier) Callback(ctx context.Context, u *url.URL) (string, string, int, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Callback\", ctx, u)\n\tret0, _ := ret[0].(string)\n\tret1, _ := ret[1].(string)\n\tret2, _ := ret[2].(int)\n\tret3, _ := ret[3].(error)\n\treturn ret0, ret1, ret2, ret3\n}\n\n// Callback indicates an expected call of Callback\n// nolint\nfunc (mr *MockVerifierMockRecorder) Callback(ctx, u interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Callback\", reflect.TypeOf((*MockVerifier)(nil).Callback), ctx, u)\n}\n\n// IssueRedirect mocks base method\n// nolint\nfunc (m *MockVerifier) IssueRedirect(arg0 string) string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"IssueRedirect\", arg0)\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// IssueRedirect indicates an expected call of IssueRedirect\n// nolint\nfunc (mr *MockVerifierMockRecorder) IssueRedirect(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"IssueRedirect\", reflect.TypeOf((*MockVerifier)(nil).IssueRedirect), arg0)\n}\n"
  },
  {
    "path": "controller/pkg/usertokens/oidc/oidc.go",
"content": "package oidc\n\nimport (\n\t\"context\"\n\t\"crypto/rand\"\n\t\"crypto/sha1\"\n\t\"encoding/base64\"\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/bluele/gcache\"\n\toidc \"github.com/coreos/go-oidc\"\n\t\"github.com/rs/xid\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/usertokens/common\"\n\t\"golang.org/x/oauth2\"\n)\n\nvar (\n\t// We maintain two caches. The first maintains the set of states that\n\t// we issue the redirect requests with. This helps us validate the\n\t// callbacks and verify the state to avoid any cross-origin violations.\n\t// Currently providing 120 seconds for the user to authenticate.\n\tstateCache gcache.Cache\n\t// The second cache will maintain the validations of the tokens so that\n\t// we don't go to the authorizer for every request.\n\ttokenCache gcache.Cache\n)\n\n// clientData is the state maintained for a client to improve response\n// times and hold the refresh tokens.\ntype clientData struct {\n\tattributes  []string\n\ttokenSource oauth2.TokenSource\n\texpiry      time.Time\n\tsync.Mutex\n}\n\n// TokenVerifier is an OIDC validator.\ntype TokenVerifier struct {\n\tProviderURL    string\n\tClientID       string\n\tClientSecret   string\n\tScopes         []string\n\tRedirectURL    string\n\tNonceSize      int\n\tCookieDuration time.Duration\n\tclientConfig   *oauth2.Config\n\toauthVerifier  *oidc.IDTokenVerifier\n\tgoogleHack     bool\n}\n\n// NewClient creates a new validator client\nfunc NewClient(ctx context.Context, v *TokenVerifier) (*TokenVerifier, error) {\n\t// Initialize caches only once if they are nil.\n\tif stateCache == nil {\n\t\tstateCache = gcache.New(2048).LRU().Expiration(120 * time.Second).Build()\n\t}\n\tif tokenCache == nil {\n\t\ttokenCache = gcache.New(2048).LRU().Build()\n\t}\n\n\t// Create a new generic OIDC provider based on the provider URL.\n\t// The library will auto-discover the configuration of the provider.\n\t// If it is 
not a compliant provider we should report an error here.\n\tprovider, err := oidc.NewProvider(ctx, v.ProviderURL)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"Failed to initialize provider: %s\", err)\n\t}\n\n\toidConfig := &oidc.Config{\n\t\tClientID:          v.ClientID,\n\t\tSkipClientIDCheck: true,\n\t}\n\tv.oauthVerifier = provider.Verifier(oidConfig)\n\tscopes := []string{oidc.ScopeOpenID, \"profile\", \"email\"}\n\tfor _, scope := range v.Scopes {\n\t\tif scope != oidc.ScopeOpenID && scope != \"profile\" && scope != \"email\" {\n\t\t\tscopes = append(scopes, scope)\n\t\t}\n\t}\n\n\tv.clientConfig = &oauth2.Config{\n\t\tClientID:     v.ClientID,\n\t\tClientSecret: v.ClientSecret,\n\t\tEndpoint:     provider.Endpoint(),\n\t\tRedirectURL:  v.RedirectURL,\n\t\tScopes:       scopes,\n\t}\n\n\t// Google does not honor the OIDC standard to refresh tokens\n\t// with a proper scope. Instead it requires a prompt parameter\n\t// to be passed. In order to deal with this, we will have to\n\t// detect Google as the OIDC provider and pass the parameters.\n\tif strings.Contains(v.ProviderURL, \"accounts.google.com\") {\n\t\tv.googleHack = true\n\t}\n\n\treturn v, nil\n}\n\n// IssueRedirect creates the redirect URL. The URI is created by the provider\n// and it includes a state that is random. The state will be remembered\n// for the return. There is an assumption here that the LBs in front of\n// applications are sticky or the TCP session is re-used. 
Otherwise, we will\n// need a global state that could introduce additional calls to a central\n// system.\n// TODO: add support for a global state.\nfunc (v *TokenVerifier) IssueRedirect(originURL string) string {\n\tstate, err := randomSha1(v.NonceSize)\n\tif err != nil {\n\t\tstate = xid.New().String()\n\t}\n\tif err := stateCache.Set(state, originURL); err != nil {\n\t\treturn \"\"\n\t}\n\n\tredirectURL := v.clientConfig.AuthCodeURL(state, oauth2.AccessTypeOffline)\n\tif v.googleHack {\n\t\tredirectURL = redirectURL + \"&prompt=consent\"\n\t}\n\n\treturn redirectURL\n}\n\n// Callback is the function that is called back by the IDP to catch the token\n// and perform all other validations. It will return the resulting token,\n// the original URL that was called to initiate the protocol, and the\n// http status response.\nfunc (v *TokenVerifier) Callback(ctx context.Context, u *url.URL) (string, string, int, error) {\n\n\t// We first validate that the callback state matches the original redirect\n\t// state. We clean up the cache once it is validated. During this process\n\t// we recover the original URL that initiated the protocol. This allows\n\t// us to redirect the client to their original request.\n\treceivedState := u.Query().Get(\"state\")\n\toriginURL, err := stateCache.Get(receivedState)\n\tif err != nil {\n\t\treturn \"\", \"\", http.StatusBadRequest, fmt.Errorf(\"bad state\")\n\t}\n\tstateCache.Remove(receivedState)\n\n\t// We exchange the authorization code with an OAUTH token. 
This is the main\n\t// step where the OAUTH provider will match the code to the token.\n\toauth2Token, err := v.clientConfig.Exchange(ctx, u.Query().Get(\"code\"), oauth2.AccessTypeOffline)\n\tif err != nil {\n\t\treturn \"\", \"\", http.StatusInternalServerError, fmt.Errorf(\"bad code: %s\", err)\n\t}\n\n\t// We extract the rawID token.\n\trawIDToken, ok := oauth2Token.Extra(\"id_token\").(string)\n\tif !ok {\n\t\treturn \"\", \"\", http.StatusInternalServerError, fmt.Errorf(\"bad ID\")\n\t}\n\n\tif err := tokenCache.SetWithExpire(\n\t\trawIDToken,\n\t\t&clientData{\n\t\t\ttokenSource: v.clientConfig.TokenSource(ctx, oauth2Token),\n\t\t\texpiry:      oauth2Token.Expiry,\n\t\t},\n\t\ttime.Until(oauth2Token.Expiry.Add(3600*time.Second)),\n\t); err != nil {\n\t\treturn \"\", \"\", http.StatusInternalServerError, fmt.Errorf(\"failed to insert token in the cache: %s\", err)\n\t}\n\n\treturn rawIDToken, originURL.(string), http.StatusTemporaryRedirect, nil\n}\n\n// Validate checks if the token is valid and returns the claims. The validator\n// maintains an internal cache with tokens to accelerate performance. If the\n// token is not in the cache, it will validate it with the central authorizer.\nfunc (v *TokenVerifier) Validate(ctx context.Context, token string) ([]string, bool, string, error) {\n\n\tif len(token) == 0 {\n\t\treturn []string{}, true, token, fmt.Errorf(\"invalid token presented\")\n\t}\n\n\tvar tokenData *clientData\n\n\t// If it is not found in the cache initiate a call back process.\n\tdata, err := tokenCache.Get(token)\n\tif err == nil {\n\t\tvar ok bool\n\t\ttokenData, ok = data.(*clientData)\n\t\tif !ok {\n\t\t\treturn nil, true, token, fmt.Errorf(\"internal server error\")\n\t\t}\n\n\t\t// If the cached token hasn't expired yet, we can just accept it and not\n\t\t// go through a whole verification process. 
Nothing new.\n\t\tif tokenData.expiry.After(time.Now()) && len(tokenData.attributes) > 0 {\n\t\t\treturn tokenData.attributes, false, token, nil\n\t\t}\n\t} else { // No token in the cache. Let's try to see if it is valid and we can cache it now.\n\t\t//\n\t\ttokenData = &clientData{}\n\t}\n\n\t// The token has expired. Let's try to refresh it.\n\ttokenData.Lock()\n\tdefer tokenData.Unlock()\n\n\t// If it is the first time we are verifying the token, let's do\n\t// it now. This is possible if the token was created earlier\n\t// but we never had a chance to verify it. In this case, the\n\t// attributes were empty.\n\tidToken, err := v.oauthVerifier.Verify(ctx, token)\n\tif err != nil {\n\t\tvar ok bool\n\t\t// Token is expired. Let's try to refresh it if we have something\n\t\t// in the cache. If we don't have a refresh token, we reject it\n\t\t// and ask the client to validate again.\n\t\tif tokenData.tokenSource == nil {\n\t\t\treturn []string{}, true, token, fmt.Errorf(\"no cached data and expired token - request authorization: %s\", err)\n\t\t}\n\t\trefreshedToken, err := tokenData.tokenSource.Token()\n\t\tif err != nil {\n\t\t\treturn []string{}, true, token, fmt.Errorf(\"token validation failed and cannot refresh: %s\", err)\n\t\t}\n\t\ttoken, ok = refreshedToken.Extra(\"id_token\").(string)\n\t\tif !ok {\n\t\t\treturn []string{}, true, token, fmt.Errorf(\"failed to find id_token - initiate re-authorization\")\n\t\t}\n\t\tidToken, err = v.oauthVerifier.Verify(ctx, token)\n\t\tif err != nil {\n\t\t\treturn []string{}, true, token, fmt.Errorf(\"invalid token derived from refresh - manual authorization is required: %s\", err)\n\t\t}\n\t}\n\n\t// Get the claims out of the token. Use the standard data structure for\n\t// this and ignore the other fields. 
We are only interested in the ID.\n\tresp := struct {\n\t\tIDTokenClaims map[string]interface{} // ID Token payload is just JSON.\n\t}{map[string]interface{}{}}\n\tif err := idToken.Claims(&resp.IDTokenClaims); err != nil {\n\t\treturn []string{}, true, token, fmt.Errorf(\"unable to process claims: %s\", err)\n\t}\n\n\t// Flatten the claims in a generic format.\n\tattributes := []string{}\n\tfor k, v := range resp.IDTokenClaims {\n\t\tattributes = append(attributes, common.FlattenClaim(k, v)...)\n\t}\n\n\ttokenData.attributes = attributes\n\ttokenData.expiry = idToken.Expiry\n\n\t// Cache the token and attributes to avoid multiple validations and update the\n\t// expiration time.\n\tif err := tokenCache.SetWithExpire(token, tokenData, time.Until(idToken.Expiry.Add(3600*time.Second))); err != nil {\n\t\treturn []string{}, false, token, fmt.Errorf(\"cannot cache token: %s\", err)\n\t}\n\n\treturn attributes, false, token, nil\n}\n\n// VerifierType returns the type of the TokenVerifier.\nfunc (v *TokenVerifier) VerifierType() common.JWTType {\n\treturn common.OIDC\n}\n\nfunc randomSha1(nonceSourceSize int) (string, error) {\n\tnonceSource := make([]byte, nonceSourceSize)\n\tif _, err := rand.Read(nonceSource); err != nil {\n\t\treturn \"\", err\n\t}\n\tsha := sha1.Sum(nonceSource)\n\treturn base64.StdEncoding.EncodeToString(sha[:]), nil\n}\n"
  },
  {
    "path": "controller/pkg/usertokens/pkitokens/jwt.go",
    "content": "package pkitokens\n\nimport (\n\t\"context\"\n\t\"crypto\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"net/url\"\n\n\tjwt \"github.com/dgrijalva/jwt-go\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/usertokens/common\"\n)\n\n// PKIJWTVerifier is a generic JWT PKI verifier. It assumes that the tokens have\n// been signed by a private key, and it validates them with the provide public key.\n// This is a simple and stateless verifier that doesn't depend on central server\n// for validating the tokens. The public key is provided out-of-band.\ntype PKIJWTVerifier struct {\n\tJWTCertPEM  []byte\n\tkeys        []crypto.PublicKey\n\tRedirectURL string\n}\n\n// NewVerifierFromFile assumes that the input is provided as file path.\nfunc NewVerifierFromFile(jwtcertPath string, redirectURI string, redirectOnFail, redirectOnNoToken bool) (*PKIJWTVerifier, error) {\n\tjwtCertPEM, err := ioutil.ReadFile(jwtcertPath)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to read JWT signing certificates or public keys from file: %s\", err)\n\t}\n\treturn NewVerifierFromPEM(jwtCertPEM, redirectURI, redirectOnFail, redirectOnNoToken)\n}\n\n// NewVerifierFromPEM assumes that the input is a PEM byte array.\nfunc NewVerifierFromPEM(jwtCertPEM []byte, redirectURI string, redirectOnFail, redirectOnNoToken bool) (*PKIJWTVerifier, error) {\n\tkeys, err := parsePublicKeysFromPEM(jwtCertPEM)\n\t// pay attention to the return format of parsePublicKeysFromPEM\n\t// when checking for an error here\n\tif keys == nil && err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to read JWT signing certificates or public keys from PEM: %s\", err)\n\t}\n\treturn &PKIJWTVerifier{\n\t\tJWTCertPEM:  jwtCertPEM,\n\t\tkeys:        keys,\n\t\tRedirectURL: redirectURI,\n\t}, nil\n}\n\n// NewVerifier creates a new verifier from the provided configuration.\nfunc NewVerifier(v *PKIJWTVerifier) (*PKIJWTVerifier, error) {\n\tif len(v.JWTCertPEM) == 0 {\n\t\treturn v, nil\n\t}\n\tkeys, err := 
parsePublicKeysFromPEM(v.JWTCertPEM)\n\t// pay attention to the return format of parsePublicKeysFromPEM\n\t// when checking for an error here\n\tif keys == nil && err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to parse JWT signing certificates or public keys from PEM: %s\", err)\n\t}\n\tv.keys = keys\n\treturn v, nil\n}\n\n// Validate parses a generic JWT token and flattens the claims in a normalized form. It\n// assumes that any of the JWT signing certs or public keys will validate the token.\nfunc (j *PKIJWTVerifier) Validate(ctx context.Context, tokenString string) ([]string, bool, string, error) {\n\tif len(tokenString) == 0 {\n\t\treturn []string{}, false, tokenString, fmt.Errorf(\"Empty token\")\n\t}\n\tif len(j.keys) == 0 {\n\t\treturn []string{}, false, tokenString, fmt.Errorf(\"No public keys loaded into verifier\")\n\t}\n\n\t// iterate over all public keys that we have and try to validate the token\n\t// the first one to succeed will be used\n\tvar errs []error\n\tfor _, key := range j.keys {\n\t\tclaims := &jwt.MapClaims{}\n\t\ttoken, err := jwt.ParseWithClaims(tokenString, claims, func(token *jwt.Token) (interface{}, error) {\n\t\t\tswitch token.Method.(type) {\n\t\t\tcase *jwt.SigningMethodECDSA:\n\t\t\t\tif isECDSAPublicKey(key) {\n\t\t\t\t\treturn key, nil\n\t\t\t\t}\n\t\t\tcase *jwt.SigningMethodRSA:\n\t\t\t\tif isRSAPublicKey(key) {\n\t\t\t\t\treturn key, nil\n\t\t\t\t}\n\t\t\tdefault:\n\t\t\t\treturn nil, fmt.Errorf(\"unsupported signing method '%T'\", token.Method)\n\t\t\t}\n\t\t\treturn nil, fmt.Errorf(\"signing method '%T' and public key type '%T' mismatch\", token.Method, key)\n\t\t})\n\n\t\t// cover all error cases after parsing/verifying\n\t\tif err != nil {\n\t\t\terrs = append(errs, err)\n\t\t\tcontinue\n\t\t}\n\t\tif token == nil {\n\t\t\terrs = append(errs, fmt.Errorf(\"no token was parsed\"))\n\t\t\tcontinue\n\t\t}\n\t\tif !token.Valid {\n\t\t\terrs = append(errs, fmt.Errorf(\"token failed to verify against public 
key\"))\n\t\t\tcontinue\n\t\t}\n\n\t\t// return successful on match/verification with the first key\n\t\tattributes := []string{}\n\t\tfor k, v := range *claims {\n\t\t\tattributes = append(attributes, common.FlattenClaim(k, v)...)\n\t\t}\n\t\treturn attributes, false, tokenString, nil\n\t}\n\n\t// generate a detailed error\n\tvar detailedError string\n\tfor i, err := range errs {\n\t\tdetailedError += err.Error()\n\t\tif i+1 < len(errs) {\n\t\t\tdetailedError += \"; \"\n\t\t}\n\t}\n\treturn []string{}, false, tokenString, fmt.Errorf(\"Invalid token - errors: [%s]\", detailedError)\n}\n\n// VerifierType returns the type of the verifier.\nfunc (j *PKIJWTVerifier) VerifierType() common.JWTType {\n\treturn common.PKI\n}\n\n// Callback is called by an IDP. Not implemented here. No central authorizer for the tokens.\nfunc (j *PKIJWTVerifier) Callback(ctx context.Context, u *url.URL) (string, string, int, error) {\n\treturn \"\", \"\", 0, nil\n}\n\n// IssueRedirect issues a redirect. Not implemented. There is no need for a redirect.\nfunc (j *PKIJWTVerifier) IssueRedirect(originURL string) string {\n\treturn \"\"\n}\n"
  },
  {
    "path": "controller/pkg/usertokens/pkitokens/jwt_test.go",
    "content": "// +build !windows\n\npackage pkitokens\n\nimport (\n\t\"context\"\n\t\"crypto\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestPKIVerifierValidate(t *testing.T) {\n\n\tctx := context.TODO()\n\tpemBytes := []byte(`\n-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyjDEPJD1Fv1IJIq4mnec\noMlSve0vZOTuzDmKuMB4vfBXalKZgbp4ONL+BvWV9OPs22Smv9SAfnoQ25q8Q9so\nihzUKhaIAY2CI70ll4exbLK9FD4uTi1bqn0FdIh04UIyW6s2EqTGMkSKx9THNvAM\nKx++pPt3US2sQVEC24bWPxRN7RsBBpRjoiEamkA04ioGFhMBbas5MdCLt/fd92aR\nQCBISOb6PU08fQiARK8g/wdpBUTxy9/Ud1vUnNaZtWm+eLrwdTXgHM3/LG1M4lc0\nZqHIL3rMxhae5W+j3SL3ApreiUYugv/0bCSypvJZjEXKS7SBR/+rtw0/mQpS8DpI\nkwIDAQAB\n-----END PUBLIC KEY-----\n-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEnlK01BDTYbvRBxGM0o3vXNqqvI25\neZ/s3Cq9OXnNpoCI3/DH/tuD3n7cnWcNSfl1qJIH2LVZ0cWUW/L/9i/jPA==\n-----END PUBLIC KEY-----\n\t`)\n\n\t// Here are the private keys in case you want to regenerate a new token:\n\t//\n\t// -----BEGIN RSA PRIVATE KEY-----\n\t// MIIEpAIBAAKCAQEAyjDEPJD1Fv1IJIq4mnecoMlSve0vZOTuzDmKuMB4vfBXalKZ\n\t// gbp4ONL+BvWV9OPs22Smv9SAfnoQ25q8Q9soihzUKhaIAY2CI70ll4exbLK9FD4u\n\t// Ti1bqn0FdIh04UIyW6s2EqTGMkSKx9THNvAMKx++pPt3US2sQVEC24bWPxRN7RsB\n\t// BpRjoiEamkA04ioGFhMBbas5MdCLt/fd92aRQCBISOb6PU08fQiARK8g/wdpBUTx\n\t// y9/Ud1vUnNaZtWm+eLrwdTXgHM3/LG1M4lc0ZqHIL3rMxhae5W+j3SL3ApreiUYu\n\t// gv/0bCSypvJZjEXKS7SBR/+rtw0/mQpS8DpIkwIDAQABAoIBAQCWkraxfCpp0nn1\n\t// bLGJp2Ynf4Z1Frvi4XLM+FVMvVmt6dzPu2/CYsHBX6/6Ms5YL51mzZA47+I5TmJb\n\t// iOKHjiCkqk9+gIUM0vuF7giezljdYEbbWmtVoQXQ84YqgKy6THgAOILuY3OOX+kS\n\t// ZG1vhlkpjFyHtRXoiKDti40bO1E2a2+O/vpD417hZrezzb97JQ4Cw417jRs3+dpc\n\t// BaVutFUiIm5HFeVdD0/hqwnYMPeoxxxdj4kiuzI2FZOexPufq9MSrSI0RMnegRGL\n\t// 8fgg4ZhVuEONtA8eXFI8EpIEhaKOq9CPZuImyKh+Vx4pwcT7NVld70ohqhQaEVqs\n\t// 6QblHf6hAoGBAOqimWdjGY6PKT6ipF9/6CsNnAAyyG1IRWSLweVDK36DkIxzTKGU\n\t// fk2uXFw6GlAKu1J0lTfQjxtKoYVljUHjUvfvW9KE/GyuW6eWTxUIrvmpvpcyAV6H\n\t// 
gHkt8/A+l8sQS3oMiLJ14c8/W5d4YdB/VBLQHsOi8I5EOGsO7a52fETLAoGBANyZ\n\t// 3+nq/tyk6hGk+lNJSXnkURydbkONCFhU92iwPC+f/4ILcHdBVjwLOAYa/qUzHvEE\n\t// H+MtMiuGbDrnjjCytvjmIKmMnJ30BHbXwn0dV+hes1O0EwHoIGtvQyWVH/6zB4ar\n\t// YkhK9IBtOxfs3ORVeVBoHx/Mq40BAGzGxQQopVpZAoGAScFtCWPMb9SuuWK02tRB\n\t// Le9sP1+3Qyr5rT6FZ8TykiVXNd80koI0JcUOgWs+RDTrZ2MAWPg1U/XkyiL/AVwt\n\t// A4T5TzbAhoVUiFymZU1Ce3aRU8PDTGy5xN3eFYIHgyyPHUF9YuPNZLFc4ENWNA0i\n\t// Z3uGgCbjCUWGmpipvDLAo3sCgYApQEDlvgLAgbofaIlCz76Eo5QjVLEMwq+fzOui\n\t// 0OnAQhwGVltGgZo9ih+EzMF3ZNLRYOMRmR77kpxke25UXubmLipHajrTMpEvI/OD\n\t// b9xDYIoKCe9P+Pcu/9Q/j942w4WRwjSTriiAZ2yYcbtwmycfSQkg6iXeLSTGMnke\n\t// 6PbaqQKBgQDGNwOgdHtMdHyy2kDMLdGKCysEo2eBNAxdRqjGxmsjm6bsd4xyLxS2\n\t// lkf7v3e9vE24HfBbwMoW4sx1eEDbFc4pai4l4vG3dpbrd3CJa5mpvL3mxGnTlPUy\n\t// 1PopL5pyjSZ6bcRETolZNM4L8X4jgfwHl3Lvc5jBgQW0PCAVtBVp8g==\n\t// -----END RSA PRIVATE KEY-----\n\t//\n\t// -----BEGIN EC PRIVATE KEY-----\n\t// MHcCAQEEIBP/5KHpYJ1GwqdOUOCu4+264KP4loONT+9QIzNJwVGjoAoGCCqGSM49\n\t// AwEHoUQDQgAEnlK01BDTYbvRBxGM0o3vXNqqvI25eZ/s3Cq9OXnNpoCI3/DH/tuD\n\t// 3n7cnWcNSfl1qJIH2LVZ0cWUW/L/9i/jPA==\n\t// -----END EC PRIVATE KEY-----\n\t//\n\t// And here is the payload data:\n\t//\n\t// {\n\t// \t\"sub\": \"Adam\",\n\t// \t\"name\": \"Eve\",\n\t// \t\"admin\": true,\n\t// \t\"iat\": 1516239022\n\t// }\n\t//\n\t// Go to https://jwt.io/ and\n\t// - select the right algorithm (RS256 / ES256)\n\t// - fill in the public key\n\t// - fill in the private key\n\t// - ensure the token is still valid (shows Signature Verified)\n\t// - copy the new token from the encoded section\n\t//\n\tvalidTokenRS256 := 
\"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJBZGFtIiwibmFtZSI6IkV2ZSIsImFkbWluIjp0cnVlLCJpYXQiOjE1MTYyMzkwMjJ9.WHDAmZzmX50GRSXTvB0MqQl2gFHFrQav-v6V2MdbqGXvmBZXus9Bl965Uuqs8uxdJzDad8Is05kC77iHElTasgQZqLwSTN5-WXpZFsW_EOGVcy0puDgREm2QcD8pQeagy6KxwDs0BAQIWwPSfjTCn05w-CRKveo1t0TsKUSMiZltebaZOtAr9etOAwBHIy7QzexrhIzlG6-7fqMbpsNZ8DbanUBc2fiL6Ogs461TQixBDHoRw2HjykGoPRvH3sy8bSRX5l1olBkRb4kic7xSKhiU_YlvmBo9ybC81TRGUtQZl87aLcnv4foDLtFvNAwTyTxfikt2Ka1peKJNgk82Dw\"\n\tvalidTokenES256 := \"eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJBZGFtIiwibmFtZSI6IkV2ZSIsImFkbWluIjp0cnVlLCJpYXQiOjE1MTYyMzkwMjJ9.V9HGK6yEZBbjgEsyhQeU6i0Io1KZaCMVgSC0u6fQTRd2TX4Ac-FSDnf44s89PPm8RCHqBATJdJMspIQM66y9Hw\"\n\n\t// currently unsupported algorithm PS256\n\tunsupportedTokenPS256 := \"eyJhbGciOiJQUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWUsImlhdCI6MTUxNjIzOTAyMn0.P9_X1ctIxnnoUpKSWpYw3rF62e-d8LXe3sETuLn4Lhigw5OQhi-mBBKoBMneHy4kimS84zxnMby0FYo9wKM3I3pEg8Qrz0Q00tNhKCwOnZ7Q-e86sW1luK1z82tufF-sZ9_BY_LGQsym0lQmQaHFzLmEDXnOzWsjUThHGVJTI64\"\n\n\t// supported algorithm but signed with a different key\n\tinvalidTokenRS256 := \"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWUsImlhdCI6MTUxNjIzOTAyMn0.TCYt5XsITJX1CxPCT8yAV-TVkIEq_PbChOMqsLfRoPsnsgw5WEuts01mq-pQy7UJiN5mgRxD-WUcX16dUEMGlv50aqzpqh4Qktb3rk-BuQy72IFLOqV0G_zS245-kronKb78cPN25DGlcTwLtjPAYuNzVBAh4vGHSrQyHUdBBPM\"\n\n\tConvey(\"Given a valid PEM\", t, func() {\n\n\t\tConvey(\"it should create a new verifier successfully using NewVerifier()\", func() {\n\t\t\tverifier, err := NewVerifier(&PKIJWTVerifier{\n\t\t\t\tJWTCertPEM: pemBytes,\n\t\t\t})\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(verifier, ShouldNotBeNil)\n\t\t\tSo(len(verifier.keys), ShouldEqual, 2)\n\t\t})\n\n\t\tConvey(\"it should create a new verifier successfully using NewVerifierFromPEM()\", func() {\n\t\t\tverifier, err := NewVerifierFromPEM(pemBytes, \"\", false, false)\n\t\t\tSo(err, 
ShouldBeNil)\n\t\t\tSo(verifier, ShouldNotBeNil)\n\t\t\tSo(len(verifier.keys), ShouldEqual, 2)\n\t\t})\n\t})\n\n\tConvey(\"Given a verifier with no loaded public keys\", t, func() {\n\t\tverifier := &PKIJWTVerifier{\n\t\t\tJWTCertPEM: pemBytes,\n\t\t\tkeys:       []crypto.PublicKey{},\n\t\t}\n\n\t\tConvey(\"it should fail to validate\", func() {\n\t\t\ttoken := \"not empty token\"\n\t\t\t_, _, _, err := verifier.Validate(ctx, token)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), ShouldEqual, \"No public keys loaded into verifier\")\n\t\t})\n\t})\n\n\tConvey(\"Given a valid verifier\", t, func() {\n\t\tverifier, err := NewVerifierFromPEM(pemBytes, \"\", false, false)\n\t\tSo(verifier, ShouldNotBeNil)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(len(verifier.keys), ShouldEqual, 2)\n\n\t\tConvey(\"it should fail to validate an empty token\", func() {\n\t\t\ttoken := \"\"\n\t\t\t_, _, _, err := verifier.Validate(ctx, token)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), ShouldEqual, \"Empty token\")\n\t\t})\n\n\t\tConvey(\"it should validate a valid RS256 token successfully\", func() {\n\t\t\ttoken := validTokenRS256\n\t\t\tattributes, _, _, err := verifier.Validate(ctx, token)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(attributes), ShouldBeGreaterThan, 0)\n\t\t\tSo(attributes, ShouldContain, \"sub=Adam\")\n\t\t\tSo(attributes, ShouldContain, \"name=Eve\")\n\t\t})\n\n\t\tConvey(\"it should validate a valid ES256 token successfully\", func() {\n\t\t\ttoken := validTokenES256\n\t\t\tattributes, _, _, err := verifier.Validate(ctx, token)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(attributes), ShouldBeGreaterThan, 0)\n\t\t\tSo(attributes, ShouldContain, \"sub=Adam\")\n\t\t\tSo(attributes, ShouldContain, \"name=Eve\")\n\t\t})\n\n\t\tConvey(\"it should fail to validate a token signed with an unsupported algorithm\", func() {\n\t\t\ttoken := unsupportedTokenPS256\n\t\t\t_, _, _, err := verifier.Validate(ctx, token)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), 
ShouldEqual, \"Invalid token - errors: [unsupported signing method '*jwt.SigningMethodRSAPSS'; unsupported signing method '*jwt.SigningMethodRSAPSS']\")\n\t\t})\n\n\t\tConvey(\"it should fail to validate a token signed with an unexpected key\", func() {\n\t\t\ttoken := invalidTokenRS256\n\t\t\t_, _, _, err := verifier.Validate(ctx, token)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), ShouldEqual, \"Invalid token - errors: [crypto/rsa: verification error; signing method '*jwt.SigningMethodRSA' and public key type '*ecdsa.PublicKey' mismatch]\")\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/usertokens/pkitokens/publickeys.go",
    "content": "package pkitokens\n\nimport (\n\t\"crypto\"\n\t\"crypto/ecdsa\"\n\t\"crypto/rsa\"\n\t\"crypto/x509\"\n\t\"encoding/pem\"\n\t\"fmt\"\n)\n\n// parsePublicKeysFromPEM reads all public keys from PEMs that are either\n// in a \"PUBLIC KEY\", \"RSA PUBLIC KEY\" or \"CERTIFICATE\" type PEM and\n// returns them in an array. Only RSA and ECDSA keys are taken into account.\n// NOTE: pay attention to the special return logic!\n//\n// The return logic is as follows:\n// if no keys could be found or parsed: nil, err\n// if no errors were found at all: keys, nil\n// if some keys could be parsed, but others failed: keys, err\n//\nfunc parsePublicKeysFromPEM(bytesPEM []byte) ([]crypto.PublicKey, error) {\n\tkeys := make([]crypto.PublicKey, 0, 1)\n\trest := bytesPEM\n\tvar errs []error\n\tfor {\n\t\tvar block *pem.Block\n\t\tblock, rest = pem.Decode(rest)\n\t\tif block == nil {\n\t\t\tbreak\n\t\t}\n\t\tswitch block.Type {\n\t\tcase \"CERTIFICATE\":\n\t\t\tcert, err := x509.ParseCertificate(block.Bytes)\n\t\t\tif err != nil {\n\t\t\t\terrs = append(errs, err)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif !isSupportedPublicKeyType(cert.PublicKey) {\n\t\t\t\terrs = append(errs, fmt.Errorf(\"unsupported key type %T\", cert.PublicKey))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tkeys = append(keys, cert.PublicKey)\n\t\tcase \"PUBLIC KEY\":\n\t\t\tpub, err := x509.ParsePKIXPublicKey(block.Bytes)\n\t\t\tif err != nil {\n\t\t\t\terrs = append(errs, err)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif !isSupportedPublicKeyType(pub) {\n\t\t\t\terrs = append(errs, fmt.Errorf(\"unsupported key type %T\", pub))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tkeys = append(keys, pub)\n\t\tcase \"RSA PUBLIC KEY\":\n\t\t\tpub, err := x509.ParsePKCS1PublicKey(block.Bytes)\n\t\t\tif err != nil {\n\t\t\t\terrs = append(errs, err)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif !isSupportedPublicKeyType(pub) {\n\t\t\t\terrs = append(errs, fmt.Errorf(\"unsupported key type %T\", pub))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tkeys = append(keys, 
pub)\n\t\tdefault:\n\t\t\t// invalid type, read the next entry\n\t\t\terrs = append(errs, fmt.Errorf(\"unsupported PEM type %s\", block.Type))\n\t\t\tcontinue\n\t\t}\n\t}\n\n\t// create detailed error\n\tvar detailedErrors string\n\tfor i, err := range errs {\n\t\tdetailedErrors += err.Error()\n\t\tif i+1 < len(errs) {\n\t\t\tdetailedErrors += \"; \"\n\t\t}\n\t}\n\n\t// if no keys at all were found, be specific about this\n\tif len(keys) == 0 {\n\t\treturn nil, fmt.Errorf(\"no valid certificates or public keys found (errors: [%s])\", detailedErrors)\n\t}\n\n\t// if some errors were encountered, but we have some keys, return both\n\tif len(keys) > 0 && len(errs) > 0 {\n\t\treturn keys, fmt.Errorf(\"[%s]\", detailedErrors)\n\t}\n\n\t// if all went well, return keys, but no error\n\treturn keys, nil\n}\n\n// isSupportedPublicKeyType returns true if `key` is an RSA or ECDSA public key\nfunc isSupportedPublicKeyType(key crypto.PublicKey) bool {\n\treturn isRSAPublicKey(key) || isECDSAPublicKey(key)\n}\n\nfunc isRSAPublicKey(key crypto.PublicKey) bool {\n\t_, ok := key.(*rsa.PublicKey)\n\treturn ok\n}\n\nfunc isECDSAPublicKey(key crypto.PublicKey) bool {\n\t_, ok := key.(*ecdsa.PublicKey)\n\treturn ok\n}\n"
  },
  {
    "path": "controller/pkg/usertokens/pkitokens/publickeys_test.go",
    "content": "// +build !windows\n\npackage pkitokens\n\nimport (\n\t\"crypto/ecdsa\"\n\t\"crypto/rsa\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestParsePublicKeysFromPEM(t *testing.T) {\n\n\tConvey(\"Given a PEM with a PKIX RSA public key, a PKCS#1 RSA public key and an X509 certificate\", t, func() {\n\t\tpemBytes := []byte(`\n-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyjDEPJD1Fv1IJIq4mnec\noMlSve0vZOTuzDmKuMB4vfBXalKZgbp4ONL+BvWV9OPs22Smv9SAfnoQ25q8Q9so\nihzUKhaIAY2CI70ll4exbLK9FD4uTi1bqn0FdIh04UIyW6s2EqTGMkSKx9THNvAM\nKx++pPt3US2sQVEC24bWPxRN7RsBBpRjoiEamkA04ioGFhMBbas5MdCLt/fd92aR\nQCBISOb6PU08fQiARK8g/wdpBUTxy9/Ud1vUnNaZtWm+eLrwdTXgHM3/LG1M4lc0\nZqHIL3rMxhae5W+j3SL3ApreiUYugv/0bCSypvJZjEXKS7SBR/+rtw0/mQpS8DpI\nkwIDAQAB\n-----END PUBLIC KEY-----\n-----BEGIN RSA PUBLIC KEY-----\nMIIBCgKCAQEAyjDEPJD1Fv1IJIq4mnecoMlSve0vZOTuzDmKuMB4vfBXalKZgbp4\nONL+BvWV9OPs22Smv9SAfnoQ25q8Q9soihzUKhaIAY2CI70ll4exbLK9FD4uTi1b\nqn0FdIh04UIyW6s2EqTGMkSKx9THNvAMKx++pPt3US2sQVEC24bWPxRN7RsBBpRj\noiEamkA04ioGFhMBbas5MdCLt/fd92aRQCBISOb6PU08fQiARK8g/wdpBUTxy9/U\nd1vUnNaZtWm+eLrwdTXgHM3/LG1M4lc0ZqHIL3rMxhae5W+j3SL3ApreiUYugv/0\nbCSypvJZjEXKS7SBR/+rtw0/mQpS8DpIkwIDAQAB\n-----END RSA PUBLIC KEY-----\n-----BEGIN 
CERTIFICATE-----\nMIIDazCCAlOgAwIBAgIUTBdVdOoTt+z1c+25X1WdKLEqc/IwDQYJKoZIhvcNAQEL\nBQAwRTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM\nGEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0xOTAxMzEwNTE4MDVaFw0yOTAx\nMjgwNTE4MDVaMEUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw\nHwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggEiMA0GCSqGSIb3DQEB\nAQUAA4IBDwAwggEKAoIBAQDKMMQ8kPUW/Ugkiriad5ygyVK97S9k5O7MOYq4wHi9\n8FdqUpmBung40v4G9ZX04+zbZKa/1IB+ehDbmrxD2yiKHNQqFogBjYIjvSWXh7Fs\nsr0UPi5OLVuqfQV0iHThQjJbqzYSpMYyRIrH1Mc28AwrH76k+3dRLaxBUQLbhtY/\nFE3tGwEGlGOiIRqaQDTiKgYWEwFtqzkx0Iu39933ZpFAIEhI5vo9TTx9CIBEryD/\nB2kFRPHL39R3W9Sc1pm1ab54uvB1NeAczf8sbUziVzRmocgveszGFp7lb6PdIvcC\nmt6JRi6C//RsJLKm8lmMRcpLtIFH/6u3DT+ZClLwOkiTAgMBAAGjUzBRMB0GA1Ud\nDgQWBBRzt5Gi91WRLBU1PRlo/wCC44DNnzAfBgNVHSMEGDAWgBRzt5Gi91WRLBU1\nPRlo/wCC44DNnzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQAv\n+NayVYU//8QX2TIQ5CcH/3iOCOa9Qx4KHYtyv+/ElBm2WaWRbJiy470D/I2tjkO0\nJ4a0kihMKEkwAVUvskbM+PjTcrgaE205YO/Pyn00s0Xt3yBp2Cf6rmcNtda4hqCs\nZNhCEXxAXbLxGb5oXd+Wis/tzpBNYrw9x9r3Axr9U2pW+sSzXsUqRdBvaHpywIRq\n6FnpawXPJMIOaMohmWAPYnmqILUs0CslzmXQypayslAFC2adr1NQPwZw0FJ3UIQM\nAyfixuFuZbOVlwm/zJqX0G0NbitPybGV5XneC89OF90H0zfv47Us0akzyY6yGLp/\n+3ASkOBz0ypQ6pgZK/kj\n-----END CERTIFICATE-----\n\t\t`)\n\n\t\tConvey(\"then parsePublicKeysFromPEM should return 3 public keys\", func() {\n\t\t\tkeys, err := parsePublicKeysFromPEM(pemBytes)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(keys), ShouldEqual, 3)\n\t\t})\n\t})\n\n\tConvey(\"Given a PEM with an RSA private key and a DSA public key\", t, func() {\n\t\tpemBytes := []byte(`\n-----BEGIN RSA PRIVATE 
KEY-----\nMIIEpAIBAAKCAQEAyjDEPJD1Fv1IJIq4mnecoMlSve0vZOTuzDmKuMB4vfBXalKZ\ngbp4ONL+BvWV9OPs22Smv9SAfnoQ25q8Q9soihzUKhaIAY2CI70ll4exbLK9FD4u\nTi1bqn0FdIh04UIyW6s2EqTGMkSKx9THNvAMKx++pPt3US2sQVEC24bWPxRN7RsB\nBpRjoiEamkA04ioGFhMBbas5MdCLt/fd92aRQCBISOb6PU08fQiARK8g/wdpBUTx\ny9/Ud1vUnNaZtWm+eLrwdTXgHM3/LG1M4lc0ZqHIL3rMxhae5W+j3SL3ApreiUYu\ngv/0bCSypvJZjEXKS7SBR/+rtw0/mQpS8DpIkwIDAQABAoIBAQCWkraxfCpp0nn1\nbLGJp2Ynf4Z1Frvi4XLM+FVMvVmt6dzPu2/CYsHBX6/6Ms5YL51mzZA47+I5TmJb\niOKHjiCkqk9+gIUM0vuF7giezljdYEbbWmtVoQXQ84YqgKy6THgAOILuY3OOX+kS\nZG1vhlkpjFyHtRXoiKDti40bO1E2a2+O/vpD417hZrezzb97JQ4Cw417jRs3+dpc\nBaVutFUiIm5HFeVdD0/hqwnYMPeoxxxdj4kiuzI2FZOexPufq9MSrSI0RMnegRGL\n8fgg4ZhVuEONtA8eXFI8EpIEhaKOq9CPZuImyKh+Vx4pwcT7NVld70ohqhQaEVqs\n6QblHf6hAoGBAOqimWdjGY6PKT6ipF9/6CsNnAAyyG1IRWSLweVDK36DkIxzTKGU\nfk2uXFw6GlAKu1J0lTfQjxtKoYVljUHjUvfvW9KE/GyuW6eWTxUIrvmpvpcyAV6H\ngHkt8/A+l8sQS3oMiLJ14c8/W5d4YdB/VBLQHsOi8I5EOGsO7a52fETLAoGBANyZ\n3+nq/tyk6hGk+lNJSXnkURydbkONCFhU92iwPC+f/4ILcHdBVjwLOAYa/qUzHvEE\nH+MtMiuGbDrnjjCytvjmIKmMnJ30BHbXwn0dV+hes1O0EwHoIGtvQyWVH/6zB4ar\nYkhK9IBtOxfs3ORVeVBoHx/Mq40BAGzGxQQopVpZAoGAScFtCWPMb9SuuWK02tRB\nLe9sP1+3Qyr5rT6FZ8TykiVXNd80koI0JcUOgWs+RDTrZ2MAWPg1U/XkyiL/AVwt\nA4T5TzbAhoVUiFymZU1Ce3aRU8PDTGy5xN3eFYIHgyyPHUF9YuPNZLFc4ENWNA0i\nZ3uGgCbjCUWGmpipvDLAo3sCgYApQEDlvgLAgbofaIlCz76Eo5QjVLEMwq+fzOui\n0OnAQhwGVltGgZo9ih+EzMF3ZNLRYOMRmR77kpxke25UXubmLipHajrTMpEvI/OD\nb9xDYIoKCe9P+Pcu/9Q/j942w4WRwjSTriiAZ2yYcbtwmycfSQkg6iXeLSTGMnke\n6PbaqQKBgQDGNwOgdHtMdHyy2kDMLdGKCysEo2eBNAxdRqjGxmsjm6bsd4xyLxS2\nlkf7v3e9vE24HfBbwMoW4sx1eEDbFc4pai4l4vG3dpbrd3CJa5mpvL3mxGnTlPUy\n1PopL5pyjSZ6bcRETolZNM4L8X4jgfwHl3Lvc5jBgQW0PCAVtBVp8g==\n-----END RSA PRIVATE KEY-----\n\t\t`)\n\n\t\tConvey(\"then parsePublicKeysFromPEM should return with an error\", func() {\n\t\t\tkeys, err := parsePublicKeysFromPEM(pemBytes)\n\t\t\tSo(keys, ShouldBeNil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), ShouldEqual, \"no valid certificates or public keys found (errors: [unsupported PEM type 
RSA PRIVATE KEY])\")\n\t\t})\n\t})\n\n\tConvey(\"Given a PEM with a valid ECDSA and RSA public key, and a DSA public key and an invalid PKCS#1 RSA public key\", t, func() {\n\t\tpemBytes := []byte(`\n-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEnlK01BDTYbvRBxGM0o3vXNqqvI25\neZ/s3Cq9OXnNpoCI3/DH/tuD3n7cnWcNSfl1qJIH2LVZ0cWUW/L/9i/jPA==\n-----END PUBLIC KEY-----\n-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyjDEPJD1Fv1IJIq4mnec\noMlSve0vZOTuzDmKuMB4vfBXalKZgbp4ONL+BvWV9OPs22Smv9SAfnoQ25q8Q9so\nihzUKhaIAY2CI70ll4exbLK9FD4uTi1bqn0FdIh04UIyW6s2EqTGMkSKx9THNvAM\nKx++pPt3US2sQVEC24bWPxRN7RsBBpRjoiEamkA04ioGFhMBbas5MdCLt/fd92aR\nQCBISOb6PU08fQiARK8g/wdpBUTxy9/Ud1vUnNaZtWm+eLrwdTXgHM3/LG1M4lc0\nZqHIL3rMxhae5W+j3SL3ApreiUYugv/0bCSypvJZjEXKS7SBR/+rtw0/mQpS8DpI\nkwIDAQAB\n-----END PUBLIC KEY-----\n-----BEGIN PUBLIC KEY-----\nMIIDSDCCAjoGByqGSM44BAEwggItAoIBAQCsVBV4gVV/zdmxWu8cU95vxY5D2RVG\nn6r56BOmnBF6beLZJKIK17FsurubePRfhLiVSk/RIA3aECPe8kRdRYAR23daCptw\nTHaZMZ0s2mNQfJEc6sXCE3/EVlPPEZqvm7RilYxb1PNZY55X7EzMhhBc1zRiSQck\nVa8qDHP98vvZjd4G9W+aF2UOMQko9iN6hTjFkUgmNhqIHS3UAoANQ3y2sYHXZZuq\nEP9EKk8EQ5wv4w73eFJXj84pN6L3VvhLjq1Akjk/gl2p7w8cCdXzcfKBD7qXQZZr\nQt4Pmz/BQu6wr4QBX3FiIghUZULlnCjhFNIrXTYbOskK/XGg62aV7Qn5AiEA6hP4\ncBgclv0kO5Qyg3qLVwMWOO1e4opX6EbqmK+kXysCggEBAIF77NYg4ttsGG2OiIs2\nyVBsV4w7EORIC+lG2+ZzVRSHm3QtNPeLoN6PwDtagpER2pUyjpXuxOcgE47hSUCQ\nRpSjXGtj22WbKjXZ2p8mkTScFvA2btgR+O4Nx0f0eShCz1fkrt8BaKRumzrzgoNI\nmcAuVOVqLLl4VkOXwsGvuH5cBVhW1sNKDc3VMYTsh34MDSJJEutFZeCokYwd6wo2\npYVdXsDmc7uhPRK3YhtBV3lrXIehNlIukyO7li+wKU7SLyneBY/huBzYrw1JBDWK\n1CHqRDJm38yzpEOKhu3gefR+j1BZqev9O2tsbFJe3F/cYV1hDWR8jsZz+gfDUXja\nz9oDggEGAAKCAQEAoIbxish+OZADAwMJRP8nGYVIfSkWBXvC96nfQG4tZtqB4Z14\ncjOyChnMuHlQnDIWYhVVmDiIHJFGtsHUb8iPGqbpGeEmScWG4HsSnsNAK/dOKVTE\nOxGaq/3+Lisg8uyTqzAR5W5OdFlCw3qhzYG6G7kHNxGicN5qLQILTQeHIJiuioiE\noDhpga7IB8pGNsXHpO40KeFe2BaZBpKnCQUF32kMnEFP9AqYnZ/io2vhCViee+O3\nA5/Wjke753qo+HUPj7C41wUwvXbXNfkGpXE4nyJZb37Ed+IMQu3sE/X6A2Vgbl+F\n2mpfWPo/
ZC23fGe4ExyTKsD+hRIP2LlxhWI1xw==\n-----END PUBLIC KEY-----\n-----BEGIN RSA PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyjDEPJD1Fv1IJIq4mnec\noMlSve0vZOTuzDmKuMB4vfBXalKZgbp4ONL+BvWV9OPs22Smv9SAfnoQ25q8Q9so\nihzUKhaIAY2CI70ll4exbLK9FD4uTi1bqn0FdIh04UIyW6s2EqTGMkSKx9THNvAM\nKx++pPt3US2sQVEC24bWPxRN7RsBBpRjoiEamkA04ioGFhMBbas5MdCLt/fd92aR\nQCBISOb6PU08fQiARK8g/wdpBUTxy9/Ud1vUnNaZtWm+eLrwdTXgHM3/LG1M4lc0\nZqHIL3rMxhae5W+j3SL3ApreiUYugv/0bCSypvJZjEXKS7SBR/+rtw0/mQpS8DpI\nkwIDAQAB\n-----END RSA PUBLIC KEY-----\n\t\t`)\n\n\t\tConvey(\"then parsePublicKeysFromPEM should return with an error\", func() {\n\t\t\tkeys, err := parsePublicKeysFromPEM(pemBytes)\n\t\t\tSo(keys, ShouldNotBeNil)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(len(keys), ShouldEqual, 2)\n\t\t\tSo(err.Error(), ShouldEqual, \"[unsupported key type *dsa.PublicKey; x509: failed to parse public key (use ParsePKIXPublicKey instead for this key format)]\")\n\t\t\tSo(keys[0], ShouldHaveSameTypeAs, &ecdsa.PublicKey{})\n\t\t\tSo(keys[1], ShouldHaveSameTypeAs, &rsa.PublicKey{})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "controller/pkg/usertokens/usertokens.go",
    "content": "package usertokens\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net/url\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/usertokens/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/usertokens/oidc\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/usertokens/pkitokens\"\n\n\t\"go.uber.org/zap\"\n)\n\n// Verifier is a generic JWT verifier interface. Different implementations\n// will use different client libraries to verify the tokens. Currently\n// requires only one method. Given a token, return the claims and whether\n// there is a verification error.\ntype Verifier interface {\n\tVerifierType() common.JWTType\n\tValidate(ctx context.Context, token string) ([]string, bool, string, error)\n\tCallback(ctx context.Context, u *url.URL) (string, string, int, error)\n\tIssueRedirect(string) string\n}\n\n// NewVerifier initializes data structures based on the interface that\n// is transmitted over the RPC between main and remote enforcers.\nfunc NewVerifier(ctx context.Context, v Verifier) (Verifier, error) {\n\tif v == nil {\n\t\treturn nil, nil\n\t}\n\tswitch v.VerifierType() {\n\tcase common.PKI:\n\t\tp := v.(*pkitokens.PKIJWTVerifier)\n\t\tv, err := pkitokens.NewVerifier(p)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn v, nil\n\tcase common.OIDC:\n\t\tp := v.(*oidc.TokenVerifier)\n\t\tverifier, err := oidc.NewClient(ctx, p)\n\t\tif err != nil {\n\t\t\tzap.L().Debug(\"usertokens: oidc.NewClient() failed\", zap.Error(err))\n\t\t\treturn nil, err\n\t\t}\n\t\treturn verifier, nil\n\t}\n\treturn nil, fmt.Errorf(\"unknown verifier type\")\n}\n"
  },
  {
    "path": "controller/runtime/runtime.go",
    "content": "package runtime\n\nimport \"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\n// Configuration is configuration parameters that can be safely updated\n// for the controller after it is started\ntype Configuration struct {\n\t// TCPTargetNetworks is the set of networks that host Trireme.\n\tTCPTargetNetworks []string\n\t// UDPTargetNetworks is the set of UDP networks that host Trireme.\n\tUDPTargetNetworks []string\n\t// ExcludedNetworks is the list of networks that must be excxluded from any enforcement.\n\tExcludedNetworks []string\n\t// LogLevel sets loglevel.\n\tLogLevel constants.LogLevel\n}\n\n// DeepCopy copies the configuration and avoids locking issues.\nfunc (c *Configuration) DeepCopy() *Configuration {\n\treturn &Configuration{\n\t\tTCPTargetNetworks: append([]string{}, c.TCPTargetNetworks...),\n\t\tUDPTargetNetworks: append([]string{}, c.UDPTargetNetworks...),\n\t\tExcludedNetworks:  append([]string{}, c.ExcludedNetworks...),\n\t\tLogLevel:          c.LogLevel,\n\t}\n}\n"
  },
  {
    "path": "doc.go",
    "content": "// Package trireme needs to be documented here for godoc.\npackage trireme\n"
  },
  {
    "path": "docs/README.md",
    "content": "# Documentation and Configuration Examples\n\n\n1. [Secure application segmentation](secure-application_segmentation.md) : Goes over the key ideas behind Trireme security and segmentation concepts.\n1. [Trireme Architecture](trireme_architecture.md) : Describes the general Trireme architecture.\n1. [Trireme and Linux Processes](linux_processes.md) : Using Trireme with Linux processes. \n1. [Policy Design](policy_design.md) : Describes how to create your own Trireme policies.\n"
  },
  {
    "path": "docs/docker_host_networks.md",
    "content": "# Docker Host Networks and Trireme\n\nDocker and most other container systems have several options for networking.\nIn most cases, people will choose to use bridge networks or some overlay.\nHowever, for some particular use cases, there is a need for direct access to\na container to the host network. For example, Consul has a use case of\nexposing a DNS server using host networks. The RedHat OpenShift router or\nseveral ingress implementations in Kubernetes instantiate the ingress proxy\non a host network so that they can fix its IP address.\n(see https://docs.docker.com/engine/userguide/networking/\nor\nhttps://docs.openshift.com/enterprise/3.0/install_config/install/deploy_router.html\nfor some examples).\n\nIn general exposing a container directly on the host network poses some\nchallenges from a security perspective. The container is essentially\nattached to the host namespace and can freely interact with any endpoint\nwith which the host network can communicate. This situation can be\nespecially dangerous for an ingress proxy that is exposed to the Internet.\nEven a non-privileged container with host network access can create a\nsecurity vulnerability.\n\nTrireme de-couples security from networking and since it treats containers\nand Linux processes as equal, it can give you some additional protections\nwhen your implementation requires access to the host network. 
Essentially\nwith Trireme, you can still isolate the container from a networking\nperspective, even though it uses the same IP address and ports as your host.\n\n## TL;DR Show me how\nTo illustrate how to use Trireme with Docker host networks, we will use the\n Trireme example with default settings.\n1.\tDownload and build trireme-example from \nhttps://go.aporeto.io/trireme-example \nFollow the instructions in the Readme file and make sure all the\ndependencies are installed in your system.\n2.\tStart trireme-example in one window\n\n```bash\n% sudo trireme-example daemon --hybrid\n```\n\n3.\tStart an Nginx container with host network access. In other words, you\nNginx container will be accessible through the host interface without any\nnetwork address translations.\n\n```bash\n% docker run -l app=nginx --net=host -d nginx\n```\n\nIf you want to verify that your container is running in host mode, just\nissue the command\n\n```bash\n% docker inspect <container id> | grep NetworkMode\n  \"NetworkMode\" : \"host\"\n```\n\n4.\tDespite the fact that your container is running in host mode, it is\nstill protected by Trireme and it cannot be actually accessed. The\ndefault policy in Trireme allows two containers or processes to interact\nonly if they are both protected by Trireme and they have the same labels.\nTry:\n\n```bash\n% curl http://127.0.0.1\n```\n\nThis should fail.\n\n5.\tInstantiate a curl container now with the same labels as the Nginx\ncontainer that we just started.\n\n```bash\n% docker run -l app=nginx -it nhoag/curl\n```\n\nAssuming that your local docker bridge is at 172.17.0.1 (the default in\ndocker) the nginx container should be accessible through the bridge IP.\nInitiate a curl command to the bridge IP\n\n```bash\nroot@b84a73c6d5ba:# curl http://172.17.0.1\n```\n\nYou will see that curl succeeded in this case. 
You can now exit from the\ncurl container.\n\n6.\tSince Trireme also supports Linux Processes through we can actually\naccess the container from the host as well, provided that we use Trireme\nto control the network capabilities of the Linux process. From your host\ncell, just issue the command:\n\n```bash\ntrireme-example run --label=app=nginx curl -- http://127.0.0.1\n```\n\nThis command should succeed and you should see the Nginx output. Note, that we start the curl process with the same labels as the Nginx container.\n\n## What Did We Do?\n\nWe started a docker container with the net=host parameter. The effect of\nthis parameter is that the container uses the host network namespace and\nhas direct access to the interfaces and the network of the host. Doing\nthat without extra controls poses security risks. Trireme allows you to\nprotect even containers started in the host network namespace. Since the\ncontainer was protected by default by Trireme we instantiated another\ncontainer and a Linux process and demonstrated how to use the Trireme\npolicies to control which network or Linux process can interact with the\nhost network container.\n\n## Trireme and Host Networking Architecture\n\nAs we explained in some of the previous sections, Trireme treats containers\nand Linux processes equally from a network security perspective. Trireme\ncan apply granular policy equally well to a container or a Linux process.\n\nWhen a container is activated in host network mode, Trireme detects this\nactivation. Instead of giving full access to the container, it treats it as\na Linux process. It gets the first process (Pid : 1 ) that is run in the\ncontainer and places it on a dedicated net_cls cgroup as it would do with\nLinux processes. All subsequent processes instantiated/forked inside the\ncontainer inherit by default the same policy. It can then apply the\ngranular policy to the particular container, even though the container is\nin the same namespace as the host network. 
This policy does not affect\nany other process or container running on the same host.\n\nThis capability makes Trireme very useful in environments where you need\nto run some containers in host network mode.\n\n## Taking it to the extreme\nThe Trireme isolation for host networks is of course not as strong as a\nfull network namespace isolation. Containers will still share several\nother parts of the stack. However, in several cases, users are looking for a\nless granular isolation, since networking is a pain. See\n(https://medium.com/@copyconstruct/schedulers-kubernetes-and-nomad-b0f2e14a896)\nfor some discussion on the topic.\n\nEven the original Borg architecture used this approach to minimize network\ncomplexities. From the corresponding ACM article:\n\n\"All containers running on a Borg machine share the host’s IP address, so\nBorg assigns the containers unique port numbers as part of the scheduling\nprocess.\"\n\nAlthough this is not optimal, there are several implementations of docker\nin production environments that have decided to take the risk and\ninstantiate all containers in host network mode since they don't want to\nadapt their applications to the concept of either random ports (docker\nbridge approach) or IP per container (Kubernetes default). They just did not\nwant to deal with the complexities of networking. One cannot discount\nthe pragmatic operational reasons that are leading teams down this path.\n\nObviously, there is an isolation risk with this decision and Trireme can\nbridge this gap. By using Trireme as the network policy mechanism, we have\ndecoupled security from the network. Security is delegated to an end-to-end\nauthorization function, and the fact that containers live in the same\nnetwork namespace does not affect the capability of Trireme to provide\nstrong isolation.\n\nTherefore, one can use Trireme to implement a container-based system\nwithout the need for complex networking. 
Yes, there are architectural\nlimitations in some of these choices, but nevertheless it is useful in\nsome environments.\n"
  },
  {
    "path": "docs/linux_processes.md",
    "content": "# Trireme and Linux Processes\n\nThe goal of Trireme is to assign an identity with every application and use this\nidentity to transparently insert authentication and authorization in any communication.\nThe Trireme library has been designed to support both containers as well as any\napplication supporting linux processes.\n\nIn this document we will describe how one can use Trireme with Linux processes\nand what are the underlying mechanisms used by the library.\n\nEssentially, Trireme allows you to introduce end-to-end authentication and\nauthorization between any two Linux processes without ever touching the\nprocess. Through this mechanism you can achieve detailed access control\nin your Linux environment without the need for firewall rules, ACLs,\nand a complex infrastructure.\n\n# TL;DR - Show me how\n\nIn order to illustrate how to use Trireme with Linux processes we will use\nthe Trireme example with default settings.\n\n1. Download and build trireme-example from https://go.aporeto.io/trireme-example\n   Follow the instructions in the Readme file and make sure all the dependencies\n   are installed in your system.\n\n2. Start trireme-example in one window\n```bash\n% sudo trireme-example daemon --hybrid\n```\nThe above command will start the Trireme daemon supporting both Linux Processes\nand docker containers and basic PSK authentication. You will be able to see\nall the Trireme messages in the window.\n\n3. In a different window start an nginx server protected by Trireme\n```bash\n% sudo trireme-example run --ports=80 --label=app=web nginx\n```\nNote, that we use sudo for this since nginx requires root access by default. The\nadditional parameters are:\n- Nginx will be listening on port 80\n- The label app=web will be attached to the nginx service.\n\n4. 
Try to access the nginx server with a standard curl command\n```bash\n% curl http://127.0.0.1\n```\nYou will see that the curl command fails to connect to the nginx server, even\nthough the server is running and listening on port 80.\n\n5. Run a curl command with a Trireme identity so that it can actually access\nthe server:\n```bash\n% trireme-example run --label=app=web curl -- http://127.0.0.1\n```\nIn this case your curl command will succeed.\n\n6. Let's now run a bash shell in the Trireme context\n```bash\n% trireme-example run --label=app=web /bin/bash\n```\nWe essentially started just a standard bash shell within the Trireme context\nand protected by Trireme. In this case our bash shell can actually access the\nnginx server. A simple curl will succeed.\n```bash\ncurl http://127.0.0.1\n```\n\n## What did we do?\n\nWe started the Trireme daemon and started an nginx server protected by Trireme. By\ndefault only authorized traffic will be able to ever reach this nginx server\nat this point. Even processes in the same host will be unable to reach the\nserver, unless they are also started with Trireme and they are protected by\nthe same authorization framework.\n\nTrireme allows us to very easily introduce authorization for\nany process in the Linux subsystem within the same host or across hosts over\nthe network.\n\n# Trireme and Linux Processes\n\nIn this section we will describe how Trireme introduces transparent authorization\nfor every Linux process. This has several use cases that we will illustrate\nin some subsequent blogs. Some examples are:\n\n1. Separate Linux users in different authorization contexts and allow them only\nspecific access to network resources.\n2. Isolate important applications like databases from the network and restrict\naccess to authorized sources only.\n3. Isolate the sshd daemon and allow access only to management tools like\nAnsible to ssh into the machine.\n4. 
Restrict communication between processes based on libraries, checksums on\nexecutables, SELinux labels or other structures.\n\n## The forgotten cgroup : net_cls\n\nFor several reasons the Linux kernel does not have an easy method to differentiate\ntraffic based on the source or destination process. However, it has a very\nimportant facility that is not very well documented, but very useful. One\nof the cgroup controllers is known as net_cls. When a process is associated with\nthis controller, the Linux kernel will mark all packets initiated by the process\nwith a mark that is set in the configuration of the net_cls cgroup. For example,\nin a standard Ubuntu distribution you can see the mark in\n/sys/fs/cgroup/net_cls/net_cls.classid. The standard value there is 0, indicating\nthat there is essentially no mark placed on the packets.\n\nIn the case of Trireme, you will find a trireme directory under this controller\nin /sys/fs/cgroup/net_cls/trireme and Trireme will create a sub-directory there\nfor every process that is protected by Trireme. Let us assume that the nginx\nprocess above had a process ID of 100. Then Trireme will create the directory\n/sys/fs/cgroup/net_cls/trireme/100 and it will populate the net_cls.classid\nfile there with a mark value, for example 100.\n\nOnce Trireme does that, all packets out of the nginx server or any\nof its children will be marked with the same mark. We can now apply the Trireme\nACLs that capture Syn/SynAck traffic only to packets with this mark. As a result,\nwe can apply policy to a specific Linux process and not just a container.\n\nOf course, some more work is needed. Eventually the process will die and we need\nto clean up. Fortunately, there is a kernel facility for that. The file\n/sys/fs/cgroup/net_cls/trireme/100/notify_on_release is populated with 1, meaning\nthat the kernel should notify a release agent when there are no more processes\nassociated with the particular cgroup. 
The file /sys/fs/cgroup/net_cls/release_agent\nidentifies the binary that should be executed when such an event happens.\n\nWe can now put all the parts together.\n1. The trireme-example command sends an\nRPC message to the Trireme daemon requesting to run a process in the Trireme\ncontext.\n2. The daemon resolves the policy for the requested command and populates\nthe right fields in the net_cls file structures.\n3. It then execs the requested process.\n4. When the process dies, the release_agent notifies the Trireme daemon about\nthe event, and the Trireme daemon cleans up the state.\n\n## Metadata Extractor and Linux Processes\n\nOne of the most powerful features of the Trireme approach is the metadata extractor.\nIn the above example we used a simple method where the labels that form\nthe identity are provided by the user. However, these are not the only attributes\nthat Trireme identifies, and it can be extended to associate any attribute\nwith this identity model.\n\nBy default the metadata extractor will capture the following information:\n- MD5 checksum of the binary that is executed\n- Library dependencies of the binary\n- User ID\n- Group ID\n- Executable path\n\nThe result is that a user can define an authorization policy across any of these\nattributes. For example, you can define a policy that only a binary with a specific\nchecksum can access a database. Or that only a process with a given User ID can\naccess an application. Or that users with sudo capability can never talk to the\nInternet.\n\nOne can enhance this metadata extractor with additional sources: for example, in\nan AWS environment, the AMI ID, VPC ID, subnet, SELinux/AppArmor labels and so on.\nThe possibilities are unlimited.\n\nBy associating custom and system metadata with a process, Trireme allows\nyou to create a \"contextual identity\" for every Linux process. 
You can then\nuse this identity to control access over the network without worrying about IP\naddresses and port numbers. It is the identity that matters.\n"
  },
  {
    "path": "docs/policy_design.md",
    "content": "# Policy design in Trireme\n\nTrireme includes a powerful policy language that defines authorization policies between containers or processes\n(often referred as Processing Units). This document aims to explain the basic concepts behind the Trireme\npolicies and how to get started to define your own policies.\n\nAs a user of the Trireme library, you need to implement a `Policy Resolver` interface that will fully\ndefine the policies that will apply to your traffic.\n\nThe example part of Trireme can be used as a starting point for implementing your own `Policy Resolver`\n\n# Trireme Cluster\n\nWhen using Trireme, two different perimeters are defined:\n* Trireme endpoints is the set of all Processing Units (typically represents a container) that implement authorization\nand the related policies are captured through `Trireme Internal Policies`\n* Outside world: Everything else that is not being explicitly authorized. Traffic to those endpoints is\nmanaged through the `External policies`\n\nThe complex and granular Trireme policies can only be applied if both the receiver and destination are being part of the Trireme endpoints.\nIn any other cases, a standard set of ACLs will be applied in egress and ingress.\n\n## Trireme CIDR\n\nTrireme is typically installed inside a private cluster. This cluster is a large set of servers under the same\nadministrative control. Each node part of the cluster will get one Trireme agent installed.\nWe recommend that the endpoints used inside the private Trireme cluster use a well-defined Network CIDR,\nalthough this is not mandatory.  
The endpoint addresses are the Processing Unit (typically docker) IPs that\nwill be used on your cluster and that will be policed through the Trireme agent.\nThe server on which the Trireme agent runs is typically not an endpoint itself,\nbut the containers or processes that run on that server are endpoints.\n\nThese can be, for example, `10.0.0.0/8` and `172.17.0.0/16`. This set is referred to as the `Trireme CIDRs`\nand can be composed of a large set of independent CIDRs.\n\nThe `Trireme CIDRs` are given as a parameter to the Trireme agent at startup. The agent uses those CIDRs to\ndecide whether a socket endpoint is inside your Trireme cluster, and therefore whether there\nis a need to add the Trireme metadata to the socket.\n\nTrireme also supports an auto-discovery mechanism that automatically detects these endpoints. The\nauto-discovery assumes that all endpoints are Trireme enabled and initiates an authorization process\nto all endpoints. If the authorization fails, Trireme falls back to a list of ACL rules based\non IP addresses.\n\n## Excluding IPs from the `Trireme CIDRs` cluster\n\nIn some specific use cases you may want to define a set of CIDRs for Trireme with the\nexception of a few well-defined subnets and/or IPs. In order to achieve this,\nTrireme supports an Exclusion API that can exclude specific endpoints from the\ngeneral `Trireme CIDRs` dynamically during runtime.\n\nAny set of IPs in the `Trireme CIDRs` that are not going to get policed through the agent need\nto be removed through this exclusion API. 
This API is defined in supervisor/interfaces.go:\n\n\n```go\n// An Excluder can add/remove specific IPs that are not part of Trireme.\ntype Excluder interface {\n\n\t// AddExcludedIP adds an exception for the destination parameter IP, allowing all the traffic.\n\tAddExcludedIP(ip string) error\n\n\t// RemoveExcludedIP removes the exception for the destination IP given in parameter.\n\tRemoveExcludedIP(ip string) error\n}\n```\n\n\n# Whitelist model for Trireme\n\nTrireme uses a whitelist model. That is, everything that is not explicitly allowed will be denied.\n\n# General logic for policy application.\n\nFor traffic reaching the Processing Unit, the following logic is applied:\n```\n- If traffic source is part of Trireme CIDRs or it has authorization information:\n    - If traffic is matched through one of the Trireme rules:\n        - If action is ALLOW: Allow traffic.\n        - If action is DROP: Drop traffic.\n    - Drop unmatched traffic\n- If traffic source matches one of the Network ACLs:\n    - If action is ALLOW: Allow traffic.\n    - If action is DROP: Drop traffic.\n- Drop unmatched traffic\n```\n\nFor traffic exiting the Processing Unit, the following logic is applied:\n\n```\n- If traffic destination is part of Trireme CIDRs:\n    - Allow traffic (Add Trireme information to the TCP session)\n- If traffic destination matches one of the App. ACLs:\n    - If action is ALLOW: Allow traffic.\n    - If action is DROP: Drop traffic.\n- Drop unmatched traffic\n```\n\n# Policies for Trireme traffic.\n\nTraffic flowing inside a cluster between two endpoints that are both policed by Trireme is subject to the Trireme policies.\n\nThose policies rely heavily on a set of identity metadata that is sent as part of the Trireme traffic\nand decapsulated/encapsulated by the endpoint agents. This metadata consists of labels in the form of `Key:value`\nand is defined by the Policy Resolver. 
Each Processing Unit will have a set of those labels associated.\nEach Processing Unit also has a set of Trireme Policies that define which remote Trireme Processing\nUnits are allowed to connect to the local Processing Unit.\n\nThe Trireme policy is defined as a logical set of `OR` Rules that are each defined as `AND` Clauses:\nThe action of a Trireme policy is applied IF at least one of the Rules is matched successfully (logical `OR`).\nIn order for a rule to be matched successfully, each clause inside the rule needs to be successfully matched (logical `AND`).\n\nEach clause is built as a `Key`, a set of `Values` and an `Operator`.\nEach clause translates to a binary TRUE or FALSE.\nThe following operations are supported:\n\n* `Equal` returns true if the PU has a label associated with the `Key` whose `value` is equal to one of the `values` defined in the policy.\nExample:\nThe clause\n```\nKEY: App\nVALUE: {'nginx', 'centos', 'mysql'}\nOPERATOR: `Equal`\n```\nwill return TRUE for the following PU metadata:\n```\nImage:centos\nApp:centos\nowner:admin\n```\n\nwill return FALSE for the following PU metadata:\n```\nImage:server\nowner:root\n```\n\n* `NotEqual` returns true if the PU has a label associated with the `Key` whose `value` is NOT equal to any of the `values` defined in the policy.\nExample:\nThe clause\n```\nKEY: App\nVALUE: {'nginx', 'centos', 'mysql'}\nOPERATOR: `NotEqual`\n```\nwill return FALSE for the following PU metadata:\n```\nImage:centos\nApp:centos\nowner:admin\n```\n\nwill return TRUE for the following PU metadata:\n```\nImage:server\nowner:root\n```\n\nwill return TRUE for the following PU metadata:\n```\nImage:server\nowner:root\nApp:redis\n```\n\n* `KeyExists` returns true if the PU has a label with that key.\n\nExample:\nThe clause\n```\nKEY: App\nVALUE: *\nOPERATOR: `KeyExists`\n```\nwill return TRUE for the following PU metadata:\n```\nImage:centos\nApp:abcd\nowner:admin\n```\n\nwill return FALSE for the following PU 
metadata:\n```\nImage:server\nowner:root\n```\n\n* `KeyNotExists` returns true if the PU doesn't have a label with the specified key.\n\nExample:\nThe clause\n```\nKEY: App\nVALUE: *\nOPERATOR: `KeyNotExists`\n```\nwill return FALSE for the following PU metadata:\n```\nImage:centos\nApp:centos\nowner:admin\n```\n\nwill return TRUE for the following PU metadata:\n```\nImage:server\nowner:root\n```\n\n# Special tags for Port matching.\n\nTrireme dynamically introduces an extra label per TCP connection that represents the TCP destination port.\nThat extra label has the following format:\n```\n@port:xx\n```\nThis label can then be used for matching in any of the previously defined rules, like any other label.\n\n# Policies for External traffic.\n\nIf the source or destination endpoint is not part of the Trireme CIDRs, then the Policies for external traffic are used.\nThose policies are defined as standard network ACLs with network and port matches.\n\nFor each Processing Unit, the following two policies are defined:\n* Application policy: The allowed traffic that originates from that processing unit.\n\n* Net policy: The traffic that is allowed to reach the Processing unit from the network.\n\nBoth these policies take the format of a set of (Network/Port-range/Protocol type).\n* Network is the CIDR of the network traffic we want to allow (Example: `192.169.0.0/16`)\n* Port-range can be a single port or a range of ports (Example: `100-200`)\n* Protocol type is the L4 protocol type (Must be one of `TCP`/`UDP`/`ICMP`)\n"
  },
  {
    "path": "docs/secure-application_segmentation.md",
    "content": "# Secure Application Segmentation\n\nThe concept of segmentation, or separating applications in different domains, is one of the most widely used security practices. Segmentation protects application deployments by minimizing lateral movement of attackers or reducing the blast radius by containing a system component compromise within a small subsystem.\n\nOver the last several years segmentation was often translated to a network isolation problem where, by restricting network reachability, we could achieve isolation. VLANs, VXLAN, MPLS, firewall ACLs, host ACLs are all trying to segment applications by associating application context to an IP address/port and then managing reachability between two components by controlling routing or five-tuple rules.\n\nNetwork segmentation approaches had to solve two problems:\n\n1. Associate a workload with an IP address and port number; and\n2. create the necessary network structures or reachability rules to limit communication.\n\nThese techniques worked well in traditional server environments and virtual machine deployments where the creation of new servers or VMs is an infrequent event. When new servers are created they are placed in the proper network segment (VLAN or even AWS VPC) or the proper ACL rules are added to the host. However, as we move to containers, microservices, and serverless architectures, scaling this model has become increasingly more complex and cumbersome. 
Because we are shifting away from monolithic architectures towards microservice architectures, the rate of instantiation of new application components is increasing by orders of magnitude.\n\nAdditionally, several of these techniques introduce significant complexity in the network by requiring gateways, middle-boxes, and other devices to co-ordinate this segmentation and deal with the problems of reachability outside the simple domain of a network segment.\n\nWith Trireme we take an entirely different approach to segmentation issues by introducing transparent authentication and authorization in any communication between workloads. Instead of using an IP address as the identifier of a workload, we use a proper identity mechanism. Instead of using ACLs and network reachability for controlling communication between applications, we introduce an end-to-end authentication and authorization step.\n\nTransitioning to such a model comes with some fundamental benefits for any deployment:\n\n1. Security is decoupled from network IP addressing, allowing operators to optimize their network (transport) with simple techniques and delegate security to the application layer where it belongs.\n2. The network architecture for large scale container deployments can be reduced to a simple L3 network. In short: no VLANs, no tunnels, no firewall rules, no ACLs, no fast route updates that will never converge. Crossing domains, NAT, IPv4/IPv6 translations are orthogonal and irrelevant to the end-to-end isolation.\n3. Security is much stronger since workloads are isolated by cryptographically verified end-to-end authentication and authorization that can defend against spoofing, man-in-the-middle, and replay attacks.\n4. Developers do not need to deal with or depend on network segments and IP address assignments. An isolation segment can be created simply by attaching an identity property to a workload.\n5. The scheme does not require any control plane. 
The technique is completely distributed with no shared state and no eventual consistency problems.\n\nThe biggest value of Trireme is its simplicity: simple deployment and simple operations. One would naturally ask why this has never been done before. Actually, similar techniques have been attempted in the past, but with less success. The first example of such a technique was [the CIPSO standard](https://tools.ietf.org/html/draft-ietf-cipso-ipsecurity-01). The idea, embraced by US government agencies, Trusted Solaris, and SELinux, was to carry application context in IP options. There are several differences between Trireme and this initial approach:\n\n1. IP options are dropped or improperly handled by the majority of high-speed routers and switches;\n2. although some application context is carried in packets, this context is rather limited and not cryptographically protected.\n\nThe benefit we have today is that cloud deployments enable the automatic distribution of identity. Systems like Kubernetes make it much simpler to distribute and manage workload identity, and it is exactly these capabilities that make the Trireme approach attractive and viable.\n\nWe are happy to share Trireme with the community and solicit feedback. We believe that we are at the first stages of a transformation where the transition to cloud native deployments will become a catalyst for stronger security. Trireme is just the first step.\n"
  },
  {
    "path": "docs/trireme_architecture.md",
    "content": "#Trireme Architecture\n\nTrireme takes a different approach to application segmentation by treating the problem as what it is: an authentication and authorization problem. Every application component, such as process, a container, a Kubernetes POD, has an identity.  A segmentation function is a simple policy that defines identities of the endpoints that are allowed to communicate with each other. \n\nBy following this approach instead of managing either IP addresses/port numbers or ACL five-tuple rules, both of which have limited context and are ephemeral by definition, we deal with security policy as a function of identities. This allows us to scale to very large systems, decouple networking from security, and streamline the operational models. \n\nIndeed, networking is simplified to a transport layer that can use a flat L3 network structure and ignore complex routing protocols. At its simplest form, every container host gets a layer-3 subnet.  As such, there is no need for any route advertisements or route distribution with every container activation.  You can see [the default Kubernetes network architecture] (https://github.com/kubernetes/kubernetes/blob/master/docs/design/networking.md) for an example.\n\nOne could argue that identity and policy are complex mechanisms;however, some key components allow us to implement this identity-based application segmentation at the scale that we outline in the next sections.  Specifically: \n\n1. How do we determine identity and manage policy?\n2. How do we insert end-to-end authentication and authorization in applications that we do not control?\n3. What is the security model that we can achieve with such a mechanism?\n\n##Identity and Policy Management \n\nDetermining and distributing workload identity can be achieved in multiple ways. 
If we focus on cloud-native environments where an orchestrator like Kubernetes is used for deploying containers, then identity is simple to define: together with orchestrating the workload, the orchestrator can distribute the identity. \n\nOne could, for example, distribute private/public key pairs to each workload.  Subsequently, the certificate will automatically define the identity of the workload.  However, such an identity is not very useful in policy definition.  One would need to set access policies based on specific workload identities that can change regularly. Imagine, for example, if identity was simply the UUID of the container.  Mapping the container to roles or authorization policies would be challenging by itself.\n\nAn alternative approach is to define identity as a collection of attributes that describe a workload. For example, in a container environment, the labels associated with a container can become the identity description. In a Kubernetes cluster, the Kubernetes label selectors can define the identity.  In the Trireme context, we define identity simply as a collection of attributes that describe a workload; moreover, we allow users of the library to determine how identity attributes are created. For example, attributes can be metadata labels, users that activated the service, or even IP addresses in case someone wants to create a policy that takes IP information into account. \n\nOnce identity is defined as a collection of attributes, it is straightforward to start thinking about authorization as an extension of Attribute-Based Access Control (ABAC).  An authorization policy is just defined as a logical relation between attributes. For example, if a workload is identified with a label “environment=production”, an authorization policy can be “Accept traffic from workloads with environment=production.”  In its simplest form, one can achieve isolation with just a single policy that allows connectivity when two entities have the same labels. 
In this way, managing isolation is achieved by just managing the label namespace. \n\nThe Kubernetes Network Policy extension uses a very similar approach and defines access control rules as a function of label selectors. As a result, using Trireme in a Kubernetes environment is a straightforward mapping of label selectors to identities and Network Policies to authorization policies. \n\n## Transparent Enforcement of Authorization\n\nEnforcing an authorization process for any application has its specific challenges. In some environments, organizations have the freedom to mandate the use of specific RPC libraries for all their components. In these environments, one could enforce the authorization step in the library and be done with it.  In fact, some indications are that certain web-scale providers are doing just this.\n\nUnfortunately, however, for the majority of software deployments this is not possible, because applications use external components and full control of the software stack is not a viable choice. Therefore, we need a mechanism that can transparently insert end-to-end authorization without modifying applications. Interestingly enough, the IETF community attempted something like that several years ago (see https://tools.ietf.org/html/draft-ietf-cipso-ipsecurity-01) by encoding segmentation information as IP header options. Unfortunately, most modern high-speed routers tend to drop or not process IP options because of the increased overhead. \n\nIn Trireme, we chose to overlay the authorization step on the TCP connection setup protocol with a very simple approach.  Once identity has been defined and cryptographically signed, it can be communicated to the other parties during the Syn/SynAck negotiation as a payload that the application never sees. 
To achieve this, we have implemented a TCP Authorization Proxy that encapsulates identity in the connection setup packets and allows the two ends to cryptographically verify the validity of the identity attributes and enable connection establishment based on mutual, end-to-end authorization.  In other words, a Syn packet is accepted if and only if it carries a valid identity and the receiving identity is authorized to receive traffic from the given source. Similarly, a SynAck packet is accepted if and only if the identity is valid and the policy allows such a connection. Our proxy implementation only captures/modifies the connection establishment packets and releases all other packets to the kernel for forwarding. \n\nThere are several other benefits of this implementation.  Namely,\n- The method does not require any modifications in the application stack or Linux kernel.\n- TCP offloads and the TCP negotiation and protocols just work as designed, significantly improving performance over tunneling mechanisms. \n- Although there is an increased connection setup latency, this increase does not require additional round-trip times in the TCP negotiation. \n- Since only connection setup packets are processed in user space, the performance impact is minimized. \n\n## Security Model\n\nThe security protection that can be provided with Trireme is significantly more robust than any IP-based mechanism. The classic problem that some Trireme users are already solving is that they operate in environments where they do not trust the network infrastructure. With implementations that tie identity to IP addresses only, it is easy for an attacker that takes control of the network to spoof IP addresses and gain access to applications without authorization. 
The Trireme approach enforces end-to-end authorization, and the risk of a man-in-the-middle attack is limited to someone taking complete control of the end-system.\n\nIndeed, the Trireme protocol implements a three-way handshake that includes nonces (random numbers) at every step of the negotiation to defend against man-in-the-middle, replay, and spoofing attacks.  \n\n## Kubernetes Integration\n\nTogether with Trireme, we also provide a Kubernetes integration that implements the Network Policy API without any centralized controllers or coordinated state.  An instance of Trireme runs on every minion, deployed through daemon sets. This local instance listens to the relevant APIs (policies, namespace changes) and POD activation events. When a POD is instantiated, the local instance associates an identity with the POD based on the labels and implements the authorization policy in a completely distributed manner.  From a deployment standpoint, the only requirement is the daemon set deployment. \n\nNote that the integration can be extended easily to federated clusters and across NAT boundaries since IP addresses are of no importance. \n"
  },
  {
    "path": "fix_bpf",
    "content": "#!/bin/bash\n\nprintf \"copy .c and .h files as dep ensure isn't pulling them\\n\"\n\nrm -rf /tmp/gobpf\ngit clone https://github.com/iovisor/gobpf.git /tmp/gobpf\n\ncp -r /tmp/gobpf/elf vendor/github.com/iovisor/gobpf/\n\n"
  },
  {
    "path": "mockgen.sh",
    "content": "#! /bin/bash -e\n\nGO111MODULE=on go get github.com/golang/mock/mockgen@latest\ngo get github.com/golang/mock/mockgen/model\ngo get -u golang.org/x/tools/cmd/goimports\n\ngoimport_sanitize () {\n  tmp=$(mktemp)\n  goimports \"$1\" > \"$tmp\"\n  sed  $'s/^func /\\/\\/ nolint\\\\\\nfunc /g' < \"$tmp\" | sed  $'s/^type /\\/\\/ nolint\\\\\\ntype /g' > \"$1\"\n  rm -f \"$tmp\"\n}\n\necho \"Cgnetcls Mocks\"\nmkdir -p utils/cgnetcls/mockcgnetcls\nmockgen -source utils/cgnetcls/interfaces.go -destination utils/cgnetcls/mockcgnetcls/mockcgnetcls.go -package mockcgnetcls\ngoimport_sanitize utils/cgnetcls/mockcgnetcls/mockcgnetcls.go\n\necho \"Controller/internal/supervisor/Provider Mocks\"\nmkdir -p controller/internal/supervisor/mocksupervisor\nmockgen -source controller/internal/supervisor/interfaces.go -destination controller/internal/supervisor/mocksupervisor/mocksupervisor.go -package mocksupervisor\ngoimport_sanitize controller/internal/supervisor/mocksupervisor/mocksupervisor.go\n\necho \"Enforcer Mocks\"\nmkdir -p controller/internal/enforcer/mockenforcer\nmockgen -source controller/internal/enforcer/enforcer.go -destination controller/internal/enforcer/mockenforcer/mockenforcer.go -package mockenforcer\ngoimport_sanitize controller/internal/enforcer/mockenforcer/mockenforcer.go\n\necho \"DNSProxy Mocks\"\nmkdir -p controller/internal/enforcer/dnsproxy/mockdnsproxy\nmockgen -source controller/internal/enforcer/dnsproxy/dnsproxy.go -destination controller/internal/enforcer/dnsproxy/mockdnsproxy/mockdnsproxy.go -package mockdnsproxy\ngoimport_sanitize controller/internal/enforcer/dnsproxy/mockdnsproxy/mockdnsproxy.go\n\n\necho \"Controller/Processmon Mocks\"\nmkdir -p controller/internal/processmon/mockprocessmon\nmockgen -source controller/internal/processmon/interfaces.go -destination controller/internal/processmon/mockprocessmon/mockprocessmon.go -package mockprocessmon\ngoimport_sanitize 
controller/internal/processmon/mockprocessmon/mockprocessmon.go\n\necho \"controller/pkg/remoteenforcer Mocks\"\nmkdir -p controller/pkg/remoteenforcer/mockremoteenforcer\nmockgen -source controller/pkg/remoteenforcer/interfaces.go -destination controller/pkg/remoteenforcer/mockremoteenforcer/mockremoteenforcer.go -package mockremoteenforcer\ngoimport_sanitize controller/pkg/remoteenforcer/mockremoteenforcer/mockremoteenforcer.go\n\necho \"controller/pkg/remoteenforcer/client Mocks\"\nmkdir -p controller/pkg/remoteenforcer/internal/client/mockclient\nmockgen -source controller/pkg/remoteenforcer/internal/client/interfaces.go -destination controller/pkg/remoteenforcer/internal/client/mockclient/mockclient.go -package mockclient\ngoimport_sanitize controller/pkg/remoteenforcer/internal/client/mockclient/mockclient.go\n\necho \"controller/pkg/remoteenforcer/TokenIssuer Mocks\"\nmkdir -p controller/pkg/remoteenforcer/internal/tokenissuer/mocktokenclient\nmockgen -source controller/pkg/remoteenforcer/internal/tokenissuer/tokenissuer.go -destination controller/pkg/remoteenforcer/internal/tokenissuer/mocktokenclient/mocktokenclient.go -package mocktokenclient\ngoimport_sanitize controller/pkg/remoteenforcer/internal/tokenissuer/mocktokenclient/mocktokenclient.go\n\necho \"controller/pkg/remoteenforcer/StatsCollector Mocks\"\nmkdir -p controller/pkg/remoteenforcer/internal/statscollector/mockstatscollector\nmockgen \\\n-source controller/pkg/remoteenforcer/internal/statscollector/interfaces.go \\\n-destination controller/pkg/remoteenforcer/internal/statscollector/mockstatscollector/mockstatscollector.go \\\n-package mockstatscollector \\\n-aux_files collector=collector/interfaces.go \\\n-imports statscollector=go.aporeto.io/enforcerd/trireme-lib/controller/pkg/remoteenforcer/internal/statscollector\ngoimport_sanitize controller/pkg/remoteenforcer/internal/statscollector/mockstatscollector/mockstatscollector.go\n\necho \"controller/pkg/usertokens Mocks\"\nmkdir -p 
controller/pkg/usertokens/mockusertokens\nmockgen -source controller/pkg/usertokens/usertokens.go -destination controller/pkg/usertokens/mockusertokens/mockusertokens.go -package mockusertokens\ngoimport_sanitize controller/pkg/usertokens/mockusertokens/mockusertokens.go\n\necho \"controller/pkg/flowtracking Mocks\"\nmkdir -p controller/pkg/flowtracking/mockflowclient\nmockgen -source controller/pkg/flowtracking/interfaces.go -destination controller/pkg/flowtracking/mockflowclient/mockflowclient.go -package mockflowclient\ngoimport_sanitize controller/pkg/flowtracking/mockflowclient/mockflowclient.go\n\necho \"Collector Mocks\"\nmkdir -p collector/mockcollector\nmockgen -source collector/interfaces.go -destination collector/mockcollector/mockcollector.go -package mockcollector\ngoimport_sanitize collector/mockcollector/mockcollector.go\n\necho \"Monitor Mocks\"\nmkdir -p monitor/mockmonitor\nmockgen -source monitor/interfaces.go -destination monitor/mockmonitor/mockmonitor.go -package mockmonitor\ngoimport_sanitize monitor/mockmonitor/mockmonitor.go\n\necho \"Monitor remoteapi client mocks\"\nmkdir -p monitor/remoteapi/client/mockclient\nmockgen -source monitor/remoteapi/client/interfaces.go -destination monitor/remoteapi/client/mockclient/mockclient.go -package mockclient\ngoimport_sanitize monitor/remoteapi/client/mockclient/mockclient.go\n\necho \"Monitor/processor Mocks\"\nmkdir -p monitor/processor/mockprocessor\nmockgen -source monitor/processor/interfaces.go -destination monitor/processor/mockprocessor/mockprocessor.go -aux_files collector=collector/interfaces.go -package mockprocessor\ngoimport_sanitize monitor/processor/mockprocessor/mockprocessor.go\n\necho \"controller/internal/enforcer/nfqdatapath/tokenaccessor Mocks\"\nmkdir -p controller/internal/enforcer/nfqdatapath/tokenaccessor/mocktokenaccessor\nmockgen -source controller/internal/enforcer/nfqdatapath/tokenaccessor/interfaces.go -destination 
controller/internal/enforcer/nfqdatapath/tokenaccessor/mocktokenaccessor/mocktokenaccessor.go -package mocktokenaccessor\ngoimport_sanitize controller/internal/enforcer/nfqdatapath/tokenaccessor/mocktokenaccessor/mocktokenaccessor.go\n\necho \"controller/internal/enforcer/utils/ephemeralkeys Mocks\"\nmkdir -p controller/internal/enforcer/utils/ephemeralkeys/mockephemeralkeys\nmockgen -source controller/internal/enforcer/utils/ephemeralkeys/interfaces.go -destination controller/internal/enforcer/utils/ephemeralkeys/mockephemeralkeys/mockephemeralkeys.go -package mockephemeralkeys\ngoimport_sanitize controller/internal/enforcer/utils/ephemeralkeys/mockephemeralkeys/mockephemeralkeys.go\n\necho \"controller/pkg/tokens Mocks\"\nmkdir -p controller/pkg/tokens/mocktokens\nmockgen -source controller/pkg/tokens/tokens.go -destination controller/pkg/tokens/mocktokens/mocktokens.go -package mocktokens\ngoimport_sanitize controller/pkg/tokens/mocktokens/mocktokens.go\n\necho \"RPC Wrapper Mocks\"\nmkdir -p controller/internal/enforcer/utils/rpcwrapper/mockrpcwrapper\nmockgen -source controller/internal/enforcer/utils/rpcwrapper/interfaces.go -destination controller/internal/enforcer/utils/rpcwrapper/mockrpcwrapper/mockrpcwrapper.go -package mockrpcwrapper\ngoimport_sanitize controller/internal/enforcer/utils/rpcwrapper/mockrpcwrapper/mockrpcwrapper.go\n\necho \"Policy Interfaces Mock\"\nmkdir -p policy/mockpolicy\nmockgen -source policy/interfaces.go -destination policy/mockpolicy/mockpolicy.go -package mockpolicy\ngoimport_sanitize policy/mockpolicy/mockpolicy.go\n\necho \"Trireme Controller Mock\"\nmkdir -p controller/mockcontroller\nmockgen -source controller/interfaces.go -destination controller/mockcontroller/mocktrireme.go -package mockcontroller  -aux_files constants=controller/constants/constants.go events=common/events.go policy=policy/interfaces.go processor=monitor/processor/interfaces.go supervisor=controller/internal/supervisor/interfaces.go\ngoimport_sanitize 
controller/mockcontroller/mocktrireme.go\n\necho \"Pod Monitor Mocks (manager, client and zap core)\"\n# NOTE: this uses interface mode because these are all 3rd party dependencies\nmockgen -package podmonitor -destination monitor/internal/pod/mockzapcore_test.go go.uber.org/zap/zapcore Core\ngoimport_sanitize monitor/internal/pod/mockzapcore_test.go\nmockgen -package podmonitor -destination monitor/internal/pod/mockclient_test.go sigs.k8s.io/controller-runtime/pkg/client Client\ngoimport_sanitize monitor/internal/pod/mockclient_test.go\nmockgen -package podmonitor -destination monitor/internal/pod/mockcache_test.go sigs.k8s.io/controller-runtime/pkg/cache Cache\ngoimport_sanitize monitor/internal/pod/mockcache_test.go\nmockgen -package podmonitor -destination monitor/internal/pod/mockinformer_test.go sigs.k8s.io/controller-runtime/pkg/cache Informer\ngoimport_sanitize monitor/internal/pod/mockinformer_test.go\n# mockgen -package podmonitor -destination monitor/internal/pod/mockinformer_test.go k8s.io/client-go/tools/cache SharedIndexInformer\n# goimport_sanitize monitor/internal/pod/mockinformer_test.go\nmockgen -package podmonitor -destination monitor/internal/pod/mockmanager_test.go sigs.k8s.io/controller-runtime/pkg/manager Manager\ngoimport_sanitize monitor/internal/pod/mockmanager_test.go\n\necho >&2 \"OK\"\n"
  },
  {
    "path": "monitor/api/spec/Makefile",
    "content": "PROTOS_PATH = protos\n\n.PHONY: all protos\n\nall: protos\n\n# Please use version v1.3.3 of protobuf to compile: \n# go get -u github.com/golang/protobuf/proto@v1.3.3\n# go get -u github.com/golang/protobuf/protoc-gen-go@v1.3.3\nprotos: $(PROTOS_PATH)/monitor.proto\n\tprotoc -I $(PROTOS_PATH) --go_out=plugins=grpc:$(PROTOS_PATH) \\\n\t\t$(PROTOS_PATH)/monitor.proto\n"
  },
  {
    "path": "monitor/api/spec/protos/monitor.pb.go",
    "content": "// Code generated by protoc-gen-go. DO NOT EDIT.\n// versions:\n// \tprotoc-gen-go v1.25.0-devel\n// \tprotoc        v3.6.1\n// source: monitor.proto\n\npackage protos\n\nimport (\n\tcontext \"context\"\n\tproto \"github.com/golang/protobuf/proto\"\n\tempty \"github.com/golang/protobuf/ptypes/empty\"\n\tgrpc \"google.golang.org/grpc\"\n\tcodes \"google.golang.org/grpc/codes\"\n\tstatus \"google.golang.org/grpc/status\"\n\tprotoreflect \"google.golang.org/protobuf/reflect/protoreflect\"\n\tprotoimpl \"google.golang.org/protobuf/runtime/protoimpl\"\n\treflect \"reflect\"\n\tsync \"sync\"\n)\n\nconst (\n\t// Verify that this generated code is sufficiently up-to-date.\n\t_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)\n\t// Verify that runtime/protoimpl is sufficiently up-to-date.\n\t_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)\n)\n\n// This is a compile-time assertion that a sufficiently up-to-date version\n// of the legacy proto package is being used.\nconst _ = proto.ProtoPackageIsVersion4\n\ntype CNIContainerEventRequest_Type int32\n\nconst (\n\tCNIContainerEventRequest_ADD    CNIContainerEventRequest_Type = 0\n\tCNIContainerEventRequest_DELETE CNIContainerEventRequest_Type = 1\n)\n\n// Enum value maps for CNIContainerEventRequest_Type.\nvar (\n\tCNIContainerEventRequest_Type_name = map[int32]string{\n\t\t0: \"ADD\",\n\t\t1: \"DELETE\",\n\t}\n\tCNIContainerEventRequest_Type_value = map[string]int32{\n\t\t\"ADD\":    0,\n\t\t\"DELETE\": 1,\n\t}\n)\n\nfunc (x CNIContainerEventRequest_Type) Enum() *CNIContainerEventRequest_Type {\n\tp := new(CNIContainerEventRequest_Type)\n\t*p = x\n\treturn p\n}\n\nfunc (x CNIContainerEventRequest_Type) String() string {\n\treturn protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))\n}\n\nfunc (CNIContainerEventRequest_Type) Descriptor() protoreflect.EnumDescriptor {\n\treturn file_monitor_proto_enumTypes[0].Descriptor()\n}\n\nfunc (CNIContainerEventRequest_Type) Type() 
protoreflect.EnumType {\n\treturn &file_monitor_proto_enumTypes[0]\n}\n\nfunc (x CNIContainerEventRequest_Type) Number() protoreflect.EnumNumber {\n\treturn protoreflect.EnumNumber(x)\n}\n\n// Deprecated: Use CNIContainerEventRequest_Type.Descriptor instead.\nfunc (CNIContainerEventRequest_Type) EnumDescriptor() ([]byte, []int) {\n\treturn file_monitor_proto_rawDescGZIP(), []int{1, 0}\n}\n\ntype RunCContainerEventRequest struct {\n\tstate         protoimpl.MessageState\n\tsizeCache     protoimpl.SizeCache\n\tunknownFields protoimpl.UnknownFields\n\n\tCommandLine []string `protobuf:\"bytes,1,rep,name=commandLine,proto3\" json:\"commandLine,omitempty\"` // the full commandline of the runc command incl. flags, etc. - this is expected to come from `os.Args`\n}\n\nfunc (x *RunCContainerEventRequest) Reset() {\n\t*x = RunCContainerEventRequest{}\n\tif protoimpl.UnsafeEnabled {\n\t\tmi := &file_monitor_proto_msgTypes[0]\n\t\tms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))\n\t\tms.StoreMessageInfo(mi)\n\t}\n}\n\nfunc (x *RunCContainerEventRequest) String() string {\n\treturn protoimpl.X.MessageStringOf(x)\n}\n\nfunc (*RunCContainerEventRequest) ProtoMessage() {}\n\nfunc (x *RunCContainerEventRequest) ProtoReflect() protoreflect.Message {\n\tmi := &file_monitor_proto_msgTypes[0]\n\tif protoimpl.UnsafeEnabled && x != nil {\n\t\tms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))\n\t\tif ms.LoadMessageInfo() == nil {\n\t\t\tms.StoreMessageInfo(mi)\n\t\t}\n\t\treturn ms\n\t}\n\treturn mi.MessageOf(x)\n}\n\n// Deprecated: Use RunCContainerEventRequest.ProtoReflect.Descriptor instead.\nfunc (*RunCContainerEventRequest) Descriptor() ([]byte, []int) {\n\treturn file_monitor_proto_rawDescGZIP(), []int{0}\n}\n\nfunc (x *RunCContainerEventRequest) GetCommandLine() []string {\n\tif x != nil {\n\t\treturn x.CommandLine\n\t}\n\treturn nil\n}\n\ntype CNIContainerEventRequest struct {\n\tstate         protoimpl.MessageState\n\tsizeCache     protoimpl.SizeCache\n\tunknownFields 
protoimpl.UnknownFields\n\n\tType         CNIContainerEventRequest_Type `protobuf:\"varint,1,opt,name=type,proto3,enum=protos.CNIContainerEventRequest_Type\" json:\"type,omitempty\"`\n\tContainerID  string                        `protobuf:\"bytes,2,opt,name=containerID,proto3\" json:\"containerID,omitempty\"`\n\tNetnsPath    string                        `protobuf:\"bytes,3,opt,name=netnsPath,proto3\" json:\"netnsPath,omitempty\"`\n\tPodName      string                        `protobuf:\"bytes,4,opt,name=podName,proto3\" json:\"podName,omitempty\"`\n\tPodNamespace string                        `protobuf:\"bytes,5,opt,name=podNamespace,proto3\" json:\"podNamespace,omitempty\"`\n}\n\nfunc (x *CNIContainerEventRequest) Reset() {\n\t*x = CNIContainerEventRequest{}\n\tif protoimpl.UnsafeEnabled {\n\t\tmi := &file_monitor_proto_msgTypes[1]\n\t\tms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))\n\t\tms.StoreMessageInfo(mi)\n\t}\n}\n\nfunc (x *CNIContainerEventRequest) String() string {\n\treturn protoimpl.X.MessageStringOf(x)\n}\n\nfunc (*CNIContainerEventRequest) ProtoMessage() {}\n\nfunc (x *CNIContainerEventRequest) ProtoReflect() protoreflect.Message {\n\tmi := &file_monitor_proto_msgTypes[1]\n\tif protoimpl.UnsafeEnabled && x != nil {\n\t\tms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))\n\t\tif ms.LoadMessageInfo() == nil {\n\t\t\tms.StoreMessageInfo(mi)\n\t\t}\n\t\treturn ms\n\t}\n\treturn mi.MessageOf(x)\n}\n\n// Deprecated: Use CNIContainerEventRequest.ProtoReflect.Descriptor instead.\nfunc (*CNIContainerEventRequest) Descriptor() ([]byte, []int) {\n\treturn file_monitor_proto_rawDescGZIP(), []int{1}\n}\n\nfunc (x *CNIContainerEventRequest) GetType() CNIContainerEventRequest_Type {\n\tif x != nil {\n\t\treturn x.Type\n\t}\n\treturn CNIContainerEventRequest_ADD\n}\n\nfunc (x *CNIContainerEventRequest) GetContainerID() string {\n\tif x != nil {\n\t\treturn x.ContainerID\n\t}\n\treturn \"\"\n}\n\nfunc (x *CNIContainerEventRequest) GetNetnsPath() string 
{\n\tif x != nil {\n\t\treturn x.NetnsPath\n\t}\n\treturn \"\"\n}\n\nfunc (x *CNIContainerEventRequest) GetPodName() string {\n\tif x != nil {\n\t\treturn x.PodName\n\t}\n\treturn \"\"\n}\n\nfunc (x *CNIContainerEventRequest) GetPodNamespace() string {\n\tif x != nil {\n\t\treturn x.PodNamespace\n\t}\n\treturn \"\"\n}\n\ntype ContainerEventResponse struct {\n\tstate         protoimpl.MessageState\n\tsizeCache     protoimpl.SizeCache\n\tunknownFields protoimpl.UnknownFields\n\n\tErrorMessage string `protobuf:\"bytes,1,opt,name=errorMessage,proto3\" json:\"errorMessage,omitempty\"` // errorMessage will be empty on success, and have an error message set only on an error\n}\n\nfunc (x *ContainerEventResponse) Reset() {\n\t*x = ContainerEventResponse{}\n\tif protoimpl.UnsafeEnabled {\n\t\tmi := &file_monitor_proto_msgTypes[2]\n\t\tms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))\n\t\tms.StoreMessageInfo(mi)\n\t}\n}\n\nfunc (x *ContainerEventResponse) String() string {\n\treturn protoimpl.X.MessageStringOf(x)\n}\n\nfunc (*ContainerEventResponse) ProtoMessage() {}\n\nfunc (x *ContainerEventResponse) ProtoReflect() protoreflect.Message {\n\tmi := &file_monitor_proto_msgTypes[2]\n\tif protoimpl.UnsafeEnabled && x != nil {\n\t\tms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))\n\t\tif ms.LoadMessageInfo() == nil {\n\t\t\tms.StoreMessageInfo(mi)\n\t\t}\n\t\treturn ms\n\t}\n\treturn mi.MessageOf(x)\n}\n\n// Deprecated: Use ContainerEventResponse.ProtoReflect.Descriptor instead.\nfunc (*ContainerEventResponse) Descriptor() ([]byte, []int) {\n\treturn file_monitor_proto_rawDescGZIP(), []int{2}\n}\n\nfunc (x *ContainerEventResponse) GetErrorMessage() string {\n\tif x != nil {\n\t\treturn x.ErrorMessage\n\t}\n\treturn \"\"\n}\n\nvar File_monitor_proto protoreflect.FileDescriptor\n\nvar file_monitor_proto_rawDesc = []byte{\n\t0x0a, 0x0d, 0x6d, 0x6f, 0x6e, 0x69, 0x74, 0x6f, 0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12,\n\t0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x73, 0x1a, 
0x1b, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f,\n\t0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x65, 0x6d, 0x70, 0x74, 0x79, 0x2e, 0x70,\n\t0x72, 0x6f, 0x74, 0x6f, 0x22, 0x3d, 0x0a, 0x19, 0x52, 0x75, 0x6e, 0x43, 0x43, 0x6f, 0x6e, 0x74,\n\t0x61, 0x69, 0x6e, 0x65, 0x72, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73,\n\t0x74, 0x12, 0x20, 0x0a, 0x0b, 0x63, 0x6f, 0x6d, 0x6d, 0x61, 0x6e, 0x64, 0x4c, 0x69, 0x6e, 0x65,\n\t0x18, 0x01, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0b, 0x63, 0x6f, 0x6d, 0x6d, 0x61, 0x6e, 0x64, 0x4c,\n\t0x69, 0x6e, 0x65, 0x22, 0xf0, 0x01, 0x0a, 0x18, 0x43, 0x4e, 0x49, 0x43, 0x6f, 0x6e, 0x74, 0x61,\n\t0x69, 0x6e, 0x65, 0x72, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,\n\t0x12, 0x39, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x25,\n\t0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x73, 0x2e, 0x43, 0x4e, 0x49, 0x43, 0x6f, 0x6e, 0x74, 0x61,\n\t0x69, 0x6e, 0x65, 0x72, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,\n\t0x2e, 0x54, 0x79, 0x70, 0x65, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x20, 0x0a, 0x0b, 0x63,\n\t0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x49, 0x44, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09,\n\t0x52, 0x0b, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x49, 0x44, 0x12, 0x1c, 0x0a,\n\t0x09, 0x6e, 0x65, 0x74, 0x6e, 0x73, 0x50, 0x61, 0x74, 0x68, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09,\n\t0x52, 0x09, 0x6e, 0x65, 0x74, 0x6e, 0x73, 0x50, 0x61, 0x74, 0x68, 0x12, 0x18, 0x0a, 0x07, 0x70,\n\t0x6f, 0x64, 0x4e, 0x61, 0x6d, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x70, 0x6f,\n\t0x64, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x22, 0x0a, 0x0c, 0x70, 0x6f, 0x64, 0x4e, 0x61, 0x6d, 0x65,\n\t0x73, 0x70, 0x61, 0x63, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, 0x70, 0x6f, 0x64,\n\t0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x22, 0x1b, 0x0a, 0x04, 0x54, 0x79, 0x70,\n\t0x65, 0x12, 0x07, 0x0a, 0x03, 0x41, 0x44, 0x44, 0x10, 0x00, 0x12, 
0x0a, 0x0a, 0x06, 0x44, 0x45,\n\t0x4c, 0x45, 0x54, 0x45, 0x10, 0x01, 0x22, 0x3c, 0x0a, 0x16, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69,\n\t0x6e, 0x65, 0x72, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,\n\t0x12, 0x22, 0x0a, 0x0c, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65,\n\t0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, 0x65, 0x72, 0x72, 0x6f, 0x72, 0x4d, 0x65, 0x73,\n\t0x73, 0x61, 0x67, 0x65, 0x32, 0xa7, 0x01, 0x0a, 0x04, 0x52, 0x75, 0x6e, 0x43, 0x12, 0x44, 0x0a,\n\t0x10, 0x52, 0x75, 0x6e, 0x63, 0x50, 0x72, 0x6f, 0x78, 0x79, 0x53, 0x74, 0x61, 0x72, 0x74, 0x65,\n\t0x64, 0x12, 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,\n\t0x62, 0x75, 0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x1a, 0x16, 0x2e, 0x67, 0x6f, 0x6f, 0x67,\n\t0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6d, 0x70, 0x74,\n\t0x79, 0x22, 0x00, 0x12, 0x59, 0x0a, 0x12, 0x52, 0x75, 0x6e, 0x43, 0x43, 0x6f, 0x6e, 0x74, 0x61,\n\t0x69, 0x6e, 0x65, 0x72, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x12, 0x21, 0x2e, 0x70, 0x72, 0x6f, 0x74,\n\t0x6f, 0x73, 0x2e, 0x52, 0x75, 0x6e, 0x43, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72,\n\t0x45, 0x76, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x1e, 0x2e, 0x70,\n\t0x72, 0x6f, 0x74, 0x6f, 0x73, 0x2e, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x45,\n\t0x76, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x32, 0x5e,\n\t0x0a, 0x03, 0x43, 0x4e, 0x49, 0x12, 0x57, 0x0a, 0x11, 0x43, 0x4e, 0x49, 0x43, 0x6f, 0x6e, 0x74,\n\t0x61, 0x69, 0x6e, 0x65, 0x72, 0x45, 0x76, 0x65, 0x6e, 0x74, 0x12, 0x20, 0x2e, 0x70, 0x72, 0x6f,\n\t0x74, 0x6f, 0x73, 0x2e, 0x43, 0x4e, 0x49, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72,\n\t0x45, 0x76, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x1e, 0x2e, 0x70,\n\t0x72, 0x6f, 0x74, 0x6f, 0x73, 0x2e, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 
0x72, 0x45,\n\t0x76, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x42, 0x0a,\n\t0x5a, 0x08, 0x2e, 0x3b, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x73, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,\n\t0x6f, 0x33,\n}\n\nvar (\n\tfile_monitor_proto_rawDescOnce sync.Once\n\tfile_monitor_proto_rawDescData = file_monitor_proto_rawDesc\n)\n\nfunc file_monitor_proto_rawDescGZIP() []byte {\n\tfile_monitor_proto_rawDescOnce.Do(func() {\n\t\tfile_monitor_proto_rawDescData = protoimpl.X.CompressGZIP(file_monitor_proto_rawDescData)\n\t})\n\treturn file_monitor_proto_rawDescData\n}\n\nvar file_monitor_proto_enumTypes = make([]protoimpl.EnumInfo, 1)\nvar file_monitor_proto_msgTypes = make([]protoimpl.MessageInfo, 3)\nvar file_monitor_proto_goTypes = []interface{}{\n\t(CNIContainerEventRequest_Type)(0), // 0: protos.CNIContainerEventRequest.Type\n\t(*RunCContainerEventRequest)(nil),  // 1: protos.RunCContainerEventRequest\n\t(*CNIContainerEventRequest)(nil),   // 2: protos.CNIContainerEventRequest\n\t(*ContainerEventResponse)(nil),     // 3: protos.ContainerEventResponse\n\t(*empty.Empty)(nil),                // 4: google.protobuf.Empty\n}\nvar file_monitor_proto_depIdxs = []int32{\n\t0, // 0: protos.CNIContainerEventRequest.type:type_name -> protos.CNIContainerEventRequest.Type\n\t4, // 1: protos.RunC.RuncProxyStarted:input_type -> google.protobuf.Empty\n\t1, // 2: protos.RunC.RunCContainerEvent:input_type -> protos.RunCContainerEventRequest\n\t2, // 3: protos.CNI.CNIContainerEvent:input_type -> protos.CNIContainerEventRequest\n\t4, // 4: protos.RunC.RuncProxyStarted:output_type -> google.protobuf.Empty\n\t3, // 5: protos.RunC.RunCContainerEvent:output_type -> protos.ContainerEventResponse\n\t3, // 6: protos.CNI.CNIContainerEvent:output_type -> protos.ContainerEventResponse\n\t4, // [4:7] is the sub-list for method output_type\n\t1, // [1:4] is the sub-list for method input_type\n\t1, // [1:1] is the sub-list for extension type_name\n\t1, // [1:1] is the sub-list for 
extension extendee\n\t0, // [0:1] is the sub-list for field type_name\n}\n\nfunc init() { file_monitor_proto_init() }\nfunc file_monitor_proto_init() {\n\tif File_monitor_proto != nil {\n\t\treturn\n\t}\n\tif !protoimpl.UnsafeEnabled {\n\t\tfile_monitor_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {\n\t\t\tswitch v := v.(*RunCContainerEventRequest); i {\n\t\t\tcase 0:\n\t\t\t\treturn &v.state\n\t\t\tcase 1:\n\t\t\t\treturn &v.sizeCache\n\t\t\tcase 2:\n\t\t\t\treturn &v.unknownFields\n\t\t\tdefault:\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\t\tfile_monitor_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {\n\t\t\tswitch v := v.(*CNIContainerEventRequest); i {\n\t\t\tcase 0:\n\t\t\t\treturn &v.state\n\t\t\tcase 1:\n\t\t\t\treturn &v.sizeCache\n\t\t\tcase 2:\n\t\t\t\treturn &v.unknownFields\n\t\t\tdefault:\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\t\tfile_monitor_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {\n\t\t\tswitch v := v.(*ContainerEventResponse); i {\n\t\t\tcase 0:\n\t\t\t\treturn &v.state\n\t\t\tcase 1:\n\t\t\t\treturn &v.sizeCache\n\t\t\tcase 2:\n\t\t\t\treturn &v.unknownFields\n\t\t\tdefault:\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\t}\n\ttype x struct{}\n\tout := protoimpl.TypeBuilder{\n\t\tFile: protoimpl.DescBuilder{\n\t\t\tGoPackagePath: reflect.TypeOf(x{}).PkgPath(),\n\t\t\tRawDescriptor: file_monitor_proto_rawDesc,\n\t\t\tNumEnums:      1,\n\t\t\tNumMessages:   3,\n\t\t\tNumExtensions: 0,\n\t\t\tNumServices:   2,\n\t\t},\n\t\tGoTypes:           file_monitor_proto_goTypes,\n\t\tDependencyIndexes: file_monitor_proto_depIdxs,\n\t\tEnumInfos:         file_monitor_proto_enumTypes,\n\t\tMessageInfos:      file_monitor_proto_msgTypes,\n\t}.Build()\n\tFile_monitor_proto = out.File\n\tfile_monitor_proto_rawDesc = nil\n\tfile_monitor_proto_goTypes = nil\n\tfile_monitor_proto_depIdxs = nil\n}\n\n// Reference imports to suppress errors if they are not otherwise used.\nvar _ context.Context\nvar _ 
grpc.ClientConnInterface\n\n// This is a compile-time assertion to ensure that this generated file\n// is compatible with the grpc package it is being compiled against.\nconst _ = grpc.SupportPackageIsVersion6\n\n// RunCClient is the client API for RunC service.\n//\n// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.\ntype RunCClient interface {\n\t// RuncProxyStarted is called by the PCC agent once the runc proxy has been started\n\tRuncProxyStarted(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (*empty.Empty, error)\n\t// ContainerEvent will be invoked by the runc proxy on the following events at this point:\n\t// - ‘runc start’\n\t// - 'runc delete'\n\tRunCContainerEvent(ctx context.Context, in *RunCContainerEventRequest, opts ...grpc.CallOption) (*ContainerEventResponse, error)\n}\n\ntype runCClient struct {\n\tcc grpc.ClientConnInterface\n}\n\nfunc NewRunCClient(cc grpc.ClientConnInterface) RunCClient {\n\treturn &runCClient{cc}\n}\n\nfunc (c *runCClient) RuncProxyStarted(ctx context.Context, in *empty.Empty, opts ...grpc.CallOption) (*empty.Empty, error) {\n\tout := new(empty.Empty)\n\terr := c.cc.Invoke(ctx, \"/protos.RunC/RuncProxyStarted\", in, out, opts...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn out, nil\n}\n\nfunc (c *runCClient) RunCContainerEvent(ctx context.Context, in *RunCContainerEventRequest, opts ...grpc.CallOption) (*ContainerEventResponse, error) {\n\tout := new(ContainerEventResponse)\n\terr := c.cc.Invoke(ctx, \"/protos.RunC/RunCContainerEvent\", in, out, opts...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn out, nil\n}\n\n// RunCServer is the server API for RunC service.\ntype RunCServer interface {\n\t// RuncProxyStarted is called by the PCC agent once the runc proxy has been started\n\tRuncProxyStarted(context.Context, *empty.Empty) (*empty.Empty, error)\n\t// ContainerEvent will be invoked by the runc 
proxy on the following events at this point:\n\t// - ‘runc start’\n\t// - 'runc delete'\n\tRunCContainerEvent(context.Context, *RunCContainerEventRequest) (*ContainerEventResponse, error)\n}\n\n// UnimplementedRunCServer can be embedded to have forward compatible implementations.\ntype UnimplementedRunCServer struct {\n}\n\nfunc (*UnimplementedRunCServer) RuncProxyStarted(context.Context, *empty.Empty) (*empty.Empty, error) {\n\treturn nil, status.Errorf(codes.Unimplemented, \"method RuncProxyStarted not implemented\")\n}\nfunc (*UnimplementedRunCServer) RunCContainerEvent(context.Context, *RunCContainerEventRequest) (*ContainerEventResponse, error) {\n\treturn nil, status.Errorf(codes.Unimplemented, \"method RunCContainerEvent not implemented\")\n}\n\nfunc RegisterRunCServer(s *grpc.Server, srv RunCServer) {\n\ts.RegisterService(&_RunC_serviceDesc, srv)\n}\n\nfunc _RunC_RuncProxyStarted_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {\n\tin := new(empty.Empty)\n\tif err := dec(in); err != nil {\n\t\treturn nil, err\n\t}\n\tif interceptor == nil {\n\t\treturn srv.(RunCServer).RuncProxyStarted(ctx, in)\n\t}\n\tinfo := &grpc.UnaryServerInfo{\n\t\tServer:     srv,\n\t\tFullMethod: \"/protos.RunC/RuncProxyStarted\",\n\t}\n\thandler := func(ctx context.Context, req interface{}) (interface{}, error) {\n\t\treturn srv.(RunCServer).RuncProxyStarted(ctx, req.(*empty.Empty))\n\t}\n\treturn interceptor(ctx, in, info, handler)\n}\n\nfunc _RunC_RunCContainerEvent_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {\n\tin := new(RunCContainerEventRequest)\n\tif err := dec(in); err != nil {\n\t\treturn nil, err\n\t}\n\tif interceptor == nil {\n\t\treturn srv.(RunCServer).RunCContainerEvent(ctx, in)\n\t}\n\tinfo := &grpc.UnaryServerInfo{\n\t\tServer:     srv,\n\t\tFullMethod: 
\"/protos.RunC/RunCContainerEvent\",\n\t}\n\thandler := func(ctx context.Context, req interface{}) (interface{}, error) {\n\t\treturn srv.(RunCServer).RunCContainerEvent(ctx, req.(*RunCContainerEventRequest))\n\t}\n\treturn interceptor(ctx, in, info, handler)\n}\n\nvar _RunC_serviceDesc = grpc.ServiceDesc{\n\tServiceName: \"protos.RunC\",\n\tHandlerType: (*RunCServer)(nil),\n\tMethods: []grpc.MethodDesc{\n\t\t{\n\t\t\tMethodName: \"RuncProxyStarted\",\n\t\t\tHandler:    _RunC_RuncProxyStarted_Handler,\n\t\t},\n\t\t{\n\t\t\tMethodName: \"RunCContainerEvent\",\n\t\t\tHandler:    _RunC_RunCContainerEvent_Handler,\n\t\t},\n\t},\n\tStreams:  []grpc.StreamDesc{},\n\tMetadata: \"monitor.proto\",\n}\n\n// CNIClient is the client API for CNI service.\n//\n// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.\ntype CNIClient interface {\n\t// ContainerEvent will be invoked by the CNI plugin on the following events at this point:\n\t// - ‘cmdADD start’\n\t// - 'cmdDEL delete'\n\tCNIContainerEvent(ctx context.Context, in *CNIContainerEventRequest, opts ...grpc.CallOption) (*ContainerEventResponse, error)\n}\n\ntype cNIClient struct {\n\tcc grpc.ClientConnInterface\n}\n\nfunc NewCNIClient(cc grpc.ClientConnInterface) CNIClient {\n\treturn &cNIClient{cc}\n}\n\nfunc (c *cNIClient) CNIContainerEvent(ctx context.Context, in *CNIContainerEventRequest, opts ...grpc.CallOption) (*ContainerEventResponse, error) {\n\tout := new(ContainerEventResponse)\n\terr := c.cc.Invoke(ctx, \"/protos.CNI/CNIContainerEvent\", in, out, opts...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn out, nil\n}\n\n// CNIServer is the server API for CNI service.\ntype CNIServer interface {\n\t// ContainerEvent will be invoked by the CNI plugin on the following events at this point:\n\t// - ‘cmdADD start’\n\t// - 'cmdDEL delete'\n\tCNIContainerEvent(context.Context, *CNIContainerEventRequest) 
(*ContainerEventResponse, error)\n}\n\n// UnimplementedCNIServer can be embedded to have forward compatible implementations.\ntype UnimplementedCNIServer struct {\n}\n\nfunc (*UnimplementedCNIServer) CNIContainerEvent(context.Context, *CNIContainerEventRequest) (*ContainerEventResponse, error) {\n\treturn nil, status.Errorf(codes.Unimplemented, \"method CNIContainerEvent not implemented\")\n}\n\nfunc RegisterCNIServer(s *grpc.Server, srv CNIServer) {\n\ts.RegisterService(&_CNI_serviceDesc, srv)\n}\n\nfunc _CNI_CNIContainerEvent_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {\n\tin := new(CNIContainerEventRequest)\n\tif err := dec(in); err != nil {\n\t\treturn nil, err\n\t}\n\tif interceptor == nil {\n\t\treturn srv.(CNIServer).CNIContainerEvent(ctx, in)\n\t}\n\tinfo := &grpc.UnaryServerInfo{\n\t\tServer:     srv,\n\t\tFullMethod: \"/protos.CNI/CNIContainerEvent\",\n\t}\n\thandler := func(ctx context.Context, req interface{}) (interface{}, error) {\n\t\treturn srv.(CNIServer).CNIContainerEvent(ctx, req.(*CNIContainerEventRequest))\n\t}\n\treturn interceptor(ctx, in, info, handler)\n}\n\nvar _CNI_serviceDesc = grpc.ServiceDesc{\n\tServiceName: \"protos.CNI\",\n\tHandlerType: (*CNIServer)(nil),\n\tMethods: []grpc.MethodDesc{\n\t\t{\n\t\t\tMethodName: \"CNIContainerEvent\",\n\t\t\tHandler:    _CNI_CNIContainerEvent_Handler,\n\t\t},\n\t},\n\tStreams:  []grpc.StreamDesc{},\n\tMetadata: \"monitor.proto\",\n}\n"
  },
  {
    "path": "monitor/api/spec/protos/monitor.proto",
    "content": "syntax = \"proto3\";\noption go_package = \".;protos\";\npackage protos;\n\nimport \"google/protobuf/empty.proto\";\n\n// RunCClient is the API between runc-proxy  and  enforcerd\nservice RunC {\n    // RuncProxyStarted is called by the PCC agent once the runc proxy has been started\n    rpc RuncProxyStarted (google.protobuf.Empty) returns (google.protobuf.Empty) {}\n\n \n    // ContainerEvent will be invoked by the runc proxy on the following events at this point:\n    // - ‘runc start’\n    // - 'runc delete'\n    rpc RunCContainerEvent (RunCContainerEventRequest) returns (ContainerEventResponse) {} \n}\n\n// CNIClient is the API between cni-plugin  and  enforcerd\nservice CNI {\n    // ContainerEvent will be invoked by the CNI plugin on the following events at this point:\n    // - ‘cmdADD start’\n    // - 'cmdDEL delete'\n    rpc CNIContainerEvent (CNIContainerEventRequest) returns (ContainerEventResponse) {} \n}\n\nmessage RunCContainerEventRequest {\n    repeated string commandLine = 1; // the full commandline of the runc command incl. flags, etc. - this is expected to come from `os.Args`\n}\n\nmessage CNIContainerEventRequest {\n    enum Type {\n        ADD = 0;\n        DELETE = 1;\n    }\n    Type type = 1;\n    string containerID\t= 2;\n    string netnsPath = 3;\n    string podName = 4;\n    string podNamespace = 5;\n}\n\nmessage ContainerEventResponse {\n    string errorMessage = 1; // errorMessage will be empty on success, and have an error message set only on an error\n}\n"
  },
  {
    "path": "monitor/config/config.go",
    "content": "package config\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/external\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// Type specifies the type of monitors supported.\ntype Type int\n\n// Types supported.\nconst (\n\tDocker Type = iota + 1\n\tLinuxProcess\n\tLinuxHost\n\tK8s\n\tWindows\n)\n\n// MonitorConfig specifies the configs for monitors.\ntype MonitorConfig struct {\n\tCommon    *ProcessorConfig\n\tMergeTags []string\n\tMonitors  map[Type]interface{}\n}\n\n// String returns the configuration in string\nfunc (c *MonitorConfig) String() string {\n\tbuf := fmt.Sprintf(\"MergeTags:[%s] \", strings.Join(c.MergeTags, \",\"))\n\tbuf += fmt.Sprintf(\"Common:%+v \", c.Common)\n\tbuf += fmt.Sprintf(\"Monitors:{\") // nolint\n\tfor k, v := range c.Monitors {\n\t\tbuf += fmt.Sprintf(\"{%d:%+v},\", k, v)\n\t}\n\tbuf += fmt.Sprintf(\"}\") // nolint:gosimple // lint:ignore S1039\n\treturn buf\n}\n\n// ProcessorConfig holds configuration for the processors\ntype ProcessorConfig struct {\n\tCollector           collector.EventCollector\n\tPolicy              policy.Resolver\n\tExternalEventSender []external.ReceiverRegistration\n\tMergeTags           []string\n\tResyncLock          *sync.RWMutex\n}\n\n// IsComplete checks if configuration is complete\nfunc (c *ProcessorConfig) IsComplete() error {\n\n\tif c.Collector == nil {\n\t\treturn fmt.Errorf(\"Missing configuration: collector\")\n\t}\n\n\tif c.Policy == nil {\n\t\treturn fmt.Errorf(\"Missing configuration: puHandler\")\n\t}\n\tif c.ResyncLock == nil {\n\t\treturn fmt.Errorf(\"Missing resyncLock: puHandler\")\n\t}\n\t// not all monitors implement external.ReceiveEvents\n\t// so ExternalEventSender is optional\n\n\treturn nil\n}\n"
  },
  {
    "path": "monitor/constants/constants.go",
    "content": "package constants\n\nconst (\n\t// DefaultDockerSocket is the default socket to use to communicate with docker\n\t// it is canonicalized with utils.GetPathOnHostViaProcRoot() at point of use\n\tDefaultDockerSocket = \"/var/run/docker.sock\"\n\n\t// DefaultDockerSocketType is unix\n\tDefaultDockerSocketType = \"unix\"\n\n\t// K8sPodName is pod name of K8s pod.\n\tK8sPodName = \"io.kubernetes.pod.name\"\n\n\t// K8sPodNamespace is the namespace of K8s pod.\n\tK8sPodNamespace = \"io.kubernetes.pod.namespace\"\n)\n\nconst (\n\t// DockerHostMode is the string of the network mode that indicates a host namespace\n\tDockerHostMode = \"host\"\n\t// DockerLinkedMode is the string of the network mode that indicates shared network namespace\n\tDockerLinkedMode = \"container:\"\n\n\t// DockerHostPUID represents the PUID of the host network container.\n\tDockerHostPUID = \"HostPUID\"\n\n\t// UserLabelPrefix is the label prefix for all user defined labels\n\tUserLabelPrefix = \"@usr:\"\n)\n\nconst (\n\t// K8sMonitorRegistrationName is used as the registration constant with the external sender (gRPC server)\n\tK8sMonitorRegistrationName = \"k8sMonitor\"\n\n\t// MonitorExtSenderName is the name of the monitor that registers with the trireme monitors to send events\n\tMonitorExtSenderName = \"grpcMonitorServer\"\n)\n"
  },
  {
    "path": "monitor/external/interfaces.go",
    "content": "package external\n\nimport (\n\t\"context\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\n// ReceiveEvents can be implemented by monitors which receive their monitoring events\n// for processing from different parts of the stack.\ntype ReceiveEvents interface {\n\t// Event will receive event `data` for processing a common.Event in the monitor.\n\t// The sent data is implementation specific - therefore it has no type in the interface.\n\t// If the sent data is of an unexpected type, its implementor must return an error\n\t// indicating so.\n\tEvent(ctx context.Context, ev common.Event, data interface{}) error\n\n\t// SenderReady will be called by the sender to notify the receiver that the sender\n\t// is now ready to send events.\n\tSenderReady()\n}\n\n// ReceiverRegistration allows the trireme monitors to register themselves to receive events\n// from an implementor. This interface is expected to be implemented outside of the monitor\n// for the component which generates the event data for the registering monitor.\n// The implementor must have a unique name which gets returned from `SenderName()`.\n// The implementor is responsible for calling `Event()` on all monitors once they have\n// registered through `Register()`.\ntype ReceiverRegistration interface {\n\t// SenderName must return a globally unique name of the implementor.\n\tSenderName() string\n\n\t// Register will register the given `monitor` for receiving events under `name`.\n\t// The registering monitor must implement `ReceiveEvents` before it can register.\n\t// Multiple calls to this function for the same `name` must update the internal\n\t// state of the implementor to now send events to the newly regitered monitor of this\n\t// name. Only one registration of a monitor of the same name is allowed.\n\tRegister(name string, monitor ReceiveEvents) error\n}\n"
  },
  {
    "path": "monitor/external/mockexternal/mockinterfaces.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: go.aporeto.io/enforcerd/trireme-lib/monitor/external (interfaces: ReceiveEvents,ReceiverRegistration)\n\n// Package mockexternal is a generated GoMock package.\npackage mockexternal\n\nimport (\n\tcontext \"context\"\n\tgomock \"github.com/golang/mock/gomock\"\n\tcommon \"go.aporeto.io/enforcerd/trireme-lib/common\"\n\texternal \"go.aporeto.io/enforcerd/trireme-lib/monitor/external\"\n\treflect \"reflect\"\n)\n\n// MockReceiveEvents is a mock of ReceiveEvents interface\ntype MockReceiveEvents struct {\n\tctrl     *gomock.Controller\n\trecorder *MockReceiveEventsMockRecorder\n}\n\n// MockReceiveEventsMockRecorder is the mock recorder for MockReceiveEvents\ntype MockReceiveEventsMockRecorder struct {\n\tmock *MockReceiveEvents\n}\n\n// NewMockReceiveEvents creates a new mock instance\nfunc NewMockReceiveEvents(ctrl *gomock.Controller) *MockReceiveEvents {\n\tmock := &MockReceiveEvents{ctrl: ctrl}\n\tmock.recorder = &MockReceiveEventsMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockReceiveEvents) EXPECT() *MockReceiveEventsMockRecorder {\n\treturn m.recorder\n}\n\n// Event mocks base method\nfunc (m *MockReceiveEvents) Event(arg0 context.Context, arg1 common.Event, arg2 interface{}) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Event\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Event indicates an expected call of Event\nfunc (mr *MockReceiveEventsMockRecorder) Event(arg0, arg1, arg2 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Event\", reflect.TypeOf((*MockReceiveEvents)(nil).Event), arg0, arg1, arg2)\n}\n\n// SenderReady mocks base method\nfunc (m *MockReceiveEvents) SenderReady() {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"SenderReady\")\n}\n\n// SenderReady indicates an expected call of SenderReady\nfunc (mr 
*MockReceiveEventsMockRecorder) SenderReady() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SenderReady\", reflect.TypeOf((*MockReceiveEvents)(nil).SenderReady))\n}\n\n// MockReceiverRegistration is a mock of ReceiverRegistration interface\ntype MockReceiverRegistration struct {\n\tctrl     *gomock.Controller\n\trecorder *MockReceiverRegistrationMockRecorder\n}\n\n// MockReceiverRegistrationMockRecorder is the mock recorder for MockReceiverRegistration\ntype MockReceiverRegistrationMockRecorder struct {\n\tmock *MockReceiverRegistration\n}\n\n// NewMockReceiverRegistration creates a new mock instance\nfunc NewMockReceiverRegistration(ctrl *gomock.Controller) *MockReceiverRegistration {\n\tmock := &MockReceiverRegistration{ctrl: ctrl}\n\tmock.recorder = &MockReceiverRegistrationMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockReceiverRegistration) EXPECT() *MockReceiverRegistrationMockRecorder {\n\treturn m.recorder\n}\n\n// Register mocks base method\nfunc (m *MockReceiverRegistration) Register(arg0 string, arg1 external.ReceiveEvents) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Register\", arg0, arg1)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Register indicates an expected call of Register\nfunc (mr *MockReceiverRegistrationMockRecorder) Register(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Register\", reflect.TypeOf((*MockReceiverRegistration)(nil).Register), arg0, arg1)\n}\n\n// SenderName mocks base method\nfunc (m *MockReceiverRegistration) SenderName() string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SenderName\")\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// SenderName indicates an expected call of SenderName\nfunc (mr *MockReceiverRegistrationMockRecorder) SenderName() *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SenderName\", reflect.TypeOf((*MockReceiverRegistration)(nil).SenderName))\n}\n"
  },
  {
    "path": "monitor/extractors/constants.go",
    "content": "// +build !windows\n\npackage extractors\n\n// OSHostString holds the OS host string\nconst OSHostString = \"linux\"\n"
  },
  {
    "path": "monitor/extractors/constants_windows.go",
    "content": "package extractors\n\n// OSHostString holds the windows host string\nconst OSHostString = \"windows\"\n"
  },
  {
    "path": "monitor/extractors/docker.go",
    "content": "package extractors\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"os/exec\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/docker/docker/api/types\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cgnetcls\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\n// A DockerMetadataExtractor is a function used to extract a *policy.PURuntime from a given\n// docker ContainerJSON.\ntype DockerMetadataExtractor func(*types.ContainerJSON) (*policy.PURuntime, error)\n\n// DefaultMetadataExtractor is the default metadata extractor for Docker\nfunc DefaultMetadataExtractor(info *types.ContainerJSON) (*policy.PURuntime, error) {\n\n\t// trigger new build\n\ttags := policy.NewTagStore()\n\ttags.AppendKeyValue(\"@app:image\", info.Config.Image)\n\ttags.AppendKeyValue(\"@app:extractor\", \"docker\")\n\ttags.AppendKeyValue(\"@app:docker:name\", info.Name)\n\n\tfor k, v := range info.Config.Labels {\n\t\tif len(strings.TrimSpace(k)) == 0 {\n\t\t\tcontinue\n\t\t}\n\t\tvalue := v\n\t\tif len(v) == 0 {\n\t\t\tvalue = \"<empty>\"\n\t\t}\n\t\tif !strings.HasPrefix(k, constants.UserLabelPrefix) {\n\t\t\ttags.AppendKeyValue(constants.UserLabelPrefix+k, value)\n\t\t} else {\n\t\t\ttags.AppendKeyValue(k, value)\n\t\t}\n\t}\n\n\tipa := policy.ExtendedMap{\n\t\t\"bridge\": info.NetworkSettings.IPAddress,\n\t}\n\n\tif info.HostConfig.NetworkMode == constants.DockerHostMode {\n\t\treturn policy.NewPURuntime(info.Name, info.State.Pid, \"\", tags, ipa, common.LinuxProcessPU, policy.None, hostModeOptions(info)), nil\n\t}\n\n\treturn policy.NewPURuntime(info.Name, info.State.Pid, \"\", tags, ipa, common.ContainerPU, policy.None, nil), nil\n}\n\n// hostModeOptions creates the default options for a host-mode container. 
This is done\n// based on the policy and the metadata extractor logic and can vary by implementation\nfunc hostModeOptions(dockerInfo *types.ContainerJSON) *policy.OptionsType {\n\n\toptions := policy.OptionsType{\n\t\tCgroupName: strconv.Itoa(dockerInfo.State.Pid),\n\t\tCgroupMark: strconv.FormatUint(cgnetcls.MarkVal(), 10),\n\t\tAutoPort:   true,\n\t}\n\n\tfor p := range dockerInfo.Config.ExposedPorts {\n\t\tif p.Proto() == \"tcp\" {\n\t\t\ts, err := portspec.NewPortSpecFromString(p.Port(), nil)\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\toptions.Services = append(options.Services, common.Service{\n\t\t\t\tProtocol: uint8(6),\n\t\t\t\tPorts:    s,\n\t\t\t})\n\t\t}\n\t}\n\n\treturn &options\n}\n\n// NewExternalExtractor returns a new bash metadata extractor for Docker that will call\n// the executable given as a parameter and will generate a Policy Runtime on standard output.\n// The input and output of the executable are standard JSON.\nfunc NewExternalExtractor(filePath string) (DockerMetadataExtractor, error) {\n\n\tif filePath == \"\" {\n\t\treturn nil, errors.New(\"file argument is empty in bash extractor\")\n\t}\n\n\tpath, err := exec.LookPath(filePath)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"exec file not found %s: %s\", filePath, err)\n\t}\n\n\tif _, err := os.Stat(path); os.IsNotExist(err) {\n\t\treturn nil, fmt.Errorf(\"exec file not found %s: %s\", filePath, err)\n\t}\n\n\t// Generate a new function\n\texternalExtractor := func(dockerInfo *types.ContainerJSON) (*policy.PURuntime, error) {\n\n\t\tdockerInfoJSON, err := json.Marshal(dockerInfo)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to marshal docker info: %s\", err)\n\t\t}\n\n\t\tcmd := exec.Command(path, string(dockerInfoJSON))\n\t\tjsonResult, err := cmd.Output()\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"unable to run bash extractor: %s\", err)\n\t\t}\n\n\t\tvar m policy.PURuntime\n\t\terr = json.Unmarshal(jsonResult, &m)\n\t\tif err != nil 
{\n\t\t\treturn nil, fmt.Errorf(\"unable to unmarshal data from bash extractor: %s\", err)\n\t\t}\n\n\t\treturn &m, nil\n\t}\n\n\treturn externalExtractor, nil\n}\n"
  },
  {
    "path": "monitor/extractors/docker_test.go",
    "content": "// +build !windows\n\npackage extractors\n\nimport (\n\t\"bufio\"\n\t\"fmt\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/docker/docker/api/types\"\n\t\"github.com/docker/docker/api/types/container\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/constants\"\n)\n\nfunc TestDefaultMetadataExtractor(t *testing.T) {\n\tinfo := &types.ContainerJSON{\n\t\tContainerJSONBase: &types.ContainerJSONBase{\n\t\t\tName:  \"name\",\n\t\t\tState: &types.ContainerState{},\n\t\t\tHostConfig: &container.HostConfig{\n\t\t\t\tNetworkMode: constants.DockerHostMode,\n\t\t\t},\n\t\t},\n\t\tNetworkSettings: &types.NetworkSettings{\n\t\t\tDefaultNetworkSettings: types.DefaultNetworkSettings{\n\t\t\t\tIPAddress: \"10.0.0.1\",\n\t\t\t},\n\t\t},\n\t\tConfig: &container.Config{\n\t\t\tImage: \"image\",\n\t\t\tLabels: map[string]string{\n\t\t\t\t\"   \":            \"remove me\",\n\t\t\t\t\"empty-label\":    \"\",\n\t\t\t\t\"standard-label\": \"one\",\n\t\t\t},\n\t\t},\n\t}\n\n\tpu, err := DefaultMetadataExtractor(info)\n\tif err != nil {\n\t\tt.Error(err)\n\t}\n\tvar foundEmptyTag bool\n\tfor _, tag := range pu.Tags().GetSlice() {\n\t\tif tag == \"@usr:empty-label=<empty>\" {\n\t\t\tfoundEmptyTag = true\n\t\t\tbreak\n\t\t}\n\t}\n\tif !foundEmptyTag {\n\t\tt.Error(\"empty tag not found\")\n\t}\n}\n\nfunc TestCreate(t *testing.T) {\n\t// Test for Empty file.\n\t_, err := NewExternalExtractor(\"\")\n\tif err == nil {\n\t\tt.Errorf(\"Expected Error, but got none\")\n\t}\n\n\t// Test for NonExistent file.\n\t_, err = NewExternalExtractor(\"/tmp/abcde.test\")\n\tif err == nil {\n\t\tt.Errorf(\"Expected Error, but got none\")\n\t}\n}\n\nconst testfile = `#!/bin/sh\necho '{\"Pid\":16823,\"Name\":\"/stoic_snyder\",\"IPAddresses\":{\"bridge\":\"172.17.0.2\"},\"Tags\":{\"image\":\"nginx\",\"name\":\"/stoic_snyder\"}}'\n`\n\nfunc createFileTest(destination string) error {\n\n\tfileHandle, err := os.Create(destination)\n\tif err != nil {\n\t\treturn err\n\t}\n\twriter := 
bufio.NewWriter(fileHandle)\n\tfmt.Fprintln(writer, testfile)\n\twriter.Flush() // nolint: errcheck\n\treturn nil\n}\n\nfunc TestReturnedFunc(t *testing.T) {\n\n\tif err := createFileTest(\"/tmp/test.sh\"); err != nil {\n\t\tt.Skipf(\"Skip test because no support for writing files to /tmp\")\n\t}\n\tfunction, err := NewExternalExtractor(\"/tmp/test.sh\")\n\tif err != nil {\n\t\tt.Skipf(\"Skip test because no support for writing files to /tmp\")\n\t}\n\t_, err = function(nil)\n\tif err != nil {\n\t\tt.Errorf(\"Failed to create extractor\")\n\t}\n}\n"
  },
  {
    "path": "monitor/extractors/error.go",
    "content": "package extractors\n\nimport (\n\t\"fmt\"\n)\n\ntype errNetclsAlreadyProgrammed struct {\n\tmark string\n}\n\nfunc (e *errNetclsAlreadyProgrammed) Error() string {\n\treturn fmt.Sprintf(\"net_cls cgroup already programmed with mark %s\", e.mark)\n}\n\n// ErrNetclsAlreadyProgrammed is returned from the NetclsProgrammer when the net_cls cgroup for this pod has already been programmed\nfunc ErrNetclsAlreadyProgrammed(mark string) error {\n\treturn &errNetclsAlreadyProgrammed{mark: mark}\n}\n\n// ErrNoHostNetworkPod is returned from the NetclsProgrammer if the given pod is not a host network pod.\nvar ErrNoHostNetworkPod = fmt.Errorf(\"pod is not a host network pod\")\n\n// IsErrNetclsAlreadyProgrammed checks if the provided error is an ErrNetclsAlreadyProgrammed error\nfunc IsErrNetclsAlreadyProgrammed(err error) bool {\n\tswitch err.(type) {\n\tcase *errNetclsAlreadyProgrammed:\n\t\treturn true\n\tdefault:\n\t\treturn false\n\t}\n}\n\n// IsErrNoHostNetworkPod checks if the provided error is an ErrNoHostNetworkPod error\nfunc IsErrNoHostNetworkPod(err error) bool {\n\treturn err.Error() == ErrNoHostNetworkPod.Error()\n}\n"
  },
  {
    "path": "monitor/extractors/error_test.go",
    "content": "package extractors\n\nimport (\n\t\"fmt\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestErrors(t *testing.T) {\n\tConvey(\"Testing ErrNetclsAlreadyProgrammed handling functions\", t, func() {\n\t\terr := ErrNetclsAlreadyProgrammed(\"mark\")\n\t\tSo(err, ShouldNotBeNil)\n\n\t\texpected := fmt.Sprintf(\"net_cls cgroup already programmed with mark %s\", \"mark\")\n\t\tSo(err.Error(), ShouldEqual, expected)\n\n\t\tSo(IsErrNetclsAlreadyProgrammed(err), ShouldBeTrue)\n\t\tSo(IsErrNetclsAlreadyProgrammed(ErrNoHostNetworkPod), ShouldBeFalse)\n\t})\n\tConvey(\"Testing ErrNoHostNetworkPod handling functions\", t, func() {\n\t\tSo(IsErrNoHostNetworkPod(ErrNoHostNetworkPod), ShouldBeTrue)\n\t\tSo(IsErrNoHostNetworkPod(fmt.Errorf(\"random\")), ShouldBeFalse)\n\t})\n}\n"
  },
  {
    "path": "monitor/extractors/interface.go",
    "content": "package extractors\n\nimport (\n\t\"context\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n)\n\n// EventMetadataExtractor is a function used to extract a *policy.PURuntime from a given\n// EventInfo. The EventInfo is generic and is provided over the RPC interface\ntype EventMetadataExtractor func(*common.EventInfo) (*policy.PURuntime, error)\n\n// PodMetadataExtractor is a function used to extract a *policy.PURuntime from a given\n// Kubernetes pod. It can furthermore extract more information using the client.\n// The 5th argument (bool) indicates if a network namespace should get extracted\ntype PodMetadataExtractor func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error)\n\n// PodPidsSetMaxProcsProgrammer is a function used to program the pids cgroup of a pod for Trireme.\ntype PodPidsSetMaxProcsProgrammer func(ctx context.Context, pod *corev1.Pod, maxProcs int) error\n"
  },
  {
    "path": "monitor/extractors/kubernetes.go",
    "content": "package extractors\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n\tapi \"k8s.io/api/core/v1\"\n)\n\n// KubernetesPodNameIdentifier is the label used by Docker for the K8S pod name.\nconst KubernetesPodNameIdentifier = \"@usr:io.kubernetes.pod.name\"\n\n// KubernetesPodNamespaceIdentifier is the label used by Docker for the K8S namespace.\nconst KubernetesPodNamespaceIdentifier = \"@usr:io.kubernetes.pod.namespace\"\n\n// KubernetesContainerNameIdentifier is the label used by Docker for the K8S container name.\nconst KubernetesContainerNameIdentifier = \"@usr:io.kubernetes.container.name\"\n\n// KubernetesInfraContainerName is the name of the infra POD.\nconst KubernetesInfraContainerName = \"POD\"\n\n// UpstreamOldNameIdentifier is the identifier used to identify the nane on the resulting PU\n// TODO: Remove OLDTAGS\nconst UpstreamOldNameIdentifier = \"@k8s:name\"\n\n// UpstreamNameIdentifier is the identifier used to identify the nane on the resulting PU\nconst UpstreamNameIdentifier = \"@app:k8s:name\"\n\n// UpstreamOldNamespaceIdentifier is the identifier used to identify the nanespace on the resulting PU\nconst UpstreamOldNamespaceIdentifier = \"@k8s:namespace\"\n\n// UpstreamNamespaceIdentifier is the identifier used to identify the nanespace on the resulting PU\nconst UpstreamNamespaceIdentifier = \"@app:k8s:namespace\"\n\n// UserLabelPrefix is the label prefix for all user defined labels\nconst UserLabelPrefix = \"@usr:\"\n\n// KubernetesMetadataExtractorType is an extractor function for Kubernetes.\n// It takes as parameter a standard Docker runtime and a Pod Kubernetes definition and return a PolicyRuntime\n// This extractor also provides an extra boolean parameter that is used as a token to decide if activation is required.\ntype KubernetesMetadataExtractorType func(runtime policy.RuntimeReader, pod *api.Pod) (*policy.PURuntime, bool, error)\n\n// DefaultKubernetesMetadataExtractor 
is a default implementation for the medatadata extractor for Kubernetes\n// It only activates the POD//INFRA containers and strips all the labels from docker to only keep the ones from Kubernetes\nfunc DefaultKubernetesMetadataExtractor(runtime policy.RuntimeReader, pod *api.Pod) (*policy.PURuntime, bool, error) {\n\n\tif runtime == nil {\n\t\treturn nil, false, fmt.Errorf(\"empty runtime\")\n\t}\n\n\tif pod == nil {\n\t\treturn nil, false, fmt.Errorf(\"empty pod\")\n\t}\n\n\t// In this specific metadataExtractor we only want to activate the Infra Container for each pod.\n\tif !isPodInfraContainer(runtime) {\n\t\treturn nil, false, nil\n\t}\n\n\tpodLabels := pod.GetLabels()\n\tif podLabels == nil {\n\t\tpodLabels = make(map[string]string)\n\t}\n\tfor key, value := range podLabels {\n\t\tif len(strings.TrimSpace(key)) == 0 {\n\t\t\tdelete(podLabels, key)\n\t\t}\n\t\tif len(value) == 0 {\n\t\t\tpodLabels[key] = \"<empty>\"\n\t\t}\n\t}\n\n\ttags := policy.NewTagStoreFromMap(podLabels)\n\ttags.AppendKeyValue(UpstreamOldNameIdentifier, pod.GetName())\n\ttags.AppendKeyValue(UpstreamNameIdentifier, pod.GetName())\n\ttags.AppendKeyValue(UpstreamOldNamespaceIdentifier, pod.GetNamespace())\n\ttags.AppendKeyValue(UpstreamNamespaceIdentifier, pod.GetNamespace())\n\n\toriginalRuntime, ok := runtime.(*policy.PURuntime)\n\tif !ok {\n\t\treturn nil, false, fmt.Errorf(\"Error casting puruntime\")\n\t}\n\n\tnewRuntime := originalRuntime.Clone()\n\tnewRuntime.SetTags(tags)\n\n\tzap.L().Debug(\"kubernetes runtime tags\", zap.String(\"name\", pod.GetName()), zap.String(\"namespace\", pod.GetNamespace()), zap.Strings(\"tags\", newRuntime.Tags().GetSlice()))\n\n\treturn newRuntime, true, nil\n}\n\n// isPodInfraContainer returns true if the runtime represents the infra container for the POD\nfunc isPodInfraContainer(runtime policy.RuntimeReader) bool {\n\t// The Infra container can be found by checking env. 
variable.\n\ttagContent, ok := runtime.Tag(KubernetesContainerNameIdentifier)\n\tif !ok || tagContent != KubernetesInfraContainerName {\n\t\treturn false\n\t}\n\n\treturn true\n}\n"
  },
  {
    "path": "monitor/extractors/kubernetes_test.go",
    "content": "// +build !windows\n\npackage extractors\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\n\t\"go.aporeto.io/trireme-lib/policy\"\n\tapi \"k8s.io/api/core/v1\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\nfunc TestDefaultKubernetesMetadataExtractor(t *testing.T) {\n\tConvey(\"TestDefaultKubernetesMetadataExtractor\", t, func() {\n\t\tpod1 := &corev1.Pod{}\n\t\tpod1.SetName(\"test\")\n\t\tpod1.SetNamespace(\"ns\")\n\t\tpod1.SetLabels(map[string]string{\n\t\t\t\"    \":        \"removeme\",\n\t\t\t\"label\":       \"one\",\n\t\t\t\"empty-label\": \"\",\n\t\t})\n\n\t\tpod2 := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"test2\",\n\t\t\t\tNamespace: \"ns\",\n\t\t\t},\n\t\t}\n\n\t\truntimeDocker := policy.NewPURuntimeWithDefaults()\n\t\ttags := runtimeDocker.Tags()\n\t\ttags.AppendKeyValue(KubernetesContainerNameIdentifier, \"POD\")\n\t\truntimeDocker.SetTags(tags)\n\n\t\truntimeResult1 := policy.NewPURuntimeWithDefaults()\n\t\ttags = runtimeResult1.Tags()\n\t\ttags.AppendKeyValue(\"label\", \"one\")\n\t\ttags.AppendKeyValue(\"empty-label\", \"<empty>\")\n\t\ttags.AppendKeyValue(UpstreamOldNameIdentifier, \"test\")\n\t\ttags.AppendKeyValue(UpstreamNameIdentifier, \"test\")\n\t\ttags.AppendKeyValue(UpstreamOldNamespaceIdentifier, \"ns\")\n\t\ttags.AppendKeyValue(UpstreamNamespaceIdentifier, \"ns\")\n\t\truntimeResult1.SetTags(tags)\n\n\t\truntimeResult2 := policy.NewPURuntimeWithDefaults()\n\t\ttags = runtimeResult2.Tags()\n\t\ttags.AppendKeyValue(UpstreamOldNameIdentifier, \"test2\")\n\t\ttags.AppendKeyValue(UpstreamNameIdentifier, \"test2\")\n\t\ttags.AppendKeyValue(UpstreamOldNamespaceIdentifier, \"ns\")\n\t\ttags.AppendKeyValue(UpstreamNamespaceIdentifier, \"ns\")\n\t\truntimeResult2.SetTags(tags)\n\n\t\ttype args struct {\n\t\t\truntime policy.RuntimeReader\n\t\t\tpod     *api.Pod\n\t\t}\n\t\ttests := []struct {\n\t\t\tname    string\n\t\t\targs   
 args\n\t\t\twant    *policy.PURuntime\n\t\t\twant1   bool\n\t\t\twantErr bool\n\t\t}{\n\t\t\t{\n\t\t\t\tname: \"empty1\",\n\t\t\t\targs: args{\n\t\t\t\t\truntime: nil,\n\t\t\t\t\tpod:     &api.Pod{},\n\t\t\t\t},\n\t\t\t\twantErr: true,\n\t\t\t},\n\t\t\t{\n\t\t\t\tname: \"empty2\",\n\t\t\t\targs: args{\n\t\t\t\t\truntime: policy.NewPURuntimeWithDefaults(),\n\t\t\t\t\tpod:     nil,\n\t\t\t\t},\n\t\t\t\twantErr: true,\n\t\t\t},\n\t\t\t{\n\t\t\t\tname: \"Simple test, non Kubernetes Container\",\n\t\t\t\targs: args{\n\t\t\t\t\truntime: policy.NewPURuntimeWithDefaults(),\n\t\t\t\t\tpod:     pod1,\n\t\t\t\t},\n\t\t\t\twant:    nil,\n\t\t\t\twant1:   false,\n\t\t\t\twantErr: false,\n\t\t\t},\n\t\t\t{\n\t\t\t\tname: \"Simple test\",\n\t\t\t\targs: args{\n\t\t\t\t\truntime: runtimeDocker,\n\t\t\t\t\tpod:     pod1,\n\t\t\t\t},\n\t\t\t\twant:    runtimeResult1,\n\t\t\t\twant1:   true,\n\t\t\t\twantErr: false,\n\t\t\t},\n\t\t\t{\n\t\t\t\tname: \"Simple test 2\",\n\t\t\t\targs: args{\n\t\t\t\t\truntime: runtimeDocker,\n\t\t\t\t\tpod:     pod2,\n\t\t\t\t},\n\t\t\t\twant:    runtimeResult2,\n\t\t\t\twant1:   true,\n\t\t\t\twantErr: false,\n\t\t\t},\n\t\t}\n\t\tfor _, tt := range tests {\n\t\t\tConvey(tt.name, func() {\n\t\t\t\tgot, got1, err := DefaultKubernetesMetadataExtractor(tt.args.runtime, tt.args.pod)\n\t\t\t\tSo(err != nil, ShouldEqual, tt.wantErr)\n\t\t\t\tif got != nil && got.Tags() != nil {\n\t\t\t\t\tSo(got.Tags().Tags, ShouldHaveLength, len(tt.want.Tags().Tags))\n\t\t\t\t\tfor _, tag := range tt.want.Tags().Tags {\n\t\t\t\t\t\tSo(got.Tags().Tags, ShouldContain, tag)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tSo(got1, ShouldEqual, tt.want1)\n\t\t\t})\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "monitor/extractors/linux.go",
    "content": "package extractors\n\nimport (\n\t\"debug/elf\"\n\t\"fmt\"\n\t\"os/user\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/shirou/gopsutil/process\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cgnetcls\"\n\tportspec \"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\n// LinuxMetadataExtractorType is a type of Linux metadata extractors\ntype LinuxMetadataExtractorType func(event *common.EventInfo) (*policy.PURuntime, error)\n\n// DefaultHostMetadataExtractor is a host specific metadata extractor\nfunc DefaultHostMetadataExtractor(event *common.EventInfo) (*policy.PURuntime, error) {\n\n\truntimeTags := policy.NewTagStore()\n\n\tfor _, tag := range event.Tags {\n\t\tparts := strings.SplitN(tag, \"=\", 2)\n\t\tif len(parts) != 2 {\n\t\t\treturn nil, fmt.Errorf(\"invalid tag: %s\", tag)\n\t\t}\n\t\truntimeTags.AppendKeyValue(\"@usr:\"+parts[0], parts[1])\n\t}\n\n\toptions := &policy.OptionsType{\n\t\tCgroupName: event.PUID,\n\t\tCgroupMark: strconv.FormatUint(cgnetcls.MarkVal(), 10),\n\t\tServices:   event.Services,\n\t}\n\n\truntimeIps := policy.ExtendedMap{\"bridge\": \"0.0.0.0/0\"}\n\n\treturn policy.NewPURuntime(event.Name, int(event.PID), \"\", runtimeTags, runtimeIps, event.PUType, policy.None, options), nil\n}\n\n// SystemdEventMetadataExtractor is a systemd based metadata extractor\n// TODO: Remove OLDTAGS\nfunc SystemdEventMetadataExtractor(event *common.EventInfo) (*policy.PURuntime, error) {\n\n\truntimeTags := policy.NewTagStore()\n\n\tfor _, tag := range event.Tags {\n\t\tparts := strings.SplitN(tag, \"=\", 2)\n\t\tif len(parts) != 2 {\n\t\t\treturn nil, fmt.Errorf(\"invalid tag: %s\", tag)\n\t\t}\n\t\tkey, value := parts[0], parts[1]\n\n\t\tif strings.HasPrefix(key, \"@app:linux:\") {\n\t\t\truntimeTags.AppendKeyValue(key, value)\n\t\t\tcontinue\n\t\t}\n\n\t\truntimeTags.AppendKeyValue(\"@usr:\"+key, 
value)\n\t}\n\n\tuserdata := ProcessInfo(event.PID)\n\n\tfor _, u := range userdata {\n\t\truntimeTags.AppendKeyValue(\"@app:linux:\"+u, \"true\")\n\t}\n\n\truntimeTags.AppendKeyValue(\"@os:hostname\", findFQDN(time.Second))\n\n\toptions := policy.OptionsType{}\n\tfor index, s := range event.Services {\n\t\tif s.Port != 0 && s.Ports == nil {\n\t\t\tif pspec, err := portspec.NewPortSpec(s.Port, s.Port, nil); err == nil {\n\t\t\t\tevent.Services[index].Ports = pspec\n\t\t\t\tevent.Services[index].Port = 0\n\t\t\t} else {\n\t\t\t\treturn nil, fmt.Errorf(\"Invalid Port Spec %s\", err)\n\t\t\t}\n\t\t}\n\t}\n\toptions.Services = event.Services\n\toptions.UserID, _ = runtimeTags.Get(\"@usr:originaluser\")\n\toptions.CgroupMark = strconv.FormatUint(cgnetcls.MarkVal(), 10)\n\toptions.AutoPort = event.AutoPort\n\n\truntimeIps := policy.ExtendedMap{\"bridge\": \"0.0.0.0/0\"}\n\n\treturn policy.NewPURuntime(event.Name, int(event.PID), \"\", runtimeTags, runtimeIps, event.PUType, policy.None, &options), nil\n}\n\n// ProcessInfo returns all metadata captured by a process\nfunc ProcessInfo(pid int32) []string {\n\tuserdata := []string{}\n\n\tp, err := process.NewProcess(pid)\n\tif err != nil {\n\t\treturn userdata\n\t}\n\n\tuids, err := p.Uids()\n\tif err != nil {\n\t\treturn userdata\n\t}\n\n\tgroups, err := p.Gids()\n\tif err != nil {\n\t\treturn userdata\n\t}\n\n\tusername, err := p.Username()\n\tif err != nil {\n\t\treturn userdata\n\t}\n\n\tfor _, uid := range uids {\n\t\tuserdata = append(userdata, \"uid:\"+strconv.Itoa(int(uid)))\n\t}\n\n\tfor _, gid := range groups {\n\t\tuserdata = append(userdata, \"gid:\"+strconv.Itoa(int(gid)))\n\t}\n\n\tuserdata = append(userdata, \"username:\"+username)\n\n\tuserid, err := user.Lookup(username)\n\tif err != nil {\n\t\treturn userdata\n\t}\n\n\tgids, err := userid.GroupIds()\n\tif err != nil {\n\t\treturn userdata\n\t}\n\n\tfor i := 0; i < len(gids); i++ {\n\t\tuserdata = append(userdata, \"gids:\"+gids[i])\n\t\tgroup, err := 
user.LookupGroupId(gids[i])\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tuserdata = append(userdata, \"groups:\"+group.Name)\n\t}\n\n\treturn userdata\n}\n\n// Libs returns the list of dynamic library dependencies of an executable\nfunc Libs(binpath string) []string {\n\tf, err := elf.Open(binpath)\n\tif err != nil {\n\t\treturn []string{}\n\t}\n\n\tlibraries, _ := f.ImportedLibraries()\n\treturn libraries\n}\n"
  },
  {
    "path": "monitor/extractors/linux_test.go",
    "content": "// +build linux windows\n\npackage extractors\n\nimport (\n\t\"encoding/hex\"\n\t\"net\"\n\t\"reflect\"\n\t\"testing\"\n\t\"time\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\tportspec \"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\nfunc TestComputeFileMd5(t *testing.T) {\n\n\tConvey(\"When I calculate the MD5 of a bad file\", t, func() {\n\t\t_, err := ComputeFileMd5(\"testdata/nofile\")\n\t\tConvey(\"I should get an error\", func() {\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I calculate the MD5 of a good file\", t, func() {\n\t\thash, err := ComputeFileMd5(\"testdata/curl\")\n\t\tConvey(\"I should get no error and the right value\", func() {\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(hex.EncodeToString(hash), ShouldResemble, \"bf7e66d7bbd0465cfcba5b1cf68a9b59\")\n\t\t})\n\t})\n}\n\nfunc TestFindFQDN(t *testing.T) {\n\n\tConvey(\"When I try to get the hostname of a good host\", t, func() {\n\t\thostname := findFQDN(1000 * time.Second)\n\n\t\tConvey(\"I should be able to resolve this hostname\", func() {\n\t\t\taddr, err := net.LookupHost(hostname)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(addr), ShouldBeGreaterThan, 0)\n\t\t})\n\t})\n}\n\nfunc TestLibs(t *testing.T) {\n\n\tConvey(\"When I try to get the libraries of a known binary\", t, func() {\n\t\tlibraries := Libs(\"./testdata/curl\")\n\t\tConvey(\"I should get the expected libraries\", func() {\n\t\t\tSo(len(libraries), ShouldEqual, 4)\n\t\t\tSo(libraries, ShouldContain, \"libcurl-gnutls.so.4\")\n\t\t\tSo(libraries, ShouldContain, \"libz.so.1\")\n\t\t\tSo(libraries, ShouldContain, \"libpthread.so.0\")\n\t\t\tSo(libraries, ShouldContain, \"libc.so.6\")\n\t\t})\n\t})\n\n\tConvey(\"When I try to get the libraries of a bad binary\", t, func() {\n\n\t\tlibraries := 
Libs(\"./testdata/nofile\")\n\t\tConvey(\"I should get an empty array\", func() {\n\t\t\tSo(len(libraries), ShouldEqual, 0)\n\t\t})\n\t})\n}\n\nfunc TestSystemdEventMetadataExtractor(t *testing.T) {\n\n\tConvey(\"When I call the metadata extractor\", t, func() {\n\n\t\tConvey(\"If all data are present\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:       \"./testdata/curl\",\n\t\t\t\tExecutable: \"./testdata/curl\",\n\t\t\t\tPID:        1234,\n\t\t\t\tPUID:       \"/1234\",\n\t\t\t\tTags:       []string{\"app=web\"},\n\t\t\t}\n\n\t\t\tpu, err := SystemdEventMetadataExtractor(event)\n\t\t\tConvey(\"I should get no error and a valid PU runtime\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(pu, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestDefaultHostMetadataExtractor(t *testing.T) {\n\n\tConvey(\"When I call the host metadata extractor\", t, func() {\n\n\t\tConvey(\"If it's valid data\", func() {\n\n\t\t\ts, _ := portspec.NewPortSpecFromString(\"1000\", nil) // nolint\n\t\t\tservices := []common.Service{\n\t\t\t\t{\n\t\t\t\t\tProtocol: uint8(6),\n\t\t\t\t\tPorts:    s,\n\t\t\t\t},\n\t\t\t}\n\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:     \"Web\",\n\t\t\t\tPID:      1234,\n\t\t\t\tPUID:     \"Web\",\n\t\t\t\tTags:     []string{\"app=web\"},\n\t\t\t\tServices: services,\n\t\t\t}\n\n\t\t\tpu, err := DefaultHostMetadataExtractor(event)\n\t\t\tConvey(\"I should get no error and a valid PU runtime\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(pu, ShouldNotBeNil)\n\t\t\t\tSo(pu.Options().CgroupName, ShouldResemble, \"Web\")\n\t\t\t\tSo(pu.Options().Services, ShouldResemble, services)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"If I get invalid tags\", func() {\n\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName: \"Web\",\n\t\t\t\tPID:  1234,\n\t\t\t\tPUID: \"Web\",\n\t\t\t\tTags: []string{\"invalid\"},\n\t\t\t}\n\n\t\t\t_, err := DefaultHostMetadataExtractor(event)\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\tSo(err, 
ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"If I get an invalid PID\", func() {\n\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName: \"Web\",\n\t\t\t\tPID:  -1233,\n\t\t\t\tPUID: \"Web\",\n\t\t\t\tTags: []string{\"invalid\"},\n\t\t\t}\n\n\t\t\t_, err := DefaultHostMetadataExtractor(event)\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc Test_policyExtensions(t *testing.T) {\n\ttype args struct {\n\t\truntime policy.RuntimeReader\n\t}\n\n\tpur1 := policy.NewPURuntime(\"\", 0, \"\", nil, nil, common.LinuxProcessPU, policy.None, nil)\n\tem1 := policy.ExtendedMap{\n\t\t\"Key\": \"Value\",\n\t}\n\toptions := pur1.Options()\n\toptions.PolicyExtensions = em1\n\tpur1.SetOptions(options)\n\n\t// 2nd Runtime\n\tpur2 := policy.NewPURuntime(\"\", 0, \"\", nil, nil, common.LinuxProcessPU, policy.None, nil)\n\toptions = pur2.Options()\n\tpur2.SetOptions(options)\n\n\t// 3rd runtime\n\tpur3 := policy.NewPURuntime(\"\", 0, \"\", nil, nil, common.LinuxProcessPU, policy.None, nil)\n\toptions = pur3.Options()\n\toptions.PolicyExtensions = nil\n\tpur3.SetOptions(options)\n\n\ttests := []struct {\n\t\tname           string\n\t\targs           args\n\t\twantExtensions policy.ExtendedMap\n\t}{\n\t\t// TODO: Add test cases.\n\t\t{\n\t\t\tname: \"Test if runtime is nil\",\n\t\t\targs: args{\n\t\t\t\truntime: nil,\n\t\t\t},\n\t\t\twantExtensions: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"Test if policy extensions are nil\",\n\t\t\targs: args{\n\t\t\t\truntime: pur3,\n\t\t\t},\n\t\t\twantExtensions: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"Test if policy extensions are defined\",\n\t\t\targs: args{\n\t\t\t\truntime: pur1,\n\t\t\t},\n\t\t\twantExtensions: em1,\n\t\t},\n\t\t{\n\t\t\tname: \"Test if policy extensions are not defined\",\n\t\t\targs: args{\n\t\t\t\truntime: pur2,\n\t\t\t},\n\t\t\twantExtensions: nil,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif gotExtensions := 
policyExtensions(tt.args.runtime); !reflect.DeepEqual(gotExtensions, tt.wantExtensions) {\n\t\t\t\tt.Errorf(\"policyExtensions() = %v, want %v\", gotExtensions, tt.wantExtensions)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestIsHostmodePU(t *testing.T) {\n\ttype args struct {\n\t\truntime policy.RuntimeReader\n\t\tmode    constants.ModeType\n\t}\n\n\tpur1 := policy.NewPURuntime(\"\", 0, \"\", nil, nil, common.HostPU, policy.None, nil)\n\n\t// 2nd Runtime\n\tpur2 := policy.NewPURuntime(\"\", 0, \"\", nil, nil, common.HostNetworkPU, policy.None, nil)\n\n\t// 3rd runtime\n\tpur3 := policy.NewPURuntime(\"\", 0, \"\", nil, nil, common.LinuxProcessPU, policy.None, nil)\n\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t\twant bool\n\t}{\n\t\t{\n\t\t\tname: \"Test if pu type is Container\",\n\t\t\targs: args{\n\t\t\t\truntime: nil,\n\t\t\t\tmode:    constants.RemoteContainer,\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Test if PU type is hostpu\",\n\t\t\targs: args{\n\t\t\t\truntime: pur1,\n\t\t\t\tmode:    constants.LocalServer,\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Test if pu type is hostnetworkmode pu\",\n\t\t\targs: args{\n\t\t\t\truntime: pur2,\n\t\t\t\tmode:    constants.LocalServer,\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Test if pu type is linux pu\",\n\t\t\targs: args{\n\t\t\t\truntime: pur3,\n\t\t\t\tmode:    constants.LocalServer,\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Test invalid runtime\",\n\t\t\targs: args{\n\t\t\t\truntime: nil,\n\t\t\t\tmode:    constants.LocalServer,\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := IsHostmodePU(tt.args.runtime, tt.args.mode); got != tt.want {\n\t\t\t\tt.Errorf(\"IsHostmodePU() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestIsHostPU(t *testing.T) {\n\ttype args struct {\n\t\truntime policy.RuntimeReader\n\t\tmode    
constants.ModeType\n\t}\n\n\tpur1 := policy.NewPURuntime(\"\", 0, \"\", nil, nil, common.HostPU, policy.None, nil)\n\n\t// 2nd Runtime\n\tpur2 := policy.NewPURuntime(\"\", 0, \"\", nil, nil, common.HostNetworkPU, policy.None, nil)\n\n\t// 3rd runtime\n\tpur3 := policy.NewPURuntime(\"\", 0, \"\", nil, nil, common.LinuxProcessPU, policy.None, nil)\n\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t\twant bool\n\t}{{\n\t\tname: \"Test if pu type is Container\",\n\t\targs: args{\n\t\t\truntime: nil,\n\t\t\tmode:    constants.RemoteContainer,\n\t\t},\n\t\twant: false,\n\t},\n\t\t{\n\t\t\tname: \"Test if PU type is hostpu\",\n\t\t\targs: args{\n\t\t\t\truntime: pur1,\n\t\t\t\tmode:    constants.LocalServer,\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Test if pu type is hostnetworkmode pu\",\n\t\t\targs: args{\n\t\t\t\truntime: pur2,\n\t\t\t\tmode:    constants.LocalServer,\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Test if pu type is linux pu\",\n\t\t\targs: args{\n\t\t\t\truntime: pur3,\n\t\t\t\tmode:    constants.LocalServer,\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Test invalid runtime\",\n\t\t\targs: args{\n\t\t\t\truntime: nil,\n\t\t\t\tmode:    constants.LocalServer,\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := IsHostPU(tt.args.runtime, tt.args.mode); got != tt.want {\n\t\t\t\tt.Errorf(\"IsHostPU() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/extractors/ssh.go",
    "content": "package extractors\n\nimport (\n\t\"fmt\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.aporeto.io/trireme-lib/utils/cgnetcls\"\n)\n\n// SSHMetadataExtractor is a metadata extractor for ssh.\nfunc SSHMetadataExtractor(event *common.EventInfo) (*policy.PURuntime, error) {\n\n\truntimeTags := policy.NewTagStore()\n\n\tfor _, tag := range event.Tags {\n\t\tparts := strings.SplitN(tag, \"=\", 2)\n\t\tif len(parts) != 2 {\n\t\t\treturn nil, fmt.Errorf(\"invalid tag: %s\", tag)\n\t\t}\n\n\t\t// This means we send something that is for internal purposes only\n\t\t// We add it as it is.\n\t\tif strings.HasPrefix(tag, \"$\") {\n\t\t\truntimeTags.AppendKeyValue(parts[0], parts[1])\n\t\t\tcontinue\n\t\t}\n\n\t\truntimeTags.AppendKeyValue(\"@user:ssh:\"+parts[0], parts[1])\n\t}\n\n\toptions := &policy.OptionsType{\n\t\tCgroupName: event.PUID,\n\t\tCgroupMark: strconv.FormatUint(cgnetcls.MarkVal(), 10),\n\t}\n\n\truntimeIps := policy.ExtendedMap{\"bridge\": \"0.0.0.0/0\"}\n\n\treturn policy.NewPURuntime(event.Name, int(event.PID), \"\", runtimeTags, runtimeIps, event.PUType, options), nil\n}\n"
  },
  {
    "path": "monitor/extractors/ssh_test.go",
    "content": "// +build !windows\n\npackage extractors\n\nimport (\n\t\"strconv\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n)\n\nfunc testRuntime() *policy.PURuntime {\n\n\ttags := policy.NewTagStore()\n\ttags.AppendKeyValue(\"@user:ssh:app\", \"web\")\n\ttags.AppendKeyValue(\"$cert\", \"ss\")\n\n\truntimeIps := policy.ExtendedMap{\"bridge\": \"0.0.0.0/0\"}\n\toptions := &policy.OptionsType{\n\t\tCgroupName: \"/1234\",\n\t\tCgroupMark: strconv.FormatUint(104, 10),\n\t}\n\n\treturn policy.NewPURuntime(\"curl\", 1234, \"\", tags, runtimeIps, common.SSHSessionPU, options)\n}\n\nfunc TestSSHMetadataExtractor(t *testing.T) {\n\n\tConvey(\"When I call the ssh metadata extractor\", t, func() {\n\n\t\tConvey(\"If all data are present\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:   \"curl\",\n\t\t\t\tPID:    1234,\n\t\t\t\tPUID:   \"/1234\",\n\t\t\t\tPUType: common.SSHSessionPU,\n\t\t\t\tTags:   []string{\"app=web\", \"$cert=ss\"},\n\t\t\t}\n\n\t\t\tpu, err := SSHMetadataExtractor(event)\n\t\t\tConvey(\"I should get no error and a valid PU runtime\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(pu, ShouldResemble, testRuntime())\n\t\t\t})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "monitor/extractors/uid.go",
    "content": "package extractors\n\nimport (\n\t\"fmt\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.aporeto.io/trireme-lib/utils/cgnetcls\"\n)\n\n// UIDMetadataExtractor is a metadata extractor for uid/gid.\nfunc UIDMetadataExtractor(event *common.EventInfo) (*policy.PURuntime, error) {\n\n\truntimeTags := policy.NewTagStore()\n\n\tfor _, tag := range event.Tags {\n\t\tparts := strings.SplitN(tag, \"=\", 2)\n\t\tif len(parts) != 2 {\n\t\t\treturn nil, fmt.Errorf(\"invalid tag: %s\", tag)\n\t\t}\n\t\t// TODO: Remove OLDTAGS\n\t\truntimeTags.AppendKeyValue(\"@sys:\"+parts[0], parts[1])\n\t\truntimeTags.AppendKeyValue(\"@app:linux:\"+parts[0], parts[1])\n\t}\n\n\tif event.Name == \"\" {\n\t\tevent.Name = event.PUID\n\t}\n\n\t// TODO: improve with additional information here.\n\toptions := &policy.OptionsType{\n\t\tCgroupName: event.PUID,\n\t\tCgroupMark: strconv.FormatUint(cgnetcls.MarkVal(), 10),\n\t\tUserID:     event.PUID,\n\t\tServices:   event.Services,\n\t}\n\n\truntimeIps := policy.ExtendedMap{\"bridge\": \"0.0.0.0/0\"}\n\n\treturn policy.NewPURuntime(event.Name, int(event.PID), \"\", runtimeTags, runtimeIps, common.UIDLoginPU, options), nil\n}\n"
  },
  {
    "path": "monitor/extractors/uid_test.go",
    "content": "// +build !windows\n\npackage extractors\n\nimport (\n\t\"reflect\"\n\t\"strconv\"\n\t\"testing\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n)\n\nfunc createDummyPolicy(event *common.EventInfo) *policy.PURuntime {\n\truntimeTags := policy.NewTagStore()\n\truntimeTags.AppendKeyValue(\"@sys:test\", \"valid\")\n\truntimeTags.AppendKeyValue(\"@app:linux:test\", \"valid\")\n\truntimeIps := policy.ExtendedMap{\"bridge\": \"0.0.0.0/0\"}\n\toptions := &policy.OptionsType{\n\t\tCgroupName: event.PUID,\n\t\tCgroupMark: strconv.Itoa(105),\n\t\tUserID:     event.PUID,\n\t\tServices:   nil,\n\t}\n\treturn policy.NewPURuntime(event.Name, int(event.PID), \"\", runtimeTags, runtimeIps, common.UIDLoginPU, options)\n}\nfunc TestUIDMetadataExtractor(t *testing.T) {\n\tvar marshaledgot, marshalledwant []byte\n\ttype args struct {\n\t\tevent *common.EventInfo\n\t}\n\te := &common.EventInfo{\n\t\tPID:      100,\n\t\tName:     \"TestPU\",\n\t\tTags:     []string{\"test=valid\"},\n\t\tPUID:     \"TestPU\",\n\t\tServices: nil,\n\t\tPUType:   common.LinuxProcessPU,\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twant    *policy.PURuntime\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"Invalid Tags\",\n\t\t\targs: args{\n\t\t\t\tevent: &common.EventInfo{\n\t\t\t\t\tTags: []string{\"InvalidTagFormat\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Valid Tags\",\n\t\t\targs: args{\n\t\t\t\tevent: e,\n\t\t\t},\n\t\t\twant:    createDummyPolicy(e),\n\t\t\twantErr: false,\n\t\t},\n\n\t\t// TODO: Add test cases.\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot, err := UIDMetadataExtractor(tt.args.event)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"UIDMetadataExtractor() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif got != nil && tt.want != nil {\n\t\t\t\tmarshaledgot, _ = 
got.MarshalJSON()\n\t\t\t\tmarshalledwant, _ = tt.want.MarshalJSON()\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(marshaledgot, marshalledwant) {\n\t\t\t\tt.Errorf(\"\\nUIDMetadataExtractor()\\ngot  = %s\\n, want %s\\n\", string(marshaledgot), string(marshalledwant))\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/extractors/util.go",
    "content": "package extractors\n\nimport (\n\t\"crypto/md5\"\n\t\"io\"\n\t\"os\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/fqdn\"\n)\n\n// ComputeFileMd5 computes the MD5 of a file\nfunc ComputeFileMd5(filePath string) ([]byte, error) {\n\n\tvar result []byte\n\tfile, err := os.Open(filePath)\n\tif err != nil {\n\t\treturn result, err\n\t}\n\tdefer file.Close() // nolint: errcheck\n\n\thash := md5.New()\n\tif _, err := io.Copy(hash, file); err != nil {\n\t\treturn result, err\n\t}\n\n\treturn hash.Sum(result), nil\n}\n\nfunc findFQDN(expiration time.Duration) string {\n\n\thostname, err := os.Hostname()\n\tif err != nil {\n\t\treturn \"unknown\"\n\t}\n\n\t// Try to find FQDN\n\tglobalHostname := make(chan string, 1)\n\tgo func() {\n\t\tglobalHostname <- fqdn.Find()\n\n\t}()\n\n\t// Use OS hostname if we don't hear back within the expiration period\n\tselect {\n\tcase <-time.After(expiration):\n\t\treturn hostname\n\tcase name := <-globalHostname:\n\t\treturn name\n\t}\n}\n\n// policyExtensions retrieves the policy extensions from the runtime.\nfunc policyExtensions(runtime policy.RuntimeReader) (extensions policy.ExtendedMap) {\n\n\tif runtime == nil {\n\t\treturn nil\n\t}\n\n\tif runtime.Options().PolicyExtensions == nil {\n\t\treturn nil\n\t}\n\n\tif extensions, ok := runtime.Options().PolicyExtensions.(policy.ExtendedMap); ok {\n\t\treturn extensions\n\t}\n\treturn nil\n}\n\n// IsHostmodePU returns true if the runtime PU type is a hostmode PU (host or host-network)\nfunc IsHostmodePU(runtime policy.RuntimeReader, mode constants.ModeType) bool {\n\n\tif runtime == nil {\n\t\treturn false\n\t}\n\n\tif mode != constants.LocalServer {\n\t\treturn false\n\t}\n\n\treturn runtime.PUType() == common.HostPU || runtime.PUType() == common.HostNetworkPU\n}\n\n// IsHostPU returns true if the runtime PU type is a host PU\nfunc IsHostPU(runtime policy.RuntimeReader, mode constants.ModeType) bool {\n\n\tif runtime == nil {\n\t\treturn false\n\t}\n\n\tif mode != constants.LocalServer {\n\t\treturn false\n\t}\n\n\treturn runtime.PUType() == common.HostPU\n}\n"
  },
  {
    "path": "monitor/extractors/windows.go",
    "content": "// +build windows\n\npackage extractors\n\nimport (\n\t\"encoding/hex\"\n\t\"fmt\"\n\t\"os/user\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/shirou/gopsutil/process\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cgnetcls\"\n\tportspec \"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\n// WindowsMetadataExtractorType is a type of Windows metadata extractors\ntype WindowsMetadataExtractorType func(event *common.EventInfo) (*policy.PURuntime, error)\n\n// WindowsServiceEventMetadataExtractor is a windows service based metadata extractor\nfunc WindowsServiceEventMetadataExtractor(event *common.EventInfo) (*policy.PURuntime, error) {\n\n\truntimeTags := policy.NewTagStore()\n\n\tfor _, tag := range event.Tags {\n\t\tparts := strings.SplitN(tag, \"=\", 2)\n\t\tif len(parts) != 2 {\n\t\t\treturn nil, fmt.Errorf(\"invalid tag: %s\", tag)\n\t\t}\n\t\tkey, value := parts[0], parts[1]\n\n\t\tif strings.HasPrefix(key, \"@app:windows:\") {\n\t\t\truntimeTags.AppendKeyValue(key, value)\n\t\t\tcontinue\n\t\t}\n\n\t\truntimeTags.AppendKeyValue(\"@usr:\"+key, value)\n\t}\n\n\tuserdata := WinProcessInfo(event.PID)\n\n\tfor _, u := range userdata {\n\t\truntimeTags.AppendKeyValue(\"@app:windows:\"+u, \"true\")\n\t}\n\n\truntimeTags.AppendKeyValue(\"@os:hostname\", findFQDN(time.Second))\n\n\tif fileMd5, err := ComputeFileMd5(event.Executable); err == nil {\n\t\truntimeTags.AppendKeyValue(\"@app:windows:filechecksum\", hex.EncodeToString(fileMd5))\n\t}\n\n\tdepends := getDllImports(event.Name)\n\tfor _, lib := range depends {\n\t\truntimeTags.AppendKeyValue(\"@app:windows:lib:\"+lib, \"true\")\n\t}\n\n\toptions := policy.OptionsType{}\n\tfor index, s := range event.Services {\n\t\tif s.Port != 0 && s.Ports == nil {\n\t\t\tpspec, err := portspec.NewPortSpec(s.Port, s.Port, nil)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"Invalid 
Port Spec %s\", err)\n\t\t\t}\n\t\t\tevent.Services[index].Ports = pspec\n\t\t\tevent.Services[index].Port = 0\n\t\t}\n\t}\n\toptions.Services = event.Services\n\toptions.UserID, _ = runtimeTags.Get(\"@usr:originaluser\")\n\toptions.CgroupMark = strconv.FormatUint(cgnetcls.MarkVal(), 10)\n\toptions.AutoPort = event.AutoPort\n\n\truntimeIps := policy.ExtendedMap{\"bridge\": \"0.0.0.0/0\"}\n\n\treturn policy.NewPURuntime(event.Name, int(event.PID), \"\", runtimeTags, runtimeIps, event.PUType, policy.None, &options), nil\n}\n\n// WinProcessInfo returns all metadata captured by a Windows process\nfunc WinProcessInfo(pid int32) []string {\n\tuserdata := []string{}\n\n\tp, err := process.NewProcess(pid)\n\tif err != nil {\n\t\treturn userdata\n\t}\n\n\t// TODO(windows): do equivalent of uids and gids (using GetNamedSecurityInfo and LookupAccountSid, eg)\n\tuids, err := p.Uids()\n\tif err != nil {\n\t\treturn userdata\n\t}\n\n\tgroups, err := p.Gids()\n\tif err != nil {\n\t\treturn userdata\n\t}\n\n\tusername, err := p.Username()\n\tif err != nil {\n\t\treturn userdata\n\t}\n\n\tfor _, uid := range uids {\n\t\tuserdata = append(userdata, \"uid:\"+strconv.Itoa(int(uid)))\n\t}\n\n\tfor _, gid := range groups {\n\t\tuserdata = append(userdata, \"gid:\"+strconv.Itoa(int(gid)))\n\t}\n\n\tuserdata = append(userdata, \"username:\"+username)\n\n\tuserid, err := user.Lookup(username)\n\tif err != nil {\n\t\treturn userdata\n\t}\n\n\tgids, err := userid.GroupIds()\n\tif err != nil {\n\t\treturn userdata\n\t}\n\n\tfor i := 0; i < len(gids); i++ {\n\t\tuserdata = append(userdata, \"gids:\"+gids[i])\n\t\tgroup, err := user.LookupGroupId(gids[i])\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tuserdata = append(userdata, \"groups:\"+group.Name)\n\t}\n\n\treturn userdata\n}\n\n// getDllImports returns the list of dynamic library dependencies of an executable\n// TODO(windows): debug/pe File.ImportedLibraries is not implemented currently\nfunc getDllImports(binpath string) []string 
{\n\treturn []string{}\n}\n"
  },
  {
    "path": "monitor/extractors/windows_test.go",
    "content": "// +build windows\n\npackage extractors\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\nfunc TestWindowsServiceEventMetadataExtractor(t *testing.T) {\n\n\tConvey(\"When I call the windows metadata extractor\", t, func() {\n\n\t\tConvey(\"If all data are present\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:       \"./testdata/curl\",\n\t\t\t\tExecutable: \"./testdata/curl\",\n\t\t\t\tPID:        1234,\n\t\t\t\tPUID:       \"/1234\",\n\t\t\t\tTags:       []string{\"app=web\"},\n\t\t\t}\n\n\t\t\tpu, err := WindowsServiceEventMetadataExtractor(event)\n\t\t\tConvey(\"I should get no error and a valid PU runtime\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(pu, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "monitor/interfaces.go",
    "content": "package monitor\n\nimport (\n\t\"context\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/registerer\"\n)\n\n// A Monitor is an interface implemented to start/stop monitors.\ntype Monitor interface {\n\n\t// Run starts the monitor.\n\tRun(ctx context.Context) error\n\n\t// UpdateConfiguration updates the configuration of the monitor\n\tUpdateConfiguration(ctx context.Context, config *config.MonitorConfig) error\n\n\t// Resync requests the monitor to do a resync.\n\tResync(ctx context.Context) error\n}\n\n// Implementation is the interface for a monitor implementation.\ntype Implementation interface {\n\n\t// Run starts the monitor implementation.\n\tRun(ctx context.Context) error\n\n\t// SetupConfig provides a configuration to implementations. Every implementation\n\t// can have its own config type.\n\tSetupConfig(registerer registerer.Registerer, cfg interface{}) error\n\n\t// SetupHandlers sets up handlers for monitors to invoke for various events such as\n\t// processing unit events and synchronization events. This will be called before Start()\n\t// by the consumer of the monitor\n\tSetupHandlers(c *config.ProcessorConfig)\n\n\t// Resync should resynchronize PUs. This should be done while starting up.\n\tResync(ctx context.Context) error\n}\n"
  },
  {
    "path": "monitor/internal/cni/extractor.go",
    "content": "package cnimonitor\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"strings\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n)\n\n// KubernetesMetadataExtractor is a Kubernetes-based metadata extractor\nfunc KubernetesMetadataExtractor(event *common.EventInfo) (*policy.PURuntime, error) {\n\n\tif event.NS == \"\" {\n\t\treturn nil, errors.New(\"namespace path is required when using cni\")\n\t}\n\n\truntimeTags := policy.NewTagStore()\n\tfor _, tag := range event.Tags {\n\t\tparts := strings.Split(tag, \"=\")\n\t\tif len(parts) != 2 {\n\t\t\treturn nil, fmt.Errorf(\"invalid tag: %s\", tag)\n\t\t}\n\t\truntimeTags.AppendKeyValue(\"@usr:\"+parts[0], parts[1])\n\t}\n\n\truntimeIps := policy.ExtendedMap{\"bridge\": \"0.0.0.0/0\"}\n\n\treturn policy.NewPURuntime(event.Name, 1, \"\", runtimeTags, runtimeIps, common.LinuxProcessPU, nil), nil\n}\n\n// DockerMetadataExtractor is a Docker-based metadata extractor\nfunc DockerMetadataExtractor(event *common.EventInfo) (*policy.PURuntime, error) {\n\n\tif event.NS == \"\" {\n\t\treturn nil, errors.New(\"namespace path is required when using cni\")\n\t}\n\n\truntimeTags := policy.NewTagStore()\n\tfor _, tag := range event.Tags {\n\t\tparts := strings.Split(tag, \"=\")\n\t\tif len(parts) != 2 {\n\t\t\treturn nil, fmt.Errorf(\"invalid tag: %s\", tag)\n\t\t}\n\t\truntimeTags.AppendKeyValue(\"@usr:\"+parts[0], parts[1])\n\t}\n\n\truntimeIps := policy.ExtendedMap{\"bridge\": \"0.0.0.0/0\"}\n\n\treturn policy.NewPURuntime(event.Name, 0, event.NS, runtimeTags, runtimeIps, common.ContainerPU, nil), nil\n}\n"
  },
  {
    "path": "monitor/internal/cni/monitor.go",
    "content": "package cnimonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/trireme-lib/monitor/registerer\"\n)\n\n// Config is the configuration options to start a CNI monitor\ntype Config struct {\n\tEventMetadataExtractor extractors.EventMetadataExtractor\n}\n\n// DefaultConfig provides a default configuration\nfunc DefaultConfig() *Config {\n\treturn &Config{\n\t\tEventMetadataExtractor: DockerMetadataExtractor,\n\t}\n}\n\n// SetupDefaultConfig adds defaults to a partial configuration\nfunc SetupDefaultConfig(cniConfig *Config) *Config {\n\n\tdefaultConfig := DefaultConfig()\n\n\tif cniConfig.EventMetadataExtractor == nil {\n\t\tcniConfig.EventMetadataExtractor = defaultConfig.EventMetadataExtractor\n\t}\n\n\treturn cniConfig\n}\n\n// CniMonitor captures all the monitor processor information\n// It implements the EventProcessor interface of the rpc monitor\ntype CniMonitor struct {\n\tproc *cniProcessor\n}\n\n// New returns a new CNI monitor implementation\nfunc New() *CniMonitor {\n\n\treturn &CniMonitor{\n\t\tproc: &cniProcessor{},\n\t}\n}\n\n// Run implements Implementation interface\nfunc (c *CniMonitor) Run(ctx context.Context) error {\n\n\treturn c.proc.config.IsComplete()\n}\n\n// SetupConfig provides a configuration to implementations. Every implementation\n// can have its own config type.\nfunc (c *CniMonitor) SetupConfig(registerer registerer.Registerer, cfg interface{}) error {\n\n\tdefaultConfig := DefaultConfig()\n\tif cfg == nil {\n\t\tcfg = defaultConfig\n\t}\n\n\tcniConfig, ok := cfg.(*Config)\n\tif !ok {\n\t\treturn fmt.Errorf(\"Invalid configuration specified\")\n\t}\n\n\tif registerer != nil {\n\t\tif err := registerer.RegisterProcessor(common.KubernetesPU, c.proc); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// Setup defaults\n\tcniConfig = SetupDefaultConfig(cniConfig)\n\n\t// Setup configuration\n\tc.proc.metadataExtractor = cniConfig.EventMetadataExtractor\n\tif c.proc.metadataExtractor == nil {\n\t\treturn fmt.Errorf(\"Unable to setup a metadata extractor\")\n\t}\n\n\treturn nil\n}\n\n// SetupHandlers sets up handlers for monitors to invoke for various events such as\n// processing unit events and synchronization events. This will be called before Start()\n// by the consumer of the monitor\nfunc (c *CniMonitor) SetupHandlers(m *config.ProcessorConfig) {\n\n\tc.proc.config = m\n}\n\n// Resync instructs the monitor to do a resync.\nfunc (c *CniMonitor) Resync(ctx context.Context) error {\n\n\t// TODO: Implement resync\n\treturn fmt.Errorf(\"resync not implemented\")\n}\n"
  },
  {
    "path": "monitor/internal/cni/processor.go",
    "content": "package cnimonitor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\n\t\"go.aporeto.io/trireme-lib/collector\"\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n)\n\ntype cniProcessor struct {\n\tconfig            *config.ProcessorConfig\n\tmetadataExtractor extractors.EventMetadataExtractor\n}\n\n// Create handles create events\nfunc (c *cniProcessor) Create(ctx context.Context, eventInfo *common.EventInfo) error {\n\treturn nil\n}\n\n// Start handles start events\nfunc (c *cniProcessor) Start(ctx context.Context, eventInfo *common.EventInfo) error {\n\tcontextID, err := generateContextID(eventInfo)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\truntimeInfo, err := c.metadataExtractor(eventInfo)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif err := c.config.Policy.HandlePUEvent(ctx, contextID, common.EventCreate, runtimeInfo); err != nil {\n\t\treturn err\n\t}\n\n\tif err := c.config.Policy.HandlePUEvent(ctx, contextID, common.EventStart, runtimeInfo); err != nil {\n\t\treturn err\n\t}\n\n\tc.config.Collector.CollectContainerEvent(&collector.ContainerRecord{\n\t\tContextID: contextID,\n\t\tIPAddress: runtimeInfo.IPAddresses(),\n\t\tTags:      runtimeInfo.Tags(),\n\t\tEvent:     collector.ContainerStart,\n\t})\n\n\treturn nil\n}\n\n// Stop handles a stop event\nfunc (c *cniProcessor) Stop(ctx context.Context, eventInfo *common.EventInfo) error {\n\n\tcontextID, err := generateContextID(eventInfo)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to generate context id: %s\", err)\n\t}\n\n\truntime := policy.NewPURuntimeWithDefaults()\n\n\tif err := c.config.Policy.HandlePUEvent(ctx, contextID, common.EventStop, runtime); err != nil {\n\t\treturn err\n\t}\n\n\treturn c.config.Policy.HandlePUEvent(ctx, contextID, common.EventDestroy, runtime)\n}\n\n// Destroy handles a destroy event\nfunc (c *cniProcessor) Destroy(ctx 
context.Context, eventInfo *common.EventInfo) error {\n\treturn nil\n}\n\n// Pause handles a pause event\nfunc (c *cniProcessor) Pause(ctx context.Context, eventInfo *common.EventInfo) error {\n\treturn nil\n}\n\n// Resync resyncs with all the existing services that were there before we start\nfunc (c *cniProcessor) Resync(ctx context.Context, e *common.EventInfo) error {\n\treturn nil\n}\n\n// generateContextID creates the contextID from the event information\nfunc generateContextID(eventInfo *common.EventInfo) (string, error) {\n\n\tif eventInfo.PUID == \"\" {\n\t\treturn \"\", errors.New(\"puid is empty from event info\")\n\t}\n\n\tif len(eventInfo.PUID) < 12 {\n\t\treturn \"\", errors.New(\"puid smaller than 12 characters\")\n\t}\n\n\treturn eventInfo.PUID[:12], nil\n}\n"
  },
  {
    "path": "monitor/internal/docker/config.go",
    "content": "package dockermonitor\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n)\n\n// Config is the configuration options to start a CNI monitor\ntype Config struct {\n\tEventMetadataExtractor   extractors.DockerMetadataExtractor\n\tSocketType               string\n\tSocketAddress            string\n\tSyncAtStart              bool\n\tDestroyStoppedContainers bool\n\tignoreHostModeContainers bool\n}\n\n// DefaultConfig provides a default configuration\nfunc DefaultConfig() *Config {\n\treturn &Config{\n\t\tEventMetadataExtractor:   extractors.DefaultMetadataExtractor,\n\t\tSocketType:               string(constants.DefaultDockerSocketType),\n\t\tSocketAddress:            constants.DefaultDockerSocket,\n\t\tSyncAtStart:              true,\n\t\tignoreHostModeContainers: true,\n\t}\n}\n\n// SetupDefaultConfig adds defaults to a partial configuration\nfunc SetupDefaultConfig(dockerConfig *Config) *Config {\n\n\tdefaultConfig := DefaultConfig()\n\n\tif dockerConfig.EventMetadataExtractor == nil {\n\t\tdockerConfig.EventMetadataExtractor = defaultConfig.EventMetadataExtractor\n\t}\n\tif dockerConfig.SocketType == \"\" {\n\t\tdockerConfig.SocketType = defaultConfig.SocketType\n\t}\n\tif dockerConfig.SocketAddress == \"\" {\n\t\tdockerConfig.SocketAddress = defaultConfig.SocketAddress\n\t}\n\treturn dockerConfig\n}\n"
  },
  {
    "path": "monitor/internal/docker/helpers.go",
    "content": "package dockermonitor\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\n// getPausePUID returns puid of pause container.\nfunc getPausePUID(extensions policy.ExtendedMap) string {\n\n\tif extensions == nil {\n\t\treturn \"\"\n\t}\n\n\tif puid, ok := extensions.Get(constants.DockerHostPUID); ok {\n\t\tzap.L().Debug(\"puid of pause container is\", zap.String(\"puid\", puid))\n\t\treturn puid\n\t}\n\n\treturn \"\"\n}\n\n// PolicyExtensions retrieves policy extensions\nfunc policyExtensions(runtime policy.RuntimeReader) (extensions policy.ExtendedMap) {\n\n\tif runtime == nil {\n\t\treturn nil\n\t}\n\n\tif runtime.Options().PolicyExtensions == nil {\n\t\treturn nil\n\t}\n\n\tif extensions, ok := runtime.Options().PolicyExtensions.(policy.ExtendedMap); ok {\n\t\treturn extensions\n\t}\n\treturn nil\n}\n\n// IsHostNetworkContainer returns true if container has hostnetwork set\n// to true or is linked to container with hostnetwork set to true.\nfunc isHostNetworkContainer(runtime policy.RuntimeReader) bool {\n\n\treturn runtime.PUType() == common.LinuxProcessPU || (getPausePUID(policyExtensions(runtime)) != \"\")\n}\n\n// IsKubernetesContainer checks if the container is in K8s.\nfunc isKubernetesContainer(labels map[string]string) bool {\n\n\tif _, ok := labels[constants.K8sPodNamespace]; ok {\n\t\treturn true\n\t}\n\treturn false\n}\n\n// KubePodIdentifier returns identifier for K8s pod.\nfunc kubePodIdentifier(labels map[string]string) string {\n\n\tif !isKubernetesContainer(labels) {\n\t\treturn \"\"\n\t}\n\tpodName := \"\"\n\tpodNamespace := \"\"\n\n\tpodNamespace, ok := labels[constants.K8sPodNamespace]\n\tif !ok {\n\t\tpodNamespace = \"\"\n\t}\n\n\tpodName, ok = labels[constants.K8sPodName]\n\tif !ok {\n\t\tpodName = \"\"\n\t}\n\n\tif podName == \"\" || podNamespace == \"\" {\n\t\tzap.L().Warn(\"K8s 
pod does not have podname/podnamespace labels\")\n\t\treturn \"\"\n\t}\n\n\treturn podNamespace + \"/\" + podName\n}\n"
  },
  {
    "path": "monitor/internal/docker/helpers_test.go",
    "content": "// +build linux\n\npackage dockermonitor\n\nimport (\n\t\"reflect\"\n\t\"testing\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nfunc TestGetPausePUID(t *testing.T) {\n\ttype args struct {\n\t\textensions policy.ExtendedMap\n\t}\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t\twant string\n\t}{\n\t\t{\n\t\t\tname: \"Test when puid is populated\",\n\t\t\targs: args{\n\t\t\t\textensions: policy.ExtendedMap{\n\t\t\t\t\tconstants.DockerHostPUID: \"1234\",\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: \"1234\",\n\t\t},\n\t\t{\n\t\t\tname: \"Test when puid is not populated\",\n\t\t\targs: args{\n\t\t\t\textensions: policy.ExtendedMap{},\n\t\t\t},\n\t\t\twant: \"\",\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := getPausePUID(tt.args.extensions); got != tt.want {\n\t\t\t\tt.Errorf(\"GetPausePUID() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestPolicyExtensions(t *testing.T) {\n\n\tpuRuntimeWithExtensions := func() *policy.PURuntime {\n\t\textensions := policy.ExtendedMap{}\n\t\textensions[constants.DockerHostPUID] = \"1234\"\n\t\tpuRuntime := policy.NewPURuntimeWithDefaults()\n\t\toptions := puRuntime.Options()\n\t\toptions.PolicyExtensions = extensions\n\t\tpuRuntime.SetOptions(options)\n\t\treturn puRuntime\n\t}\n\n\tpuRuntimeWithWrongExtensions := func() *policy.PURuntime {\n\t\tpuRuntime := policy.NewPURuntimeWithDefaults()\n\t\textensions := \"abcd\"\n\t\toptions := puRuntime.Options()\n\t\toptions.PolicyExtensions = extensions\n\t\tpuRuntime.SetOptions(options)\n\t\treturn puRuntime\n\t}\n\n\ttype args struct {\n\t\truntime policy.RuntimeReader\n\t}\n\ttests := []struct {\n\t\tname           string\n\t\targs           args\n\t\twantExtensions policy.ExtendedMap\n\t}{\n\t\t{\n\t\t\tname: \"Valid case with extensions defined\",\n\t\t\targs: 
args{\n\t\t\t\truntime: puRuntimeWithExtensions(),\n\t\t\t},\n\t\t\twantExtensions: policy.ExtendedMap{\n\t\t\t\tconstants.DockerHostPUID: \"1234\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"Runtime with no extensions defined\",\n\t\t\targs: args{\n\t\t\t\truntime: policy.NewPURuntimeWithDefaults(),\n\t\t\t},\n\t\t\twantExtensions: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"Nil runntime\",\n\t\t\targs: args{\n\t\t\t\truntime: nil,\n\t\t\t},\n\t\t\twantExtensions: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"extensions which isnt a map\",\n\t\t\targs: args{\n\t\t\t\truntime: puRuntimeWithWrongExtensions(),\n\t\t\t},\n\t\t\twantExtensions: nil,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif gotExtensions := policyExtensions(tt.args.runtime); !reflect.DeepEqual(gotExtensions, tt.wantExtensions) {\n\t\t\t\tt.Errorf(\"PolicyExtensions() = %v, want %v\", gotExtensions, tt.wantExtensions)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestIsKubernetesContainer(t *testing.T) {\n\n\ttype args struct {\n\t\tlabels map[string]string\n\t}\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t\twant bool\n\t}{\n\t\t{\n\t\t\tname: \"Test if container is in kubernetes\",\n\t\t\targs: args{\n\t\t\t\tlabels: map[string]string{\n\t\t\t\t\tconstants.K8sPodNamespace: \"abcd\",\n\t\t\t\t\tconstants.K8sPodName:      \"abcd\",\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Test if container is not in kubernetes\",\n\t\t\targs: args{\n\t\t\t\tlabels: map[string]string{\n\t\t\t\t\t\"app\": \"nginx\",\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Test for empty labels\",\n\t\t\targs: args{\n\t\t\t\tlabels: nil,\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := isKubernetesContainer(tt.args.labels); got != tt.want {\n\t\t\t\tt.Errorf(\"IsKubernetesContainer() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc 
Test_isHostNetworkContainer(t *testing.T) {\n\n\tpuRuntimeWithExtensions := func() *policy.PURuntime {\n\t\textensions := policy.ExtendedMap{\n\t\t\tconstants.DockerHostPUID: \"1234\",\n\t\t}\n\t\tpuRuntime := policy.NewPURuntimeWithDefaults()\n\t\toptions := puRuntime.Options()\n\t\toptions.PolicyExtensions = extensions\n\t\tpuRuntime.SetOptions(options)\n\t\treturn puRuntime\n\t}\n\n\tpuRuntimeForProcess := func() *policy.PURuntime {\n\t\tpuRuntime := policy.NewPURuntimeWithDefaults()\n\t\tpuRuntime.SetPUType(common.LinuxProcessPU)\n\t\treturn puRuntime\n\t}\n\n\ttype args struct {\n\t\truntime policy.RuntimeReader\n\t}\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t\twant bool\n\t}{\n\t\t{\n\t\t\tname: \"Test when puid is populated\",\n\t\t\targs: args{\n\t\t\t\truntime: puRuntimeWithExtensions(),\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Test when puid is not populated\",\n\t\t\targs: args{\n\t\t\t\truntime: policy.NewPURuntimeWithDefaults(),\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Test when runtime is of Linux process pu\",\n\t\t\targs: args{\n\t\t\t\truntime: puRuntimeForProcess(),\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := isHostNetworkContainer(tt.args.runtime); got != tt.want {\n\t\t\t\tt.Errorf(\"isHostNetworkContainer() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_kubePodIdentifier(t *testing.T) {\n\ttype args struct {\n\t\tlabels map[string]string\n\t}\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t\twant string\n\t}{\n\t\t{\n\t\t\tname: \"Test valid scenario\",\n\t\t\targs: args{\n\t\t\t\tlabels: map[string]string{\n\t\t\t\t\tconstants.K8sPodNamespace: \"abcd\",\n\t\t\t\t\tconstants.K8sPodName:      \"abcd\",\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: \"abcd/abcd\",\n\t\t},\n\t\t{\n\t\t\tname: \"Test when only one of tags are present\",\n\t\t\targs: args{\n\t\t\t\tlabels: 
map[string]string{\n\t\t\t\t\tconstants.K8sPodNamespace: \"abcd\",\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: \"\",\n\t\t},\n\t\t{\n\t\t\tname: \"Test when only no k8s tags are present\",\n\t\t\targs: args{\n\t\t\t\tlabels: map[string]string{},\n\t\t\t},\n\t\t\twant: \"\",\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := kubePodIdentifier(tt.args.labels); got != tt.want {\n\t\t\t\tt.Errorf(\"kubePodIdentifier() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/docker/mockdocker/mockdocker.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: github.com/docker/docker/client/interface.go\n\n// Package mockdocker is a generated GoMock package.\npackage mockdocker\n\nimport (\n\tcontext \"context\"\n\tio \"io\"\n\tnet \"net\"\n\thttp \"net/http\"\n\treflect \"reflect\"\n\ttime \"time\"\n\n\ttypes \"github.com/docker/docker/api/types\"\n\tcontainerpkg \"github.com/docker/docker/api/types/container\"\n\tevents \"github.com/docker/docker/api/types/events\"\n\tfilters \"github.com/docker/docker/api/types/filters\"\n\timagepkg \"github.com/docker/docker/api/types/image\"\n\tnetwork \"github.com/docker/docker/api/types/network\"\n\tregistry \"github.com/docker/docker/api/types/registry\"\n\tswarm \"github.com/docker/docker/api/types/swarm\"\n\tvolume \"github.com/docker/docker/api/types/volume\"\n\tgomock \"github.com/golang/mock/gomock\"\n)\n\n// MockCommonAPIClient is a mock of CommonAPIClient interface\ntype MockCommonAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockCommonAPIClientMockRecorder\n}\n\n// MockCommonAPIClientMockRecorder is the mock recorder for MockCommonAPIClient\ntype MockCommonAPIClientMockRecorder struct {\n\tmock *MockCommonAPIClient\n}\n\n// NewMockCommonAPIClient creates a new mock instance\nfunc NewMockCommonAPIClient(ctrl *gomock.Controller) *MockCommonAPIClient {\n\tmock := &MockCommonAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockCommonAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockCommonAPIClient) EXPECT() *MockCommonAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// ConfigList mocks base method\nfunc (m *MockCommonAPIClient) ConfigList(ctx context.Context, options types.ConfigListOptions) ([]swarm.Config, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigList\", ctx, options)\n\tret0, _ := ret[0].([]swarm.Config)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ConfigList 
indicates an expected call of ConfigList\nfunc (mr *MockCommonAPIClientMockRecorder) ConfigList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigList\", reflect.TypeOf((*MockCommonAPIClient)(nil).ConfigList), ctx, options)\n}\n\n// ConfigCreate mocks base method\nfunc (m *MockCommonAPIClient) ConfigCreate(ctx context.Context, config swarm.ConfigSpec) (types.ConfigCreateResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigCreate\", ctx, config)\n\tret0, _ := ret[0].(types.ConfigCreateResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ConfigCreate indicates an expected call of ConfigCreate\nfunc (mr *MockCommonAPIClientMockRecorder) ConfigCreate(ctx, config interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigCreate\", reflect.TypeOf((*MockCommonAPIClient)(nil).ConfigCreate), ctx, config)\n}\n\n// ConfigRemove mocks base method\nfunc (m *MockCommonAPIClient) ConfigRemove(ctx context.Context, id string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigRemove\", ctx, id)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ConfigRemove indicates an expected call of ConfigRemove\nfunc (mr *MockCommonAPIClientMockRecorder) ConfigRemove(ctx, id interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigRemove\", reflect.TypeOf((*MockCommonAPIClient)(nil).ConfigRemove), ctx, id)\n}\n\n// ConfigInspectWithRaw mocks base method\nfunc (m *MockCommonAPIClient) ConfigInspectWithRaw(ctx context.Context, name string) (swarm.Config, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigInspectWithRaw\", ctx, name)\n\tret0, _ := ret[0].(swarm.Config)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// ConfigInspectWithRaw indicates an expected call of 
ConfigInspectWithRaw\nfunc (mr *MockCommonAPIClientMockRecorder) ConfigInspectWithRaw(ctx, name interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigInspectWithRaw\", reflect.TypeOf((*MockCommonAPIClient)(nil).ConfigInspectWithRaw), ctx, name)\n}\n\n// ConfigUpdate mocks base method\nfunc (m *MockCommonAPIClient) ConfigUpdate(ctx context.Context, id string, version swarm.Version, config swarm.ConfigSpec) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigUpdate\", ctx, id, version, config)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ConfigUpdate indicates an expected call of ConfigUpdate\nfunc (mr *MockCommonAPIClientMockRecorder) ConfigUpdate(ctx, id, version, config interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigUpdate\", reflect.TypeOf((*MockCommonAPIClient)(nil).ConfigUpdate), ctx, id, version, config)\n}\n\n// ContainerAttach mocks base method\nfunc (m *MockCommonAPIClient) ContainerAttach(ctx context.Context, container string, options types.ContainerAttachOptions) (types.HijackedResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerAttach\", ctx, container, options)\n\tret0, _ := ret[0].(types.HijackedResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerAttach indicates an expected call of ContainerAttach\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerAttach(ctx, container, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerAttach\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerAttach), ctx, container, options)\n}\n\n// ContainerCommit mocks base method\nfunc (m *MockCommonAPIClient) ContainerCommit(ctx context.Context, container string, options types.ContainerCommitOptions) (types.IDResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, 
\"ContainerCommit\", ctx, container, options)\n\tret0, _ := ret[0].(types.IDResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerCommit indicates an expected call of ContainerCommit\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerCommit(ctx, container, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerCommit\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerCommit), ctx, container, options)\n}\n\n// ContainerCreate mocks base method\nfunc (m *MockCommonAPIClient) ContainerCreate(ctx context.Context, config *containerpkg.Config, hostConfig *containerpkg.HostConfig, networkingConfig *network.NetworkingConfig, containerName string) (containerpkg.ContainerCreateCreatedBody, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerCreate\", ctx, config, hostConfig, networkingConfig, containerName)\n\tret0, _ := ret[0].(containerpkg.ContainerCreateCreatedBody)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerCreate indicates an expected call of ContainerCreate\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerCreate(ctx, config, hostConfig, networkingConfig, containerName interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerCreate\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerCreate), ctx, config, hostConfig, networkingConfig, containerName)\n}\n\n// ContainerDiff mocks base method\nfunc (m *MockCommonAPIClient) ContainerDiff(ctx context.Context, container string) ([]containerpkg.ContainerChangeResponseItem, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerDiff\", ctx, container)\n\tret0, _ := ret[0].([]containerpkg.ContainerChangeResponseItem)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerDiff indicates an expected call of ContainerDiff\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerDiff(ctx, container 
interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerDiff\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerDiff), ctx, container)\n}\n\n// ContainerExecAttach mocks base method\nfunc (m *MockCommonAPIClient) ContainerExecAttach(ctx context.Context, execID string, config types.ExecStartCheck) (types.HijackedResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExecAttach\", ctx, execID, config)\n\tret0, _ := ret[0].(types.HijackedResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerExecAttach indicates an expected call of ContainerExecAttach\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerExecAttach(ctx, execID, config interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExecAttach\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerExecAttach), ctx, execID, config)\n}\n\n// ContainerExecCreate mocks base method\nfunc (m *MockCommonAPIClient) ContainerExecCreate(ctx context.Context, container string, config types.ExecConfig) (types.IDResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExecCreate\", ctx, container, config)\n\tret0, _ := ret[0].(types.IDResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerExecCreate indicates an expected call of ContainerExecCreate\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerExecCreate(ctx, container, config interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExecCreate\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerExecCreate), ctx, container, config)\n}\n\n// ContainerExecInspect mocks base method\nfunc (m *MockCommonAPIClient) ContainerExecInspect(ctx context.Context, execID string) (types.ContainerExecInspect, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExecInspect\", ctx, 
execID)\n\tret0, _ := ret[0].(types.ContainerExecInspect)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerExecInspect indicates an expected call of ContainerExecInspect\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerExecInspect(ctx, execID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExecInspect\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerExecInspect), ctx, execID)\n}\n\n// ContainerExecResize mocks base method\nfunc (m *MockCommonAPIClient) ContainerExecResize(ctx context.Context, execID string, options types.ResizeOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExecResize\", ctx, execID, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerExecResize indicates an expected call of ContainerExecResize\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerExecResize(ctx, execID, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExecResize\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerExecResize), ctx, execID, options)\n}\n\n// ContainerExecStart mocks base method\nfunc (m *MockCommonAPIClient) ContainerExecStart(ctx context.Context, execID string, config types.ExecStartCheck) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExecStart\", ctx, execID, config)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerExecStart indicates an expected call of ContainerExecStart\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerExecStart(ctx, execID, config interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExecStart\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerExecStart), ctx, execID, config)\n}\n\n// ContainerExport mocks base method\nfunc (m *MockCommonAPIClient) ContainerExport(ctx context.Context, container string) 
(io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExport\", ctx, container)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerExport indicates an expected call of ContainerExport\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerExport(ctx, container interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExport\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerExport), ctx, container)\n}\n\n// ContainerInspect mocks base method\nfunc (m *MockCommonAPIClient) ContainerInspect(ctx context.Context, container string) (types.ContainerJSON, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerInspect\", ctx, container)\n\tret0, _ := ret[0].(types.ContainerJSON)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerInspect indicates an expected call of ContainerInspect\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerInspect(ctx, container interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerInspect\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerInspect), ctx, container)\n}\n\n// ContainerInspectWithRaw mocks base method\nfunc (m *MockCommonAPIClient) ContainerInspectWithRaw(ctx context.Context, container string, getSize bool) (types.ContainerJSON, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerInspectWithRaw\", ctx, container, getSize)\n\tret0, _ := ret[0].(types.ContainerJSON)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// ContainerInspectWithRaw indicates an expected call of ContainerInspectWithRaw\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerInspectWithRaw(ctx, container, getSize interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerInspectWithRaw\", 
reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerInspectWithRaw), ctx, container, getSize)\n}\n\n// ContainerKill mocks base method\nfunc (m *MockCommonAPIClient) ContainerKill(ctx context.Context, container, signal string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerKill\", ctx, container, signal)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerKill indicates an expected call of ContainerKill\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerKill(ctx, container, signal interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerKill\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerKill), ctx, container, signal)\n}\n\n// ContainerList mocks base method\nfunc (m *MockCommonAPIClient) ContainerList(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerList\", ctx, options)\n\tret0, _ := ret[0].([]types.Container)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerList indicates an expected call of ContainerList\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerList\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerList), ctx, options)\n}\n\n// ContainerLogs mocks base method\nfunc (m *MockCommonAPIClient) ContainerLogs(ctx context.Context, container string, options types.ContainerLogsOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerLogs\", ctx, container, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerLogs indicates an expected call of ContainerLogs\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerLogs(ctx, container, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerLogs\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerLogs), ctx, container, options)\n}\n\n// ContainerPause mocks base method\nfunc (m *MockCommonAPIClient) ContainerPause(ctx context.Context, container string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerPause\", ctx, container)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerPause indicates an expected call of ContainerPause\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerPause(ctx, container interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerPause\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerPause), ctx, container)\n}\n\n// ContainerRemove mocks base method\nfunc (m *MockCommonAPIClient) ContainerRemove(ctx context.Context, container string, options types.ContainerRemoveOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerRemove\", ctx, container, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerRemove indicates an expected call of ContainerRemove\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerRemove(ctx, container, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerRemove\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerRemove), ctx, container, options)\n}\n\n// ContainerRename mocks base method\nfunc (m *MockCommonAPIClient) ContainerRename(ctx context.Context, container, newContainerName string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerRename\", ctx, container, newContainerName)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerRename indicates an expected call of ContainerRename\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerRename(ctx, container, newContainerName interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerRename\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerRename), ctx, container, newContainerName)\n}\n\n// ContainerResize mocks base method\nfunc (m *MockCommonAPIClient) ContainerResize(ctx context.Context, container string, options types.ResizeOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerResize\", ctx, container, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerResize indicates an expected call of ContainerResize\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerResize(ctx, container, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerResize\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerResize), ctx, container, options)\n}\n\n// ContainerRestart mocks base method\nfunc (m *MockCommonAPIClient) ContainerRestart(ctx context.Context, container string, timeout *time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerRestart\", ctx, container, timeout)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerRestart indicates an expected call of ContainerRestart\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerRestart(ctx, container, timeout interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerRestart\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerRestart), ctx, container, timeout)\n}\n\n// ContainerStatPath mocks base method\nfunc (m *MockCommonAPIClient) ContainerStatPath(ctx context.Context, container, path string) (types.ContainerPathStat, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStatPath\", ctx, container, path)\n\tret0, _ := ret[0].(types.ContainerPathStat)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerStatPath indicates an expected call of ContainerStatPath\nfunc (mr *MockCommonAPIClientMockRecorder) 
ContainerStatPath(ctx, container, path interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStatPath\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerStatPath), ctx, container, path)\n}\n\n// ContainerStats mocks base method\nfunc (m *MockCommonAPIClient) ContainerStats(ctx context.Context, container string, stream bool) (types.ContainerStats, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStats\", ctx, container, stream)\n\tret0, _ := ret[0].(types.ContainerStats)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerStats indicates an expected call of ContainerStats\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerStats(ctx, container, stream interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStats\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerStats), ctx, container, stream)\n}\n\n// ContainerStart mocks base method\nfunc (m *MockCommonAPIClient) ContainerStart(ctx context.Context, container string, options types.ContainerStartOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStart\", ctx, container, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerStart indicates an expected call of ContainerStart\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerStart(ctx, container, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStart\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerStart), ctx, container, options)\n}\n\n// ContainerStop mocks base method\nfunc (m *MockCommonAPIClient) ContainerStop(ctx context.Context, container string, timeout *time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStop\", ctx, container, timeout)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerStop indicates an expected call 
of ContainerStop\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerStop(ctx, container, timeout interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStop\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerStop), ctx, container, timeout)\n}\n\n// ContainerTop mocks base method\nfunc (m *MockCommonAPIClient) ContainerTop(ctx context.Context, container string, arguments []string) (containerpkg.ContainerTopOKBody, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerTop\", ctx, container, arguments)\n\tret0, _ := ret[0].(containerpkg.ContainerTopOKBody)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerTop indicates an expected call of ContainerTop\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerTop(ctx, container, arguments interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerTop\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerTop), ctx, container, arguments)\n}\n\n// ContainerUnpause mocks base method\nfunc (m *MockCommonAPIClient) ContainerUnpause(ctx context.Context, container string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerUnpause\", ctx, container)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerUnpause indicates an expected call of ContainerUnpause\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerUnpause(ctx, container interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerUnpause\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerUnpause), ctx, container)\n}\n\n// ContainerUpdate mocks base method\nfunc (m *MockCommonAPIClient) ContainerUpdate(ctx context.Context, container string, updateConfig containerpkg.UpdateConfig) (containerpkg.ContainerUpdateOKBody, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerUpdate\", ctx, container, 
updateConfig)\n\tret0, _ := ret[0].(containerpkg.ContainerUpdateOKBody)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerUpdate indicates an expected call of ContainerUpdate\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerUpdate(ctx, container, updateConfig interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerUpdate\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerUpdate), ctx, container, updateConfig)\n}\n\n// ContainerWait mocks base method\nfunc (m *MockCommonAPIClient) ContainerWait(ctx context.Context, container string, condition containerpkg.WaitCondition) (<-chan containerpkg.ContainerWaitOKBody, <-chan error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerWait\", ctx, container, condition)\n\tret0, _ := ret[0].(<-chan containerpkg.ContainerWaitOKBody)\n\tret1, _ := ret[1].(<-chan error)\n\treturn ret0, ret1\n}\n\n// ContainerWait indicates an expected call of ContainerWait\nfunc (mr *MockCommonAPIClientMockRecorder) ContainerWait(ctx, container, condition interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerWait\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainerWait), ctx, container, condition)\n}\n\n// CopyFromContainer mocks base method\nfunc (m *MockCommonAPIClient) CopyFromContainer(ctx context.Context, container, srcPath string) (io.ReadCloser, types.ContainerPathStat, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CopyFromContainer\", ctx, container, srcPath)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(types.ContainerPathStat)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// CopyFromContainer indicates an expected call of CopyFromContainer\nfunc (mr *MockCommonAPIClientMockRecorder) CopyFromContainer(ctx, container, srcPath interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CopyFromContainer\", reflect.TypeOf((*MockCommonAPIClient)(nil).CopyFromContainer), ctx, container, srcPath)\n}\n\n// CopyToContainer mocks base method\nfunc (m *MockCommonAPIClient) CopyToContainer(ctx context.Context, container, path string, content io.Reader, options types.CopyToContainerOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CopyToContainer\", ctx, container, path, content, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// CopyToContainer indicates an expected call of CopyToContainer\nfunc (mr *MockCommonAPIClientMockRecorder) CopyToContainer(ctx, container, path, content, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CopyToContainer\", reflect.TypeOf((*MockCommonAPIClient)(nil).CopyToContainer), ctx, container, path, content, options)\n}\n\n// ContainersPrune mocks base method\nfunc (m *MockCommonAPIClient) ContainersPrune(ctx context.Context, pruneFilters filters.Args) (types.ContainersPruneReport, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainersPrune\", ctx, pruneFilters)\n\tret0, _ := ret[0].(types.ContainersPruneReport)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainersPrune indicates an expected call of ContainersPrune\nfunc (mr *MockCommonAPIClientMockRecorder) ContainersPrune(ctx, pruneFilters interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainersPrune\", reflect.TypeOf((*MockCommonAPIClient)(nil).ContainersPrune), ctx, pruneFilters)\n}\n\n// DistributionInspect mocks base method\nfunc (m *MockCommonAPIClient) DistributionInspect(ctx context.Context, image, encodedRegistryAuth string) (registry.DistributionInspect, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DistributionInspect\", ctx, image, encodedRegistryAuth)\n\tret0, _ := ret[0].(registry.DistributionInspect)\n\tret1, _ 
:= ret[1].(error)\n\treturn ret0, ret1\n}\n\n// DistributionInspect indicates an expected call of DistributionInspect\nfunc (mr *MockCommonAPIClientMockRecorder) DistributionInspect(ctx, image, encodedRegistryAuth interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DistributionInspect\", reflect.TypeOf((*MockCommonAPIClient)(nil).DistributionInspect), ctx, image, encodedRegistryAuth)\n}\n\n// ImageBuild mocks base method\nfunc (m *MockCommonAPIClient) ImageBuild(ctx context.Context, context io.Reader, options types.ImageBuildOptions) (types.ImageBuildResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageBuild\", ctx, context, options)\n\tret0, _ := ret[0].(types.ImageBuildResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageBuild indicates an expected call of ImageBuild\nfunc (mr *MockCommonAPIClientMockRecorder) ImageBuild(ctx, context, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageBuild\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImageBuild), ctx, context, options)\n}\n\n// BuildCachePrune mocks base method\nfunc (m *MockCommonAPIClient) BuildCachePrune(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"BuildCachePrune\", ctx, opts)\n\tret0, _ := ret[0].(*types.BuildCachePruneReport)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// BuildCachePrune indicates an expected call of BuildCachePrune\nfunc (mr *MockCommonAPIClientMockRecorder) BuildCachePrune(ctx, opts interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"BuildCachePrune\", reflect.TypeOf((*MockCommonAPIClient)(nil).BuildCachePrune), ctx, opts)\n}\n\n// BuildCancel mocks base method\nfunc (m *MockCommonAPIClient) BuildCancel(ctx context.Context, id string) error 
{\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"BuildCancel\", ctx, id)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// BuildCancel indicates an expected call of BuildCancel\nfunc (mr *MockCommonAPIClientMockRecorder) BuildCancel(ctx, id interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"BuildCancel\", reflect.TypeOf((*MockCommonAPIClient)(nil).BuildCancel), ctx, id)\n}\n\n// ImageCreate mocks base method\nfunc (m *MockCommonAPIClient) ImageCreate(ctx context.Context, parentReference string, options types.ImageCreateOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageCreate\", ctx, parentReference, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageCreate indicates an expected call of ImageCreate\nfunc (mr *MockCommonAPIClientMockRecorder) ImageCreate(ctx, parentReference, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageCreate\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImageCreate), ctx, parentReference, options)\n}\n\n// ImageHistory mocks base method\nfunc (m *MockCommonAPIClient) ImageHistory(ctx context.Context, image string) ([]imagepkg.HistoryResponseItem, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageHistory\", ctx, image)\n\tret0, _ := ret[0].([]imagepkg.HistoryResponseItem)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageHistory indicates an expected call of ImageHistory\nfunc (mr *MockCommonAPIClientMockRecorder) ImageHistory(ctx, image interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageHistory\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImageHistory), ctx, image)\n}\n\n// ImageImport mocks base method\nfunc (m *MockCommonAPIClient) ImageImport(ctx context.Context, source types.ImageImportSource, ref string, 
options types.ImageImportOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageImport\", ctx, source, ref, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageImport indicates an expected call of ImageImport\nfunc (mr *MockCommonAPIClientMockRecorder) ImageImport(ctx, source, ref, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageImport\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImageImport), ctx, source, ref, options)\n}\n\n// ImageInspectWithRaw mocks base method\nfunc (m *MockCommonAPIClient) ImageInspectWithRaw(ctx context.Context, image string) (types.ImageInspect, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageInspectWithRaw\", ctx, image)\n\tret0, _ := ret[0].(types.ImageInspect)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// ImageInspectWithRaw indicates an expected call of ImageInspectWithRaw\nfunc (mr *MockCommonAPIClientMockRecorder) ImageInspectWithRaw(ctx, image interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageInspectWithRaw\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImageInspectWithRaw), ctx, image)\n}\n\n// ImageList mocks base method\nfunc (m *MockCommonAPIClient) ImageList(ctx context.Context, options types.ImageListOptions) ([]types.ImageSummary, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageList\", ctx, options)\n\tret0, _ := ret[0].([]types.ImageSummary)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageList indicates an expected call of ImageList\nfunc (mr *MockCommonAPIClientMockRecorder) ImageList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageList\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImageList), ctx, 
options)\n}\n\n// ImageLoad mocks base method\nfunc (m *MockCommonAPIClient) ImageLoad(ctx context.Context, input io.Reader, quiet bool) (types.ImageLoadResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageLoad\", ctx, input, quiet)\n\tret0, _ := ret[0].(types.ImageLoadResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageLoad indicates an expected call of ImageLoad\nfunc (mr *MockCommonAPIClientMockRecorder) ImageLoad(ctx, input, quiet interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageLoad\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImageLoad), ctx, input, quiet)\n}\n\n// ImagePull mocks base method\nfunc (m *MockCommonAPIClient) ImagePull(ctx context.Context, ref string, options types.ImagePullOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImagePull\", ctx, ref, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImagePull indicates an expected call of ImagePull\nfunc (mr *MockCommonAPIClientMockRecorder) ImagePull(ctx, ref, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImagePull\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImagePull), ctx, ref, options)\n}\n\n// ImagePush mocks base method\nfunc (m *MockCommonAPIClient) ImagePush(ctx context.Context, ref string, options types.ImagePushOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImagePush\", ctx, ref, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImagePush indicates an expected call of ImagePush\nfunc (mr *MockCommonAPIClientMockRecorder) ImagePush(ctx, ref, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImagePush\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImagePush), 
ctx, ref, options)\n}\n\n// ImageRemove mocks base method\nfunc (m *MockCommonAPIClient) ImageRemove(ctx context.Context, image string, options types.ImageRemoveOptions) ([]types.ImageDeleteResponseItem, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageRemove\", ctx, image, options)\n\tret0, _ := ret[0].([]types.ImageDeleteResponseItem)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageRemove indicates an expected call of ImageRemove\nfunc (mr *MockCommonAPIClientMockRecorder) ImageRemove(ctx, image, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageRemove\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImageRemove), ctx, image, options)\n}\n\n// ImageSearch mocks base method\nfunc (m *MockCommonAPIClient) ImageSearch(ctx context.Context, term string, options types.ImageSearchOptions) ([]registry.SearchResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageSearch\", ctx, term, options)\n\tret0, _ := ret[0].([]registry.SearchResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageSearch indicates an expected call of ImageSearch\nfunc (mr *MockCommonAPIClientMockRecorder) ImageSearch(ctx, term, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageSearch\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImageSearch), ctx, term, options)\n}\n\n// ImageSave mocks base method\nfunc (m *MockCommonAPIClient) ImageSave(ctx context.Context, images []string) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageSave\", ctx, images)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageSave indicates an expected call of ImageSave\nfunc (mr *MockCommonAPIClientMockRecorder) ImageSave(ctx, images interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, 
\"ImageSave\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImageSave), ctx, images)\n}\n\n// ImageTag mocks base method\nfunc (m *MockCommonAPIClient) ImageTag(ctx context.Context, image, ref string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageTag\", ctx, image, ref)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ImageTag indicates an expected call of ImageTag\nfunc (mr *MockCommonAPIClientMockRecorder) ImageTag(ctx, image, ref interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageTag\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImageTag), ctx, image, ref)\n}\n\n// ImagesPrune mocks base method\nfunc (m *MockCommonAPIClient) ImagesPrune(ctx context.Context, pruneFilter filters.Args) (types.ImagesPruneReport, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImagesPrune\", ctx, pruneFilter)\n\tret0, _ := ret[0].(types.ImagesPruneReport)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImagesPrune indicates an expected call of ImagesPrune\nfunc (mr *MockCommonAPIClientMockRecorder) ImagesPrune(ctx, pruneFilter interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImagesPrune\", reflect.TypeOf((*MockCommonAPIClient)(nil).ImagesPrune), ctx, pruneFilter)\n}\n\n// NodeInspectWithRaw mocks base method\nfunc (m *MockCommonAPIClient) NodeInspectWithRaw(ctx context.Context, nodeID string) (swarm.Node, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NodeInspectWithRaw\", ctx, nodeID)\n\tret0, _ := ret[0].(swarm.Node)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// NodeInspectWithRaw indicates an expected call of NodeInspectWithRaw\nfunc (mr *MockCommonAPIClientMockRecorder) NodeInspectWithRaw(ctx, nodeID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NodeInspectWithRaw\", 
reflect.TypeOf((*MockCommonAPIClient)(nil).NodeInspectWithRaw), ctx, nodeID)\n}\n\n// NodeList mocks base method\nfunc (m *MockCommonAPIClient) NodeList(ctx context.Context, options types.NodeListOptions) ([]swarm.Node, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NodeList\", ctx, options)\n\tret0, _ := ret[0].([]swarm.Node)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NodeList indicates an expected call of NodeList\nfunc (mr *MockCommonAPIClientMockRecorder) NodeList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NodeList\", reflect.TypeOf((*MockCommonAPIClient)(nil).NodeList), ctx, options)\n}\n\n// NodeRemove mocks base method\nfunc (m *MockCommonAPIClient) NodeRemove(ctx context.Context, nodeID string, options types.NodeRemoveOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NodeRemove\", ctx, nodeID, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// NodeRemove indicates an expected call of NodeRemove\nfunc (mr *MockCommonAPIClientMockRecorder) NodeRemove(ctx, nodeID, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NodeRemove\", reflect.TypeOf((*MockCommonAPIClient)(nil).NodeRemove), ctx, nodeID, options)\n}\n\n// NodeUpdate mocks base method\nfunc (m *MockCommonAPIClient) NodeUpdate(ctx context.Context, nodeID string, version swarm.Version, node swarm.NodeSpec) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NodeUpdate\", ctx, nodeID, version, node)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// NodeUpdate indicates an expected call of NodeUpdate\nfunc (mr *MockCommonAPIClientMockRecorder) NodeUpdate(ctx, nodeID, version, node interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NodeUpdate\", reflect.TypeOf((*MockCommonAPIClient)(nil).NodeUpdate), ctx, nodeID, version, node)\n}\n\n// 
NetworkConnect mocks base method\nfunc (m *MockCommonAPIClient) NetworkConnect(ctx context.Context, network, container string, config *network.EndpointSettings) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkConnect\", ctx, network, container, config)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// NetworkConnect indicates an expected call of NetworkConnect\nfunc (mr *MockCommonAPIClientMockRecorder) NetworkConnect(ctx, network, container, config interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkConnect\", reflect.TypeOf((*MockCommonAPIClient)(nil).NetworkConnect), ctx, network, container, config)\n}\n\n// NetworkCreate mocks base method\nfunc (m *MockCommonAPIClient) NetworkCreate(ctx context.Context, name string, options types.NetworkCreate) (types.NetworkCreateResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkCreate\", ctx, name, options)\n\tret0, _ := ret[0].(types.NetworkCreateResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkCreate indicates an expected call of NetworkCreate\nfunc (mr *MockCommonAPIClientMockRecorder) NetworkCreate(ctx, name, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkCreate\", reflect.TypeOf((*MockCommonAPIClient)(nil).NetworkCreate), ctx, name, options)\n}\n\n// NetworkDisconnect mocks base method\nfunc (m *MockCommonAPIClient) NetworkDisconnect(ctx context.Context, network, container string, force bool) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkDisconnect\", ctx, network, container, force)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// NetworkDisconnect indicates an expected call of NetworkDisconnect\nfunc (mr *MockCommonAPIClientMockRecorder) NetworkDisconnect(ctx, network, container, force interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkDisconnect\", reflect.TypeOf((*MockCommonAPIClient)(nil).NetworkDisconnect), ctx, network, container, force)\n}\n\n// NetworkInspect mocks base method\nfunc (m *MockCommonAPIClient) NetworkInspect(ctx context.Context, network string, options types.NetworkInspectOptions) (types.NetworkResource, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkInspect\", ctx, network, options)\n\tret0, _ := ret[0].(types.NetworkResource)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkInspect indicates an expected call of NetworkInspect\nfunc (mr *MockCommonAPIClientMockRecorder) NetworkInspect(ctx, network, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkInspect\", reflect.TypeOf((*MockCommonAPIClient)(nil).NetworkInspect), ctx, network, options)\n}\n\n// NetworkInspectWithRaw mocks base method\nfunc (m *MockCommonAPIClient) NetworkInspectWithRaw(ctx context.Context, network string, options types.NetworkInspectOptions) (types.NetworkResource, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkInspectWithRaw\", ctx, network, options)\n\tret0, _ := ret[0].(types.NetworkResource)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// NetworkInspectWithRaw indicates an expected call of NetworkInspectWithRaw\nfunc (mr *MockCommonAPIClientMockRecorder) NetworkInspectWithRaw(ctx, network, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkInspectWithRaw\", reflect.TypeOf((*MockCommonAPIClient)(nil).NetworkInspectWithRaw), ctx, network, options)\n}\n\n// NetworkList mocks base method\nfunc (m *MockCommonAPIClient) NetworkList(ctx context.Context, options types.NetworkListOptions) ([]types.NetworkResource, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkList\", ctx, 
options)\n\tret0, _ := ret[0].([]types.NetworkResource)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkList indicates an expected call of NetworkList\nfunc (mr *MockCommonAPIClientMockRecorder) NetworkList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkList\", reflect.TypeOf((*MockCommonAPIClient)(nil).NetworkList), ctx, options)\n}\n\n// NetworkRemove mocks base method\nfunc (m *MockCommonAPIClient) NetworkRemove(ctx context.Context, network string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkRemove\", ctx, network)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// NetworkRemove indicates an expected call of NetworkRemove\nfunc (mr *MockCommonAPIClientMockRecorder) NetworkRemove(ctx, network interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkRemove\", reflect.TypeOf((*MockCommonAPIClient)(nil).NetworkRemove), ctx, network)\n}\n\n// NetworksPrune mocks base method\nfunc (m *MockCommonAPIClient) NetworksPrune(ctx context.Context, pruneFilter filters.Args) (types.NetworksPruneReport, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworksPrune\", ctx, pruneFilter)\n\tret0, _ := ret[0].(types.NetworksPruneReport)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworksPrune indicates an expected call of NetworksPrune\nfunc (mr *MockCommonAPIClientMockRecorder) NetworksPrune(ctx, pruneFilter interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworksPrune\", reflect.TypeOf((*MockCommonAPIClient)(nil).NetworksPrune), ctx, pruneFilter)\n}\n\n// PluginList mocks base method\nfunc (m *MockCommonAPIClient) PluginList(ctx context.Context, filter filters.Args) (types.PluginsListResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginList\", ctx, filter)\n\tret0, _ := 
ret[0].(types.PluginsListResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginList indicates an expected call of PluginList\nfunc (mr *MockCommonAPIClientMockRecorder) PluginList(ctx, filter interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginList\", reflect.TypeOf((*MockCommonAPIClient)(nil).PluginList), ctx, filter)\n}\n\n// PluginRemove mocks base method\nfunc (m *MockCommonAPIClient) PluginRemove(ctx context.Context, name string, options types.PluginRemoveOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginRemove\", ctx, name, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// PluginRemove indicates an expected call of PluginRemove\nfunc (mr *MockCommonAPIClientMockRecorder) PluginRemove(ctx, name, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginRemove\", reflect.TypeOf((*MockCommonAPIClient)(nil).PluginRemove), ctx, name, options)\n}\n\n// PluginEnable mocks base method\nfunc (m *MockCommonAPIClient) PluginEnable(ctx context.Context, name string, options types.PluginEnableOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginEnable\", ctx, name, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// PluginEnable indicates an expected call of PluginEnable\nfunc (mr *MockCommonAPIClientMockRecorder) PluginEnable(ctx, name, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginEnable\", reflect.TypeOf((*MockCommonAPIClient)(nil).PluginEnable), ctx, name, options)\n}\n\n// PluginDisable mocks base method\nfunc (m *MockCommonAPIClient) PluginDisable(ctx context.Context, name string, options types.PluginDisableOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginDisable\", ctx, name, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// 
PluginDisable indicates an expected call of PluginDisable\nfunc (mr *MockCommonAPIClientMockRecorder) PluginDisable(ctx, name, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginDisable\", reflect.TypeOf((*MockCommonAPIClient)(nil).PluginDisable), ctx, name, options)\n}\n\n// PluginInstall mocks base method\nfunc (m *MockCommonAPIClient) PluginInstall(ctx context.Context, name string, options types.PluginInstallOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginInstall\", ctx, name, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginInstall indicates an expected call of PluginInstall\nfunc (mr *MockCommonAPIClientMockRecorder) PluginInstall(ctx, name, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginInstall\", reflect.TypeOf((*MockCommonAPIClient)(nil).PluginInstall), ctx, name, options)\n}\n\n// PluginUpgrade mocks base method\nfunc (m *MockCommonAPIClient) PluginUpgrade(ctx context.Context, name string, options types.PluginInstallOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginUpgrade\", ctx, name, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginUpgrade indicates an expected call of PluginUpgrade\nfunc (mr *MockCommonAPIClientMockRecorder) PluginUpgrade(ctx, name, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginUpgrade\", reflect.TypeOf((*MockCommonAPIClient)(nil).PluginUpgrade), ctx, name, options)\n}\n\n// PluginPush mocks base method\nfunc (m *MockCommonAPIClient) PluginPush(ctx context.Context, name, registryAuth string) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginPush\", ctx, name, 
registryAuth)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginPush indicates an expected call of PluginPush\nfunc (mr *MockCommonAPIClientMockRecorder) PluginPush(ctx, name, registryAuth interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginPush\", reflect.TypeOf((*MockCommonAPIClient)(nil).PluginPush), ctx, name, registryAuth)\n}\n\n// PluginSet mocks base method\nfunc (m *MockCommonAPIClient) PluginSet(ctx context.Context, name string, args []string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginSet\", ctx, name, args)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// PluginSet indicates an expected call of PluginSet\nfunc (mr *MockCommonAPIClientMockRecorder) PluginSet(ctx, name, args interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginSet\", reflect.TypeOf((*MockCommonAPIClient)(nil).PluginSet), ctx, name, args)\n}\n\n// PluginInspectWithRaw mocks base method\nfunc (m *MockCommonAPIClient) PluginInspectWithRaw(ctx context.Context, name string) (*types.Plugin, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginInspectWithRaw\", ctx, name)\n\tret0, _ := ret[0].(*types.Plugin)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// PluginInspectWithRaw indicates an expected call of PluginInspectWithRaw\nfunc (mr *MockCommonAPIClientMockRecorder) PluginInspectWithRaw(ctx, name interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginInspectWithRaw\", reflect.TypeOf((*MockCommonAPIClient)(nil).PluginInspectWithRaw), ctx, name)\n}\n\n// PluginCreate mocks base method\nfunc (m *MockCommonAPIClient) PluginCreate(ctx context.Context, createContext io.Reader, options types.PluginCreateOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, 
\"PluginCreate\", ctx, createContext, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// PluginCreate indicates an expected call of PluginCreate\nfunc (mr *MockCommonAPIClientMockRecorder) PluginCreate(ctx, createContext, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginCreate\", reflect.TypeOf((*MockCommonAPIClient)(nil).PluginCreate), ctx, createContext, options)\n}\n\n// ServiceCreate mocks base method\nfunc (m *MockCommonAPIClient) ServiceCreate(ctx context.Context, service swarm.ServiceSpec, options types.ServiceCreateOptions) (types.ServiceCreateResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceCreate\", ctx, service, options)\n\tret0, _ := ret[0].(types.ServiceCreateResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceCreate indicates an expected call of ServiceCreate\nfunc (mr *MockCommonAPIClientMockRecorder) ServiceCreate(ctx, service, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceCreate\", reflect.TypeOf((*MockCommonAPIClient)(nil).ServiceCreate), ctx, service, options)\n}\n\n// ServiceInspectWithRaw mocks base method\nfunc (m *MockCommonAPIClient) ServiceInspectWithRaw(ctx context.Context, serviceID string, options types.ServiceInspectOptions) (swarm.Service, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceInspectWithRaw\", ctx, serviceID, options)\n\tret0, _ := ret[0].(swarm.Service)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// ServiceInspectWithRaw indicates an expected call of ServiceInspectWithRaw\nfunc (mr *MockCommonAPIClientMockRecorder) ServiceInspectWithRaw(ctx, serviceID, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceInspectWithRaw\", 
reflect.TypeOf((*MockCommonAPIClient)(nil).ServiceInspectWithRaw), ctx, serviceID, options)\n}\n\n// ServiceList mocks base method\nfunc (m *MockCommonAPIClient) ServiceList(ctx context.Context, options types.ServiceListOptions) ([]swarm.Service, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceList\", ctx, options)\n\tret0, _ := ret[0].([]swarm.Service)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceList indicates an expected call of ServiceList\nfunc (mr *MockCommonAPIClientMockRecorder) ServiceList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceList\", reflect.TypeOf((*MockCommonAPIClient)(nil).ServiceList), ctx, options)\n}\n\n// ServiceRemove mocks base method\nfunc (m *MockCommonAPIClient) ServiceRemove(ctx context.Context, serviceID string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceRemove\", ctx, serviceID)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ServiceRemove indicates an expected call of ServiceRemove\nfunc (mr *MockCommonAPIClientMockRecorder) ServiceRemove(ctx, serviceID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceRemove\", reflect.TypeOf((*MockCommonAPIClient)(nil).ServiceRemove), ctx, serviceID)\n}\n\n// ServiceUpdate mocks base method\nfunc (m *MockCommonAPIClient) ServiceUpdate(ctx context.Context, serviceID string, version swarm.Version, service swarm.ServiceSpec, options types.ServiceUpdateOptions) (types.ServiceUpdateResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceUpdate\", ctx, serviceID, version, service, options)\n\tret0, _ := ret[0].(types.ServiceUpdateResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceUpdate indicates an expected call of ServiceUpdate\nfunc (mr *MockCommonAPIClientMockRecorder) ServiceUpdate(ctx, serviceID, version, service, options interface{}) 
*gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceUpdate\", reflect.TypeOf((*MockCommonAPIClient)(nil).ServiceUpdate), ctx, serviceID, version, service, options)\n}\n\n// ServiceLogs mocks base method\nfunc (m *MockCommonAPIClient) ServiceLogs(ctx context.Context, serviceID string, options types.ContainerLogsOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceLogs\", ctx, serviceID, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceLogs indicates an expected call of ServiceLogs\nfunc (mr *MockCommonAPIClientMockRecorder) ServiceLogs(ctx, serviceID, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceLogs\", reflect.TypeOf((*MockCommonAPIClient)(nil).ServiceLogs), ctx, serviceID, options)\n}\n\n// TaskLogs mocks base method\nfunc (m *MockCommonAPIClient) TaskLogs(ctx context.Context, taskID string, options types.ContainerLogsOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"TaskLogs\", ctx, taskID, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// TaskLogs indicates an expected call of TaskLogs\nfunc (mr *MockCommonAPIClientMockRecorder) TaskLogs(ctx, taskID, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"TaskLogs\", reflect.TypeOf((*MockCommonAPIClient)(nil).TaskLogs), ctx, taskID, options)\n}\n\n// TaskInspectWithRaw mocks base method\nfunc (m *MockCommonAPIClient) TaskInspectWithRaw(ctx context.Context, taskID string) (swarm.Task, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"TaskInspectWithRaw\", ctx, taskID)\n\tret0, _ := ret[0].(swarm.Task)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// TaskInspectWithRaw 
indicates an expected call of TaskInspectWithRaw\nfunc (mr *MockCommonAPIClientMockRecorder) TaskInspectWithRaw(ctx, taskID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"TaskInspectWithRaw\", reflect.TypeOf((*MockCommonAPIClient)(nil).TaskInspectWithRaw), ctx, taskID)\n}\n\n// TaskList mocks base method\nfunc (m *MockCommonAPIClient) TaskList(ctx context.Context, options types.TaskListOptions) ([]swarm.Task, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"TaskList\", ctx, options)\n\tret0, _ := ret[0].([]swarm.Task)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// TaskList indicates an expected call of TaskList\nfunc (mr *MockCommonAPIClientMockRecorder) TaskList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"TaskList\", reflect.TypeOf((*MockCommonAPIClient)(nil).TaskList), ctx, options)\n}\n\n// SwarmInit mocks base method\nfunc (m *MockCommonAPIClient) SwarmInit(ctx context.Context, req swarm.InitRequest) (string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmInit\", ctx, req)\n\tret0, _ := ret[0].(string)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmInit indicates an expected call of SwarmInit\nfunc (mr *MockCommonAPIClientMockRecorder) SwarmInit(ctx, req interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmInit\", reflect.TypeOf((*MockCommonAPIClient)(nil).SwarmInit), ctx, req)\n}\n\n// SwarmJoin mocks base method\nfunc (m *MockCommonAPIClient) SwarmJoin(ctx context.Context, req swarm.JoinRequest) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmJoin\", ctx, req)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SwarmJoin indicates an expected call of SwarmJoin\nfunc (mr *MockCommonAPIClientMockRecorder) SwarmJoin(ctx, req interface{}) *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmJoin\", reflect.TypeOf((*MockCommonAPIClient)(nil).SwarmJoin), ctx, req)\n}\n\n// SwarmGetUnlockKey mocks base method\nfunc (m *MockCommonAPIClient) SwarmGetUnlockKey(ctx context.Context) (types.SwarmUnlockKeyResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmGetUnlockKey\", ctx)\n\tret0, _ := ret[0].(types.SwarmUnlockKeyResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmGetUnlockKey indicates an expected call of SwarmGetUnlockKey\nfunc (mr *MockCommonAPIClientMockRecorder) SwarmGetUnlockKey(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmGetUnlockKey\", reflect.TypeOf((*MockCommonAPIClient)(nil).SwarmGetUnlockKey), ctx)\n}\n\n// SwarmUnlock mocks base method\nfunc (m *MockCommonAPIClient) SwarmUnlock(ctx context.Context, req swarm.UnlockRequest) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmUnlock\", ctx, req)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SwarmUnlock indicates an expected call of SwarmUnlock\nfunc (mr *MockCommonAPIClientMockRecorder) SwarmUnlock(ctx, req interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmUnlock\", reflect.TypeOf((*MockCommonAPIClient)(nil).SwarmUnlock), ctx, req)\n}\n\n// SwarmLeave mocks base method\nfunc (m *MockCommonAPIClient) SwarmLeave(ctx context.Context, force bool) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmLeave\", ctx, force)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SwarmLeave indicates an expected call of SwarmLeave\nfunc (mr *MockCommonAPIClientMockRecorder) SwarmLeave(ctx, force interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmLeave\", reflect.TypeOf((*MockCommonAPIClient)(nil).SwarmLeave), ctx, force)\n}\n\n// 
SwarmInspect mocks base method\nfunc (m *MockCommonAPIClient) SwarmInspect(ctx context.Context) (swarm.Swarm, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmInspect\", ctx)\n\tret0, _ := ret[0].(swarm.Swarm)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmInspect indicates an expected call of SwarmInspect\nfunc (mr *MockCommonAPIClientMockRecorder) SwarmInspect(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmInspect\", reflect.TypeOf((*MockCommonAPIClient)(nil).SwarmInspect), ctx)\n}\n\n// SwarmUpdate mocks base method\nfunc (m *MockCommonAPIClient) SwarmUpdate(ctx context.Context, version swarm.Version, swarm swarm.Spec, flags swarm.UpdateFlags) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmUpdate\", ctx, version, swarm, flags)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SwarmUpdate indicates an expected call of SwarmUpdate\nfunc (mr *MockCommonAPIClientMockRecorder) SwarmUpdate(ctx, version, swarm, flags interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmUpdate\", reflect.TypeOf((*MockCommonAPIClient)(nil).SwarmUpdate), ctx, version, swarm, flags)\n}\n\n// SecretList mocks base method\nfunc (m *MockCommonAPIClient) SecretList(ctx context.Context, options types.SecretListOptions) ([]swarm.Secret, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretList\", ctx, options)\n\tret0, _ := ret[0].([]swarm.Secret)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SecretList indicates an expected call of SecretList\nfunc (mr *MockCommonAPIClientMockRecorder) SecretList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretList\", reflect.TypeOf((*MockCommonAPIClient)(nil).SecretList), ctx, options)\n}\n\n// SecretCreate mocks base method\nfunc (m *MockCommonAPIClient) 
SecretCreate(ctx context.Context, secret swarm.SecretSpec) (types.SecretCreateResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretCreate\", ctx, secret)\n\tret0, _ := ret[0].(types.SecretCreateResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SecretCreate indicates an expected call of SecretCreate\nfunc (mr *MockCommonAPIClientMockRecorder) SecretCreate(ctx, secret interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretCreate\", reflect.TypeOf((*MockCommonAPIClient)(nil).SecretCreate), ctx, secret)\n}\n\n// SecretRemove mocks base method\nfunc (m *MockCommonAPIClient) SecretRemove(ctx context.Context, id string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretRemove\", ctx, id)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SecretRemove indicates an expected call of SecretRemove\nfunc (mr *MockCommonAPIClientMockRecorder) SecretRemove(ctx, id interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretRemove\", reflect.TypeOf((*MockCommonAPIClient)(nil).SecretRemove), ctx, id)\n}\n\n// SecretInspectWithRaw mocks base method\nfunc (m *MockCommonAPIClient) SecretInspectWithRaw(ctx context.Context, name string) (swarm.Secret, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretInspectWithRaw\", ctx, name)\n\tret0, _ := ret[0].(swarm.Secret)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// SecretInspectWithRaw indicates an expected call of SecretInspectWithRaw\nfunc (mr *MockCommonAPIClientMockRecorder) SecretInspectWithRaw(ctx, name interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretInspectWithRaw\", reflect.TypeOf((*MockCommonAPIClient)(nil).SecretInspectWithRaw), ctx, name)\n}\n\n// SecretUpdate mocks base method\nfunc (m *MockCommonAPIClient) 
SecretUpdate(ctx context.Context, id string, version swarm.Version, secret swarm.SecretSpec) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretUpdate\", ctx, id, version, secret)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SecretUpdate indicates an expected call of SecretUpdate\nfunc (mr *MockCommonAPIClientMockRecorder) SecretUpdate(ctx, id, version, secret interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretUpdate\", reflect.TypeOf((*MockCommonAPIClient)(nil).SecretUpdate), ctx, id, version, secret)\n}\n\n// Events mocks base method\nfunc (m *MockCommonAPIClient) Events(ctx context.Context, options types.EventsOptions) (<-chan events.Message, <-chan error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Events\", ctx, options)\n\tret0, _ := ret[0].(<-chan events.Message)\n\tret1, _ := ret[1].(<-chan error)\n\treturn ret0, ret1\n}\n\n// Events indicates an expected call of Events\nfunc (mr *MockCommonAPIClientMockRecorder) Events(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Events\", reflect.TypeOf((*MockCommonAPIClient)(nil).Events), ctx, options)\n}\n\n// Info mocks base method\nfunc (m *MockCommonAPIClient) Info(ctx context.Context) (types.Info, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Info\", ctx)\n\tret0, _ := ret[0].(types.Info)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Info indicates an expected call of Info\nfunc (mr *MockCommonAPIClientMockRecorder) Info(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Info\", reflect.TypeOf((*MockCommonAPIClient)(nil).Info), ctx)\n}\n\n// RegistryLogin mocks base method\nfunc (m *MockCommonAPIClient) RegistryLogin(ctx context.Context, auth types.AuthConfig) (registry.AuthenticateOKBody, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, 
\"RegistryLogin\", ctx, auth)\n\tret0, _ := ret[0].(registry.AuthenticateOKBody)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// RegistryLogin indicates an expected call of RegistryLogin\nfunc (mr *MockCommonAPIClientMockRecorder) RegistryLogin(ctx, auth interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RegistryLogin\", reflect.TypeOf((*MockCommonAPIClient)(nil).RegistryLogin), ctx, auth)\n}\n\n// DiskUsage mocks base method\nfunc (m *MockCommonAPIClient) DiskUsage(ctx context.Context) (types.DiskUsage, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DiskUsage\", ctx)\n\tret0, _ := ret[0].(types.DiskUsage)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// DiskUsage indicates an expected call of DiskUsage\nfunc (mr *MockCommonAPIClientMockRecorder) DiskUsage(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DiskUsage\", reflect.TypeOf((*MockCommonAPIClient)(nil).DiskUsage), ctx)\n}\n\n// Ping mocks base method\nfunc (m *MockCommonAPIClient) Ping(ctx context.Context) (types.Ping, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Ping\", ctx)\n\tret0, _ := ret[0].(types.Ping)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Ping indicates an expected call of Ping\nfunc (mr *MockCommonAPIClientMockRecorder) Ping(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Ping\", reflect.TypeOf((*MockCommonAPIClient)(nil).Ping), ctx)\n}\n\n// VolumeCreate mocks base method\nfunc (m *MockCommonAPIClient) VolumeCreate(ctx context.Context, options volume.VolumeCreateBody) (types.Volume, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeCreate\", ctx, options)\n\tret0, _ := ret[0].(types.Volume)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumeCreate indicates an expected call of VolumeCreate\nfunc (mr 
*MockCommonAPIClientMockRecorder) VolumeCreate(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeCreate\", reflect.TypeOf((*MockCommonAPIClient)(nil).VolumeCreate), ctx, options)\n}\n\n// VolumeInspect mocks base method\nfunc (m *MockCommonAPIClient) VolumeInspect(ctx context.Context, volumeID string) (types.Volume, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeInspect\", ctx, volumeID)\n\tret0, _ := ret[0].(types.Volume)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumeInspect indicates an expected call of VolumeInspect\nfunc (mr *MockCommonAPIClientMockRecorder) VolumeInspect(ctx, volumeID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeInspect\", reflect.TypeOf((*MockCommonAPIClient)(nil).VolumeInspect), ctx, volumeID)\n}\n\n// VolumeInspectWithRaw mocks base method\nfunc (m *MockCommonAPIClient) VolumeInspectWithRaw(ctx context.Context, volumeID string) (types.Volume, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeInspectWithRaw\", ctx, volumeID)\n\tret0, _ := ret[0].(types.Volume)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// VolumeInspectWithRaw indicates an expected call of VolumeInspectWithRaw\nfunc (mr *MockCommonAPIClientMockRecorder) VolumeInspectWithRaw(ctx, volumeID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeInspectWithRaw\", reflect.TypeOf((*MockCommonAPIClient)(nil).VolumeInspectWithRaw), ctx, volumeID)\n}\n\n// VolumeList mocks base method\nfunc (m *MockCommonAPIClient) VolumeList(ctx context.Context, filter filters.Args) (volume.VolumeListOKBody, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeList\", ctx, filter)\n\tret0, _ := ret[0].(volume.VolumeListOKBody)\n\tret1, _ := ret[1].(error)\n\treturn 
ret0, ret1\n}\n\n// VolumeList indicates an expected call of VolumeList\nfunc (mr *MockCommonAPIClientMockRecorder) VolumeList(ctx, filter interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeList\", reflect.TypeOf((*MockCommonAPIClient)(nil).VolumeList), ctx, filter)\n}\n\n// VolumeRemove mocks base method\nfunc (m *MockCommonAPIClient) VolumeRemove(ctx context.Context, volumeID string, force bool) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeRemove\", ctx, volumeID, force)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// VolumeRemove indicates an expected call of VolumeRemove\nfunc (mr *MockCommonAPIClientMockRecorder) VolumeRemove(ctx, volumeID, force interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeRemove\", reflect.TypeOf((*MockCommonAPIClient)(nil).VolumeRemove), ctx, volumeID, force)\n}\n\n// VolumesPrune mocks base method\nfunc (m *MockCommonAPIClient) VolumesPrune(ctx context.Context, pruneFilter filters.Args) (types.VolumesPruneReport, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumesPrune\", ctx, pruneFilter)\n\tret0, _ := ret[0].(types.VolumesPruneReport)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumesPrune indicates an expected call of VolumesPrune\nfunc (mr *MockCommonAPIClientMockRecorder) VolumesPrune(ctx, pruneFilter interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumesPrune\", reflect.TypeOf((*MockCommonAPIClient)(nil).VolumesPrune), ctx, pruneFilter)\n}\n\n// ClientVersion mocks base method\nfunc (m *MockCommonAPIClient) ClientVersion() string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ClientVersion\")\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// ClientVersion indicates an expected call of ClientVersion\nfunc (mr *MockCommonAPIClientMockRecorder) ClientVersion() *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ClientVersion\", reflect.TypeOf((*MockCommonAPIClient)(nil).ClientVersion))\n}\n\n// DaemonHost mocks base method\nfunc (m *MockCommonAPIClient) DaemonHost() string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DaemonHost\")\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// DaemonHost indicates an expected call of DaemonHost\nfunc (mr *MockCommonAPIClientMockRecorder) DaemonHost() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DaemonHost\", reflect.TypeOf((*MockCommonAPIClient)(nil).DaemonHost))\n}\n\n// HTTPClient mocks base method\nfunc (m *MockCommonAPIClient) HTTPClient() *http.Client {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"HTTPClient\")\n\tret0, _ := ret[0].(*http.Client)\n\treturn ret0\n}\n\n// HTTPClient indicates an expected call of HTTPClient\nfunc (mr *MockCommonAPIClientMockRecorder) HTTPClient() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"HTTPClient\", reflect.TypeOf((*MockCommonAPIClient)(nil).HTTPClient))\n}\n\n// ServerVersion mocks base method\nfunc (m *MockCommonAPIClient) ServerVersion(ctx context.Context) (types.Version, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServerVersion\", ctx)\n\tret0, _ := ret[0].(types.Version)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServerVersion indicates an expected call of ServerVersion\nfunc (mr *MockCommonAPIClientMockRecorder) ServerVersion(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServerVersion\", reflect.TypeOf((*MockCommonAPIClient)(nil).ServerVersion), ctx)\n}\n\n// NegotiateAPIVersion mocks base method\nfunc (m *MockCommonAPIClient) NegotiateAPIVersion(ctx context.Context) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"NegotiateAPIVersion\", ctx)\n}\n\n// NegotiateAPIVersion indicates an 
expected call of NegotiateAPIVersion\nfunc (mr *MockCommonAPIClientMockRecorder) NegotiateAPIVersion(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NegotiateAPIVersion\", reflect.TypeOf((*MockCommonAPIClient)(nil).NegotiateAPIVersion), ctx)\n}\n\n// NegotiateAPIVersionPing mocks base method\nfunc (m *MockCommonAPIClient) NegotiateAPIVersionPing(arg0 types.Ping) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"NegotiateAPIVersionPing\", arg0)\n}\n\n// NegotiateAPIVersionPing indicates an expected call of NegotiateAPIVersionPing\nfunc (mr *MockCommonAPIClientMockRecorder) NegotiateAPIVersionPing(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NegotiateAPIVersionPing\", reflect.TypeOf((*MockCommonAPIClient)(nil).NegotiateAPIVersionPing), arg0)\n}\n\n// DialHijack mocks base method\nfunc (m *MockCommonAPIClient) DialHijack(ctx context.Context, url, proto string, meta map[string][]string) (net.Conn, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DialHijack\", ctx, url, proto, meta)\n\tret0, _ := ret[0].(net.Conn)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// DialHijack indicates an expected call of DialHijack\nfunc (mr *MockCommonAPIClientMockRecorder) DialHijack(ctx, url, proto, meta interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DialHijack\", reflect.TypeOf((*MockCommonAPIClient)(nil).DialHijack), ctx, url, proto, meta)\n}\n\n// Dialer mocks base method\nfunc (m *MockCommonAPIClient) Dialer() func(context.Context) (net.Conn, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Dialer\")\n\tret0, _ := ret[0].(func(context.Context) (net.Conn, error))\n\treturn ret0\n}\n\n// Dialer indicates an expected call of Dialer\nfunc (mr *MockCommonAPIClientMockRecorder) Dialer() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Dialer\", reflect.TypeOf((*MockCommonAPIClient)(nil).Dialer))\n}\n\n// Close mocks base method\nfunc (m *MockCommonAPIClient) Close() error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Close\")\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Close indicates an expected call of Close\nfunc (mr *MockCommonAPIClientMockRecorder) Close() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Close\", reflect.TypeOf((*MockCommonAPIClient)(nil).Close))\n}\n\n// MockContainerAPIClient is a mock of ContainerAPIClient interface\ntype MockContainerAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockContainerAPIClientMockRecorder\n}\n\n// MockContainerAPIClientMockRecorder is the mock recorder for MockContainerAPIClient\ntype MockContainerAPIClientMockRecorder struct {\n\tmock *MockContainerAPIClient\n}\n\n// NewMockContainerAPIClient creates a new mock instance\nfunc NewMockContainerAPIClient(ctrl *gomock.Controller) *MockContainerAPIClient {\n\tmock := &MockContainerAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockContainerAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockContainerAPIClient) EXPECT() *MockContainerAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// ContainerAttach mocks base method\nfunc (m *MockContainerAPIClient) ContainerAttach(ctx context.Context, container string, options types.ContainerAttachOptions) (types.HijackedResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerAttach\", ctx, container, options)\n\tret0, _ := ret[0].(types.HijackedResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerAttach indicates an expected call of ContainerAttach\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerAttach(ctx, container, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerAttach\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerAttach), ctx, container, options)\n}\n\n// ContainerCommit mocks base method\nfunc (m *MockContainerAPIClient) ContainerCommit(ctx context.Context, container string, options types.ContainerCommitOptions) (types.IDResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerCommit\", ctx, container, options)\n\tret0, _ := ret[0].(types.IDResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerCommit indicates an expected call of ContainerCommit\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerCommit(ctx, container, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerCommit\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerCommit), ctx, container, options)\n}\n\n// ContainerCreate mocks base method\nfunc (m *MockContainerAPIClient) ContainerCreate(ctx context.Context, config *containerpkg.Config, hostConfig *containerpkg.HostConfig, networkingConfig *network.NetworkingConfig, containerName string) (containerpkg.ContainerCreateCreatedBody, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerCreate\", ctx, config, hostConfig, networkingConfig, containerName)\n\tret0, _ := ret[0].(containerpkg.ContainerCreateCreatedBody)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerCreate indicates an expected call of ContainerCreate\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerCreate(ctx, config, hostConfig, networkingConfig, containerName interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerCreate\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerCreate), ctx, config, hostConfig, networkingConfig, containerName)\n}\n\n// ContainerDiff mocks base method\nfunc (m *MockContainerAPIClient) ContainerDiff(ctx 
context.Context, container string) ([]containerpkg.ContainerChangeResponseItem, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerDiff\", ctx, container)\n\tret0, _ := ret[0].([]containerpkg.ContainerChangeResponseItem)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerDiff indicates an expected call of ContainerDiff\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerDiff(ctx, container interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerDiff\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerDiff), ctx, container)\n}\n\n// ContainerExecAttach mocks base method\nfunc (m *MockContainerAPIClient) ContainerExecAttach(ctx context.Context, execID string, config types.ExecStartCheck) (types.HijackedResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExecAttach\", ctx, execID, config)\n\tret0, _ := ret[0].(types.HijackedResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerExecAttach indicates an expected call of ContainerExecAttach\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerExecAttach(ctx, execID, config interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExecAttach\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerExecAttach), ctx, execID, config)\n}\n\n// ContainerExecCreate mocks base method\nfunc (m *MockContainerAPIClient) ContainerExecCreate(ctx context.Context, container string, config types.ExecConfig) (types.IDResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExecCreate\", ctx, container, config)\n\tret0, _ := ret[0].(types.IDResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerExecCreate indicates an expected call of ContainerExecCreate\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerExecCreate(ctx, container, config interface{}) *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExecCreate\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerExecCreate), ctx, container, config)\n}\n\n// ContainerExecInspect mocks base method\nfunc (m *MockContainerAPIClient) ContainerExecInspect(ctx context.Context, execID string) (types.ContainerExecInspect, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExecInspect\", ctx, execID)\n\tret0, _ := ret[0].(types.ContainerExecInspect)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerExecInspect indicates an expected call of ContainerExecInspect\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerExecInspect(ctx, execID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExecInspect\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerExecInspect), ctx, execID)\n}\n\n// ContainerExecResize mocks base method\nfunc (m *MockContainerAPIClient) ContainerExecResize(ctx context.Context, execID string, options types.ResizeOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExecResize\", ctx, execID, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerExecResize indicates an expected call of ContainerExecResize\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerExecResize(ctx, execID, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExecResize\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerExecResize), ctx, execID, options)\n}\n\n// ContainerExecStart mocks base method\nfunc (m *MockContainerAPIClient) ContainerExecStart(ctx context.Context, execID string, config types.ExecStartCheck) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExecStart\", ctx, execID, config)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerExecStart indicates an 
expected call of ContainerExecStart\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerExecStart(ctx, execID, config interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExecStart\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerExecStart), ctx, execID, config)\n}\n\n// ContainerExport mocks base method\nfunc (m *MockContainerAPIClient) ContainerExport(ctx context.Context, container string) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerExport\", ctx, container)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerExport indicates an expected call of ContainerExport\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerExport(ctx, container interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerExport\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerExport), ctx, container)\n}\n\n// ContainerInspect mocks base method\nfunc (m *MockContainerAPIClient) ContainerInspect(ctx context.Context, container string) (types.ContainerJSON, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerInspect\", ctx, container)\n\tret0, _ := ret[0].(types.ContainerJSON)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerInspect indicates an expected call of ContainerInspect\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerInspect(ctx, container interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerInspect\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerInspect), ctx, container)\n}\n\n// ContainerInspectWithRaw mocks base method\nfunc (m *MockContainerAPIClient) ContainerInspectWithRaw(ctx context.Context, container string, getSize bool) (types.ContainerJSON, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, 
\"ContainerInspectWithRaw\", ctx, container, getSize)\n\tret0, _ := ret[0].(types.ContainerJSON)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// ContainerInspectWithRaw indicates an expected call of ContainerInspectWithRaw\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerInspectWithRaw(ctx, container, getSize interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerInspectWithRaw\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerInspectWithRaw), ctx, container, getSize)\n}\n\n// ContainerKill mocks base method\nfunc (m *MockContainerAPIClient) ContainerKill(ctx context.Context, container, signal string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerKill\", ctx, container, signal)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerKill indicates an expected call of ContainerKill\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerKill(ctx, container, signal interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerKill\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerKill), ctx, container, signal)\n}\n\n// ContainerList mocks base method\nfunc (m *MockContainerAPIClient) ContainerList(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerList\", ctx, options)\n\tret0, _ := ret[0].([]types.Container)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerList indicates an expected call of ContainerList\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerList\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerList), ctx, options)\n}\n\n// ContainerLogs mocks base method\nfunc (m 
*MockContainerAPIClient) ContainerLogs(ctx context.Context, container string, options types.ContainerLogsOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerLogs\", ctx, container, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerLogs indicates an expected call of ContainerLogs\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerLogs(ctx, container, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerLogs\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerLogs), ctx, container, options)\n}\n\n// ContainerPause mocks base method\nfunc (m *MockContainerAPIClient) ContainerPause(ctx context.Context, container string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerPause\", ctx, container)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerPause indicates an expected call of ContainerPause\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerPause(ctx, container interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerPause\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerPause), ctx, container)\n}\n\n// ContainerRemove mocks base method\nfunc (m *MockContainerAPIClient) ContainerRemove(ctx context.Context, container string, options types.ContainerRemoveOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerRemove\", ctx, container, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerRemove indicates an expected call of ContainerRemove\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerRemove(ctx, container, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerRemove\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerRemove), ctx, container, 
options)\n}\n\n// ContainerRename mocks base method\nfunc (m *MockContainerAPIClient) ContainerRename(ctx context.Context, container, newContainerName string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerRename\", ctx, container, newContainerName)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerRename indicates an expected call of ContainerRename\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerRename(ctx, container, newContainerName interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerRename\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerRename), ctx, container, newContainerName)\n}\n\n// ContainerResize mocks base method\nfunc (m *MockContainerAPIClient) ContainerResize(ctx context.Context, container string, options types.ResizeOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerResize\", ctx, container, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerResize indicates an expected call of ContainerResize\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerResize(ctx, container, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerResize\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerResize), ctx, container, options)\n}\n\n// ContainerRestart mocks base method\nfunc (m *MockContainerAPIClient) ContainerRestart(ctx context.Context, container string, timeout *time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerRestart\", ctx, container, timeout)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerRestart indicates an expected call of ContainerRestart\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerRestart(ctx, container, timeout interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerRestart\", 
reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerRestart), ctx, container, timeout)\n}\n\n// ContainerStatPath mocks base method\nfunc (m *MockContainerAPIClient) ContainerStatPath(ctx context.Context, container, path string) (types.ContainerPathStat, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStatPath\", ctx, container, path)\n\tret0, _ := ret[0].(types.ContainerPathStat)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerStatPath indicates an expected call of ContainerStatPath\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerStatPath(ctx, container, path interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStatPath\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerStatPath), ctx, container, path)\n}\n\n// ContainerStats mocks base method\nfunc (m *MockContainerAPIClient) ContainerStats(ctx context.Context, container string, stream bool) (types.ContainerStats, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStats\", ctx, container, stream)\n\tret0, _ := ret[0].(types.ContainerStats)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerStats indicates an expected call of ContainerStats\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerStats(ctx, container, stream interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStats\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerStats), ctx, container, stream)\n}\n\n// ContainerStart mocks base method\nfunc (m *MockContainerAPIClient) ContainerStart(ctx context.Context, container string, options types.ContainerStartOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStart\", ctx, container, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerStart indicates an expected call of ContainerStart\nfunc (mr *MockContainerAPIClientMockRecorder) 
ContainerStart(ctx, container, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStart\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerStart), ctx, container, options)\n}\n\n// ContainerStop mocks base method\nfunc (m *MockContainerAPIClient) ContainerStop(ctx context.Context, container string, timeout *time.Duration) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStop\", ctx, container, timeout)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerStop indicates an expected call of ContainerStop\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerStop(ctx, container, timeout interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStop\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerStop), ctx, container, timeout)\n}\n\n// ContainerTop mocks base method\nfunc (m *MockContainerAPIClient) ContainerTop(ctx context.Context, container string, arguments []string) (containerpkg.ContainerTopOKBody, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerTop\", ctx, container, arguments)\n\tret0, _ := ret[0].(containerpkg.ContainerTopOKBody)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerTop indicates an expected call of ContainerTop\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerTop(ctx, container, arguments interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerTop\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerTop), ctx, container, arguments)\n}\n\n// ContainerUnpause mocks base method\nfunc (m *MockContainerAPIClient) ContainerUnpause(ctx context.Context, container string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerUnpause\", ctx, container)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ContainerUnpause indicates an expected call 
of ContainerUnpause\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerUnpause(ctx, container interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerUnpause\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerUnpause), ctx, container)\n}\n\n// ContainerUpdate mocks base method\nfunc (m *MockContainerAPIClient) ContainerUpdate(ctx context.Context, container string, updateConfig containerpkg.UpdateConfig) (containerpkg.ContainerUpdateOKBody, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerUpdate\", ctx, container, updateConfig)\n\tret0, _ := ret[0].(containerpkg.ContainerUpdateOKBody)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerUpdate indicates an expected call of ContainerUpdate\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerUpdate(ctx, container, updateConfig interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerUpdate\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerUpdate), ctx, container, updateConfig)\n}\n\n// ContainerWait mocks base method\nfunc (m *MockContainerAPIClient) ContainerWait(ctx context.Context, container string, condition containerpkg.WaitCondition) (<-chan containerpkg.ContainerWaitOKBody, <-chan error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerWait\", ctx, container, condition)\n\tret0, _ := ret[0].(<-chan containerpkg.ContainerWaitOKBody)\n\tret1, _ := ret[1].(<-chan error)\n\treturn ret0, ret1\n}\n\n// ContainerWait indicates an expected call of ContainerWait\nfunc (mr *MockContainerAPIClientMockRecorder) ContainerWait(ctx, container, condition interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerWait\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainerWait), ctx, container, condition)\n}\n\n// CopyFromContainer mocks base method\nfunc (m 
*MockContainerAPIClient) CopyFromContainer(ctx context.Context, container, srcPath string) (io.ReadCloser, types.ContainerPathStat, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CopyFromContainer\", ctx, container, srcPath)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(types.ContainerPathStat)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// CopyFromContainer indicates an expected call of CopyFromContainer\nfunc (mr *MockContainerAPIClientMockRecorder) CopyFromContainer(ctx, container, srcPath interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CopyFromContainer\", reflect.TypeOf((*MockContainerAPIClient)(nil).CopyFromContainer), ctx, container, srcPath)\n}\n\n// CopyToContainer mocks base method\nfunc (m *MockContainerAPIClient) CopyToContainer(ctx context.Context, container, path string, content io.Reader, options types.CopyToContainerOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CopyToContainer\", ctx, container, path, content, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// CopyToContainer indicates an expected call of CopyToContainer\nfunc (mr *MockContainerAPIClientMockRecorder) CopyToContainer(ctx, container, path, content, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CopyToContainer\", reflect.TypeOf((*MockContainerAPIClient)(nil).CopyToContainer), ctx, container, path, content, options)\n}\n\n// ContainersPrune mocks base method\nfunc (m *MockContainerAPIClient) ContainersPrune(ctx context.Context, pruneFilters filters.Args) (types.ContainersPruneReport, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainersPrune\", ctx, pruneFilters)\n\tret0, _ := ret[0].(types.ContainersPruneReport)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainersPrune indicates an expected call of ContainersPrune\nfunc (mr 
*MockContainerAPIClientMockRecorder) ContainersPrune(ctx, pruneFilters interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainersPrune\", reflect.TypeOf((*MockContainerAPIClient)(nil).ContainersPrune), ctx, pruneFilters)\n}\n\n// MockDistributionAPIClient is a mock of DistributionAPIClient interface\ntype MockDistributionAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockDistributionAPIClientMockRecorder\n}\n\n// MockDistributionAPIClientMockRecorder is the mock recorder for MockDistributionAPIClient\ntype MockDistributionAPIClientMockRecorder struct {\n\tmock *MockDistributionAPIClient\n}\n\n// NewMockDistributionAPIClient creates a new mock instance\nfunc NewMockDistributionAPIClient(ctrl *gomock.Controller) *MockDistributionAPIClient {\n\tmock := &MockDistributionAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockDistributionAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockDistributionAPIClient) EXPECT() *MockDistributionAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// DistributionInspect mocks base method\nfunc (m *MockDistributionAPIClient) DistributionInspect(ctx context.Context, image, encodedRegistryAuth string) (registry.DistributionInspect, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DistributionInspect\", ctx, image, encodedRegistryAuth)\n\tret0, _ := ret[0].(registry.DistributionInspect)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// DistributionInspect indicates an expected call of DistributionInspect\nfunc (mr *MockDistributionAPIClientMockRecorder) DistributionInspect(ctx, image, encodedRegistryAuth interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DistributionInspect\", reflect.TypeOf((*MockDistributionAPIClient)(nil).DistributionInspect), ctx, image, encodedRegistryAuth)\n}\n\n// 
MockImageAPIClient is a mock of ImageAPIClient interface\ntype MockImageAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockImageAPIClientMockRecorder\n}\n\n// MockImageAPIClientMockRecorder is the mock recorder for MockImageAPIClient\ntype MockImageAPIClientMockRecorder struct {\n\tmock *MockImageAPIClient\n}\n\n// NewMockImageAPIClient creates a new mock instance\nfunc NewMockImageAPIClient(ctrl *gomock.Controller) *MockImageAPIClient {\n\tmock := &MockImageAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockImageAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockImageAPIClient) EXPECT() *MockImageAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// ImageBuild mocks base method\nfunc (m *MockImageAPIClient) ImageBuild(ctx context.Context, context io.Reader, options types.ImageBuildOptions) (types.ImageBuildResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageBuild\", ctx, context, options)\n\tret0, _ := ret[0].(types.ImageBuildResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageBuild indicates an expected call of ImageBuild\nfunc (mr *MockImageAPIClientMockRecorder) ImageBuild(ctx, context, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageBuild\", reflect.TypeOf((*MockImageAPIClient)(nil).ImageBuild), ctx, context, options)\n}\n\n// BuildCachePrune mocks base method\nfunc (m *MockImageAPIClient) BuildCachePrune(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"BuildCachePrune\", ctx, opts)\n\tret0, _ := ret[0].(*types.BuildCachePruneReport)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// BuildCachePrune indicates an expected call of BuildCachePrune\nfunc (mr *MockImageAPIClientMockRecorder) BuildCachePrune(ctx, opts interface{}) *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"BuildCachePrune\", reflect.TypeOf((*MockImageAPIClient)(nil).BuildCachePrune), ctx, opts)\n}\n\n// BuildCancel mocks base method\nfunc (m *MockImageAPIClient) BuildCancel(ctx context.Context, id string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"BuildCancel\", ctx, id)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// BuildCancel indicates an expected call of BuildCancel\nfunc (mr *MockImageAPIClientMockRecorder) BuildCancel(ctx, id interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"BuildCancel\", reflect.TypeOf((*MockImageAPIClient)(nil).BuildCancel), ctx, id)\n}\n\n// ImageCreate mocks base method\nfunc (m *MockImageAPIClient) ImageCreate(ctx context.Context, parentReference string, options types.ImageCreateOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageCreate\", ctx, parentReference, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageCreate indicates an expected call of ImageCreate\nfunc (mr *MockImageAPIClientMockRecorder) ImageCreate(ctx, parentReference, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageCreate\", reflect.TypeOf((*MockImageAPIClient)(nil).ImageCreate), ctx, parentReference, options)\n}\n\n// ImageHistory mocks base method\nfunc (m *MockImageAPIClient) ImageHistory(ctx context.Context, image string) ([]imagepkg.HistoryResponseItem, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageHistory\", ctx, image)\n\tret0, _ := ret[0].([]imagepkg.HistoryResponseItem)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageHistory indicates an expected call of ImageHistory\nfunc (mr *MockImageAPIClientMockRecorder) ImageHistory(ctx, image interface{}) *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageHistory\", reflect.TypeOf((*MockImageAPIClient)(nil).ImageHistory), ctx, image)\n}\n\n// ImageImport mocks base method\nfunc (m *MockImageAPIClient) ImageImport(ctx context.Context, source types.ImageImportSource, ref string, options types.ImageImportOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageImport\", ctx, source, ref, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageImport indicates an expected call of ImageImport\nfunc (mr *MockImageAPIClientMockRecorder) ImageImport(ctx, source, ref, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageImport\", reflect.TypeOf((*MockImageAPIClient)(nil).ImageImport), ctx, source, ref, options)\n}\n\n// ImageInspectWithRaw mocks base method\nfunc (m *MockImageAPIClient) ImageInspectWithRaw(ctx context.Context, image string) (types.ImageInspect, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageInspectWithRaw\", ctx, image)\n\tret0, _ := ret[0].(types.ImageInspect)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// ImageInspectWithRaw indicates an expected call of ImageInspectWithRaw\nfunc (mr *MockImageAPIClientMockRecorder) ImageInspectWithRaw(ctx, image interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageInspectWithRaw\", reflect.TypeOf((*MockImageAPIClient)(nil).ImageInspectWithRaw), ctx, image)\n}\n\n// ImageList mocks base method\nfunc (m *MockImageAPIClient) ImageList(ctx context.Context, options types.ImageListOptions) ([]types.ImageSummary, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageList\", ctx, options)\n\tret0, _ := ret[0].([]types.ImageSummary)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// 
ImageList indicates an expected call of ImageList\nfunc (mr *MockImageAPIClientMockRecorder) ImageList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageList\", reflect.TypeOf((*MockImageAPIClient)(nil).ImageList), ctx, options)\n}\n\n// ImageLoad mocks base method\nfunc (m *MockImageAPIClient) ImageLoad(ctx context.Context, input io.Reader, quiet bool) (types.ImageLoadResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageLoad\", ctx, input, quiet)\n\tret0, _ := ret[0].(types.ImageLoadResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageLoad indicates an expected call of ImageLoad\nfunc (mr *MockImageAPIClientMockRecorder) ImageLoad(ctx, input, quiet interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageLoad\", reflect.TypeOf((*MockImageAPIClient)(nil).ImageLoad), ctx, input, quiet)\n}\n\n// ImagePull mocks base method\nfunc (m *MockImageAPIClient) ImagePull(ctx context.Context, ref string, options types.ImagePullOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImagePull\", ctx, ref, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImagePull indicates an expected call of ImagePull\nfunc (mr *MockImageAPIClientMockRecorder) ImagePull(ctx, ref, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImagePull\", reflect.TypeOf((*MockImageAPIClient)(nil).ImagePull), ctx, ref, options)\n}\n\n// ImagePush mocks base method\nfunc (m *MockImageAPIClient) ImagePush(ctx context.Context, ref string, options types.ImagePushOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImagePush\", ctx, ref, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImagePush 
indicates an expected call of ImagePush\nfunc (mr *MockImageAPIClientMockRecorder) ImagePush(ctx, ref, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImagePush\", reflect.TypeOf((*MockImageAPIClient)(nil).ImagePush), ctx, ref, options)\n}\n\n// ImageRemove mocks base method\nfunc (m *MockImageAPIClient) ImageRemove(ctx context.Context, image string, options types.ImageRemoveOptions) ([]types.ImageDeleteResponseItem, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageRemove\", ctx, image, options)\n\tret0, _ := ret[0].([]types.ImageDeleteResponseItem)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageRemove indicates an expected call of ImageRemove\nfunc (mr *MockImageAPIClientMockRecorder) ImageRemove(ctx, image, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageRemove\", reflect.TypeOf((*MockImageAPIClient)(nil).ImageRemove), ctx, image, options)\n}\n\n// ImageSearch mocks base method\nfunc (m *MockImageAPIClient) ImageSearch(ctx context.Context, term string, options types.ImageSearchOptions) ([]registry.SearchResult, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageSearch\", ctx, term, options)\n\tret0, _ := ret[0].([]registry.SearchResult)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageSearch indicates an expected call of ImageSearch\nfunc (mr *MockImageAPIClientMockRecorder) ImageSearch(ctx, term, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageSearch\", reflect.TypeOf((*MockImageAPIClient)(nil).ImageSearch), ctx, term, options)\n}\n\n// ImageSave mocks base method\nfunc (m *MockImageAPIClient) ImageSave(ctx context.Context, images []string) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageSave\", ctx, images)\n\tret0, _ := 
ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImageSave indicates an expected call of ImageSave\nfunc (mr *MockImageAPIClientMockRecorder) ImageSave(ctx, images interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageSave\", reflect.TypeOf((*MockImageAPIClient)(nil).ImageSave), ctx, images)\n}\n\n// ImageTag mocks base method\nfunc (m *MockImageAPIClient) ImageTag(ctx context.Context, image, ref string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImageTag\", ctx, image, ref)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ImageTag indicates an expected call of ImageTag\nfunc (mr *MockImageAPIClientMockRecorder) ImageTag(ctx, image, ref interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImageTag\", reflect.TypeOf((*MockImageAPIClient)(nil).ImageTag), ctx, image, ref)\n}\n\n// ImagesPrune mocks base method\nfunc (m *MockImageAPIClient) ImagesPrune(ctx context.Context, pruneFilter filters.Args) (types.ImagesPruneReport, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ImagesPrune\", ctx, pruneFilter)\n\tret0, _ := ret[0].(types.ImagesPruneReport)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ImagesPrune indicates an expected call of ImagesPrune\nfunc (mr *MockImageAPIClientMockRecorder) ImagesPrune(ctx, pruneFilter interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ImagesPrune\", reflect.TypeOf((*MockImageAPIClient)(nil).ImagesPrune), ctx, pruneFilter)\n}\n\n// MockNetworkAPIClient is a mock of NetworkAPIClient interface\ntype MockNetworkAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockNetworkAPIClientMockRecorder\n}\n\n// MockNetworkAPIClientMockRecorder is the mock recorder for MockNetworkAPIClient\ntype MockNetworkAPIClientMockRecorder struct {\n\tmock *MockNetworkAPIClient\n}\n\n// 
NewMockNetworkAPIClient creates a new mock instance\nfunc NewMockNetworkAPIClient(ctrl *gomock.Controller) *MockNetworkAPIClient {\n\tmock := &MockNetworkAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockNetworkAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockNetworkAPIClient) EXPECT() *MockNetworkAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// NetworkConnect mocks base method\nfunc (m *MockNetworkAPIClient) NetworkConnect(ctx context.Context, network, container string, config *network.EndpointSettings) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkConnect\", ctx, network, container, config)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// NetworkConnect indicates an expected call of NetworkConnect\nfunc (mr *MockNetworkAPIClientMockRecorder) NetworkConnect(ctx, network, container, config interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkConnect\", reflect.TypeOf((*MockNetworkAPIClient)(nil).NetworkConnect), ctx, network, container, config)\n}\n\n// NetworkCreate mocks base method\nfunc (m *MockNetworkAPIClient) NetworkCreate(ctx context.Context, name string, options types.NetworkCreate) (types.NetworkCreateResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkCreate\", ctx, name, options)\n\tret0, _ := ret[0].(types.NetworkCreateResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkCreate indicates an expected call of NetworkCreate\nfunc (mr *MockNetworkAPIClientMockRecorder) NetworkCreate(ctx, name, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkCreate\", reflect.TypeOf((*MockNetworkAPIClient)(nil).NetworkCreate), ctx, name, options)\n}\n\n// NetworkDisconnect mocks base method\nfunc (m *MockNetworkAPIClient) NetworkDisconnect(ctx context.Context, network, 
container string, force bool) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkDisconnect\", ctx, network, container, force)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// NetworkDisconnect indicates an expected call of NetworkDisconnect\nfunc (mr *MockNetworkAPIClientMockRecorder) NetworkDisconnect(ctx, network, container, force interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkDisconnect\", reflect.TypeOf((*MockNetworkAPIClient)(nil).NetworkDisconnect), ctx, network, container, force)\n}\n\n// NetworkInspect mocks base method\nfunc (m *MockNetworkAPIClient) NetworkInspect(ctx context.Context, network string, options types.NetworkInspectOptions) (types.NetworkResource, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkInspect\", ctx, network, options)\n\tret0, _ := ret[0].(types.NetworkResource)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkInspect indicates an expected call of NetworkInspect\nfunc (mr *MockNetworkAPIClientMockRecorder) NetworkInspect(ctx, network, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkInspect\", reflect.TypeOf((*MockNetworkAPIClient)(nil).NetworkInspect), ctx, network, options)\n}\n\n// NetworkInspectWithRaw mocks base method\nfunc (m *MockNetworkAPIClient) NetworkInspectWithRaw(ctx context.Context, network string, options types.NetworkInspectOptions) (types.NetworkResource, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkInspectWithRaw\", ctx, network, options)\n\tret0, _ := ret[0].(types.NetworkResource)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// NetworkInspectWithRaw indicates an expected call of NetworkInspectWithRaw\nfunc (mr *MockNetworkAPIClientMockRecorder) NetworkInspectWithRaw(ctx, network, options interface{}) *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkInspectWithRaw\", reflect.TypeOf((*MockNetworkAPIClient)(nil).NetworkInspectWithRaw), ctx, network, options)\n}\n\n// NetworkList mocks base method\nfunc (m *MockNetworkAPIClient) NetworkList(ctx context.Context, options types.NetworkListOptions) ([]types.NetworkResource, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkList\", ctx, options)\n\tret0, _ := ret[0].([]types.NetworkResource)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworkList indicates an expected call of NetworkList\nfunc (mr *MockNetworkAPIClientMockRecorder) NetworkList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkList\", reflect.TypeOf((*MockNetworkAPIClient)(nil).NetworkList), ctx, options)\n}\n\n// NetworkRemove mocks base method\nfunc (m *MockNetworkAPIClient) NetworkRemove(ctx context.Context, network string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworkRemove\", ctx, network)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// NetworkRemove indicates an expected call of NetworkRemove\nfunc (mr *MockNetworkAPIClientMockRecorder) NetworkRemove(ctx, network interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworkRemove\", reflect.TypeOf((*MockNetworkAPIClient)(nil).NetworkRemove), ctx, network)\n}\n\n// NetworksPrune mocks base method\nfunc (m *MockNetworkAPIClient) NetworksPrune(ctx context.Context, pruneFilter filters.Args) (types.NetworksPruneReport, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NetworksPrune\", ctx, pruneFilter)\n\tret0, _ := ret[0].(types.NetworksPruneReport)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NetworksPrune indicates an expected call of NetworksPrune\nfunc (mr *MockNetworkAPIClientMockRecorder) NetworksPrune(ctx, pruneFilter interface{}) 
*gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NetworksPrune\", reflect.TypeOf((*MockNetworkAPIClient)(nil).NetworksPrune), ctx, pruneFilter)\n}\n\n// MockNodeAPIClient is a mock of NodeAPIClient interface\ntype MockNodeAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockNodeAPIClientMockRecorder\n}\n\n// MockNodeAPIClientMockRecorder is the mock recorder for MockNodeAPIClient\ntype MockNodeAPIClientMockRecorder struct {\n\tmock *MockNodeAPIClient\n}\n\n// NewMockNodeAPIClient creates a new mock instance\nfunc NewMockNodeAPIClient(ctrl *gomock.Controller) *MockNodeAPIClient {\n\tmock := &MockNodeAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockNodeAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockNodeAPIClient) EXPECT() *MockNodeAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// NodeInspectWithRaw mocks base method\nfunc (m *MockNodeAPIClient) NodeInspectWithRaw(ctx context.Context, nodeID string) (swarm.Node, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NodeInspectWithRaw\", ctx, nodeID)\n\tret0, _ := ret[0].(swarm.Node)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// NodeInspectWithRaw indicates an expected call of NodeInspectWithRaw\nfunc (mr *MockNodeAPIClientMockRecorder) NodeInspectWithRaw(ctx, nodeID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NodeInspectWithRaw\", reflect.TypeOf((*MockNodeAPIClient)(nil).NodeInspectWithRaw), ctx, nodeID)\n}\n\n// NodeList mocks base method\nfunc (m *MockNodeAPIClient) NodeList(ctx context.Context, options types.NodeListOptions) ([]swarm.Node, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NodeList\", ctx, options)\n\tret0, _ := ret[0].([]swarm.Node)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// NodeList 
indicates an expected call of NodeList\nfunc (mr *MockNodeAPIClientMockRecorder) NodeList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NodeList\", reflect.TypeOf((*MockNodeAPIClient)(nil).NodeList), ctx, options)\n}\n\n// NodeRemove mocks base method\nfunc (m *MockNodeAPIClient) NodeRemove(ctx context.Context, nodeID string, options types.NodeRemoveOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NodeRemove\", ctx, nodeID, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// NodeRemove indicates an expected call of NodeRemove\nfunc (mr *MockNodeAPIClientMockRecorder) NodeRemove(ctx, nodeID, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NodeRemove\", reflect.TypeOf((*MockNodeAPIClient)(nil).NodeRemove), ctx, nodeID, options)\n}\n\n// NodeUpdate mocks base method\nfunc (m *MockNodeAPIClient) NodeUpdate(ctx context.Context, nodeID string, version swarm.Version, node swarm.NodeSpec) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NodeUpdate\", ctx, nodeID, version, node)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// NodeUpdate indicates an expected call of NodeUpdate\nfunc (mr *MockNodeAPIClientMockRecorder) NodeUpdate(ctx, nodeID, version, node interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NodeUpdate\", reflect.TypeOf((*MockNodeAPIClient)(nil).NodeUpdate), ctx, nodeID, version, node)\n}\n\n// MockPluginAPIClient is a mock of PluginAPIClient interface\ntype MockPluginAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockPluginAPIClientMockRecorder\n}\n\n// MockPluginAPIClientMockRecorder is the mock recorder for MockPluginAPIClient\ntype MockPluginAPIClientMockRecorder struct {\n\tmock *MockPluginAPIClient\n}\n\n// NewMockPluginAPIClient creates a new mock instance\nfunc 
NewMockPluginAPIClient(ctrl *gomock.Controller) *MockPluginAPIClient {\n\tmock := &MockPluginAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockPluginAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockPluginAPIClient) EXPECT() *MockPluginAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// PluginList mocks base method\nfunc (m *MockPluginAPIClient) PluginList(ctx context.Context, filter filters.Args) (types.PluginsListResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginList\", ctx, filter)\n\tret0, _ := ret[0].(types.PluginsListResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginList indicates an expected call of PluginList\nfunc (mr *MockPluginAPIClientMockRecorder) PluginList(ctx, filter interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginList\", reflect.TypeOf((*MockPluginAPIClient)(nil).PluginList), ctx, filter)\n}\n\n// PluginRemove mocks base method\nfunc (m *MockPluginAPIClient) PluginRemove(ctx context.Context, name string, options types.PluginRemoveOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginRemove\", ctx, name, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// PluginRemove indicates an expected call of PluginRemove\nfunc (mr *MockPluginAPIClientMockRecorder) PluginRemove(ctx, name, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginRemove\", reflect.TypeOf((*MockPluginAPIClient)(nil).PluginRemove), ctx, name, options)\n}\n\n// PluginEnable mocks base method\nfunc (m *MockPluginAPIClient) PluginEnable(ctx context.Context, name string, options types.PluginEnableOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginEnable\", ctx, name, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// PluginEnable indicates an expected call 
of PluginEnable\nfunc (mr *MockPluginAPIClientMockRecorder) PluginEnable(ctx, name, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginEnable\", reflect.TypeOf((*MockPluginAPIClient)(nil).PluginEnable), ctx, name, options)\n}\n\n// PluginDisable mocks base method\nfunc (m *MockPluginAPIClient) PluginDisable(ctx context.Context, name string, options types.PluginDisableOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginDisable\", ctx, name, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// PluginDisable indicates an expected call of PluginDisable\nfunc (mr *MockPluginAPIClientMockRecorder) PluginDisable(ctx, name, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginDisable\", reflect.TypeOf((*MockPluginAPIClient)(nil).PluginDisable), ctx, name, options)\n}\n\n// PluginInstall mocks base method\nfunc (m *MockPluginAPIClient) PluginInstall(ctx context.Context, name string, options types.PluginInstallOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginInstall\", ctx, name, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginInstall indicates an expected call of PluginInstall\nfunc (mr *MockPluginAPIClientMockRecorder) PluginInstall(ctx, name, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginInstall\", reflect.TypeOf((*MockPluginAPIClient)(nil).PluginInstall), ctx, name, options)\n}\n\n// PluginUpgrade mocks base method\nfunc (m *MockPluginAPIClient) PluginUpgrade(ctx context.Context, name string, options types.PluginInstallOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginUpgrade\", ctx, name, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn 
ret0, ret1\n}\n\n// PluginUpgrade indicates an expected call of PluginUpgrade\nfunc (mr *MockPluginAPIClientMockRecorder) PluginUpgrade(ctx, name, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginUpgrade\", reflect.TypeOf((*MockPluginAPIClient)(nil).PluginUpgrade), ctx, name, options)\n}\n\n// PluginPush mocks base method\nfunc (m *MockPluginAPIClient) PluginPush(ctx context.Context, name, registryAuth string) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginPush\", ctx, name, registryAuth)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PluginPush indicates an expected call of PluginPush\nfunc (mr *MockPluginAPIClientMockRecorder) PluginPush(ctx, name, registryAuth interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginPush\", reflect.TypeOf((*MockPluginAPIClient)(nil).PluginPush), ctx, name, registryAuth)\n}\n\n// PluginSet mocks base method\nfunc (m *MockPluginAPIClient) PluginSet(ctx context.Context, name string, args []string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginSet\", ctx, name, args)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// PluginSet indicates an expected call of PluginSet\nfunc (mr *MockPluginAPIClientMockRecorder) PluginSet(ctx, name, args interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginSet\", reflect.TypeOf((*MockPluginAPIClient)(nil).PluginSet), ctx, name, args)\n}\n\n// PluginInspectWithRaw mocks base method\nfunc (m *MockPluginAPIClient) PluginInspectWithRaw(ctx context.Context, name string) (*types.Plugin, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginInspectWithRaw\", ctx, name)\n\tret0, _ := ret[0].(*types.Plugin)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, 
ret2\n}\n\n// PluginInspectWithRaw indicates an expected call of PluginInspectWithRaw\nfunc (mr *MockPluginAPIClientMockRecorder) PluginInspectWithRaw(ctx, name interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginInspectWithRaw\", reflect.TypeOf((*MockPluginAPIClient)(nil).PluginInspectWithRaw), ctx, name)\n}\n\n// PluginCreate mocks base method\nfunc (m *MockPluginAPIClient) PluginCreate(ctx context.Context, createContext io.Reader, options types.PluginCreateOptions) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PluginCreate\", ctx, createContext, options)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// PluginCreate indicates an expected call of PluginCreate\nfunc (mr *MockPluginAPIClientMockRecorder) PluginCreate(ctx, createContext, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PluginCreate\", reflect.TypeOf((*MockPluginAPIClient)(nil).PluginCreate), ctx, createContext, options)\n}\n\n// MockServiceAPIClient is a mock of ServiceAPIClient interface\ntype MockServiceAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockServiceAPIClientMockRecorder\n}\n\n// MockServiceAPIClientMockRecorder is the mock recorder for MockServiceAPIClient\ntype MockServiceAPIClientMockRecorder struct {\n\tmock *MockServiceAPIClient\n}\n\n// NewMockServiceAPIClient creates a new mock instance\nfunc NewMockServiceAPIClient(ctrl *gomock.Controller) *MockServiceAPIClient {\n\tmock := &MockServiceAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockServiceAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockServiceAPIClient) EXPECT() *MockServiceAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// ServiceCreate mocks base method\nfunc (m *MockServiceAPIClient) ServiceCreate(ctx context.Context, service swarm.ServiceSpec, options 
types.ServiceCreateOptions) (types.ServiceCreateResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceCreate\", ctx, service, options)\n\tret0, _ := ret[0].(types.ServiceCreateResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceCreate indicates an expected call of ServiceCreate\nfunc (mr *MockServiceAPIClientMockRecorder) ServiceCreate(ctx, service, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceCreate\", reflect.TypeOf((*MockServiceAPIClient)(nil).ServiceCreate), ctx, service, options)\n}\n\n// ServiceInspectWithRaw mocks base method\nfunc (m *MockServiceAPIClient) ServiceInspectWithRaw(ctx context.Context, serviceID string, options types.ServiceInspectOptions) (swarm.Service, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceInspectWithRaw\", ctx, serviceID, options)\n\tret0, _ := ret[0].(swarm.Service)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// ServiceInspectWithRaw indicates an expected call of ServiceInspectWithRaw\nfunc (mr *MockServiceAPIClientMockRecorder) ServiceInspectWithRaw(ctx, serviceID, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceInspectWithRaw\", reflect.TypeOf((*MockServiceAPIClient)(nil).ServiceInspectWithRaw), ctx, serviceID, options)\n}\n\n// ServiceList mocks base method\nfunc (m *MockServiceAPIClient) ServiceList(ctx context.Context, options types.ServiceListOptions) ([]swarm.Service, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceList\", ctx, options)\n\tret0, _ := ret[0].([]swarm.Service)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceList indicates an expected call of ServiceList\nfunc (mr *MockServiceAPIClientMockRecorder) ServiceList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceList\", reflect.TypeOf((*MockServiceAPIClient)(nil).ServiceList), ctx, options)\n}\n\n// ServiceRemove mocks base method\nfunc (m *MockServiceAPIClient) ServiceRemove(ctx context.Context, serviceID string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceRemove\", ctx, serviceID)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ServiceRemove indicates an expected call of ServiceRemove\nfunc (mr *MockServiceAPIClientMockRecorder) ServiceRemove(ctx, serviceID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceRemove\", reflect.TypeOf((*MockServiceAPIClient)(nil).ServiceRemove), ctx, serviceID)\n}\n\n// ServiceUpdate mocks base method\nfunc (m *MockServiceAPIClient) ServiceUpdate(ctx context.Context, serviceID string, version swarm.Version, service swarm.ServiceSpec, options types.ServiceUpdateOptions) (types.ServiceUpdateResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceUpdate\", ctx, serviceID, version, service, options)\n\tret0, _ := ret[0].(types.ServiceUpdateResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceUpdate indicates an expected call of ServiceUpdate\nfunc (mr *MockServiceAPIClientMockRecorder) ServiceUpdate(ctx, serviceID, version, service, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceUpdate\", reflect.TypeOf((*MockServiceAPIClient)(nil).ServiceUpdate), ctx, serviceID, version, service, options)\n}\n\n// ServiceLogs mocks base method\nfunc (m *MockServiceAPIClient) ServiceLogs(ctx context.Context, serviceID string, options types.ContainerLogsOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ServiceLogs\", ctx, serviceID, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ServiceLogs indicates an 
expected call of ServiceLogs\nfunc (mr *MockServiceAPIClientMockRecorder) ServiceLogs(ctx, serviceID, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ServiceLogs\", reflect.TypeOf((*MockServiceAPIClient)(nil).ServiceLogs), ctx, serviceID, options)\n}\n\n// TaskLogs mocks base method\nfunc (m *MockServiceAPIClient) TaskLogs(ctx context.Context, taskID string, options types.ContainerLogsOptions) (io.ReadCloser, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"TaskLogs\", ctx, taskID, options)\n\tret0, _ := ret[0].(io.ReadCloser)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// TaskLogs indicates an expected call of TaskLogs\nfunc (mr *MockServiceAPIClientMockRecorder) TaskLogs(ctx, taskID, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"TaskLogs\", reflect.TypeOf((*MockServiceAPIClient)(nil).TaskLogs), ctx, taskID, options)\n}\n\n// TaskInspectWithRaw mocks base method\nfunc (m *MockServiceAPIClient) TaskInspectWithRaw(ctx context.Context, taskID string) (swarm.Task, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"TaskInspectWithRaw\", ctx, taskID)\n\tret0, _ := ret[0].(swarm.Task)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// TaskInspectWithRaw indicates an expected call of TaskInspectWithRaw\nfunc (mr *MockServiceAPIClientMockRecorder) TaskInspectWithRaw(ctx, taskID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"TaskInspectWithRaw\", reflect.TypeOf((*MockServiceAPIClient)(nil).TaskInspectWithRaw), ctx, taskID)\n}\n\n// TaskList mocks base method\nfunc (m *MockServiceAPIClient) TaskList(ctx context.Context, options types.TaskListOptions) ([]swarm.Task, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"TaskList\", ctx, options)\n\tret0, _ := 
ret[0].([]swarm.Task)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// TaskList indicates an expected call of TaskList\nfunc (mr *MockServiceAPIClientMockRecorder) TaskList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"TaskList\", reflect.TypeOf((*MockServiceAPIClient)(nil).TaskList), ctx, options)\n}\n\n// MockSwarmAPIClient is a mock of SwarmAPIClient interface\ntype MockSwarmAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockSwarmAPIClientMockRecorder\n}\n\n// MockSwarmAPIClientMockRecorder is the mock recorder for MockSwarmAPIClient\ntype MockSwarmAPIClientMockRecorder struct {\n\tmock *MockSwarmAPIClient\n}\n\n// NewMockSwarmAPIClient creates a new mock instance\nfunc NewMockSwarmAPIClient(ctrl *gomock.Controller) *MockSwarmAPIClient {\n\tmock := &MockSwarmAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockSwarmAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockSwarmAPIClient) EXPECT() *MockSwarmAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// SwarmInit mocks base method\nfunc (m *MockSwarmAPIClient) SwarmInit(ctx context.Context, req swarm.InitRequest) (string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmInit\", ctx, req)\n\tret0, _ := ret[0].(string)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmInit indicates an expected call of SwarmInit\nfunc (mr *MockSwarmAPIClientMockRecorder) SwarmInit(ctx, req interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmInit\", reflect.TypeOf((*MockSwarmAPIClient)(nil).SwarmInit), ctx, req)\n}\n\n// SwarmJoin mocks base method\nfunc (m *MockSwarmAPIClient) SwarmJoin(ctx context.Context, req swarm.JoinRequest) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmJoin\", ctx, req)\n\tret0, _ := ret[0].(error)\n\treturn 
ret0\n}\n\n// SwarmJoin indicates an expected call of SwarmJoin\nfunc (mr *MockSwarmAPIClientMockRecorder) SwarmJoin(ctx, req interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmJoin\", reflect.TypeOf((*MockSwarmAPIClient)(nil).SwarmJoin), ctx, req)\n}\n\n// SwarmGetUnlockKey mocks base method\nfunc (m *MockSwarmAPIClient) SwarmGetUnlockKey(ctx context.Context) (types.SwarmUnlockKeyResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmGetUnlockKey\", ctx)\n\tret0, _ := ret[0].(types.SwarmUnlockKeyResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmGetUnlockKey indicates an expected call of SwarmGetUnlockKey\nfunc (mr *MockSwarmAPIClientMockRecorder) SwarmGetUnlockKey(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmGetUnlockKey\", reflect.TypeOf((*MockSwarmAPIClient)(nil).SwarmGetUnlockKey), ctx)\n}\n\n// SwarmUnlock mocks base method\nfunc (m *MockSwarmAPIClient) SwarmUnlock(ctx context.Context, req swarm.UnlockRequest) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmUnlock\", ctx, req)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SwarmUnlock indicates an expected call of SwarmUnlock\nfunc (mr *MockSwarmAPIClientMockRecorder) SwarmUnlock(ctx, req interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmUnlock\", reflect.TypeOf((*MockSwarmAPIClient)(nil).SwarmUnlock), ctx, req)\n}\n\n// SwarmLeave mocks base method\nfunc (m *MockSwarmAPIClient) SwarmLeave(ctx context.Context, force bool) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmLeave\", ctx, force)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SwarmLeave indicates an expected call of SwarmLeave\nfunc (mr *MockSwarmAPIClientMockRecorder) SwarmLeave(ctx, force interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmLeave\", reflect.TypeOf((*MockSwarmAPIClient)(nil).SwarmLeave), ctx, force)\n}\n\n// SwarmInspect mocks base method\nfunc (m *MockSwarmAPIClient) SwarmInspect(ctx context.Context) (swarm.Swarm, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmInspect\", ctx)\n\tret0, _ := ret[0].(swarm.Swarm)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SwarmInspect indicates an expected call of SwarmInspect\nfunc (mr *MockSwarmAPIClientMockRecorder) SwarmInspect(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmInspect\", reflect.TypeOf((*MockSwarmAPIClient)(nil).SwarmInspect), ctx)\n}\n\n// SwarmUpdate mocks base method\nfunc (m *MockSwarmAPIClient) SwarmUpdate(ctx context.Context, version swarm.Version, swarm swarm.Spec, flags swarm.UpdateFlags) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SwarmUpdate\", ctx, version, swarm, flags)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SwarmUpdate indicates an expected call of SwarmUpdate\nfunc (mr *MockSwarmAPIClientMockRecorder) SwarmUpdate(ctx, version, swarm, flags interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SwarmUpdate\", reflect.TypeOf((*MockSwarmAPIClient)(nil).SwarmUpdate), ctx, version, swarm, flags)\n}\n\n// MockSystemAPIClient is a mock of SystemAPIClient interface\ntype MockSystemAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockSystemAPIClientMockRecorder\n}\n\n// MockSystemAPIClientMockRecorder is the mock recorder for MockSystemAPIClient\ntype MockSystemAPIClientMockRecorder struct {\n\tmock *MockSystemAPIClient\n}\n\n// NewMockSystemAPIClient creates a new mock instance\nfunc NewMockSystemAPIClient(ctrl *gomock.Controller) *MockSystemAPIClient {\n\tmock := &MockSystemAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockSystemAPIClientMockRecorder{mock}\n\treturn 
mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockSystemAPIClient) EXPECT() *MockSystemAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// Events mocks base method\nfunc (m *MockSystemAPIClient) Events(ctx context.Context, options types.EventsOptions) (<-chan events.Message, <-chan error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Events\", ctx, options)\n\tret0, _ := ret[0].(<-chan events.Message)\n\tret1, _ := ret[1].(<-chan error)\n\treturn ret0, ret1\n}\n\n// Events indicates an expected call of Events\nfunc (mr *MockSystemAPIClientMockRecorder) Events(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Events\", reflect.TypeOf((*MockSystemAPIClient)(nil).Events), ctx, options)\n}\n\n// Info mocks base method\nfunc (m *MockSystemAPIClient) Info(ctx context.Context) (types.Info, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Info\", ctx)\n\tret0, _ := ret[0].(types.Info)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Info indicates an expected call of Info\nfunc (mr *MockSystemAPIClientMockRecorder) Info(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Info\", reflect.TypeOf((*MockSystemAPIClient)(nil).Info), ctx)\n}\n\n// RegistryLogin mocks base method\nfunc (m *MockSystemAPIClient) RegistryLogin(ctx context.Context, auth types.AuthConfig) (registry.AuthenticateOKBody, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"RegistryLogin\", ctx, auth)\n\tret0, _ := ret[0].(registry.AuthenticateOKBody)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// RegistryLogin indicates an expected call of RegistryLogin\nfunc (mr *MockSystemAPIClientMockRecorder) RegistryLogin(ctx, auth interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RegistryLogin\", 
reflect.TypeOf((*MockSystemAPIClient)(nil).RegistryLogin), ctx, auth)\n}\n\n// DiskUsage mocks base method\nfunc (m *MockSystemAPIClient) DiskUsage(ctx context.Context) (types.DiskUsage, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DiskUsage\", ctx)\n\tret0, _ := ret[0].(types.DiskUsage)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// DiskUsage indicates an expected call of DiskUsage\nfunc (mr *MockSystemAPIClientMockRecorder) DiskUsage(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DiskUsage\", reflect.TypeOf((*MockSystemAPIClient)(nil).DiskUsage), ctx)\n}\n\n// Ping mocks base method\nfunc (m *MockSystemAPIClient) Ping(ctx context.Context) (types.Ping, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Ping\", ctx)\n\tret0, _ := ret[0].(types.Ping)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Ping indicates an expected call of Ping\nfunc (mr *MockSystemAPIClientMockRecorder) Ping(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Ping\", reflect.TypeOf((*MockSystemAPIClient)(nil).Ping), ctx)\n}\n\n// MockVolumeAPIClient is a mock of VolumeAPIClient interface\ntype MockVolumeAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockVolumeAPIClientMockRecorder\n}\n\n// MockVolumeAPIClientMockRecorder is the mock recorder for MockVolumeAPIClient\ntype MockVolumeAPIClientMockRecorder struct {\n\tmock *MockVolumeAPIClient\n}\n\n// NewMockVolumeAPIClient creates a new mock instance\nfunc NewMockVolumeAPIClient(ctrl *gomock.Controller) *MockVolumeAPIClient {\n\tmock := &MockVolumeAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockVolumeAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockVolumeAPIClient) EXPECT() *MockVolumeAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// VolumeCreate mocks base 
method\nfunc (m *MockVolumeAPIClient) VolumeCreate(ctx context.Context, options volume.VolumeCreateBody) (types.Volume, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeCreate\", ctx, options)\n\tret0, _ := ret[0].(types.Volume)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumeCreate indicates an expected call of VolumeCreate\nfunc (mr *MockVolumeAPIClientMockRecorder) VolumeCreate(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeCreate\", reflect.TypeOf((*MockVolumeAPIClient)(nil).VolumeCreate), ctx, options)\n}\n\n// VolumeInspect mocks base method\nfunc (m *MockVolumeAPIClient) VolumeInspect(ctx context.Context, volumeID string) (types.Volume, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeInspect\", ctx, volumeID)\n\tret0, _ := ret[0].(types.Volume)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumeInspect indicates an expected call of VolumeInspect\nfunc (mr *MockVolumeAPIClientMockRecorder) VolumeInspect(ctx, volumeID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeInspect\", reflect.TypeOf((*MockVolumeAPIClient)(nil).VolumeInspect), ctx, volumeID)\n}\n\n// VolumeInspectWithRaw mocks base method\nfunc (m *MockVolumeAPIClient) VolumeInspectWithRaw(ctx context.Context, volumeID string) (types.Volume, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeInspectWithRaw\", ctx, volumeID)\n\tret0, _ := ret[0].(types.Volume)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// VolumeInspectWithRaw indicates an expected call of VolumeInspectWithRaw\nfunc (mr *MockVolumeAPIClientMockRecorder) VolumeInspectWithRaw(ctx, volumeID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeInspectWithRaw\", 
reflect.TypeOf((*MockVolumeAPIClient)(nil).VolumeInspectWithRaw), ctx, volumeID)\n}\n\n// VolumeList mocks base method\nfunc (m *MockVolumeAPIClient) VolumeList(ctx context.Context, filter filters.Args) (volume.VolumeListOKBody, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeList\", ctx, filter)\n\tret0, _ := ret[0].(volume.VolumeListOKBody)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumeList indicates an expected call of VolumeList\nfunc (mr *MockVolumeAPIClientMockRecorder) VolumeList(ctx, filter interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeList\", reflect.TypeOf((*MockVolumeAPIClient)(nil).VolumeList), ctx, filter)\n}\n\n// VolumeRemove mocks base method\nfunc (m *MockVolumeAPIClient) VolumeRemove(ctx context.Context, volumeID string, force bool) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumeRemove\", ctx, volumeID, force)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// VolumeRemove indicates an expected call of VolumeRemove\nfunc (mr *MockVolumeAPIClientMockRecorder) VolumeRemove(ctx, volumeID, force interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumeRemove\", reflect.TypeOf((*MockVolumeAPIClient)(nil).VolumeRemove), ctx, volumeID, force)\n}\n\n// VolumesPrune mocks base method\nfunc (m *MockVolumeAPIClient) VolumesPrune(ctx context.Context, pruneFilter filters.Args) (types.VolumesPruneReport, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"VolumesPrune\", ctx, pruneFilter)\n\tret0, _ := ret[0].(types.VolumesPruneReport)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// VolumesPrune indicates an expected call of VolumesPrune\nfunc (mr *MockVolumeAPIClientMockRecorder) VolumesPrune(ctx, pruneFilter interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"VolumesPrune\", 
reflect.TypeOf((*MockVolumeAPIClient)(nil).VolumesPrune), ctx, pruneFilter)\n}\n\n// MockSecretAPIClient is a mock of SecretAPIClient interface\ntype MockSecretAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockSecretAPIClientMockRecorder\n}\n\n// MockSecretAPIClientMockRecorder is the mock recorder for MockSecretAPIClient\ntype MockSecretAPIClientMockRecorder struct {\n\tmock *MockSecretAPIClient\n}\n\n// NewMockSecretAPIClient creates a new mock instance\nfunc NewMockSecretAPIClient(ctrl *gomock.Controller) *MockSecretAPIClient {\n\tmock := &MockSecretAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockSecretAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockSecretAPIClient) EXPECT() *MockSecretAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// SecretList mocks base method\nfunc (m *MockSecretAPIClient) SecretList(ctx context.Context, options types.SecretListOptions) ([]swarm.Secret, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretList\", ctx, options)\n\tret0, _ := ret[0].([]swarm.Secret)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SecretList indicates an expected call of SecretList\nfunc (mr *MockSecretAPIClientMockRecorder) SecretList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretList\", reflect.TypeOf((*MockSecretAPIClient)(nil).SecretList), ctx, options)\n}\n\n// SecretCreate mocks base method\nfunc (m *MockSecretAPIClient) SecretCreate(ctx context.Context, secret swarm.SecretSpec) (types.SecretCreateResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretCreate\", ctx, secret)\n\tret0, _ := ret[0].(types.SecretCreateResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// SecretCreate indicates an expected call of SecretCreate\nfunc (mr *MockSecretAPIClientMockRecorder) SecretCreate(ctx, secret interface{}) 
*gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretCreate\", reflect.TypeOf((*MockSecretAPIClient)(nil).SecretCreate), ctx, secret)\n}\n\n// SecretRemove mocks base method\nfunc (m *MockSecretAPIClient) SecretRemove(ctx context.Context, id string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretRemove\", ctx, id)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SecretRemove indicates an expected call of SecretRemove\nfunc (mr *MockSecretAPIClientMockRecorder) SecretRemove(ctx, id interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretRemove\", reflect.TypeOf((*MockSecretAPIClient)(nil).SecretRemove), ctx, id)\n}\n\n// SecretInspectWithRaw mocks base method\nfunc (m *MockSecretAPIClient) SecretInspectWithRaw(ctx context.Context, name string) (swarm.Secret, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretInspectWithRaw\", ctx, name)\n\tret0, _ := ret[0].(swarm.Secret)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// SecretInspectWithRaw indicates an expected call of SecretInspectWithRaw\nfunc (mr *MockSecretAPIClientMockRecorder) SecretInspectWithRaw(ctx, name interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretInspectWithRaw\", reflect.TypeOf((*MockSecretAPIClient)(nil).SecretInspectWithRaw), ctx, name)\n}\n\n// SecretUpdate mocks base method\nfunc (m *MockSecretAPIClient) SecretUpdate(ctx context.Context, id string, version swarm.Version, secret swarm.SecretSpec) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SecretUpdate\", ctx, id, version, secret)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SecretUpdate indicates an expected call of SecretUpdate\nfunc (mr *MockSecretAPIClientMockRecorder) SecretUpdate(ctx, id, version, secret interface{}) *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SecretUpdate\", reflect.TypeOf((*MockSecretAPIClient)(nil).SecretUpdate), ctx, id, version, secret)\n}\n\n// MockConfigAPIClient is a mock of ConfigAPIClient interface\ntype MockConfigAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockConfigAPIClientMockRecorder\n}\n\n// MockConfigAPIClientMockRecorder is the mock recorder for MockConfigAPIClient\ntype MockConfigAPIClientMockRecorder struct {\n\tmock *MockConfigAPIClient\n}\n\n// NewMockConfigAPIClient creates a new mock instance\nfunc NewMockConfigAPIClient(ctrl *gomock.Controller) *MockConfigAPIClient {\n\tmock := &MockConfigAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockConfigAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockConfigAPIClient) EXPECT() *MockConfigAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// ConfigList mocks base method\nfunc (m *MockConfigAPIClient) ConfigList(ctx context.Context, options types.ConfigListOptions) ([]swarm.Config, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigList\", ctx, options)\n\tret0, _ := ret[0].([]swarm.Config)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ConfigList indicates an expected call of ConfigList\nfunc (mr *MockConfigAPIClientMockRecorder) ConfigList(ctx, options interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigList\", reflect.TypeOf((*MockConfigAPIClient)(nil).ConfigList), ctx, options)\n}\n\n// ConfigCreate mocks base method\nfunc (m *MockConfigAPIClient) ConfigCreate(ctx context.Context, config swarm.ConfigSpec) (types.ConfigCreateResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigCreate\", ctx, config)\n\tret0, _ := ret[0].(types.ConfigCreateResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ConfigCreate indicates an 
expected call of ConfigCreate\nfunc (mr *MockConfigAPIClientMockRecorder) ConfigCreate(ctx, config interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigCreate\", reflect.TypeOf((*MockConfigAPIClient)(nil).ConfigCreate), ctx, config)\n}\n\n// ConfigRemove mocks base method\nfunc (m *MockConfigAPIClient) ConfigRemove(ctx context.Context, id string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigRemove\", ctx, id)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ConfigRemove indicates an expected call of ConfigRemove\nfunc (mr *MockConfigAPIClientMockRecorder) ConfigRemove(ctx, id interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigRemove\", reflect.TypeOf((*MockConfigAPIClient)(nil).ConfigRemove), ctx, id)\n}\n\n// ConfigInspectWithRaw mocks base method\nfunc (m *MockConfigAPIClient) ConfigInspectWithRaw(ctx context.Context, name string) (swarm.Config, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigInspectWithRaw\", ctx, name)\n\tret0, _ := ret[0].(swarm.Config)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// ConfigInspectWithRaw indicates an expected call of ConfigInspectWithRaw\nfunc (mr *MockConfigAPIClientMockRecorder) ConfigInspectWithRaw(ctx, name interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigInspectWithRaw\", reflect.TypeOf((*MockConfigAPIClient)(nil).ConfigInspectWithRaw), ctx, name)\n}\n\n// ConfigUpdate mocks base method\nfunc (m *MockConfigAPIClient) ConfigUpdate(ctx context.Context, id string, version swarm.Version, config swarm.ConfigSpec) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ConfigUpdate\", ctx, id, version, config)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ConfigUpdate indicates an expected call of ConfigUpdate\nfunc (mr 
*MockConfigAPIClientMockRecorder) ConfigUpdate(ctx, id, version, config interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ConfigUpdate\", reflect.TypeOf((*MockConfigAPIClient)(nil).ConfigUpdate), ctx, id, version, config)\n}\n"
  },
  {
    "path": "monitor/internal/docker/monitor.go",
    "content": "package dockermonitor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/dchest/siphash\"\n\t\"github.com/docker/docker/api/types\"\n\t\"github.com/docker/docker/api/types/events\"\n\t\"github.com/docker/docker/api/types/filters\"\n\tdockerClient \"github.com/docker/docker/client\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\ttevents \"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/registerer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cgnetcls\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n\t\"go.uber.org/zap\"\n)\n\ntype lockedDockerClient struct {\n\tclient           dockerClient.CommonAPIClient\n\tdockerClientLock sync.Mutex\n}\n\n// DockerMonitor implements the connection to Docker and monitoring based on docker events.\ntype DockerMonitor struct {\n\tclientHdl                  lockedDockerClient\n\tsocketType                 string\n\tsocketAddress              string\n\tmetadataExtractor          extractors.DockerMetadataExtractor\n\thandlers                   map[Event]func(ctx context.Context, event *events.Message) error\n\teventnotifications         []chan *events.Message\n\tstopprocessor              []chan bool\n\tnumberOfQueues             int\n\tstoplistener               chan bool\n\tconfig                     *config.ProcessorConfig\n\tnetcls                     cgnetcls.Cgroupnetcls\n\tsyncAtStart                bool\n\tterminateStoppedContainers bool\n\tignoreHostModeContainers   bool\n}\n\n// New returns a new docker monitor.\nfunc New(context.Context) *DockerMonitor {\n\treturn &DockerMonitor{}\n}\n\n// SetupConfig provides a 
configuration to implementations. Every implementation\n// can have its own config type.\nfunc (d *DockerMonitor) SetupConfig(registerer registerer.Registerer, cfg interface{}) (err error) {\n\n\tdefaultConfig := DefaultConfig()\n\n\tif cfg == nil {\n\t\tcfg = defaultConfig\n\t}\n\n\tdockerConfig, ok := cfg.(*Config)\n\tif !ok {\n\t\treturn fmt.Errorf(\"invalid configuration specified\")\n\t}\n\n\t// Setup defaults\n\tdockerConfig = SetupDefaultConfig(dockerConfig)\n\n\td.socketType = dockerConfig.SocketType\n\td.socketAddress = dockerConfig.SocketAddress\n\td.metadataExtractor = dockerConfig.EventMetadataExtractor\n\td.syncAtStart = dockerConfig.SyncAtStart\n\td.handlers = make(map[Event]func(ctx context.Context, event *events.Message) error)\n\td.stoplistener = make(chan bool)\n\td.netcls = cgnetcls.NewDockerCgroupNetController()\n\td.numberOfQueues = runtime.NumCPU() * 8\n\td.eventnotifications = make([]chan *events.Message, d.numberOfQueues)\n\td.stopprocessor = make([]chan bool, d.numberOfQueues)\n\td.terminateStoppedContainers = dockerConfig.DestroyStoppedContainers\n\td.ignoreHostModeContainers = dockerConfig.ignoreHostModeContainers\n\tfor i := 0; i < d.numberOfQueues; i++ {\n\t\td.eventnotifications[i] = make(chan *events.Message, 1000)\n\t\td.stopprocessor[i] = make(chan bool)\n\t}\n\n\t// Add handlers for the events that we know how to process\n\td.addHandler(EventCreate, d.handleCreateEvent)\n\td.addHandler(EventStart, d.handleStartEvent)\n\td.addHandler(EventDie, d.handleDieEvent)\n\td.addHandler(EventDestroy, d.handleDestroyEvent)\n\td.addHandler(EventPause, d.handlePauseEvent)\n\td.addHandler(EventUnpause, d.handleUnpauseEvent)\n\n\treturn nil\n}\n\nfunc (d *DockerMonitor) dockerClient() dockerClient.CommonAPIClient {\n\td.clientHdl.dockerClientLock.Lock()\n\tdefer d.clientHdl.dockerClientLock.Unlock()\n\tclient := d.clientHdl.client\n\treturn client\n}\n\nfunc (d *DockerMonitor) setDockerClient(client dockerClient.CommonAPIClient) 
{\n\td.clientHdl.dockerClientLock.Lock()\n\td.clientHdl.client = client\n\td.clientHdl.dockerClientLock.Unlock()\n\n}\n\n// SetupHandlers sets up handlers for monitors to invoke for various events such as\n// processing unit events and synchronization events. This will be called before Start()\n// by the consumer of the monitor\nfunc (d *DockerMonitor) SetupHandlers(c *config.ProcessorConfig) {\n\n\td.config = c\n}\n\n// Run will start the DockerPolicy Enforcement.\n// It applies a policy to each Container already Up and Running.\n// It listens to all ContainerEvents\nfunc (d *DockerMonitor) Run(ctx context.Context) error {\n\n\tif err := d.config.IsComplete(); err != nil {\n\t\treturn fmt.Errorf(\"docker config issue: %s\", err)\n\t}\n\n\tif err := d.waitForDockerDaemon(ctx); err != nil {\n\t\tzap.L().Error(\"Docker daemon is not running at startup - skipping container processing. periodic retries will be attempted\",\n\t\t\tzap.Error(err),\n\t\t\tzap.Duration(\"retry interval\", dockerRetryTimer),\n\t\t)\n\t\treturn nil\n\t}\n\n\treturn nil\n}\n\nfunc (d *DockerMonitor) syncContainers(ctx context.Context) error {\n\tif d.syncAtStart && d.config.Policy != nil {\n\t\toptions := types.ContainerListOptions{\n\t\t\tAll: !d.terminateStoppedContainers,\n\t\t}\n\t\tclient := d.dockerClient()\n\t\tif client == nil {\n\t\t\treturn errors.New(\"unable to init monitor: nil clienthdl\")\n\t\t}\n\t\tcontainers, err := client.ContainerList(ctx, options)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"unable to get container list: %s\", err)\n\t\t}\n\t\t// Syncing all Existing containers depending on MonitorSetting\n\t\tif err := d.resyncContainers(ctx, containers); err != nil {\n\t\t\tzap.L().Error(\"Unable to sync existing containers\", zap.Error(err))\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (d *DockerMonitor) initMonitor(ctx context.Context) {\n\n\t// Starting the eventListener and wait to hear on channel for it to be ready.\n\t// We are not doing resync. 
We just start the listener.\n\tlistenerReady := make(chan struct{})\n\tgo d.eventListener(ctx, listenerReady)\n\t<-listenerReady\n\n\t// Start processing the events\n\tgo d.eventProcessors(ctx)\n}\n\n// addHandler adds a callback handler for the given docker event.\n// Interesting event names include 'start' and 'die'. For more on events see\n// https://docs.docker.com/engine/reference/api/docker_remote_api/\n// under the section 'Docker Events'.\nfunc (d *DockerMonitor) addHandler(event Event, handler EventHandler) {\n\td.handlers[event] = handler\n}\n\n// getHashKey returns key to loadbalance on. This ensures that all\n// events from a pod/container fall onto the same queue.\nfunc (d *DockerMonitor) getHashKey(r *events.Message) string {\n\n\tif isKubernetesContainer(r.Actor.Attributes) {\n\t\treturn kubePodIdentifier(r.Actor.Attributes)\n\t}\n\treturn r.ID\n}\n\n// sendRequestToQueue sends a request to a channel based on a hash function\nfunc (d *DockerMonitor) sendRequestToQueue(r *events.Message) {\n\n\tkey0 := uint64(256203161)\n\tkey1 := uint64(982451653)\n\n\tkey := d.getHashKey(r)\n\th := siphash.Hash(key0, key1, []byte(key))\n\n\td.eventnotifications[int(h%uint64(d.numberOfQueues))] <- r\n}\n\n// eventProcessor processes docker events. 
We are processing multiple\n// queues in parallel so that we can activate containers as fast\n// as possible.\nfunc (d *DockerMonitor) eventProcessors(ctx context.Context) {\n\n\tfor i := 0; i < d.numberOfQueues; i++ {\n\t\tgo func(i int) {\n\t\t\tfor {\n\t\t\t\tselect {\n\t\t\t\tcase event := <-d.eventnotifications[i]:\n\t\t\t\t\tif f, ok := d.handlers[Event(event.Action)]; ok {\n\t\t\t\t\t\tif err := f(ctx, event); err != nil {\n\t\t\t\t\t\t\tzap.L().Error(\"Unable to handle docker event\",\n\t\t\t\t\t\t\t\tzap.String(\"action\", event.Action),\n\t\t\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t\t\t)\n\t\t\t\t\t\t}\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\tcase <-ctx.Done():\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t}\n\t\t}(i)\n\t}\n}\n\n// eventListener listens to Docker events from the daemon and passes them\n// to the processor through a buffered channel. This minimizes the chances\n// that we will miss events because the processor is delayed.\nfunc (d *DockerMonitor) eventListener(ctx context.Context, listenerReady chan struct{}) {\n\n\t// Once the buffered event channel was returned by Docker we return the ready status.\n\tlistenerReady <- struct{}{}\n\n\tfor {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn\n\t\tdefault:\n\t\t\tif d.dockerClient() == nil {\n\t\t\t\tzap.L().Debug(\"Trying to setup docker daemon\")\n\t\t\t\tif err := d.setupDockerDaemon(ctx); err != nil {\n\t\t\t\t\td.setDockerClient(nil)\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\t// If we are here, the docker daemon restarted, so 
we need to resync\n\t\t\t\tif err := d.Resync(ctx); err != nil {\n\t\t\t\t\tzap.L().Error(\"Unable to resync containers after reconnecting to docker daemon\", zap.Error(err))\n\t\t\t\t}\n\t\t\t}\n\t\t\td.listener(ctx)\n\t\t}\n\t}\n}\n\nfunc (d *DockerMonitor) listener(ctx context.Context) {\n\tf := filters.NewArgs()\n\tf.Add(\"type\", \"container\")\n\toptions := types.EventsOptions{\n\t\tFilters: f,\n\t}\n\tclient := d.dockerClient()\n\tif client == nil {\n\t\treturn\n\t}\n\tmessages, errs := client.Events(ctx, options)\n\tfor {\n\t\tselect {\n\t\tcase message := <-messages:\n\t\t\tzap.L().Debug(\"Got message from docker client\",\n\t\t\t\tzap.String(\"action\", message.Action),\n\t\t\t\tzap.String(\"ID\", message.ID),\n\t\t\t)\n\t\t\td.sendRequestToQueue(&message)\n\n\t\tcase err := <-errs:\n\t\t\tif err != nil && err != io.EOF {\n\t\t\t\tzap.L().Warn(\"Received docker event error\",\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t\td.setDockerClient(nil)\n\t\t\treturn\n\n\t\tcase <-ctx.Done():\n\t\t\treturn\n\t\t}\n\t}\n\n}\n\n// Resync resyncs all the existing containers on the Host, using the\n// same process as when a container is initially spawn up\nfunc (d *DockerMonitor) Resync(ctx context.Context) error {\n\n\tif !d.syncAtStart || d.config.Policy == nil {\n\t\tzap.L().Debug(\"No synchronization of containers performed\")\n\t\treturn nil\n\t}\n\n\tzap.L().Debug(\"Syncing all existing containers\")\n\toptions := types.ContainerListOptions{\n\t\tAll: !d.terminateStoppedContainers,\n\t}\n\tclient := d.dockerClient()\n\tif client == nil {\n\t\treturn errors.New(\"unable to resync: nil clienthdl\")\n\t}\n\tcontainers, err := client.ContainerList(ctx, options)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to get container list: %s\", err)\n\t}\n\n\treturn d.resyncContainers(ctx, containers)\n}\n\nfunc (d *DockerMonitor) resyncContainers(ctx context.Context, containers []types.Container) error {\n\n\td.config.ResyncLock.RLock()\n\tdefer 
d.config.ResyncLock.RUnlock()\n\t// resync containers that share the host network first, unless we are ignoring host mode containers\n\tif !d.ignoreHostModeContainers {\n\t\tif err := d.resyncContainersByOrder(ctx, containers, true); err != nil {\n\t\t\tzap.L().Error(\"Unable to sync container\", zap.Error(err))\n\t\t}\n\t}\n\n\t// resync remaining containers.\n\tif err := d.resyncContainersByOrder(ctx, containers, false); err != nil {\n\t\tzap.L().Error(\"Unable to sync container\", zap.Error(err))\n\t}\n\n\treturn nil\n}\n\n// resyncContainersByOrder resyncs the given containers; when syncHost is true only\n// containers with container.HostConfig.NetworkMode == constants.DockerHostMode are handled.\nfunc (d *DockerMonitor) resyncContainersByOrder(ctx context.Context, containers []types.Container, syncHost bool) error {\n\tfor _, c := range containers {\n\t\tclient := d.dockerClient()\n\t\tif client == nil {\n\t\t\treturn errors.New(\"unable to resync: nil clienthdl\")\n\t\t}\n\t\tcontainer, err := client.ContainerInspect(ctx, c.ID)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tif (syncHost && container.HostConfig.NetworkMode != constants.DockerHostMode) ||\n\t\t\t(!syncHost && container.HostConfig.NetworkMode == constants.DockerHostMode) {\n\t\t\tcontinue\n\t\t}\n\n\t\tpuID, _ := puIDFromDockerID(container.ID)\n\n\t\truntime, err := d.extractMetadata(&container)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tevent := common.EventStop\n\t\tif container.State.Running {\n\t\t\tif !container.State.Paused {\n\t\t\t\tevent = common.EventStart\n\t\t\t} else {\n\t\t\t\tevent = common.EventPause\n\t\t\t}\n\t\t}\n\n\t\t// If it is a host container, we need to activate it as a Linux process. 
We will\n\t\t// override the options that the metadata extractor provided.\n\t\tif container.HostConfig.NetworkMode == constants.DockerHostMode {\n\t\t\toptions := hostModeOptions(&container)\n\t\t\toptions.PolicyExtensions = runtime.Options().PolicyExtensions\n\t\t\truntime.SetOptions(*options)\n\t\t\truntime.SetPUType(common.LinuxProcessPU)\n\t\t}\n\n\t\truntime.SetOptions(runtime.Options())\n\n\t\tif err := d.config.Policy.HandlePUEvent(ctx, puID, event, runtime); err != nil {\n\t\t\tzap.L().Error(\"Unable to sync existing Container\",\n\t\t\t\tzap.String(\"dockerID\", c.ID),\n\t\t\t\tzap.Error(err),\n\t\t\t)\n\t\t}\n\n\t\t// if the container has hostnet set to true or is linked\n\t\t// to a container with hostnet set to true, program the cgroup.\n\t\tif isHostNetworkContainer(runtime) {\n\t\t\tif err = d.setupHostMode(puID, runtime, &container); err != nil {\n\t\t\t\treturn fmt.Errorf(\"unable to setup host mode for container %s: %s\", puID, err)\n\t\t\t}\n\t\t}\n\n\t}\n\n\treturn nil\n}\n\n// setupHostMode sets up the net_cls cgroup for the host mode\nfunc (d *DockerMonitor) setupHostMode(puID string, runtimeInfo policy.RuntimeReader, dockerInfo *types.ContainerJSON) (err error) {\n\n\tpausePUID := puID\n\tif dockerInfo.HostConfig.NetworkMode == constants.DockerHostMode {\n\t\tif err = d.netcls.Creategroup(puID); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t// Clean the cgroup on exit, if we have failed to activate.\n\t\tdefer func() {\n\t\t\tif err != nil {\n\t\t\t\tif derr := d.netcls.DeleteCgroup(puID); derr != nil {\n\t\t\t\t\tzap.L().Warn(\"Failed to clean cgroup\",\n\t\t\t\t\t\tzap.String(\"puID\", puID),\n\t\t\t\t\t\tzap.Error(derr),\n\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t)\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\n\t\tmarkval := runtimeInfo.Options().CgroupMark\n\t\tif markval == \"\" {\n\t\t\treturn errors.New(\"mark value not found\")\n\t\t}\n\n\t\tmark, _ := strconv.ParseUint(markval, 10, 32)\n\t\tif err := d.netcls.AssignMark(puID, mark); err != nil 
{\n\t\t\treturn err\n\t\t}\n\t} else {\n\t\t// Add the container pid that is linked to hostnet to\n\t\t// the cgroup of the parent container.\n\n\t\tpausePUID = getPausePUID(policyExtensions(runtimeInfo))\n\t}\n\n\treturn d.netcls.AddProcess(pausePUID, dockerInfo.State.Pid)\n}\n\nfunc (d *DockerMonitor) retrieveDockerInfo(ctx context.Context, event *events.Message) (*types.ContainerJSON, error) {\n\tclient := d.dockerClient()\n\tif client == nil {\n\t\treturn nil, errors.New(\"unable to get container info: nil clienthdl\")\n\t}\n\tinfo, err := client.ContainerInspect(ctx, event.ID)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to read container information: container %s kept alive per policy: %s\", event.ID, err)\n\t}\n\treturn &info, nil\n}\n\n// ExtractMetadata generates the RuntimeInfo based on Docker primitive\nfunc (d *DockerMonitor) extractMetadata(dockerInfo *types.ContainerJSON) (*policy.PURuntime, error) {\n\n\tif dockerInfo == nil {\n\t\treturn nil, errors.New(\"docker info is empty\")\n\t}\n\n\tif d.metadataExtractor != nil {\n\t\treturn d.metadataExtractor(dockerInfo)\n\t}\n\n\treturn extractors.DefaultMetadataExtractor(dockerInfo)\n}\n\n// handleCreateEvent generates a create event type. We extract the metadata\n// and start the policy resolution at the create event. No need to wait\n// for the start event.\nfunc (d *DockerMonitor) handleCreateEvent(ctx context.Context, event *events.Message) error {\n\n\tpuID, err := puIDFromDockerID(event.ID)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tcontainer, err := d.retrieveDockerInfo(ctx, event)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\truntime, err := d.extractMetadata(container)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// If it is a host container, we need to activate it as a Linux process. We will\n\t// override the options that the metadata extractor provided. 
We will maintain\n\t// any policy extensions in the object.\n\tif container.HostConfig.NetworkMode == constants.DockerHostMode {\n\t\toptions := hostModeOptions(container)\n\t\toptions.PolicyExtensions = runtime.Options().PolicyExtensions\n\t\truntime.SetOptions(*options)\n\t\truntime.SetPUType(common.LinuxProcessPU)\n\t}\n\n\truntime.SetOptions(runtime.Options())\n\n\treturn d.config.Policy.HandlePUEvent(ctx, puID, tevents.EventCreate, runtime)\n}\n\n// handleStartEvent will notify the policy engine immediately about the event in order\n// to start the implementation of the functions. At this point we know the process ID\n// that is needed for the remote enforcers.\nfunc (d *DockerMonitor) handleStartEvent(ctx context.Context, event *events.Message) error {\n\n\tcontainer, err := d.retrieveDockerInfo(ctx, event)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif !container.State.Running {\n\t\treturn nil\n\t}\n\n\tpuID, err := puIDFromDockerID(container.ID)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\truntime, err := d.extractMetadata(container)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// If it is a host container, we need to activate it as a Linux process. 
We will\n\t// override the options that the metadata extractor provided.\n\tif container.HostConfig.NetworkMode == constants.DockerHostMode {\n\t\toptions := hostModeOptions(container)\n\t\toptions.PolicyExtensions = runtime.Options().PolicyExtensions\n\t\truntime.SetOptions(*options)\n\t\truntime.SetPUType(common.LinuxProcessPU)\n\t}\n\n\truntime.SetOptions(runtime.Options())\n\n\tif err = d.config.Policy.HandlePUEvent(ctx, puID, tevents.EventStart, runtime); err != nil {\n\t\treturn fmt.Errorf(\"unable to set policy: container %s kept alive per policy: %s\", puID, err)\n\t}\n\n\t// if the container has hostnet set to true or is linked\n\t// to container with hostnet set to true, program the cgroup.\n\tif isHostNetworkContainer(runtime) && !d.ignoreHostModeContainers {\n\t\tif err = d.setupHostMode(puID, runtime, container); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to setup host mode for container %s: %s\", puID, err)\n\t\t}\n\t}\n\treturn nil\n}\n\n//handleDie event is called when a container dies. It generates a \"Stop\" event.\nfunc (d *DockerMonitor) handleDieEvent(ctx context.Context, event *events.Message) error {\n\n\tpuID, err := puIDFromDockerID(event.ID)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\truntime := policy.NewPURuntimeWithDefaults()\n\truntime.SetOptions(runtime.Options())\n\n\tif err := d.config.Policy.HandlePUEvent(ctx, puID, tevents.EventStop, runtime); err != nil && !d.terminateStoppedContainers {\n\t\treturn err\n\t}\n\n\tif d.terminateStoppedContainers {\n\t\treturn d.config.Policy.HandlePUEvent(ctx, puID, tevents.EventDestroy, runtime)\n\t}\n\treturn nil\n}\n\n// handleDestroyEvent handles destroy events from Docker. 
It generates a \"Destroy\" event.\nfunc (d *DockerMonitor) handleDestroyEvent(ctx context.Context, event *events.Message) error {\n\n\tpuID, err := puIDFromDockerID(event.ID)\n\tif err != nil {\n\t\treturn err\n\t}\n\truntime := policy.NewPURuntimeWithDefaults()\n\truntime.SetOptions(runtime.Options())\n\n\tif err = d.config.Policy.HandlePUEvent(ctx, puID, tevents.EventDestroy, runtime); err != nil {\n\t\tzap.L().Error(\"Failed to handle delete event\",\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tif err = d.netcls.DeleteCgroup(puID); err != nil {\n\t\tzap.L().Warn(\"Failed to clean netcls group\",\n\t\t\tzap.String(\"puID\", puID),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\treturn nil\n}\n\n// handlePauseEvent generates a pause event type.\nfunc (d *DockerMonitor) handlePauseEvent(ctx context.Context, event *events.Message) error {\n\tzap.L().Info(\"Pause Event for nativeID\", zap.String(\"ID\", event.ID))\n\n\tpuID, err := puIDFromDockerID(event.ID)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\truntime := policy.NewPURuntimeWithDefaults()\n\truntime.SetOptions(runtime.Options())\n\n\treturn d.config.Policy.HandlePUEvent(ctx, puID, tevents.EventPause, runtime)\n}\n\n// handleUnpauseEvent generates an unpause event type.\nfunc (d *DockerMonitor) handleUnpauseEvent(ctx context.Context, event *events.Message) error {\n\n\tpuID, err := puIDFromDockerID(event.ID)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\truntime := policy.NewPURuntimeWithDefaults()\n\truntime.SetOptions(runtime.Options())\n\n\treturn d.config.Policy.HandlePUEvent(ctx, puID, tevents.EventUnpause, runtime)\n}\n\nfunc puIDFromDockerID(dockerID string) (string, error) {\n\n\tif dockerID == \"\" {\n\t\treturn \"\", errors.New(\"unable to generate context id: empty docker id\")\n\t}\n\n\tif len(dockerID) < 12 {\n\t\treturn \"\", fmt.Errorf(\"unable to generate context id: dockerid smaller than 12 characters: %s\", dockerID)\n\t}\n\n\treturn dockerID[:12], nil\n}\n\nfunc initDockerClient(socketType string, 
socketAddress string) (*dockerClient.Client, error) {\n\n\tvar socket string\n\n\tswitch socketType {\n\tcase \"tcp\":\n\t\tsocket = \"https://\" + socketAddress\n\tcase \"unix\":\n\t\t// Sanity check that this path exists\n\t\tif _, oserr := os.Stat(socketAddress); os.IsNotExist(oserr) {\n\t\t\treturn nil, oserr\n\t\t}\n\t\tsocket = \"unix://\" + socketAddress\n\tdefault:\n\t\treturn nil, fmt.Errorf(\"bad socket type: %s\", socketType)\n\t}\n\n\tdefaultHeaders := map[string]string{\"User-Agent\": \"engine-api-dockerClient-1.0\"}\n\n\tdc, err := dockerClient.NewClient(socket, DockerClientVersion, nil, defaultHeaders)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to create docker client: %s\", err)\n\t}\n\n\treturn dc, nil\n}\n\nfunc (d *DockerMonitor) setupDockerDaemon(ctx context.Context) (err error) {\n\n\tif d.dockerClient() == nil {\n\t\t// Initialize client\n\t\tdockerClient, err := initDockerClient(d.socketType, d.socketAddress)\n\t\tif err != nil {\n\t\t\t// Return early so the stored client stays nil: a partly initialized client\n\t\t\t// would defeat the interface == nil check later, and this is cheaper than\n\t\t\t// checking with reflect afterwards.\n\t\t\treturn err\n\t\t}\n\t\td.setDockerClient(dockerClient)\n\t}\n\n\tsubctx, cancel := context.WithTimeout(ctx, dockerPingTimeout)\n\tdefer cancel()\n\n\tclient := d.dockerClient()\n\tif client == nil {\n\t\treturn errors.New(\"unable to Ping: nil clienthdl\")\n\t}\n\t_, err = client.Ping(subctx)\n\treturn err\n}\n\n// waitForDockerDaemon is a blocking call that tries to bring up the docker\n// client, returning an error if it cannot do so within the timeout.\nfunc (d *DockerMonitor) waitForDockerDaemon(ctx context.Context) (err error) {\n\n\tdone := make(chan bool)\n\tzap.L().Info(\"Trying to initialize docker monitor\")\n\tgo func(gctx context.Context) {\n\n\t\tfor {\n\t\t\terrg := d.setupDockerDaemon(gctx)\n\t\t\tif errg == nil {\n\t\t\t\td.initMonitor(gctx)\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\tselect {\n\t\t\tcase <-gctx.Done():\n\t\t\t\treturn\n\t\t\tcase 
<-time.After(dockerRetryTimer):\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t}\n\t\tdone <- true\n\t}(ctx)\n\n\tselect {\n\tcase <-ctx.Done():\n\t\treturn nil\n\tcase <-time.After(dockerInitializationWait):\n\t\treturn fmt.Errorf(\"Unable to connect to docker daemon\")\n\tcase <-done:\n\t\tif err := d.syncContainers(ctx); err != nil {\n\t\t\tzap.L().Error(\"Failed To Sync containers at start\", zap.Error(err))\n\t\t}\n\t\tzap.L().Info(\"Started Docker Monitor\")\n\t}\n\n\treturn nil\n}\n\n// hostModeOptions creates the default options for a host-mode container. The\n// container must be activated as a Linux Process.\nfunc hostModeOptions(dockerInfo *types.ContainerJSON) *policy.OptionsType {\n\n\toptions := policy.OptionsType{\n\t\tCgroupName:        strconv.Itoa(dockerInfo.State.Pid),\n\t\tCgroupMark:        strconv.FormatUint(cgnetcls.MarkVal(), 10),\n\t\tConvertedDockerPU: true,\n\t\tAutoPort:          true,\n\t}\n\n\tfor p := range dockerInfo.Config.ExposedPorts {\n\t\tif p.Proto() == \"tcp\" {\n\t\t\ts, err := portspec.NewPortSpecFromString(p.Port(), nil)\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\toptions.Services = append(options.Services, common.Service{\n\t\t\t\tProtocol: uint8(6),\n\t\t\t\tPorts:    s,\n\t\t\t})\n\t\t}\n\t}\n\n\treturn &options\n}\n"
  },
  {
    "path": "monitor/internal/docker/monitor_linux_test.go",
    "content": "// +build linux,!rhel6\n\npackage dockermonitor\n\nimport (\n\t\"errors\"\n\t\"os\"\n\t\"syscall\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/constants\"\n)\n\nfunc TestInitDockerClient(t *testing.T) {\n\n\tConvey(\"When I try to initialize a new docker client as unix\", t, func() {\n\t\tdc, err := initDockerClient(constants.DefaultDockerSocketType, constants.DefaultDockerSocket)\n\n\t\tConvey(\"Then docker client should not be nil\", func() {\n\t\t\tSo(dc, ShouldNotBeNil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I try to initialize a new docker client as tcp\", t, func() {\n\t\tdc, err := initDockerClient(\"tcp\", constants.DefaultDockerSocket)\n\n\t\tConvey(\"Then docker client should not be nil\", func() {\n\t\t\tSo(dc, ShouldNotBeNil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I try to initialize a new docker client with some random type\", t, func() {\n\t\tdc, err := initDockerClient(\"wrongtype\", constants.DefaultDockerSocket)\n\n\t\tConvey(\"Then docker client should be nil and I should get error\", func() {\n\t\t\tSo(dc, ShouldBeNil)\n\t\t\tSo(err, ShouldResemble, errors.New(\"bad socket type: wrongtype\"))\n\t\t})\n\t})\n\n\tConvey(\"When I try to initialize a new docker client with some random path\", t, func() {\n\t\tdc, err := initDockerClient(constants.DefaultDockerSocketType, \"/var/random.sock\")\n\n\t\tConvey(\"Then docker client should be nil and I should get error\", func() {\n\t\t\tSo(dc, ShouldBeNil)\n\t\t\tSo(err, ShouldResemble, &os.PathError{Op: \"stat\", Path: \"/var/random.sock\", Err: syscall.Errno(2)})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "monitor/internal/docker/monitor_test.go",
    "content": "// +build linux\n\npackage dockermonitor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"reflect\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\n\ttypes \"github.com/docker/docker/api/types\"\n\n\t\"github.com/docker/docker/api/types/container\"\n\t\"github.com/docker/docker/api/types/events\"\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\ttevents \"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/internal/docker/mockdocker\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy/mockpolicy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cgnetcls/mockcgnetcls\"\n)\n\nvar (\n\ttestDockerMetadataExtractor extractors.DockerMetadataExtractor\n\tID                          string\n)\n\nfunc init() {\n\tID = \"74cc486f9ec3256d7bee789853ce05510167c7daf893f90a7577cdcba259d063\"\n}\n\nfunc eventCollector() collector.EventCollector {\n\tnewEvent := &collector.DefaultCollector{}\n\treturn newEvent\n}\n\nfunc initTestDockerInfo(id string, nwmode container.NetworkMode, state bool) *types.ContainerJSON {\n\tvar testInfoBase types.ContainerJSON\n\tvar testInfo types.ContainerJSONBase\n\tvar testConfig container.Config\n\tvar testNetwork types.NetworkSettings\n\tvar testDefaultNW types.DefaultNetworkSettings\n\tvar testContainer types.ContainerState\n\tvar testHostConfig container.HostConfig\n\n\tm := make(map[string]string)\n\tm[\"role\"] = \"client\"\n\tm[\"vendor\"] = \"CentOS\"\n\tm[\"$id\"] = \"598a35a60f79af0001b52ef5\"\n\tm[\"$namespace\"] = \"/sibicentos\"\n\tm[\"build-date\"] = \"20170801\"\n\tm[\"license\"] = \"GPLv2\"\n\tm[\"name\"] = \"CentOS Base Image\"\n\n\ttestDefaultNW.IPAddress = \"172.17.0.2\"\n\n\ttestNetwork.DefaultNetworkSettings = testDefaultNW\n\n\ttestConfig.Image = 
\"centos\"\n\ttestConfig.Labels = m\n\n\ttestInfo.Name = \"/priceless_rosalind\"\n\ttestInfo.State = &testContainer\n\ttestInfo.HostConfig = &testHostConfig\n\n\ttestContainer.Pid = 4912\n\ttestContainer.Running = state\n\n\ttestHostConfig.NetworkMode = nwmode\n\n\ttestInfoBase.NetworkSettings = &testNetwork\n\ttestInfoBase.ContainerJSONBase = &testInfo\n\ttestInfoBase.Config = &testConfig\n\ttestInfoBase.ID = id\n\ttestInfoBase.Config.Labels[\"storedTags\"] = \"$id=5a3b4e903653d4000133254f,$namespace=/test\"\n\n\treturn &testInfoBase\n}\n\nfunc initTestMessage(id string) *events.Message {\n\tvar testMessage events.Message\n\n\ttestMessage.ID = id\n\n\treturn &testMessage\n}\n\nfunc defaultContainer(host bool) types.ContainerJSON {\n\n\tnetworkMode := \"bridge\"\n\tif host {\n\t\tnetworkMode = \"host\"\n\t}\n\tc := types.ContainerJSON{\n\t\tContainerJSONBase: &types.ContainerJSONBase{\n\t\t\tID: ID,\n\t\t\tState: &types.ContainerState{\n\t\t\t\tRunning: true,\n\t\t\t\tPaused:  false,\n\t\t\t},\n\t\t\tHostConfig: &container.HostConfig{\n\t\t\t\tNetworkMode: container.NetworkMode(networkMode),\n\t\t\t},\n\t\t},\n\t\tMounts: nil,\n\t\tConfig: &container.Config{\n\t\t\tLabels: map[string]string{\"app\": \"web\"},\n\t\t},\n\t\tNetworkSettings: &types.NetworkSettings{\n\t\t\tDefaultNetworkSettings: types.DefaultNetworkSettings{\n\t\t\t\tIPAddress: \"172.17.0.1\",\n\t\t\t},\n\t\t},\n\t}\n\n\treturn c\n}\n\nfunc TestNewDockerMonitor(t *testing.T) {\n\n\tConvey(\"When I try to initialize a new docker monitor\", t, func() {\n\t\tdm := New(context.Background())\n\t\terr := dm.SetupConfig(nil, &Config{\n\t\t\tEventMetadataExtractor: testDockerMetadataExtractor,\n\t\t})\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"Then docker monitor should not be nil\", func() {\n\t\t\tSo(dm, ShouldNotBeNil)\n\t\t})\n\t})\n}\n\nfunc TestContextIDFromDockerID(t *testing.T) {\n\tConvey(\"When I try to retrieve contextID from dockerID\", t, func() {\n\t\tcID, err := puIDFromDockerID(ID)\n\t\tcID1 := 
\"74cc486f9ec3\"\n\n\t\tConvey(\"Then contextID should match and I should not get any error\", func() {\n\t\t\tSo(cID, ShouldEqual, cID1)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I try to retrieve contextID when dockerID length less than 12\", t, func() {\n\t\tcID, err := puIDFromDockerID(\"6f47830f64\")\n\n\t\tConvey(\"Then I should get error\", func() {\n\t\t\tSo(cID, ShouldEqual, \"\")\n\t\t\tSo(err, ShouldResemble, errors.New(\"unable to generate context id: dockerid smaller than 12 characters: 6f47830f64\"))\n\t\t})\n\t})\n\n\tConvey(\"When I try to retrieve contextID when no dockerID given\", t, func() {\n\t\tcID, err := puIDFromDockerID(\"\")\n\n\t\tConvey(\"Then I should get error\", func() {\n\t\t\tSo(cID, ShouldEqual, \"\")\n\t\t\tSo(err, ShouldResemble, errors.New(\"unable to generate context id: empty docker id\"))\n\t\t})\n\t})\n}\n\nfunc TestDefaultDockerMetadataExtractor(t *testing.T) {\n\tConvey(\"When I try to extract metadata from default docker container\", t, func() {\n\t\tpuR, err := extractors.DefaultMetadataExtractor(initTestDockerInfo(ID, \"default\", false))\n\n\t\tConvey(\"Then I should not get any error\", func() {\n\t\t\tSo(puR, ShouldNotBeNil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I try to extract metadata from host docker container\", t, func() {\n\t\tpuR, err := extractors.DefaultMetadataExtractor(initTestDockerInfo(ID, \"host\", false))\n\n\t\tConvey(\"Then I should not get any error\", func() {\n\t\t\tSo(puR, ShouldNotBeNil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n}\n\nfunc setupDockerMonitor(ctrl *gomock.Controller) (*DockerMonitor, *mockpolicy.MockResolver) {\n\n\tdm := New(context.Background())\n\tmockPolicy := mockpolicy.NewMockResolver(ctrl)\n\n\tdm.SetupHandlers(&config.ProcessorConfig{\n\t\tCollector:  eventCollector(),\n\t\tPolicy:     mockPolicy,\n\t\tResyncLock: &sync.RWMutex{},\n\t})\n\terr := dm.SetupConfig(nil, &Config{\n\t\tEventMetadataExtractor: 
testDockerMetadataExtractor,\n\t})\n\tSo(err, ShouldBeNil)\n\n\tmockDocker := mockdocker.NewMockCommonAPIClient(ctrl)\n\tdm.setDockerClient(mockDocker)\n\n\t// ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)\n\t// defer cancel()\n\t// // mockDocker.EXPECT().Ping(gomock.Any()).Return(types.Ping{}, nil)\n\n\t// err = dm.Run(ctx)\n\t// err = dm.waitForDockerDaemon(ctx)\n\tSo(err, ShouldBeNil)\n\treturn dm, mockPolicy\n}\n\nfunc TestStopDockerContainer(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to initialize a new docker monitor\", t, func() {\n\n\t\tdm, mockPU := setupDockerMonitor(ctrl)\n\t\tConvey(\"Then docker monitor should not be nil\", func() {\n\t\t\tSo(dm, ShouldNotBeNil)\n\t\t\tSo(dm, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to stop a container\", func() {\n\t\t\tmockPU.EXPECT().HandlePUEvent(gomock.Any(), \"74cc486f9ec3\", tevents.EventStop, gomock.Any()).Times(1).Return(nil)\n\t\t\tdm.SetupHandlers(&config.ProcessorConfig{\n\t\t\t\tCollector:  eventCollector(),\n\t\t\t\tPolicy:     mockPU,\n\t\t\t\tResyncLock: &sync.RWMutex{},\n\t\t\t})\n\n\t\t\terr := dm.handleDieEvent(context.Background(), &events.Message{ID: \"74cc486f9ec3\"})\n\n\t\t\tConvey(\"Then I should not get any error\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestHandleCreateEvent(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to initialize a new docker monitor\", t, func() {\n\n\t\tdmi, mockPU := setupDockerMonitor(ctrl)\n\t\tConvey(\"Then docker monitor should not be nil\", func() {\n\t\t\tSo(dmi, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to handle create event\", func() {\n\t\t\tdmi.SetupHandlers(&config.ProcessorConfig{\n\t\t\t\tCollector:  eventCollector(),\n\t\t\t\tPolicy:     mockPU,\n\t\t\t\tResyncLock: 
&sync.RWMutex{},\n\t\t\t})\n\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerInspect(gomock.Any(), ID).Return(defaultContainer(false), nil)\n\t\t\tmockPU.EXPECT().\n\t\t\t\tHandlePUEvent(gomock.Any(), ID[:12], tevents.EventCreate, gomock.Any()).Times(1).Return(nil)\n\n\t\t\terr := dmi.handleCreateEvent(context.Background(), initTestMessage(ID))\n\n\t\t\tConvey(\"Then I should not get any error\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I try to handle create event with failed container \", func() {\n\t\t\tdmi.SetupHandlers(&config.ProcessorConfig{\n\t\t\t\tCollector:  eventCollector(),\n\t\t\t\tPolicy:     mockPU,\n\t\t\t\tResyncLock: &sync.RWMutex{},\n\t\t\t})\n\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerInspect(gomock.Any(), ID).Return(defaultContainer(false), errors.New(\"error1\"))\n\t\t\terr := dmi.handleCreateEvent(context.Background(), initTestMessage(ID))\n\n\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\tSo(err, ShouldResemble, errors.New(\"unable to read container information: container 74cc486f9ec3256d7bee789853ce05510167c7daf893f90a7577cdcba259d063 kept alive per policy: error1\"))\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I try to handle create event with no ID given\", func() {\n\t\t\terr := dmi.handleCreateEvent(context.Background(), initTestMessage(\"\"))\n\n\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\tSo(err, ShouldResemble, errors.New(\"unable to generate context id: empty docker id\"))\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestHandleStartEvent(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to initialize a new docker monitor\", t, func() {\n\n\t\tdmi, mockPU := setupDockerMonitor(ctrl)\n\t\tConvey(\"Then docker monitor should not be nil\", func() {\n\t\t\tSo(dmi, ShouldNotBeNil)\n\t\t})\n\n\t\tdmi.SetupHandlers(&config.ProcessorConfig{\n\t\t\tCollector:  
eventCollector(),\n\t\t\tPolicy:     mockPU,\n\t\t\tResyncLock: &sync.RWMutex{},\n\t\t})\n\n\t\tConvey(\"When I try to handle start event with a valid container\", func() {\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerInspect(gomock.Any(), ID).Return(defaultContainer(false), nil)\n\t\t\tmockPU.EXPECT().\n\t\t\t\tHandlePUEvent(gomock.Any(), ID[:12], tevents.EventStart, gomock.Any()).Times(1).Return(nil)\n\n\t\t\terr := dmi.handleStartEvent(context.Background(), initTestMessage(ID))\n\t\t\tConvey(\"Then I should get no errors\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I try to handle start event with a bad container\", func() {\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerInspect(gomock.Any(), ID).Return(defaultContainer(false), errors.New(\"error\"))\n\n\t\t\terr := dmi.handleStartEvent(context.Background(), initTestMessage(ID))\n\n\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I try to handle start event with no ID given\", func() {\n\t\t\tc := defaultContainer(false)\n\t\t\tc.ID = \"\"\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerInspect(gomock.Any(), gomock.Any()).Return(c, nil)\n\n\t\t\terr := dmi.handleStartEvent(context.Background(), initTestMessage(\"\"))\n\n\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I try to handle start event with a valid container and policy fails\", func() {\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerInspect(gomock.Any(), ID).Return(defaultContainer(false), nil)\n\t\t\tmockPU.EXPECT().\n\t\t\t\tHandlePUEvent(gomock.Any(), ID[:12], tevents.EventStart, gomock.Any()).Times(1).Return(errors.New(\"policy\"))\n\n\t\t\terr := dmi.handleStartEvent(context.Background(), 
initTestMessage(ID))\n\t\t\tConvey(\"Then I should get an error\", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestHandleDieEvent(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to initialize a new docker monitor\", t, func() {\n\n\t\tdmi, mockPU := setupDockerMonitor(ctrl)\n\t\tConvey(\"Then docker monitor should not be nil\", func() {\n\t\t\tSo(dmi, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to handle die event\", func() {\n\t\t\tmockPU.EXPECT().HandlePUEvent(gomock.Any(), \"74cc486f9ec3\", tevents.EventStop, gomock.Any()).Times(1).Return(nil)\n\t\t\tdmi.SetupHandlers(&config.ProcessorConfig{\n\t\t\t\tCollector:  eventCollector(),\n\t\t\t\tPolicy:     mockPU,\n\t\t\t\tResyncLock: &sync.RWMutex{},\n\t\t\t})\n\t\t\terr := dmi.handleDieEvent(context.Background(), initTestMessage(ID))\n\n\t\t\tConvey(\"Then I should not get any error\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestHandleDestroyEvent(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to initialize a new docker monitor\", t, func() {\n\n\t\tdmi, mockPU := setupDockerMonitor(ctrl)\n\n\t\tConvey(\"Then docker monitor should not be nil\", func() {\n\t\t\tSo(dmi, ShouldNotBeNil)\n\t\t})\n\n\t\tmockCG := mockcgnetcls.NewMockCgroupnetcls(ctrl)\n\n\t\tConvey(\"When I try to handle destroy event\", func() {\n\t\t\tmockPU.EXPECT().HandlePUEvent(gomock.Any(), \"74cc486f9ec3\", tevents.EventDestroy, gomock.Any()).Times(1).Return(nil)\n\t\t\tmockCG.EXPECT().DeleteCgroup(\"74cc486f9ec3\").Times(1).Return(nil)\n\t\t\tdmi.SetupHandlers(&config.ProcessorConfig{\n\t\t\t\tCollector:  eventCollector(),\n\t\t\t\tPolicy:     mockPU,\n\t\t\t\tResyncLock: &sync.RWMutex{},\n\t\t\t})\n\t\t\tdmi.netcls = mockCG\n\t\t\terr := dmi.handleDestroyEvent(context.Background(), initTestMessage(ID))\n\n\t\t\tConvey(\"Then I should not get any error\", func() 
{\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I try to handle destroy event with no docker ID\", func() {\n\t\t\terr := dmi.handleDestroyEvent(context.Background(), initTestMessage(\"\"))\n\n\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\tSo(err, ShouldResemble, errors.New(\"unable to generate context id: empty docker id\"))\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestHandlePauseEvent(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to initialize a new docker monitor\", t, func() {\n\n\t\tdmi, mockPU := setupDockerMonitor(ctrl)\n\n\t\tConvey(\"Then docker monitor should not be nil\", func() {\n\t\t\tSo(dmi, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to handle pause event\", func() {\n\t\t\tmockPU.EXPECT().HandlePUEvent(gomock.Any(), \"74cc486f9ec3\", tevents.EventPause, gomock.Any()).Times(1).Return(nil)\n\t\t\tdmi.SetupHandlers(&config.ProcessorConfig{\n\t\t\t\tCollector:  eventCollector(),\n\t\t\t\tPolicy:     mockPU,\n\t\t\t\tResyncLock: &sync.RWMutex{},\n\t\t\t})\n\t\t\terr := dmi.handlePauseEvent(context.Background(), initTestMessage(ID))\n\n\t\t\tConvey(\"Then I should not get any error\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I try to handle pause event with no ID\", func() {\n\t\t\terr := dmi.handlePauseEvent(context.Background(), initTestMessage(\"\"))\n\n\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\tSo(err, ShouldResemble, errors.New(\"unable to generate context id: empty docker id\"))\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestHandleUnpauseEvent(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to initialize a new docker monitor\", t, func() {\n\n\t\tdmi, mockPU := setupDockerMonitor(ctrl)\n\n\t\tConvey(\"Then docker monitor should not be nil\", func() {\n\t\t\tSo(dmi, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to handle unpause event\", func() 
{\n\t\t\tmockPU.EXPECT().HandlePUEvent(gomock.Any(), \"74cc486f9ec3\", tevents.EventUnpause, gomock.Any()).Times(1).Return(nil)\n\t\t\tdmi.SetupHandlers(&config.ProcessorConfig{\n\t\t\t\tCollector:  eventCollector(),\n\t\t\t\tPolicy:     mockPU,\n\t\t\t\tResyncLock: &sync.RWMutex{},\n\t\t\t})\n\t\t\terr := dmi.handleUnpauseEvent(context.Background(), initTestMessage(ID))\n\n\t\t\tConvey(\"Then I should not get any error\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I try to handle unpause event with no ID\", func() {\n\t\t\terr := dmi.handleUnpauseEvent(context.Background(), initTestMessage(\"\"))\n\n\t\t\tConvey(\"Then I should get error\", func() {\n\t\t\t\tSo(err, ShouldResemble, errors.New(\"unable to generate context id: empty docker id\"))\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestExtractMetadata(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to initialize a new docker monitor\", t, func() {\n\n\t\tdmi, _ := setupDockerMonitor(ctrl)\n\n\t\tConvey(\"Then docker monitor should not be nil\", func() {\n\t\t\tSo(dmi, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to call extractmetadata with nil docker info\", func() {\n\t\t\tpuR, err := dmi.extractMetadata(nil)\n\n\t\t\tConvey(\"I should get error\", func() {\n\t\t\t\tSo(puR, ShouldBeNil)\n\t\t\t\tSo(err, ShouldResemble, errors.New(\"docker info is empty\"))\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestSyncContainers(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I try to initialize a new docker monitor\", t, func() {\n\n\t\tdmi, mockPU := setupDockerMonitor(ctrl)\n\t\tdmi.SetupHandlers(&config.ProcessorConfig{\n\t\t\tCollector:  eventCollector(),\n\t\t\tPolicy:     mockPU,\n\t\t\tResyncLock: &sync.RWMutex{},\n\t\t})\n\n\t\tConvey(\"Then docker monitor should not be nil\", func() {\n\t\t\tSo(dmi, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"If I try to sync containers when SyncAtStart 
is not set, I should get nil\", func() {\n\t\t\tdmi.syncAtStart = false\n\t\t\terr := dmi.Resync(context.Background())\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"If I try to sync containers and docker list fails, I should get an error\", func() {\n\t\t\tdmi.syncAtStart = true\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerList(gomock.Any(), gomock.Any()).Return([]types.Container{{ID: ID}}, errors.New(\"error\"))\n\n\t\t\terr := dmi.Resync(context.Background())\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I try to call sync containers and a policy call fails\", func() {\n\t\t\tdmi.syncAtStart = true\n\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerList(gomock.Any(), gomock.Any()).Return([]types.Container{{ID: ID}}, nil)\n\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerInspect(gomock.Any(), ID).Return(defaultContainer(false), nil).MaxTimes(2)\n\n\t\t\tmockPU.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(errors.New(\"blah\"))\n\n\t\t\terr := dmi.Resync(context.Background())\n\n\t\t\tConvey(\"Then I should not get an error since we ignore bad containers\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\n\t\t})\n\n\t\tConvey(\"When I try to call sync containers\", func() {\n\t\t\tdmi.syncAtStart = true\n\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerList(gomock.Any(), gomock.Any()).Return([]types.Container{{ID: ID}}, nil)\n\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerInspect(gomock.Any(), ID).Return(defaultContainer(false), nil).MaxTimes(2)\n\n\t\t\tmockPU.EXPECT().HandlePUEvent(gomock.Any(), ID[:12], tevents.EventStart, gomock.Any()).AnyTimes().Return(nil)\n\n\t\t\terr := dmi.Resync(context.Background())\n\n\t\t\tConvey(\"Then I should not get an error\", func() {\n\t\t\t\tSo(err, 
ShouldBeNil)\n\t\t\t})\n\n\t\t})\n\n\t\tConvey(\"When I try to call sync host containers\", func() {\n\t\t\tdmi.syncAtStart = true\n\t\t\thostContainer := types.Container{\n\t\t\t\tID: ID,\n\t\t\t\tHostConfig: struct {\n\t\t\t\t\tNetworkMode string `json:\",omitempty\"`\n\t\t\t\t}{NetworkMode: \"host\"}}\n\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerList(gomock.Any(), gomock.Any()).Return([]types.Container{hostContainer}, nil)\n\n\t\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().\n\t\t\t\tContainerInspect(gomock.Any(), ID).Return(defaultContainer(true), nil).MaxTimes(2)\n\n\t\t\tmockPU.EXPECT().HandlePUEvent(gomock.Any(), ID[:12], tevents.EventStart, gomock.Any()).AnyTimes().Return(nil)\n\n\t\t\terr := dmi.Resync(context.Background())\n\n\t\t\tConvey(\"Then I should not get an error\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\n\t\t})\n\t})\n}\n\nfunc Test_initTestDockerInfo(t *testing.T) {\n\ttype args struct {\n\t\tid     string\n\t\tnwmode container.NetworkMode\n\t\tstate  bool\n\t}\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t\twant *types.ContainerJSON\n\t}{}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := initTestDockerInfo(tt.args.id, tt.args.nwmode, tt.args.state); !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"initTestDockerInfo() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestWaitForDockerDaemon(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"If docker daemon is not running and setup docker daemon returns an error\", t, func() {\n\n\t\tdmi, _ := setupDockerMonitor(ctrl)\n\t\tdmi.dockerClient().(*mockdocker.MockCommonAPIClient).EXPECT().Ping(gomock.Any()).Return(types.Ping{}, errors.New(\"Ping Error\")).AnyTimes()\n\t\t// 5*time.Second is added so the context deadline is greater than dockerInitializationWait\n\t\twaitforDockerInitializationTimeout := dockerInitializationWait + 
5*time.Second\n\t\texpiryTime := time.Now().Add(waitforDockerInitializationTimeout)\n\t\tdmi.socketAddress = \"unix://tmp/test.sock\"\n\t\tctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(waitforDockerInitializationTimeout))\n\t\terr := dmi.waitForDockerDaemon(ctx)\n\t\tSo(err, ShouldNotBeNil)\n\t\tSo(time.Now(), ShouldHappenBefore, expiryTime)\n\t\t// this will kill the Goroutine\n\t\tcancel()\n\t})\n}\n\nfunc TestSetupDockerDaemon(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"If setupDockerdaemon returns an error dockerClient is nil\", t, func() {\n\n\t\tdmi, _ := setupDockerMonitor(ctrl)\n\t\tdmi.setDockerClient(nil)\n\t\tdmi.socketType = \"invalid\"\n\t\terr := dmi.setupDockerDaemon(context.Background())\n\t\tSo(err, ShouldNotBeNil)\n\t\tSo(dmi.dockerClient(), ShouldBeNil)\n\n\t})\n}\n"
  },
  {
    "path": "monitor/internal/docker/types.go",
    "content": "package dockermonitor\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"github.com/docker/docker/api/types\"\n\t\"github.com/docker/docker/api/types/events\"\n)\n\n// Event is the type of various docker events.\ntype Event string\n\nconst (\n\t// EventCreate represents the Docker \"create\" event.\n\tEventCreate Event = \"create\"\n\n\t// EventStart represents the Docker \"start\" event.\n\tEventStart Event = \"start\"\n\n\t// EventDie represents the Docker \"die\" event.\n\tEventDie Event = \"die\"\n\n\t// EventDestroy represents the Docker \"destroy\" event.\n\tEventDestroy Event = \"destroy\"\n\n\t// EventPause represents the Docker \"pause\" event.\n\tEventPause Event = \"pause\"\n\n\t// EventUnpause represents the Docker \"unpause\" event.\n\tEventUnpause Event = \"unpause\"\n\n\t// EventConnect represents the Docker \"connect\" event.\n\tEventConnect Event = \"connect\"\n\n\t// DockerClientVersion is the Docker API version requested by the client.\n\tDockerClientVersion = \"v1.23\"\n\n\t// dockerPingTimeout is the time to wait for a ping to succeed.\n\tdockerPingTimeout = 2 * time.Second\n\n\t// dockerRetryTimer is the time after which we will retry to bring docker up.\n\tdockerRetryTimer = 2 * time.Second\n\n\t// dockerInitializationWait is the maximum time we will wait for the docker daemon to come up.\n\tdockerInitializationWait = 2 * dockerRetryTimer\n)\n\n// An EventHandler is the type of docker event handler functions.\ntype EventHandler func(ctx context.Context, event *events.Message) error\n\n// DockerClientInterface creates an interface for the docker client so that we can do tests.\ntype DockerClientInterface interface {\n\t// ContainerInspect corresponds to the ContainerInspect of docker.\n\tContainerInspect(ctx context.Context, containerID string) (types.ContainerJSON, error)\n\n\t// ContainerList abstracts the ContainerList as interface.\n\tContainerList(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error)\n\n\t
// ContainerStop abstracts the ContainerStop as interface.\n\tContainerStop(ctx context.Context, containerID string, timeout *time.Duration) error\n\n\t// Events abstracts the Events method as an interface.\n\tEvents(ctx context.Context, options types.EventsOptions) (<-chan events.Message, <-chan error)\n\n\t// Ping abstracts the Ping method as an interface.\n\tPing(ctx context.Context) (types.Ping, error)\n}\n"
  },
  {
    "path": "monitor/internal/k8s/config.go",
    "content": "package k8smonitor\n\nimport (\n\tcriapi \"k8s.io/cri-api/pkg/apis\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n)\n\n// Config is the config for the Kubernetes monitor\ntype Config struct { // nolint\n\tKubeconfig string\n\tNodename   string\n\n\tCRIRuntimeService criapi.RuntimeService\n\n\tMetadataExtractor extractors.PodMetadataExtractor\n}\n\n// DefaultConfig provides a default configuration\nfunc DefaultConfig() *Config {\n\treturn &Config{\n\t\tMetadataExtractor: nil,\n\t\tCRIRuntimeService: nil,\n\t\tKubeconfig:        \"\",\n\t\tNodename:          \"\",\n\t}\n}\n\n// SetupDefaultConfig adds defaults to a partial configuration\nfunc SetupDefaultConfig(kubernetesConfig *Config) *Config {\n\treturn kubernetesConfig\n}\n"
  },
  {
    "path": "monitor/internal/k8s/event_handler.go",
    "content": "package k8smonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"go.aporeto.io/enforcerd/internal/extractors/containermetadata\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/external\"\n\n\t\"go.uber.org/zap\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n)\n\nvar _ external.ReceiveEvents = &K8sMonitor{}\n\nfunc (m *K8sMonitor) isCniInstalledOrRuncProxyStarted() bool {\n\tm.extMonitorStartedLock.RLock()\n\tdefer m.extMonitorStartedLock.RUnlock()\n\treturn m.cniInstalledOrRuncProxyStarted\n}\n\n// SenderReady will be called by the sender to notify the receiver that the sender\n// is now ready to send events.\nfunc (m *K8sMonitor) SenderReady() {\n\tm.extMonitorStartedLock.Lock()\n\tm.cniInstalledOrRuncProxyStarted = true\n\tm.extMonitorStartedLock.Unlock()\n\tclose(m.cniInstalledOrRuncProxyStartedCh)\n\tzap.L().Debug(\"K8sMonitor: CNI plugin is installed and configured or runc-proxy has started\")\n}\n\n// Event will receive event `data` for processing a common.Event in the monitor.\n// The sent data is implementation specific - therefore it has no type in the interface.\n// If the sent data is of an unexpected type, its implementor must return an error\n// indicating so.\nfunc (m *K8sMonitor) Event(ctx context.Context, ev common.Event, data interface{}) error {\n\t// the data is expected to be of type containermetadata.CommonKubernetesContainerMetadata\n\tkmd, ok := data.(containermetadata.CommonKubernetesContainerMetadata)\n\tif !ok {\n\t\treturn fmt.Errorf(\"K8sMonitor: invalid data type: %T\", data)\n\t}\n\n\tswitch ev {\n\tcase common.EventStart:\n\t\tif err := m.startEvent(ctx, kmd, 0); err != nil {\n\t\t\t// TODO: handle retries that we can handle\n\t\t\treturn fmt.Errorf(\"K8sMonitor: startEvent: %s\", err)\n\t\t}\n\tcase common.EventDestroy:\n\t\tif err := m.destroyEvent(ctx, kmd); err != nil {\n\t\t\treturn fmt.Errorf(\"K8sMonitor: destroyEvent: %s\", err)\n\t\t}\n\tdefault:\n\t\treturn 
fmt.Errorf(\"K8sMonitor: unexpected event %s\", ev)\n\t}\n\treturn nil\n}\n\ntype startEventFunc func(context.Context, containermetadata.CommonKubernetesContainerMetadata, uint) error\n\nfunc (m *K8sMonitor) startEvent(ctx context.Context, kmd containermetadata.CommonKubernetesContainerMetadata, retry uint) error {\n\tswitch kmd.Kind() {\n\tcase containermetadata.PodSandbox:\n\t\tzap.L().Debug(\"K8sMonitor: startEvent: PodSandbox\", zap.String(\"sandboxID\", kmd.ID()), zap.String(\"podName\", kmd.PodName()), zap.String(\"podNamespace\", kmd.PodNamespace()))\n\t\t// get pod\n\t\tpod, err := m.getPod(ctx, kmd.PodNamespace(), kmd.PodName())\n\t\tif err != nil {\n\t\t\t// fire off a retry for this, but simply return with the error\n\t\t\tgo m.startEventRetry(kmd, retry+1)\n\t\t\treturn err\n\t\t}\n\t\t// this should never happen, but if it does, simply return\n\t\tif pod.Spec.HostNetwork {\n\t\t\treturn nil\n\t\t}\n\t\tif err := m.podCache.Set(kmd.ID(), pod); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t// metadata extraction\n\t\truntime, err := m.metadataExtractor(ctx, pod, kmd.NetNSPath())\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif err := m.runtimeCache.Set(kmd.ID(), runtime); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn m.handlers.Policy.HandlePUEvent(ctx, kmd.ID(), common.EventStart, runtime)\n\n\tcase containermetadata.PodContainer:\n\t\tzap.L().Debug(\"K8sMonitor: startEvent: PodContainer\", zap.String(\"id\", kmd.ID()), zap.String(\"sandboxID\", kmd.PodSandboxID()), zap.String(\"podName\", kmd.PodName()), zap.String(\"podNamespace\", kmd.PodNamespace()))\n\t\t// as we don't handle host network containers, this is a noop\n\t\treturn nil\n\tdefault:\n\t\treturn fmt.Errorf(\"K8sMonitor: unexpected container kind for start event: %s\", kmd.Kind())\n\t}\n}\n\ntype destroyEventFunc func(context.Context, containermetadata.CommonKubernetesContainerMetadata) error\n\nfunc (m *K8sMonitor) destroyEvent(ctx context.Context, kmd 
containermetadata.CommonKubernetesContainerMetadata) error {\n\tswitch kmd.Kind() {\n\tcase containermetadata.PodSandbox:\n\t\tzap.L().Debug(\"K8sMonitor: destroyEvent: PodSandbox\", zap.String(\"sandboxID\", kmd.ID()))\n\t\truntime := m.runtimeCache.Get(kmd.ID())\n\t\tif runtime == nil {\n\t\t\t// destroy event was sent previously, not a problem, just return\n\t\t\tzap.L().Debug(\"K8sMonitor: destroyEvent: sandbox not in runtime cache\")\n\t\t\treturn nil\n\t\t}\n\n\t\t// simply delete it from the caches and send a destroy event\n\t\t// even if that fails in the policy engine, there is nothing we can do about it\n\t\tm.runtimeCache.Delete(kmd.ID())\n\t\tm.podCache.Delete(kmd.ID())\n\t\treturn m.handlers.Policy.HandlePUEvent(ctx, kmd.ID(), common.EventDestroy, runtime)\n\n\tcase containermetadata.PodContainer:\n\t\t// if this is a container event that belongs to an existing sandbox\n\t\t// we can simply return, we don't need to do anything\n\t\treturn nil\n\n\tdefault:\n\t\treturn fmt.Errorf(\"K8sMonitor: unexpected container kind for destroy event: %s\", kmd.Kind())\n\t}\n}\n\ntype stopEventFunc func(context.Context, string) error\n\nfunc (m *K8sMonitor) stopEvent(ctx context.Context, sandboxID string) error {\n\tzap.L().Debug(\"K8sMonitor: stopEvent\", zap.String(\"sandboxID\", sandboxID))\n\truntime := m.runtimeCache.Get(sandboxID)\n\tif runtime == nil {\n\t\t// destroy event had been sent already, not a problem, simply return\n\t\tzap.L().Debug(\"K8sMonitor: stopEvent: sandbox not in runtime cache\")\n\t\treturn nil\n\t}\n\n\treturn m.handlers.Policy.HandlePUEvent(ctx, sandboxID, common.EventStop, runtime)\n}\n\ntype updateEventFunc func(context.Context, string) error\n\nfunc (m *K8sMonitor) updateEvent(ctx context.Context, sandboxID string) error {\n\tzap.L().Debug(\"K8sMonitor: updateEvent\", zap.String(\"sandboxID\", sandboxID))\n\truntime := m.runtimeCache.Get(sandboxID)\n\tif runtime == nil {\n\t\t// destroy event had been sent already, not a problem, 
simply return\n\t\tzap.L().Debug(\"K8sMonitor: updateEvent: sandbox not in runtime cache\")\n\t\treturn nil\n\t}\n\n\tpod := m.podCache.Get(sandboxID)\n\tif pod == nil {\n\t\t// destroy event had been sent already, not a problem, simply return\n\t\tzap.L().Debug(\"K8sMonitor: updateEvent: pod not in pod cache\")\n\t\treturn nil\n\t}\n\n\t// run metadata extraction again\n\t// don't forget to update the runtime cache\n\truntime, err := m.metadataExtractor(ctx, pod, runtime.NSPath())\n\tif err != nil {\n\t\treturn err\n\t}\n\tif err := m.runtimeCache.Set(sandboxID, runtime); err != nil {\n\t\treturn err\n\t}\n\n\t// send an update event\n\treturn m.handlers.Policy.HandlePUEvent(ctx, sandboxID, common.EventUpdate, runtime)\n}\n\n// getPod tries to get the pod from the internal informer cache first, and falls back to the Kubernetes API if that fails\n// the cache is being kept up-to-date by Kubernetes internals, we don't need to care about this\n// NOTE: do not confuse the informer cache with the podCache from this package!\nfunc (m *K8sMonitor) getPod(ctx context.Context, namespace, name string) (*corev1.Pod, error) {\n\tpod, err := m.podLister.Pods(namespace).Get(name)\n\tif err != nil {\n\t\tzap.L().Debug(\"K8sMonitor: getPod: failed to get pod from cache. Using Kubernetes API directly now instead...\", zap.String(\"name\", name), zap.String(\"namespace\", namespace), zap.Error(err))\n\t\treturn m.kubeClient.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})\n\t}\n\treturn pod, nil\n}\n"
  },
  {
    "path": "monitor/internal/k8s/event_handler_test.go",
    "content": "package k8smonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"testing\"\n\n\t\"github.com/golang/mock/gomock\"\n\t\"go.aporeto.io/enforcerd/internal/extractors/containermetadata\"\n\t\"go.aporeto.io/enforcerd/internal/extractors/containermetadata/mockcontainermetadata\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\tpolicy \"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\tkubernetes \"k8s.io/client-go/kubernetes\"\n\t\"k8s.io/client-go/kubernetes/fake\"\n)\n\nfunc TestK8sMonitor_updateEvent(t *testing.T) {\n\ttype args struct {\n\t\tctx       context.Context\n\t\tsandboxID string\n\t}\n\ttests := []struct {\n\t\tname              string\n\t\targs              args\n\t\twantErr           bool\n\t\tmetadataExtractor extractors.PodMetadataExtractor\n\t\tprepare           func(t *testing.T, mocks *unitTestMonitorMocks)\n\t}{\n\t\t{\n\t\t\tname: \"runtime not found for sandbox ID\",\n\t\t\targs: args{\n\t\t\t\tctx:       context.Background(),\n\t\t\t\tsandboxID: \"not found\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"not found\")).Return(nil).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"runtime found, but pod not found for sandbox ID\",\n\t\t\targs: args{\n\t\t\t\tctx:       context.Background(),\n\t\t\t\tsandboxID: \"not found\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"not found\")).Return(policy.NewPURuntimeWithDefaults()).Times(1)\n\t\t\t\tmocks.podCache.EXPECT().Get(gomock.Eq(\"not found\")).Return(nil).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"runtime and pod found, but metadata extraction fails\",\n\t\t\targs: args{\n\t\t\t\tctx:       
context.Background(),\n\t\t\t\tsandboxID: \"sandboxID\",\n\t\t\t},\n\t\t\twantErr: true,\n\t\t\tmetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\treturn nil, fmt.Errorf(\"error\")\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(policy.NewPURuntimeWithDefaults()).Times(1)\n\t\t\t\tmocks.podCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(&corev1.Pod{}).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"runtime and pod found, metadata extraction succeeded, but internal update failed\",\n\t\t\targs: args{\n\t\t\t\tctx:       context.Background(),\n\t\t\t\tsandboxID: \"sandboxID\",\n\t\t\t},\n\t\t\twantErr: true,\n\t\t\tmetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\treturn policy.NewPURuntimeWithDefaults(), nil\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(policy.NewPURuntimeWithDefaults()).Times(1)\n\t\t\t\tmocks.podCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(&corev1.Pod{}).Times(1)\n\t\t\t\tmocks.runtimeCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(policy.NewPURuntimeWithDefaults())).Return(fmt.Errorf(\"error\")).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"update event fails in policy engine\",\n\t\t\targs: args{\n\t\t\t\tctx:       context.Background(),\n\t\t\t\tsandboxID: \"sandboxID\",\n\t\t\t},\n\t\t\twantErr: true,\n\t\t\tmetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\treturn policy.NewPURuntimeWithDefaults(), nil\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) 
{\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(policy.NewPURuntimeWithDefaults()).Times(1)\n\t\t\t\tmocks.podCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(&corev1.Pod{}).Times(1)\n\t\t\t\tmocks.runtimeCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(policy.NewPURuntimeWithDefaults())).Return(nil).Times(1)\n\t\t\t\tmocks.policy.EXPECT().HandlePUEvent(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(\"sandboxID\"),\n\t\t\t\t\tgomock.Eq(common.EventUpdate),\n\t\t\t\t\tgomock.Eq(policy.NewPURuntimeWithDefaults()),\n\t\t\t\t).Return(fmt.Errorf(\"error\")).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"update event succeeds\",\n\t\t\targs: args{\n\t\t\t\tctx:       context.Background(),\n\t\t\t\tsandboxID: \"sandboxID\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t\tmetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\treturn policy.NewPURuntimeWithDefaults(), nil\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(policy.NewPURuntimeWithDefaults()).Times(1)\n\t\t\t\tmocks.podCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(&corev1.Pod{}).Times(1)\n\t\t\t\tmocks.runtimeCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(policy.NewPURuntimeWithDefaults())).Return(nil).Times(1)\n\t\t\t\tmocks.policy.EXPECT().HandlePUEvent(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(\"sandboxID\"),\n\t\t\t\t\tgomock.Eq(common.EventUpdate),\n\t\t\t\t\tgomock.Eq(policy.NewPURuntimeWithDefaults()),\n\t\t\t\t).Return(nil).Times(1)\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctrl := gomock.NewController(t)\n\t\t\tm, mocks := newUnitTestMonitor(ctrl)\n\t\t\tm.metadataExtractor = tt.metadataExtractor\n\t\t\ttt.prepare(t, mocks)\n\t\t\tif err := m.updateEvent(tt.args.ctx, tt.args.sandboxID); (err != nil) != tt.wantErr 
{\n\t\t\t\tt.Errorf(\"K8sMonitor.updateEvent() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tctrl.Finish()\n\t\t})\n\t}\n}\n\nfunc TestK8sMonitor_stopEvent(t *testing.T) {\n\ttype args struct {\n\t\tctx       context.Context\n\t\tsandboxID string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twantErr bool\n\t\tprepare func(t *testing.T, mocks *unitTestMonitorMocks)\n\t}{\n\t\t{\n\t\t\tname: \"runtime not found for sandbox ID\",\n\t\t\targs: args{\n\t\t\t\tctx:       context.Background(),\n\t\t\t\tsandboxID: \"not found\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"not found\")).Return(nil).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"stop event failed in policy engine\",\n\t\t\targs: args{\n\t\t\t\tctx:       context.Background(),\n\t\t\t\tsandboxID: \"sandboxID\",\n\t\t\t},\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(policy.NewPURuntimeWithDefaults()).Times(1)\n\t\t\t\tmocks.policy.EXPECT().HandlePUEvent(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(\"sandboxID\"),\n\t\t\t\t\tgomock.Eq(common.EventStop),\n\t\t\t\t\tgomock.Eq(policy.NewPURuntimeWithDefaults()),\n\t\t\t\t).Return(fmt.Errorf(\"error\")).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"stop event succeeds\",\n\t\t\targs: args{\n\t\t\t\tctx:       context.Background(),\n\t\t\t\tsandboxID: \"sandboxID\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) 
{\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(policy.NewPURuntimeWithDefaults()).Times(1)\n\t\t\t\tmocks.policy.EXPECT().HandlePUEvent(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(\"sandboxID\"),\n\t\t\t\t\tgomock.Eq(common.EventStop),\n\t\t\t\t\tgomock.Eq(policy.NewPURuntimeWithDefaults()),\n\t\t\t\t).Return(nil).Times(1)\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctrl := gomock.NewController(t)\n\t\t\tm, mocks := newUnitTestMonitor(ctrl)\n\t\t\ttt.prepare(t, mocks)\n\t\t\tif err := m.stopEvent(tt.args.ctx, tt.args.sandboxID); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"K8sMonitor.stopEvent() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tctrl.Finish()\n\t\t})\n\t}\n}\n\nfunc TestK8sMonitor_destroyEvent(t *testing.T) {\n\ttests := []struct {\n\t\tname    string\n\t\twantErr bool\n\t\tprepare func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata)\n\t}{\n\t\t{\n\t\t\tname:    \"unexpected container kind\",\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.Container).Times(2)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:    \"nothing happens for a PodContainer\",\n\t\t\twantErr: false,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodContainer).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:    \"PodSandbox not found in cache\",\n\t\t\twantErr: false,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) 
{\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"sandboxID\").Times(2)\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(nil).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:    \"destroy event failed in policy engine\",\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"sandboxID\").Times(5)\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(policy.NewPURuntimeWithDefaults()).Times(1)\n\t\t\t\tmocks.runtimeCache.EXPECT().Delete(gomock.Eq(\"sandboxID\")).Times(1)\n\t\t\t\tmocks.podCache.EXPECT().Delete(gomock.Eq(\"sandboxID\")).Times(1)\n\t\t\t\tmocks.policy.EXPECT().HandlePUEvent(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(\"sandboxID\"),\n\t\t\t\t\tgomock.Eq(common.EventDestroy),\n\t\t\t\t\tgomock.Eq(policy.NewPURuntimeWithDefaults()),\n\t\t\t\t).Return(fmt.Errorf(\"error\")).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:    \"destroy event succeeds\",\n\t\t\twantErr: false,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) 
{\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"sandboxID\").Times(5)\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(policy.NewPURuntimeWithDefaults()).Times(1)\n\t\t\t\tmocks.runtimeCache.EXPECT().Delete(gomock.Eq(\"sandboxID\")).Times(1)\n\t\t\t\tmocks.podCache.EXPECT().Delete(gomock.Eq(\"sandboxID\")).Times(1)\n\t\t\t\tmocks.policy.EXPECT().HandlePUEvent(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(\"sandboxID\"),\n\t\t\t\t\tgomock.Eq(common.EventDestroy),\n\t\t\t\t\tgomock.Eq(policy.NewPURuntimeWithDefaults()),\n\t\t\t\t).Return(nil).Times(1)\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctrl := gomock.NewController(t)\n\t\t\tm, mocks := newUnitTestMonitor(ctrl)\n\t\t\tkmd := mockcontainermetadata.NewMockCommonKubernetesContainerMetadata(ctrl)\n\t\t\ttt.prepare(t, mocks, kmd)\n\t\t\tif err := m.destroyEvent(context.Background(), kmd); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"K8sMonitor.destroyEvent() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tctrl.Finish()\n\t\t})\n\t}\n}\n\nfunc TestK8sMonitor_startEvent(t *testing.T) {\n\tpodTemplate1 := &corev1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      \"my-pod\",\n\t\t\tNamespace: \"default\",\n\t\t},\n\t\tSpec: corev1.PodSpec{\n\t\t\tNodeName: \"test\",\n\t\t},\n\t}\n\tpodTemplate2 := &corev1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      \"my-host-network-pod\",\n\t\t\tNamespace: \"default\",\n\t\t},\n\t\tSpec: corev1.PodSpec{\n\t\t\tHostNetwork: true,\n\t\t\tNodeName:    \"test\",\n\t\t},\n\t}\n\tc := fake.NewSimpleClientset(\n\t\tpodTemplate1.DeepCopy(),\n\t\tpodTemplate2.DeepCopy(),\n\t)\n\n\ttests := []struct {\n\t\tname              string\n\t\twantErr           bool\n\t\tmetadataExtractor extractors.PodMetadataExtractor\n\t\tkubeClient        kubernetes.Interface\n\t\tprepare           func(t *testing.T, mocks 
*unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata)\n\t}{\n\t\t{\n\t\t\tname:    \"unexpected container kind\",\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.Container).Times(2)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:    \"PodContainer: is simply being ignored\",\n\t\t\twantErr: false,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodContainer).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"containerID\").Times(1)\n\t\t\t\tkmd.EXPECT().PodSandboxID().Return(\"sandboxID\").Times(1)\n\t\t\t\tkmd.EXPECT().PodName().Return(\"my-pod\").Times(1)\n\t\t\t\tkmd.EXPECT().PodNamespace().Return(\"default\").Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:       \"PodSandbox: failed to get pod from API\",\n\t\t\twantErr:    true,\n\t\t\tkubeClient: c,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"sandboxID\").Times(1)\n\t\t\t\tkmd.EXPECT().PodName().Return(\"not-found\").Times(2)\n\t\t\t\tkmd.EXPECT().PodNamespace().Return(\"default\").Times(2)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:       \"PodSandbox: got pod from API, but failed to update cache\",\n\t\t\twantErr:    true,\n\t\t\tkubeClient: c,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) 
{\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"sandboxID\").Times(2)\n\t\t\t\tkmd.EXPECT().PodName().Return(\"my-pod\").Times(2)\n\t\t\t\tkmd.EXPECT().PodNamespace().Return(\"default\").Times(2)\n\t\t\t\tmocks.podCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(podTemplate1.DeepCopy())).Return(fmt.Errorf(\"error\")).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:       \"PodSandbox: metadata extraction fails\",\n\t\t\twantErr:    true,\n\t\t\tkubeClient: c,\n\t\t\tmetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\treturn nil, fmt.Errorf(\"error\")\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"sandboxID\").Times(2)\n\t\t\t\tkmd.EXPECT().PodName().Return(\"my-pod\").Times(2)\n\t\t\t\tkmd.EXPECT().PodNamespace().Return(\"default\").Times(2)\n\t\t\t\tkmd.EXPECT().NetNSPath().Return(\"/var/run/netns/container1\")\n\t\t\t\tmocks.podCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(podTemplate1.DeepCopy())).Return(nil).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:       \"PodSandbox: metadata extraction succeeds, but updating cache fails\",\n\t\t\twantErr:    true,\n\t\t\tkubeClient: c,\n\t\t\tmetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\treturn policy.NewPURuntimeWithDefaults(), nil\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) 
{\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"sandboxID\").Times(3)\n\t\t\t\tkmd.EXPECT().PodName().Return(\"my-pod\").Times(2)\n\t\t\t\tkmd.EXPECT().PodNamespace().Return(\"default\").Times(2)\n\t\t\t\tkmd.EXPECT().NetNSPath().Return(\"/var/run/netns/container1\")\n\t\t\t\tmocks.podCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(podTemplate1.DeepCopy())).Return(nil).Times(1)\n\t\t\t\tmocks.runtimeCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(policy.NewPURuntimeWithDefaults())).Return(fmt.Errorf(\"error\")).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:       \"PodSandbox: HostNetwork pods are being ignored\",\n\t\t\twantErr:    false,\n\t\t\tkubeClient: c,\n\t\t\tmetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\treturn nil, fmt.Errorf(\"we should not get here\")\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"sandboxID\").Times(1)\n\t\t\t\tkmd.EXPECT().PodName().Return(\"my-host-network-pod\").Times(2)\n\t\t\t\tkmd.EXPECT().PodNamespace().Return(\"default\").Times(2)\n\t\t\t\t//mocks.podCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(podTemplate2.DeepCopy())).Return(nil).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:       \"PodSandbox: start event fails in policy engine\",\n\t\t\twantErr:    true,\n\t\t\tkubeClient: c,\n\t\t\tmetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\treturn policy.NewPURuntimeWithDefaults(), nil\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) 
{\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"sandboxID\").Times(4)\n\t\t\t\tkmd.EXPECT().PodName().Return(\"my-pod\").Times(2)\n\t\t\t\tkmd.EXPECT().PodNamespace().Return(\"default\").Times(2)\n\t\t\t\tkmd.EXPECT().NetNSPath().Return(\"/var/run/netns/container1\")\n\t\t\t\tmocks.podCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(podTemplate1.DeepCopy())).Return(nil).Times(1)\n\t\t\t\tmocks.runtimeCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(policy.NewPURuntimeWithDefaults())).Return(nil).Times(1)\n\t\t\t\tmocks.policy.EXPECT().HandlePUEvent(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(\"sandboxID\"),\n\t\t\t\t\tgomock.Eq(common.EventStart),\n\t\t\t\t\tgomock.Eq(policy.NewPURuntimeWithDefaults()),\n\t\t\t\t).Return(fmt.Errorf(\"error\")).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:       \"PodSandbox: start event succeeds\",\n\t\t\twantErr:    false,\n\t\t\tkubeClient: c,\n\t\t\tmetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\treturn policy.NewPURuntimeWithDefaults(), nil\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"sandboxID\").Times(4)\n\t\t\t\tkmd.EXPECT().PodName().Return(\"my-pod\").Times(2)\n\t\t\t\tkmd.EXPECT().PodNamespace().Return(\"default\").Times(2)\n\t\t\t\tkmd.EXPECT().NetNSPath().Return(\"/var/run/netns/container1\")\n\t\t\t\tmocks.podCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(podTemplate1.DeepCopy())).Return(nil).Times(1)\n\t\t\t\tmocks.runtimeCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), 
gomock.Eq(policy.NewPURuntimeWithDefaults())).Return(nil).Times(1)\n\t\t\t\tmocks.policy.EXPECT().HandlePUEvent(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(\"sandboxID\"),\n\t\t\t\t\tgomock.Eq(common.EventStart),\n\t\t\t\t\tgomock.Eq(policy.NewPURuntimeWithDefaults()),\n\t\t\t\t).Return(nil).Times(1)\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tctx, cancel := context.WithCancel(context.Background())\n\t\tctrl := gomock.NewController(t)\n\t\tm, mocks := newUnitTestMonitor(ctrl)\n\t\tm.metadataExtractor = tt.metadataExtractor\n\t\tkmd := mockcontainermetadata.NewMockCommonKubernetesContainerMetadata(ctrl)\n\t\tm.kubeClient = tt.kubeClient\n\t\tif m.kubeClient != nil {\n\t\t\tm.podLister = setupInformerForUnitTests(ctx, m.kubeClient, m.nodename)\n\t\t}\n\t\ttt.prepare(t, mocks, kmd)\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif err := m.startEvent(ctx, kmd, 0); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"K8sMonitor.startEvent() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t\tctrl.Finish()\n\t\tcancel()\n\t}\n}\n\nfunc TestK8sMonitor_Event(t *testing.T) {\n\tpodTemplate1 := &corev1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      \"my-pod\",\n\t\t\tNamespace: \"default\",\n\t\t},\n\t\tSpec: corev1.PodSpec{\n\t\t\tNodeName: \"test\",\n\t\t},\n\t}\n\tpodTemplate2 := &corev1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      \"my-host-network-pod\",\n\t\t\tNamespace: \"default\",\n\t\t},\n\t\tSpec: corev1.PodSpec{\n\t\t\tHostNetwork: true,\n\t\t\tNodeName:    \"test\",\n\t\t},\n\t}\n\tc := fake.NewSimpleClientset(\n\t\tpodTemplate1.DeepCopy(),\n\t\tpodTemplate2.DeepCopy(),\n\t)\n\n\ttype args struct {\n\t\tctx  context.Context\n\t\tev   common.Event\n\t\tdata interface{}\n\t}\n\ttests := []struct {\n\t\tname              string\n\t\targs              args\n\t\tmetadataExtractor extractors.PodMetadataExtractor\n\t\tkubeClient        kubernetes.Interface\n\t\tprepare           func(t *testing.T, mocks 
*unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata)\n\t\twantErr           bool\n\t}{\n\t\t{\n\t\t\tname: \"unexpected event\",\n\t\t\targs: args{\n\t\t\t\tctx: context.Background(),\n\t\t\t\tev:  common.EventPause,\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"unexpected event data\",\n\t\t\targs: args{\n\t\t\t\tctx:  context.Background(),\n\t\t\t\tev:   common.EventPause,\n\t\t\t\tdata: \"wrong type\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"failing start event\",\n\t\t\targs: args{\n\t\t\t\tctx: context.Background(),\n\t\t\t\tev:  common.EventStart,\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.Container).Times(2)\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:       \"successful start event for sandbox for pod\",\n\t\t\tkubeClient: c,\n\t\t\tmetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\treturn policy.NewPURuntimeWithDefaults(), nil\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tctx: context.Background(),\n\t\t\t\tev:  common.EventStart,\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) 
{\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"sandboxID\").Times(4)\n\t\t\t\tkmd.EXPECT().PodName().Return(\"my-pod\").Times(2)\n\t\t\t\tkmd.EXPECT().PodNamespace().Return(\"default\").Times(2)\n\t\t\t\tkmd.EXPECT().NetNSPath().Return(\"/var/run/netns/container1\")\n\t\t\t\tmocks.podCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(podTemplate1.DeepCopy())).Return(nil).Times(1)\n\t\t\t\tmocks.runtimeCache.EXPECT().Set(gomock.Eq(\"sandboxID\"), gomock.Eq(policy.NewPURuntimeWithDefaults())).Return(nil).Times(1)\n\t\t\t\tmocks.policy.EXPECT().HandlePUEvent(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(\"sandboxID\"),\n\t\t\t\t\tgomock.Eq(common.EventStart),\n\t\t\t\t\tgomock.Eq(policy.NewPURuntimeWithDefaults()),\n\t\t\t\t).Return(nil).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"failing destroy event\",\n\t\t\targs: args{\n\t\t\t\tctx: context.Background(),\n\t\t\t\tev:  common.EventDestroy,\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.Container).Times(2)\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:    \"successful destroy event for sandbox for pod\",\n\t\t\twantErr: false,\n\t\t\targs: args{\n\t\t\t\tctx: context.Background(),\n\t\t\t\tev:  common.EventDestroy,\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) 
{\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"sandboxID\").Times(5)\n\t\t\t\tmocks.runtimeCache.EXPECT().Get(gomock.Eq(\"sandboxID\")).Return(policy.NewPURuntimeWithDefaults()).Times(1)\n\t\t\t\tmocks.runtimeCache.EXPECT().Delete(gomock.Eq(\"sandboxID\")).Times(1)\n\t\t\t\tmocks.podCache.EXPECT().Delete(gomock.Eq(\"sandboxID\")).Times(1)\n\t\t\t\tmocks.policy.EXPECT().HandlePUEvent(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(\"sandboxID\"),\n\t\t\t\t\tgomock.Eq(common.EventDestroy),\n\t\t\t\t\tgomock.Eq(policy.NewPURuntimeWithDefaults()),\n\t\t\t\t).Return(nil).Times(1)\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctx, cancel := context.WithCancel(tt.args.ctx)\n\t\t\tctrl := gomock.NewController(t)\n\t\t\tm, mocks := newUnitTestMonitor(ctrl)\n\t\t\tm.metadataExtractor = tt.metadataExtractor\n\t\t\tkmd := mockcontainermetadata.NewMockCommonKubernetesContainerMetadata(ctrl)\n\t\t\tm.kubeClient = tt.kubeClient\n\t\t\tif m.kubeClient != nil {\n\t\t\t\tm.podLister = setupInformerForUnitTests(ctx, m.kubeClient, m.nodename)\n\t\t\t}\n\t\t\ttt.prepare(t, mocks, kmd)\n\t\t\tvar data interface{}\n\t\t\tif tt.args.data != nil {\n\t\t\t\tdata = tt.args.data\n\t\t\t} else {\n\t\t\t\tdata = kmd\n\t\t\t}\n\t\t\tif err := m.Event(ctx, tt.args.ev, data); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"K8sMonitor.Event() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tctrl.Finish()\n\t\t\tcancel()\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/k8s/event_retry_handler.go",
    "content": "package k8smonitor\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/internal/extractors/containermetadata\"\n\t\"go.uber.org/zap\"\n)\n\nvar (\n\tretryWaittimeUnit = time.Second\n\tretryTimeout      = time.Second * 30\n)\n\ntype startEventRetryFunc func(containermetadata.CommonKubernetesContainerMetadata, uint)\n\nfunc newStartEventRetryFunc(mainCtx context.Context, extractor containermetadata.CommonContainerMetadataExtractor, startEvent startEventFunc) startEventRetryFunc {\n\treturn func(kmd containermetadata.CommonKubernetesContainerMetadata, retry uint) {\n\t\t// we only care about pod sandboxes for restarts\n\t\t// make sure that we stick to that\n\t\tif kmd.Kind() != containermetadata.PodSandbox {\n\t\t\tzap.L().Debug(\n\t\t\t\t\"K8sMonitor: startEventRetry: this is not a pod sandbox. Aborting retry...\",\n\t\t\t\tzap.Uint(\"retry\", retry),\n\t\t\t\tzap.String(\"kind\", kmd.Kind().String()),\n\t\t\t\tzap.String(\"id\", kmd.ID()),\n\t\t\t)\n\t\t\treturn\n\t\t}\n\n\t\t// wait before we retry\n\t\twaitTime := calculateWaitTime(retry)\n\t\tzap.L().Debug(\n\t\t\t\"K8sMonitor: startEventRetry: waiting before retry...\",\n\t\t\tzap.Uint(\"retry\", retry),\n\t\t\tzap.Duration(\"waitTime\", waitTime),\n\t\t\tzap.String(\"id\", kmd.ID()),\n\t\t)\n\t\tselect {\n\t\tcase <-mainCtx.Done():\n\t\t\t// no point in continuing if the main context is done\n\t\t\treturn\n\t\tcase <-time.After(waitTime):\n\t\t}\n\n\t\t// check if the sandbox still exists, otherwise we can abort the retries\n\t\tif !extractor.Has(containermetadata.NewRuncArguments(containermetadata.StartAction, kmd.ID())) {\n\t\t\tzap.L().Debug(\n\t\t\t\t\"K8sMonitor: startEventRetry: container for start event does not exist any longer. 
Aborting...\",\n\t\t\t\tzap.Uint(\"retry\", retry),\n\t\t\t\tzap.String(\"id\", kmd.ID()),\n\t\t\t)\n\t\t\treturn\n\t\t}\n\n\t\t// now create a new context and retry\n\t\t// the recursion occurs within the startEvent\n\t\tctx, cancel := context.WithTimeout(mainCtx, retryTimeout)\n\t\tdefer cancel()\n\t\tif err := startEvent(ctx, kmd, retry); err != nil {\n\t\t\tzap.L().Error(\n\t\t\t\t\"K8sMonitor: startEventRetry: failed to process start event on retry\",\n\t\t\t\tzap.Uint(\"retry\", retry),\n\t\t\t\tzap.Error(err),\n\t\t\t\tzap.String(\"id\", kmd.ID()),\n\t\t\t\tzap.String(\"podUID\", kmd.PodUID()),\n\t\t\t\tzap.String(\"podName\", kmd.PodName()),\n\t\t\t\tzap.String(\"podNamespace\", kmd.PodNamespace()),\n\t\t\t)\n\t\t}\n\t}\n}\n\n// calculateWaitTime calculates a Fibonacci-style backoff wait time based on the retry count.\n// It uses `retryWaittimeUnit` as the base unit for the wait time.\nfunc calculateWaitTime(retry uint) time.Duration {\n\tvar n uint\n\tswitch retry {\n\tcase 0:\n\t\tn = 0\n\tcase 1:\n\t\tn = 1\n\tcase 2:\n\t\tn = 1\n\tcase 3:\n\t\tn = 2\n\tcase 4:\n\t\tn = 3\n\tcase 5:\n\t\tn = 5\n\tcase 6:\n\t\tn = 8\n\tcase 7:\n\t\tn = 13\n\tcase 8:\n\t\tn = 21\n\tcase 9:\n\t\tn = 34\n\tdefault:\n\t\tn = 55\n\t}\n\treturn retryWaittimeUnit * time.Duration(n)\n}\n"
  },
  {
    "path": "monitor/internal/k8s/event_retry_handler_test.go",
    "content": "package k8smonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/golang/mock/gomock\"\n\t\"go.aporeto.io/enforcerd/internal/extractors/containermetadata\"\n\t\"go.aporeto.io/enforcerd/internal/extractors/containermetadata/mockcontainermetadata\"\n)\n\nfunc Test_calculateWaitTime(t *testing.T) {\n\ttests := []struct {\n\t\tname  string\n\t\tretry uint\n\t\twant  time.Duration\n\t}{\n\t\t{\n\t\t\tretry: 0,\n\t\t\twant:  0,\n\t\t},\n\t\t{\n\t\t\tretry: 1,\n\t\t\twant:  retryWaittimeUnit * time.Duration(1),\n\t\t},\n\t\t{\n\t\t\tretry: 2,\n\t\t\twant:  retryWaittimeUnit * time.Duration(1),\n\t\t},\n\t\t{\n\t\t\tretry: 3,\n\t\t\twant:  retryWaittimeUnit * time.Duration(2),\n\t\t},\n\t\t{\n\t\t\tretry: 4,\n\t\t\twant:  retryWaittimeUnit * time.Duration(3),\n\t\t},\n\t\t{\n\t\t\tretry: 5,\n\t\t\twant:  retryWaittimeUnit * time.Duration(5),\n\t\t},\n\t\t{\n\t\t\tretry: 6,\n\t\t\twant:  retryWaittimeUnit * time.Duration(8),\n\t\t},\n\t\t{\n\t\t\tretry: 7,\n\t\t\twant:  retryWaittimeUnit * time.Duration(13),\n\t\t},\n\t\t{\n\t\t\tretry: 8,\n\t\t\twant:  retryWaittimeUnit * time.Duration(21),\n\t\t},\n\t\t{\n\t\t\tretry: 9,\n\t\t\twant:  retryWaittimeUnit * time.Duration(34),\n\t\t},\n\t\t{\n\t\t\tretry: 10,\n\t\t\twant:  retryWaittimeUnit * time.Duration(55),\n\t\t},\n\t\t{\n\t\t\tretry: 1000000,\n\t\t\twant:  retryWaittimeUnit * time.Duration(55),\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := calculateWaitTime(tt.retry); got != tt.want {\n\t\t\t\tt.Errorf(\"calculateWaitTime() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_newStartEventRetryFunc(t *testing.T) {\n\toldRetryWaittimeUnit := retryWaittimeUnit\n\tdefer func() {\n\t\tretryWaittimeUnit = oldRetryWaittimeUnit\n\t}()\n\tretryWaittimeUnit = 0\n\n\t// used by the test which needs a cancelled context\n\tcancelledCtx, cancel := context.WithCancel(context.Background())\n\tcancel()\n\n\ttests 
:= []struct {\n\t\tname               string\n\t\tmainCtx            context.Context\n\t\tstartEventHandler  unitTestStartEvent\n\t\tretry              uint\n\t\tprepare            func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata)\n\t\texpectedStartEvent bool\n\t}{\n\t\t{\n\t\t\tname:              \"not a pod sandbox\",\n\t\t\tmainCtx:           context.Background(),\n\t\t\tstartEventHandler: newUnitTestStartEventHandler(0, nil),\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodContainer).Times(2)\n\t\t\t\tkmd.EXPECT().ID().Return(\"containerID\").Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:              \"main context is already cancelled\",\n\t\t\tmainCtx:           cancelledCtx,\n\t\t\tstartEventHandler: newUnitTestStartEventHandler(0, nil),\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"containerID\").Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:              \"sandbox does not exist any longer\",\n\t\t\tmainCtx:           context.Background(),\n\t\t\tstartEventHandler: newUnitTestStartEventHandler(0, nil),\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"containerID\").Times(3)\n\t\t\t\textractor.EXPECT().Has(gomock.Eq(containermetadata.NewRuncArguments(containermetadata.StartAction, 
\"containerID\"))).Return(false).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:              \"retrying with success\",\n\t\t\tmainCtx:           context.Background(),\n\t\t\tstartEventHandler: newUnitTestStartEventHandler(1, nil),\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"containerID\").Times(2)\n\t\t\t\textractor.EXPECT().Has(gomock.Eq(containermetadata.NewRuncArguments(containermetadata.StartAction, \"containerID\"))).Return(true).Times(1)\n\t\t\t},\n\t\t\texpectedStartEvent: true,\n\t\t},\n\t\t{\n\t\t\tname:              \"retrying with error\",\n\t\t\tmainCtx:           context.Background(),\n\t\t\tstartEventHandler: newUnitTestStartEventHandler(1, fmt.Errorf(\"start event failed\")),\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor, kmd *mockcontainermetadata.MockCommonKubernetesContainerMetadata) {\n\t\t\t\tkmd.EXPECT().Kind().Return(containermetadata.PodSandbox).Times(1)\n\t\t\t\tkmd.EXPECT().ID().Return(\"containerID\").Times(3)\n\t\t\t\tkmd.EXPECT().PodName().Return(\"podName\").Times(1)\n\t\t\t\tkmd.EXPECT().PodNamespace().Return(\"podNamespace\").Times(1)\n\t\t\t\tkmd.EXPECT().PodUID().Return(\"podUID\").Times(1)\n\t\t\t\textractor.EXPECT().Has(gomock.Eq(containermetadata.NewRuncArguments(containermetadata.StartAction, \"containerID\"))).Return(true).Times(1)\n\t\t\t},\n\t\t\texpectedStartEvent: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctrl := gomock.NewController(t)\n\t\t\textractor := mockcontainermetadata.NewMockCommonContainerMetadataExtractor(ctrl)\n\t\t\tkmd := mockcontainermetadata.NewMockCommonKubernetesContainerMetadata(ctrl)\n\t\t\tctx, cancel := 
context.WithCancel(tt.mainCtx)\n\t\t\tstartEventRetry := newStartEventRetryFunc(ctx, extractor, tt.startEventHandler.f())\n\t\t\ttt.prepare(t, extractor, kmd)\n\t\t\tstartEventRetry(kmd, tt.retry)\n\t\t\ttt.startEventHandler.wait()\n\t\t\tif tt.expectedStartEvent != tt.startEventHandler.called() {\n\t\t\t\tt.Errorf(\"startEventHandler.called() = %v, want %v\", tt.startEventHandler.called(), tt.expectedStartEvent)\n\t\t\t}\n\t\t\tcancel()\n\t\t\tctrl.Finish()\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/k8s/helpers_test.go",
    "content": "package k8smonitor\n\nimport (\n\t\"context\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/golang/mock/gomock\"\n\n\t\"go.aporeto.io/enforcerd/internal/extractors/containermetadata\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/external\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/external/mockexternal\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy/mockpolicy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cri/mockcri\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/fields\"\n\t\"k8s.io/client-go/informers\"\n\t\"k8s.io/client-go/kubernetes\"\n\tlistersv1 \"k8s.io/client-go/listers/core/v1\"\n\t\"k8s.io/client-go/tools/cache\"\n)\n\ntype unitTestMonitorMocks struct {\n\tpodCache            *MockpodCacheInterface\n\truntimeCache        *MockruntimeCacheInterface\n\tpolicy              *mockpolicy.MockResolver\n\texternalEventSender *mockexternal.MockReceiverRegistration\n\tcri                 *mockcri.MockExtendedRuntimeService\n}\n\nfunc newUnitTestMonitor(ctrl *gomock.Controller) (*K8sMonitor, *unitTestMonitorMocks) {\n\tpodCache := NewMockpodCacheInterface(ctrl)\n\truntimeCache := NewMockruntimeCacheInterface(ctrl)\n\tpolicyResolver := mockpolicy.NewMockResolver(ctrl)\n\texternalEventSender := mockexternal.NewMockReceiverRegistration(ctrl)\n\tcri := mockcri.NewMockExtendedRuntimeService(ctrl)\n\n\tmocks := &unitTestMonitorMocks{\n\t\tpodCache:            podCache,\n\t\truntimeCache:        runtimeCache,\n\t\tpolicy:              policyResolver,\n\t\texternalEventSender: externalEventSender,\n\t\tcri:                 cri,\n\t}\n\n\treturn &K8sMonitor{\n\t\tnodename:        \"test\",\n\t\tstartEventRetry: func(containermetadata.CommonKubernetesContainerMetadata, uint) {},\n\t\tpodCache:        podCache,\n\t\truntimeCache:    runtimeCache,\n\t\thandlers: &config.ProcessorConfig{\n\t\t\tPolicy:              
policyResolver,\n\t\t\tExternalEventSender: []external.ReceiverRegistration{externalEventSender},\n\t\t\tResyncLock:          &sync.RWMutex{},\n\t\t},\n\t\tcriRuntimeService:                cri,\n\t\tcniInstalledOrRuncProxyStartedCh: make(chan struct{}),\n\t}, mocks\n}\n\nfunc setupInformerForUnitTests(ctx context.Context, kubeClient kubernetes.Interface, nodeName string) listersv1.PodLister {\n\t// get the pod informer from the default factory\n\t// add a field selector to narrow down our results\n\tfieldSelector := fields.OneTermEqualSelector(nodeNameKeyIndex, nodeName).String()\n\tfactory := informers.NewSharedInformerFactoryWithOptions(kubeClient, time.Hour*24, informers.WithTweakListOptions(func(opts *metav1.ListOptions) {\n\t\topts.FieldSelector = fieldSelector\n\t}))\n\tinformer := factory.Core().V1().Pods().Informer()\n\n\t// add an indexer so that our field selector by node name will work\n\tinformer.AddIndexers(cache.Indexers{ // nolint: errcheck\n\t\tnodeNameKeyIndex: func(obj interface{}) ([]string, error) {\n\t\t\t// this is essentially exactly what the node lifecycle controller uses as well\n\t\t\tpod, ok := obj.(*corev1.Pod)\n\t\t\tif !ok {\n\t\t\t\treturn []string{}, nil\n\t\t\t}\n\t\t\tif len(pod.Spec.NodeName) == 0 {\n\t\t\t\treturn []string{}, nil\n\t\t\t}\n\t\t\treturn []string{pod.Spec.NodeName}, nil\n\t\t},\n\t})\n\n\t// now start the informer\n\tgo informer.Run(ctx.Done())\n\n\t// wait for the caches to sync before we return\n\t// if this fails, we panic, as the test setup is useless without synced caches\n\tif !cache.WaitForNamedCacheSync(\"pods\", ctx.Done(), informer.HasSynced) {\n\t\tpanic(\"K8sMonitor: setupInformer: waiting for caches timed out\")\n\t}\n\n\t// return with a lister of the cache of this informer\n\treturn listersv1.NewPodLister(informer.GetIndexer())\n}\n"
  },
  {
    "path": "monitor/internal/k8s/mocks_pod_cache_test.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: trireme-lib/monitor/internal/k8s/pod_cache.go\n\n// Package k8smonitor is a generated GoMock package.\npackage k8smonitor\n\nimport (\n\tcontext \"context\"\n\tgomock \"github.com/golang/mock/gomock\"\n\tv1 \"k8s.io/api/core/v1\"\n\tkubernetes \"k8s.io/client-go/kubernetes\"\n\tv10 \"k8s.io/client-go/listers/core/v1\"\n\treflect \"reflect\"\n)\n\n// MockpodCacheInterface is a mock of podCacheInterface interface\ntype MockpodCacheInterface struct {\n\tctrl     *gomock.Controller\n\trecorder *MockpodCacheInterfaceMockRecorder\n}\n\n// MockpodCacheInterfaceMockRecorder is the mock recorder for MockpodCacheInterface\ntype MockpodCacheInterfaceMockRecorder struct {\n\tmock *MockpodCacheInterface\n}\n\n// NewMockpodCacheInterface creates a new mock instance\nfunc NewMockpodCacheInterface(ctrl *gomock.Controller) *MockpodCacheInterface {\n\tmock := &MockpodCacheInterface{ctrl: ctrl}\n\tmock.recorder = &MockpodCacheInterfaceMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockpodCacheInterface) EXPECT() *MockpodCacheInterfaceMockRecorder {\n\treturn m.recorder\n}\n\n// Delete mocks base method\nfunc (m *MockpodCacheInterface) Delete(sandboxID string) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"Delete\", sandboxID)\n}\n\n// Delete indicates an expected call of Delete\nfunc (mr *MockpodCacheInterfaceMockRecorder) Delete(sandboxID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Delete\", reflect.TypeOf((*MockpodCacheInterface)(nil).Delete), sandboxID)\n}\n\n// Get mocks base method\nfunc (m *MockpodCacheInterface) Get(sandboxID string) *v1.Pod {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Get\", sandboxID)\n\tret0, _ := ret[0].(*v1.Pod)\n\treturn ret0\n}\n\n// Get indicates an expected call of Get\nfunc (mr *MockpodCacheInterfaceMockRecorder) Get(sandboxID 
interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Get\", reflect.TypeOf((*MockpodCacheInterface)(nil).Get), sandboxID)\n}\n\n// Set mocks base method\nfunc (m *MockpodCacheInterface) Set(sandboxID string, pod *v1.Pod) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Set\", sandboxID, pod)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Set indicates an expected call of Set\nfunc (mr *MockpodCacheInterfaceMockRecorder) Set(sandboxID, pod interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Set\", reflect.TypeOf((*MockpodCacheInterface)(nil).Set), sandboxID, pod)\n}\n\n// FindSandboxID mocks base method\nfunc (m *MockpodCacheInterface) FindSandboxID(name, namespace string) (string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"FindSandboxID\", name, namespace)\n\tret0, _ := ret[0].(string)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// FindSandboxID indicates an expected call of FindSandboxID\nfunc (mr *MockpodCacheInterfaceMockRecorder) FindSandboxID(name, namespace interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"FindSandboxID\", reflect.TypeOf((*MockpodCacheInterface)(nil).FindSandboxID), name, namespace)\n}\n\n// SetupInformer mocks base method\nfunc (m *MockpodCacheInterface) SetupInformer(ctx context.Context, kubeClient kubernetes.Interface, nodeName string, needsUpdate needsUpdateFunc) v10.PodLister {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SetupInformer\", ctx, kubeClient, nodeName, needsUpdate)\n\tret0, _ := ret[0].(v10.PodLister)\n\treturn ret0\n}\n\n// SetupInformer indicates an expected call of SetupInformer\nfunc (mr *MockpodCacheInterfaceMockRecorder) SetupInformer(ctx, kubeClient, nodeName, needsUpdate interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, 
\"SetupInformer\", reflect.TypeOf((*MockpodCacheInterface)(nil).SetupInformer), ctx, kubeClient, nodeName, needsUpdate)\n}\n"
  },
  {
    "path": "monitor/internal/k8s/mocks_runtime_cache_test.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: trireme-lib/monitor/internal/k8s/runtime_cache.go\n\n// Package k8smonitor is a generated GoMock package.\npackage k8smonitor\n\nimport (\n\tgomock \"github.com/golang/mock/gomock\"\n\tpolicy \"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\treflect \"reflect\"\n)\n\n// MockruntimeCacheInterface is a mock of runtimeCacheInterface interface\ntype MockruntimeCacheInterface struct {\n\tctrl     *gomock.Controller\n\trecorder *MockruntimeCacheInterfaceMockRecorder\n}\n\n// MockruntimeCacheInterfaceMockRecorder is the mock recorder for MockruntimeCacheInterface\ntype MockruntimeCacheInterfaceMockRecorder struct {\n\tmock *MockruntimeCacheInterface\n}\n\n// NewMockruntimeCacheInterface creates a new mock instance\nfunc NewMockruntimeCacheInterface(ctrl *gomock.Controller) *MockruntimeCacheInterface {\n\tmock := &MockruntimeCacheInterface{ctrl: ctrl}\n\tmock.recorder = &MockruntimeCacheInterfaceMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\nfunc (m *MockruntimeCacheInterface) EXPECT() *MockruntimeCacheInterfaceMockRecorder {\n\treturn m.recorder\n}\n\n// Delete mocks base method\nfunc (m *MockruntimeCacheInterface) Delete(sandboxID string) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"Delete\", sandboxID)\n}\n\n// Delete indicates an expected call of Delete\nfunc (mr *MockruntimeCacheInterfaceMockRecorder) Delete(sandboxID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Delete\", reflect.TypeOf((*MockruntimeCacheInterface)(nil).Delete), sandboxID)\n}\n\n// Get mocks base method\nfunc (m *MockruntimeCacheInterface) Get(sandboxID string) policy.RuntimeReader {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Get\", sandboxID)\n\tret0, _ := ret[0].(policy.RuntimeReader)\n\treturn ret0\n}\n\n// Get indicates an expected call of Get\nfunc (mr 
*MockruntimeCacheInterfaceMockRecorder) Get(sandboxID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Get\", reflect.TypeOf((*MockruntimeCacheInterface)(nil).Get), sandboxID)\n}\n\n// Set mocks base method\nfunc (m *MockruntimeCacheInterface) Set(sandboxID string, runtime policy.RuntimeReader) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Set\", sandboxID, runtime)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Set indicates an expected call of Set\nfunc (mr *MockruntimeCacheInterfaceMockRecorder) Set(sandboxID, runtime interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Set\", reflect.TypeOf((*MockruntimeCacheInterface)(nil).Set), sandboxID, runtime)\n}\n"
  },
  {
    "path": "monitor/internal/k8s/monitor.go",
    "content": "package k8smonitor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"sync\"\n\n\tcriapi \"k8s.io/cri-api/pkg/apis\"\n\n\t\"go.aporeto.io/enforcerd/internal/extractors/containermetadata\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/registerer\"\n\t\"k8s.io/client-go/kubernetes\"\n\tlistersv1 \"k8s.io/client-go/listers/core/v1\"\n\t\"k8s.io/client-go/rest\"\n\t\"k8s.io/client-go/tools/clientcmd\"\n)\n\n// K8sMonitor is the monitor for Kubernetes.\ntype K8sMonitor struct {\n\tnodename                         string\n\thandlers                         *config.ProcessorConfig\n\tmetadataExtractor                extractors.PodMetadataExtractor\n\tkubeClient                       kubernetes.Interface\n\tpodLister                        listersv1.PodLister\n\tcriRuntimeService                criapi.RuntimeService\n\tpodCache                         podCacheInterface\n\truntimeCache                     runtimeCacheInterface\n\tstartEventRetry                  startEventRetryFunc\n\tcniInstalledOrRuncProxyStartedCh chan struct{}\n\tcniInstalledOrRuncProxyStarted   bool\n\textMonitorStartedLock            sync.RWMutex\n}\n\n// New returns a new kubernetes monitor.\nfunc New(ctx context.Context) *K8sMonitor {\n\tm := &K8sMonitor{}\n\tm.podCache = newPodCache(m.updateEvent)\n\tm.runtimeCache = newRuntimeCache(ctx, m.stopEvent)\n\tm.cniInstalledOrRuncProxyStartedCh = make(chan struct{})\n\treturn m\n}\n\n// SetupConfig provides a configuration to implementations. 
Every implementation\n// can have its own config type.\nfunc (m *K8sMonitor) SetupConfig(_ registerer.Registerer, cfg interface{}) error {\n\n\tdefaultConfig := DefaultConfig()\n\n\tif cfg == nil {\n\t\tcfg = defaultConfig\n\t}\n\n\tkubernetesconfig, ok := cfg.(*Config)\n\tif !ok {\n\t\treturn fmt.Errorf(\"Invalid configuration specified (type '%T')\", cfg)\n\t}\n\n\tkubernetesconfig = SetupDefaultConfig(kubernetesconfig)\n\n\t// simple config checks\n\tif kubernetesconfig.MetadataExtractor == nil {\n\t\treturn fmt.Errorf(\"missing metadata extractor\")\n\t}\n\tif kubernetesconfig.CRIRuntimeService == nil {\n\t\treturn fmt.Errorf(\"missing CRIRuntimeService implementation\")\n\t}\n\n\t// Initialize most of our monitor\n\tm.nodename = kubernetesconfig.Nodename\n\tm.metadataExtractor = kubernetesconfig.MetadataExtractor\n\tm.criRuntimeService = kubernetesconfig.CRIRuntimeService\n\n\t// build kubernetes client config\n\tvar kubeCfg *rest.Config\n\tif len(kubernetesconfig.Kubeconfig) > 0 {\n\t\tvar err error\n\t\tkubeCfg, err = clientcmd.BuildConfigFromFlags(\"\", kubernetesconfig.Kubeconfig)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t} else {\n\t\tvar err error\n\t\tkubeCfg, err = rest.InClusterConfig()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// and initialize client from it\n\tvar err error\n\tm.kubeClient, err = kubernetes.NewForConfig(kubeCfg)\n\treturn err\n}\n\n// SetupHandlers sets up handlers for monitors to invoke for various events such as\n// processing unit events and synchronization events. 
This will be called before Start()\n// by the consumer of the monitor\nfunc (m *K8sMonitor) SetupHandlers(c *config.ProcessorConfig) {\n\tm.handlers = c\n}\n\n// Run starts the monitor implementation.\nfunc (m *K8sMonitor) Run(ctx context.Context) error {\n\tm.startEventRetry = newStartEventRetryFunc(ctx, containermetadata.AutoDetect(), m.startEvent)\n\tif m.kubeClient == nil {\n\t\treturn errors.New(\"K8sMonitor: missing Kubernetes client\")\n\t}\n\n\tif err := m.handlers.IsComplete(); err != nil {\n\t\treturn fmt.Errorf(\"K8sMonitor: handlers are not complete: %s\", err.Error())\n\t}\n\n\tif m.handlers.ExternalEventSender == nil {\n\t\treturn fmt.Errorf(\"K8sMonitor: external event sender option must be used together with this monitor\")\n\t}\n\n\t// setup informer for update events (this starts the informer as well)\n\t// this also returns a pod lister which uses the same underlying cache as the informer\n\tm.podLister = m.podCache.SetupInformer(ctx, m.kubeClient, m.nodename, defaultNeedsUpdate)\n\n\t// register ourselves with the gRPC server to receive events\n\tvar registered bool\n\tfor _, evs := range m.handlers.ExternalEventSender {\n\t\tif evs.SenderName() == constants.MonitorExtSenderName {\n\t\t\tif err := evs.Register(constants.K8sMonitorRegistrationName, m); err != nil {\n\t\t\t\treturn fmt.Errorf(\"K8sMonitor: failed to register with the grpcMonitorServer external events sender: %w\", err)\n\t\t\t}\n\t\t\tregistered = true\n\t\t\tbreak\n\t\t}\n\t}\n\tif !registered {\n\t\treturn fmt.Errorf(\"K8sMonitor: failed to register with the grpcMonitorServer external events sender: unavailable\")\n\t}\n\n\t// get list of pods on node, and handle them\n\tif err := m.onStartup(ctx, m.startEvent); err != nil {\n\t\treturn fmt.Errorf(\"K8sMonitor: failed to get list of pods running sandboxes from CRI and generating events for them: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// Resync should resynchronize PUs. 
This should be done while starting up.\nfunc (m *K8sMonitor) Resync(ctx context.Context) error {\n\treturn nil\n}\n"
  },
  {
    "path": "monitor/internal/k8s/monitor_test.go",
    "content": "package k8smonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"testing\"\n\n\t\"github.com/golang/mock/gomock\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/external\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/external/mockexternal\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/registerer\"\n\tpolicy \"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy/mockpolicy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cri/mockcri\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\tkubernetes \"k8s.io/client-go/kubernetes\"\n\t\"k8s.io/client-go/kubernetes/fake\"\n\truntimeapi \"k8s.io/cri-api/pkg/apis/runtime/v1alpha2\"\n)\n\nfunc TestK8sMonitor_SetupConfig(t *testing.T) {\n\ttype args struct {\n\t\tregisterer registerer.Registerer\n\t\tcfg        interface{}\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"wrong config type\",\n\t\t\targs: args{\n\t\t\t\tregisterer: nil,\n\t\t\t\tcfg:        \"wrong type\",\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"default config is not workable\",\n\t\t\targs: args{\n\t\t\t\tregisterer: nil,\n\t\t\t\tcfg:        nil,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"config: missing metadataExtractor\",\n\t\t\targs: args{\n\t\t\t\tregisterer: nil,\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tMetadataExtractor: nil,\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"config: missing netclsProgrammer\",\n\t\t\targs: args{\n\t\t\t\tregisterer: nil,\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tMetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) 
{\n\t\t\t\t\t\treturn nil, nil\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"config: missing sandboxExtractor\",\n\t\t\targs: args{\n\t\t\t\tregisterer: nil,\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tMetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\t\t\treturn nil, nil\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"config: missing resetNetcls\",\n\t\t\targs: args{\n\t\t\t\tregisterer: nil,\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tMetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\t\t\treturn nil, nil\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"config: missing CRI runtime service\",\n\t\t\targs: args{\n\t\t\t\tregisterer: nil,\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tMetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\t\t\treturn nil, nil\n\t\t\t\t\t},\n\t\t\t\t\tCRIRuntimeService: nil,\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"config: missing CRI runtime service\",\n\t\t\targs: args{\n\t\t\t\tregisterer: nil,\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tMetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\t\t\treturn nil, nil\n\t\t\t\t\t},\n\t\t\t\t\tCRIRuntimeService: nil,\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"kubeClient: in-cluster config fails\",\n\t\t\targs: args{\n\t\t\t\tregisterer: nil,\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tMetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\t\t\treturn nil, nil\n\t\t\t\t\t},\n\t\t\t\t\tCRIRuntimeService: mockcri.NewMockExtendedRuntimeService(nil),\n\t\t\t\t\tNodename:          \"\",\n\t\t\t\t\tKubeconfig:        \"\",\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"kubeClient: non-existent kubeconfig 
fails\",\n\t\t\targs: args{\n\t\t\t\tregisterer: nil,\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tMetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\t\t\treturn nil, nil\n\t\t\t\t\t},\n\t\t\t\t\tCRIRuntimeService: mockcri.NewMockExtendedRuntimeService(nil),\n\t\t\t\t\tNodename:          \"\",\n\t\t\t\t\tKubeconfig:        \"does-not-exist\",\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tregisterer: nil,\n\t\t\t\tcfg: &Config{\n\t\t\t\t\tMetadataExtractor: func(context.Context, *corev1.Pod, string) (*policy.PURuntime, error) {\n\t\t\t\t\t\treturn nil, nil\n\t\t\t\t\t},\n\t\t\t\t\tCRIRuntimeService: mockcri.NewMockExtendedRuntimeService(nil),\n\t\t\t\t\tNodename:          \"\",\n\t\t\t\t\tKubeconfig:        \"testdata/kubeconfig\",\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tm := New(ctx)\n\t\t\tif err := m.SetupConfig(tt.args.registerer, tt.args.cfg); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"k8sMonitor.SetupConfig() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tcancel()\n\t\t})\n\t}\n}\n\nfunc TestK8sMonitor_Resync(t *testing.T) {\n\ttype args struct {\n\t\tctx context.Context\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname:    \"unimplemented\",\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tm := New(ctx)\n\t\t\tif err := m.Resync(tt.args.ctx); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"k8sMonitor.Resync() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tcancel()\n\t\t})\n\t}\n}\n\nfunc TestK8sMonitor_SetupHandlers(t *testing.T) {\n\ttype args struct {\n\t\tc 
*config.ProcessorConfig\n\t}\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t}{\n\t\t{\n\t\t\tname: \"arguments must be set 1-to-1 in monitor: nil\",\n\t\t\targs: args{\n\t\t\t\tc: nil,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"arguments must be set 1-to-1 in monitor: simple handler\",\n\t\t\targs: args{\n\t\t\t\tc: &config.ProcessorConfig{\n\t\t\t\t\tCollector: collector.NewDefaultCollector(),\n\t\t\t\t\tPolicy:    mockpolicy.NewMockResolver(nil),\n\t\t\t\t\tExternalEventSender: []external.ReceiverRegistration{\n\t\t\t\t\t\tmockexternal.NewMockReceiverRegistration(nil),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tm := New(ctx)\n\t\t\tm.SetupHandlers(tt.args.c)\n\t\t\tif !reflect.DeepEqual(m.handlers, tt.args.c) {\n\t\t\t\tt.Errorf(\"m.handlers %v, want %v\", m.handlers, tt.args.c)\n\t\t\t}\n\t\t\tcancel()\n\t\t})\n\t}\n}\n\nfunc TestK8sMonitor_Run(t *testing.T) {\n\tpodTemplate1 := &corev1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      \"my-pod\",\n\t\t\tNamespace: \"default\",\n\t\t},\n\t\tSpec: corev1.PodSpec{\n\t\t\tNodeName: \"test\",\n\t\t},\n\t}\n\tpodTemplate2 := &corev1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      \"my-host-network-pod\",\n\t\t\tNamespace: \"default\",\n\t\t},\n\t\tSpec: corev1.PodSpec{\n\t\t\tHostNetwork: true,\n\t\t\tNodeName:    \"test\",\n\t\t},\n\t}\n\tc := fake.NewSimpleClientset(\n\t\tpodTemplate1.DeepCopy(),\n\t\tpodTemplate2.DeepCopy(),\n\t)\n\ttype fields struct {\n\t\tcollector                collector.EventCollector\n\t\tunsetExternalEventSender bool\n\t\tkubeClient               kubernetes.Interface\n\t\tmetadataExtractor        extractors.PodMetadataExtractor\n\t\tnodename                 string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tfields  fields\n\t\twantErr bool\n\t\tprepare func(t *testing.T, mocks 
*unitTestMonitorMocks)\n\t}{\n\t\t{\n\t\t\tname: \"no kubeClient\",\n\t\t\tfields: fields{\n\t\t\t\tkubeClient: nil,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"handlers setup is incomplete\",\n\t\t\tfields: fields{\n\t\t\t\tcollector:  nil,\n\t\t\t\tkubeClient: c,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"ExternalEventSender is not set\",\n\t\t\tfields: fields{\n\t\t\t\tcollector:                collector.NewDefaultCollector(),\n\t\t\t\tunsetExternalEventSender: true,\n\t\t\t\tkubeClient:               c,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"ReceiverRegistration fails: unavailable\",\n\t\t\tfields: fields{\n\t\t\t\tcollector:                collector.NewDefaultCollector(),\n\t\t\t\tunsetExternalEventSender: false,\n\t\t\t\tkubeClient:               c,\n\t\t\t\tnodename:                 \"test\",\n\t\t\t},\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t\tmocks.podCache.EXPECT().SetupInformer(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(c),\n\t\t\t\t\tgomock.Eq(\"test\"),\n\t\t\t\t\tgomock.AssignableToTypeOf(defaultNeedsUpdate),\n\t\t\t\t)\n\t\t\t\tmocks.externalEventSender.EXPECT().SenderName().Return(\"random\").Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"ReceiverRegistration fails: Register call fails\",\n\t\t\tfields: fields{\n\t\t\t\tcollector:                collector.NewDefaultCollector(),\n\t\t\t\tunsetExternalEventSender: false,\n\t\t\t\tkubeClient:               c,\n\t\t\t\tnodename:                 \"test\",\n\t\t\t},\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) 
{\n\t\t\t\tmocks.podCache.EXPECT().SetupInformer(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(c),\n\t\t\t\t\tgomock.Eq(\"test\"),\n\t\t\t\t\tgomock.AssignableToTypeOf(defaultNeedsUpdate),\n\t\t\t\t)\n\t\t\t\tmocks.externalEventSender.EXPECT().SenderName().Return(constants.MonitorExtSenderName).Times(1)\n\t\t\t\tmocks.externalEventSender.EXPECT().Register(\n\t\t\t\t\tgomock.Eq(constants.K8sMonitorRegistrationName),\n\t\t\t\t\tgomock.AssignableToTypeOf(&K8sMonitor{}),\n\t\t\t\t).Return(fmt.Errorf(\"error\")).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"Listing Sandboxes from CRI fails\",\n\t\t\tfields: fields{\n\t\t\t\tcollector:                collector.NewDefaultCollector(),\n\t\t\t\tunsetExternalEventSender: false,\n\t\t\t\tkubeClient:               c,\n\t\t\t\tnodename:                 \"test\",\n\t\t\t},\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t\tmocks.externalEventSender.EXPECT().SenderName().Return(constants.MonitorExtSenderName).Times(1)\n\t\t\t\tmocks.externalEventSender.EXPECT().Register(\n\t\t\t\t\tgomock.Eq(constants.K8sMonitorRegistrationName),\n\t\t\t\t\tgomock.AssignableToTypeOf(&K8sMonitor{}),\n\t\t\t\t).Return(nil).Times(1)\n\t\t\t\tmocks.podCache.EXPECT().SetupInformer(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(c),\n\t\t\t\t\tgomock.Eq(\"test\"),\n\t\t\t\t\tgomock.AssignableToTypeOf(defaultNeedsUpdate),\n\t\t\t\t)\n\t\t\t\tmocks.cri.EXPECT().ListPodSandbox(gomock.Eq(&runtimeapi.PodSandboxFilter{\n\t\t\t\t\tState: &runtimeapi.PodSandboxStateValue{\n\t\t\t\t\t\tState: runtimeapi.PodSandboxState_SANDBOX_READY,\n\t\t\t\t\t},\n\t\t\t\t})).Return(nil, fmt.Errorf(\"error\")).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"monitor starts successfully with empty sandbox list from CRI\",\n\t\t\tfields: fields{\n\t\t\t\tcollector:                collector.NewDefaultCollector(),\n\t\t\t\tunsetExternalEventSender: false,\n\t\t\t\tkubeClient:               c,\n\t\t\t\tnodename:                 
\"test\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t\tprepare: func(t *testing.T, mocks *unitTestMonitorMocks) {\n\t\t\t\tmocks.externalEventSender.EXPECT().SenderName().Return(constants.MonitorExtSenderName).Times(1)\n\t\t\t\tmocks.externalEventSender.EXPECT().Register(\n\t\t\t\t\tgomock.Eq(constants.K8sMonitorRegistrationName),\n\t\t\t\t\tgomock.AssignableToTypeOf(&K8sMonitor{}),\n\t\t\t\t).Return(nil).Times(1)\n\t\t\t\tmocks.podCache.EXPECT().SetupInformer(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(c),\n\t\t\t\t\tgomock.Eq(\"test\"),\n\t\t\t\t\tgomock.AssignableToTypeOf(defaultNeedsUpdate),\n\t\t\t\t)\n\t\t\t\tmocks.cri.EXPECT().ListPodSandbox(gomock.Eq(&runtimeapi.PodSandboxFilter{\n\t\t\t\t\tState: &runtimeapi.PodSandboxStateValue{\n\t\t\t\t\t\tState: runtimeapi.PodSandboxState_SANDBOX_READY,\n\t\t\t\t\t},\n\t\t\t\t})).Return(\n\t\t\t\t\t[]*runtimeapi.PodSandbox{},\n\t\t\t\t\tnil,\n\t\t\t\t).Times(1)\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctrl := gomock.NewController(t)\n\t\t\tm, mocks := newUnitTestMonitor(ctrl)\n\t\t\tm.SenderReady()\n\t\t\tm.handlers.Collector = tt.fields.collector\n\t\t\tif tt.fields.unsetExternalEventSender {\n\t\t\t\tm.handlers.ExternalEventSender = nil\n\t\t\t}\n\t\t\tm.kubeClient = tt.fields.kubeClient\n\t\t\tm.metadataExtractor = tt.fields.metadataExtractor\n\t\t\tm.nodename = tt.fields.nodename\n\t\t\ttt.prepare(t, mocks)\n\t\t\tif err := m.Run(context.Background()); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"k8sMonitor.Run() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tctrl.Finish()\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/k8s/on_startup.go",
    "content": "package k8smonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"sync\"\n\n\t\"go.aporeto.io/enforcerd/internal/extractors/containermetadata\"\n\t\"go.uber.org/zap\"\n\n\truntimeapi \"k8s.io/cri-api/pkg/apis/runtime/v1alpha2\"\n)\n\nfunc (m *K8sMonitor) onStartup(ctx context.Context, startEvent startEventFunc) error {\n\t// wait for runc-proxy to be started before continuing with syncing state\n\tif !m.isCniInstalledOrRuncProxyStarted() {\n\t\tzap.L().Info(\"K8sMonitor: waiting for CNI plugin to be installed and configured...\")\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn fmt.Errorf(\"K8sMonitor: startup was canceled: %w\", ctx.Err())\n\t\tcase <-m.cniInstalledOrRuncProxyStartedCh:\n\t\t\tzap.L().Info(\"K8sMonitor: CNI plugin is ready. Continuing startup.\")\n\t\t}\n\t}\n\n\tsandboxList, err := m.criRuntimeService.ListPodSandbox(&runtimeapi.PodSandboxFilter{\n\t\tState: &runtimeapi.PodSandboxStateValue{\n\t\t\tState: runtimeapi.PodSandboxState_SANDBOX_READY,\n\t\t},\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar wg sync.WaitGroup\n\tm.handlers.ResyncLock.RLock()\n\tdefer m.handlers.ResyncLock.RUnlock()\n\tfor _, sandbox := range sandboxList {\n\t\t// extract common Kubernetes metadata from the filesystem\n\t\t// technically CRI provides us with everything we need right now,\n\t\t// however, this way the results are consistent and easier to maintain in the future\n\t\tsandboxID := sandbox.GetId()\n\t\tkmd, err := extractKmdFromCRISandbox(sandboxID)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"K8sMonitor: onStartup: failed to extract sandbox metadata. 
Skipping initialization for this pod...\", zap.String(\"sandboxID\", sandboxID), zap.Error(err))\n\t\t\tcontinue\n\t\t}\n\n\t\t// fire away a start event\n\t\twg.Add(1)\n\t\tgo func(ctx context.Context, id string, m containermetadata.CommonKubernetesContainerMetadata) {\n\t\t\tif err := startEvent(ctx, m, 0); err != nil {\n\t\t\t\tzap.L().Error(\"K8sMonitor: onStartup: failed to send start event\", zap.String(\"sandboxID\", id), zap.Error(err))\n\t\t\t}\n\t\t\twg.Done()\n\t\t}(ctx, sandboxID, kmd)\n\t}\n\twg.Wait()\n\n\treturn nil\n}\n\nvar extractor = containermetadata.AutoDetect()\n\nfunc extractKmdFromCRISandbox(sandboxID string) (containermetadata.CommonKubernetesContainerMetadata, error) {\n\tif sandboxID == \"\" {\n\t\treturn nil, fmt.Errorf(\"sandbox ID empty\")\n\t}\n\tcontainerArgs := containermetadata.NewRuncArguments(containermetadata.StartAction, sandboxID)\n\tif !extractor.Has(containerArgs) {\n\t\treturn nil, fmt.Errorf(\"failed to detect sandbox on filesystem\")\n\t}\n\t_, kmd, err := extractor.Extract(containerArgs)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to extract metadata of sandbox from filesystem: %w\", err)\n\t}\n\tif kmd == nil {\n\t\tzap.L().Error(\"K8sMonitor: onStartup: failed to detect this container as Kubernetes sandbox from filesystem\", zap.String(\"sandboxID\", sandboxID))\n\t\treturn nil, fmt.Errorf(\"failed to detect this container as Kubernetes sandbox from filesystem\")\n\t}\n\treturn kmd, nil\n}\n"
  },
  {
    "path": "monitor/internal/k8s/on_startup_test.go",
    "content": "package k8smonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"sync\"\n\t\"testing\"\n\n\t\"github.com/golang/mock/gomock\"\n\n\t\"go.aporeto.io/enforcerd/internal/extractors/containermetadata\"\n\t\"go.aporeto.io/enforcerd/internal/extractors/containermetadata/mockcontainermetadata\"\n\n\truntimeapi \"k8s.io/cri-api/pkg/apis/runtime/v1alpha2\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cri/mockcri\"\n)\n\nfunc Test_extractKmdFromCRISandbox(t *testing.T) {\n\n\ttype args struct {\n\t\tsandboxID string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twant    containermetadata.CommonKubernetesContainerMetadata\n\t\twantErr bool\n\t\tprepare func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor)\n\t}{\n\t\t{\n\t\t\tname: \"sandbox ID empty\",\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"\",\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor) {\n\t\t\t\t//nothing to be done here\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"container not found with the extractor\",\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"not-found\",\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor) {\n\t\t\t\textractor.EXPECT().Has(\n\t\t\t\t\tgomock.Eq(containermetadata.NewRuncArguments(containermetadata.StartAction, \"not-found\")),\n\t\t\t\t).Return(false).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"container extractor failed\",\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"sandbox-id\",\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor) {\n\t\t\t\tContainerArgs := containermetadata.NewRuncArguments(containermetadata.StartAction, 
\"sandbox-id\")\n\t\t\t\textractor.EXPECT().Has(gomock.Eq(ContainerArgs)).Return(true).Times(1)\n\t\t\t\textractor.EXPECT().Extract(gomock.Eq(ContainerArgs)).Return(nil, nil, fmt.Errorf(\"failed to extract\")).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"container extractor succeeded but is not a Kubernetes container\",\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"sandbox-id\",\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor) {\n\t\t\t\tContainerArgs := containermetadata.NewRuncArguments(containermetadata.StartAction, \"sandbox-id\")\n\t\t\t\textractor.EXPECT().Has(gomock.Eq(ContainerArgs)).Return(true).Times(1)\n\t\t\t\t// technically the first result would need to be populated, but that doesn't matter for the test\n\t\t\t\textractor.EXPECT().Extract(gomock.Eq(ContainerArgs)).Return(nil, nil, nil).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"container extractor succeeded\",\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"sandbox-id\",\n\t\t\t},\n\t\t\twant:    mockcontainermetadata.NewMockCommonKubernetesContainerMetadata(nil),\n\t\t\twantErr: false,\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor) {\n\t\t\t\tContainerArgs := containermetadata.NewRuncArguments(containermetadata.StartAction, \"sandbox-id\")\n\t\t\t\textractor.EXPECT().Has(gomock.Eq(ContainerArgs)).Return(true).Times(1)\n\t\t\t\t// technically the first result would need to be populated, but that doesn't matter for the test\n\t\t\t\textractor.EXPECT().Extract(gomock.Eq(ContainerArgs)).Return(\n\t\t\t\t\tnil,\n\t\t\t\t\tmockcontainermetadata.NewMockCommonKubernetesContainerMetadata(nil),\n\t\t\t\t\tnil,\n\t\t\t\t).Times(1)\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctrl := gomock.NewController(t)\n\t\t\tmockExtractor := 
mockcontainermetadata.NewMockCommonContainerMetadataExtractor(ctrl)\n\t\t\textractor = mockExtractor\n\t\t\ttt.prepare(t, mockExtractor)\n\t\t\tgot, err := extractKmdFromCRISandbox(tt.args.sandboxID)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extractKmdFromCRISandbox() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extractKmdFromCRISandbox() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t\tctrl.Finish()\n\t\t})\n\t}\n}\n\ntype unitTestStartEvent interface {\n\tf() startEventFunc\n\twait()\n\tcalled() bool\n}\ntype unitTestStartEventHandler struct {\n\tsync.RWMutex\n\twg        sync.WaitGroup\n\twgCounter int\n\twasCalled bool\n\terr       error\n}\n\nfunc (h *unitTestStartEventHandler) startEvent(ctx context.Context, kmd containermetadata.CommonKubernetesContainerMetadata, retry uint) error {\n\th.Lock()\n\tdefer h.Unlock()\n\th.wasCalled = true\n\tif h.wgCounter > 0 {\n\t\th.wgCounter--\n\t}\n\tif h.wgCounter >= 0 {\n\t\th.wg.Done()\n\t}\n\treturn h.err\n}\n\nfunc (h *unitTestStartEventHandler) f() startEventFunc {\n\treturn h.startEvent\n}\n\nfunc (h *unitTestStartEventHandler) wait() {\n\th.wg.Wait()\n}\n\nfunc (h *unitTestStartEventHandler) called() bool {\n\th.RLock()\n\tdefer h.RUnlock()\n\treturn h.wasCalled\n}\n\nfunc newUnitTestStartEventHandler(n int, err error) unitTestStartEvent {\n\th := &unitTestStartEventHandler{\n\t\terr:       err,\n\t\twgCounter: n,\n\t}\n\th.wg.Add(n)\n\treturn h\n}\n\nfunc TestK8sMonitor_onStartup(t *testing.T) {\n\n\tlistSandboxFilter := &runtimeapi.PodSandboxFilter{\n\t\tState: &runtimeapi.PodSandboxStateValue{\n\t\t\tState: runtimeapi.PodSandboxState_SANDBOX_READY,\n\t\t},\n\t}\n\n\ttests := []struct {\n\t\tname               string\n\t\tstartEventHandler  unitTestStartEvent\n\t\twantErr            bool\n\t\tprepare            func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor, cri 
*mockcri.MockExtendedRuntimeService)\n\t\texpectedStartEvent bool\n\t}{\n\t\t{\n\t\t\tname:               \"listing sandboxes fails\",\n\t\t\tstartEventHandler:  newUnitTestStartEventHandler(0, nil),\n\t\t\twantErr:            true,\n\t\t\texpectedStartEvent: false,\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor, cri *mockcri.MockExtendedRuntimeService) {\n\t\t\t\tcri.EXPECT().ListPodSandbox(gomock.Eq(listSandboxFilter)).Return(nil, fmt.Errorf(\"failed\")).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:               \"listing sandboxes succeeds, but extracting metadata fails\",\n\t\t\tstartEventHandler:  newUnitTestStartEventHandler(0, nil),\n\t\t\twantErr:            false,\n\t\t\texpectedStartEvent: false,\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor, cri *mockcri.MockExtendedRuntimeService) {\n\t\t\t\tcri.EXPECT().ListPodSandbox(gomock.Eq(listSandboxFilter)).Return(\n\t\t\t\t\t[]*runtimeapi.PodSandbox{\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tId: \"sandbox-id\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tnil,\n\t\t\t\t).Times(1)\n\n\t\t\t\tContainerArgs := containermetadata.NewRuncArguments(containermetadata.StartAction, \"sandbox-id\")\n\t\t\t\textractor.EXPECT().Has(gomock.Eq(ContainerArgs)).Return(false).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:               \"listing sandboxes succeeds, sending an event that fails\",\n\t\t\tstartEventHandler:  newUnitTestStartEventHandler(1, fmt.Errorf(\"error\")),\n\t\t\twantErr:            false,\n\t\t\texpectedStartEvent: true,\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor, cri *mockcri.MockExtendedRuntimeService) {\n\t\t\t\tcri.EXPECT().ListPodSandbox(gomock.Eq(listSandboxFilter)).Return(\n\t\t\t\t\t[]*runtimeapi.PodSandbox{\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tId: 
\"sandbox-id\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tnil,\n\t\t\t\t).Times(1)\n\n\t\t\t\tContainerArgs := containermetadata.NewRuncArguments(containermetadata.StartAction, \"sandbox-id\")\n\t\t\t\textractor.EXPECT().Has(gomock.Eq(ContainerArgs)).Return(true).Times(1)\n\t\t\t\t// technically the first result would need to be populated, but that doesn't matter for the test\n\t\t\t\textractor.EXPECT().Extract(gomock.Eq(ContainerArgs)).Return(\n\t\t\t\t\tnil,\n\t\t\t\t\tmockcontainermetadata.NewMockCommonKubernetesContainerMetadata(nil),\n\t\t\t\t\tnil,\n\t\t\t\t).Times(1)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:               \"listing 2 sandboxes succeeds, sending 2 start events\",\n\t\t\tstartEventHandler:  newUnitTestStartEventHandler(2, nil),\n\t\t\twantErr:            false,\n\t\t\texpectedStartEvent: true,\n\t\t\tprepare: func(t *testing.T, extractor *mockcontainermetadata.MockCommonContainerMetadataExtractor, cri *mockcri.MockExtendedRuntimeService) {\n\t\t\t\tcri.EXPECT().ListPodSandbox(gomock.Eq(listSandboxFilter)).Return(\n\t\t\t\t\t[]*runtimeapi.PodSandbox{\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tId: \"sandbox-id-1\",\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tId: \"sandbox-id-2\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tnil,\n\t\t\t\t).Times(1)\n\n\t\t\t\tContainerArgs1 := containermetadata.NewRuncArguments(containermetadata.StartAction, \"sandbox-id-1\")\n\t\t\t\textractor.EXPECT().Has(gomock.Eq(ContainerArgs1)).Return(true).Times(1)\n\t\t\t\t// technically the first result would need to be populated, but that doesn't matter for the test\n\t\t\t\textractor.EXPECT().Extract(gomock.Eq(ContainerArgs1)).Return(\n\t\t\t\t\tnil,\n\t\t\t\t\tmockcontainermetadata.NewMockCommonKubernetesContainerMetadata(nil),\n\t\t\t\t\tnil,\n\t\t\t\t).Times(1)\n\t\t\t\tContainerArgs2 := containermetadata.NewRuncArguments(containermetadata.StartAction, \"sandbox-id-2\")\n\t\t\t\textractor.EXPECT().Has(gomock.Eq(ContainerArgs2)).Return(true).Times(1)\n\t\t\t\t// technically the first 
result would need to be populated, but that doesn't matter for the test\n\t\t\t\textractor.EXPECT().Extract(gomock.Eq(ContainerArgs2)).Return(\n\t\t\t\t\tnil,\n\t\t\t\t\tmockcontainermetadata.NewMockCommonKubernetesContainerMetadata(nil),\n\t\t\t\t\tnil,\n\t\t\t\t).Times(1)\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tctrl := gomock.NewController(t)\n\t\t\tmockExtractor := mockcontainermetadata.NewMockCommonContainerMetadataExtractor(ctrl)\n\t\t\textractor = mockExtractor\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tm := New(ctx)\n\t\t\tm.SetupHandlers(&config.ProcessorConfig{\n\t\t\t\tResyncLock: &sync.RWMutex{},\n\t\t\t})\n\t\t\tm.SenderReady()\n\t\t\tmockcri := mockcri.NewMockExtendedRuntimeService(ctrl)\n\t\t\tm.criRuntimeService = mockcri\n\t\t\ttt.prepare(t, mockExtractor, mockcri)\n\t\t\tif err := m.onStartup(ctx, tt.startEventHandler.f()); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"K8sMonitor.onStartup() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\ttt.startEventHandler.wait()\n\t\t\tif tt.expectedStartEvent != tt.startEventHandler.called() {\n\t\t\t\tt.Errorf(\"startEventHandler.called() = %v, want %v\", tt.startEventHandler.called(), tt.expectedStartEvent)\n\t\t\t}\n\t\t\tcancel()\n\t\t\tctrl.Finish()\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/k8s/pod_cache.go",
"content": "package k8smonitor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"reflect\"\n\t\"sync\"\n\t\"time\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/fields\"\n\t\"k8s.io/client-go/informers\"\n\t\"k8s.io/client-go/kubernetes\"\n\tlistersv1 \"k8s.io/client-go/listers/core/v1\"\n\t\"k8s.io/client-go/tools/cache\"\n\n\t\"go.uber.org/zap\"\n)\n\nvar (\n\terrCacheUninitialized = errors.New(\"cache uninitialized\")\n\terrSandboxEmpty       = errors.New(\"sandboxID must not be empty\")\n\terrPodNil             = errors.New(\"pod must not be nil\")\n\terrRuntimeNil         = errors.New(\"runtime must not be nil\")\n\terrPodNameEmpty       = errors.New(\"pod name must not be empty\")\n\terrPodNamespaceEmpty  = errors.New(\"pod namespace must not be empty\")\n\terrSandboxNotFound    = errors.New(\"sandbox not found\")\n)\n\nconst (\n\tnodeNameKeyIndex = \"spec.nodeName\"\n)\n\n// needsUpdateFunc is a function which compares two pod objects and determines\n// if we need to send an update event to the policy engine.\ntype needsUpdateFunc func(*corev1.Pod, *corev1.Pod) bool\n\n// defaultNeedsUpdate simply checks whether the labels changed.\n// As we are using the reduced metadata extractor for now, this is\n// the only change that we need to watch out for at the moment.\nfunc defaultNeedsUpdate(prev, obj *corev1.Pod) bool {\n\treturn !reflect.DeepEqual(prev.GetLabels(), obj.GetLabels())\n}\n\ntype podCacheInterface interface {\n\tDelete(sandboxID string)\n\tGet(sandboxID string) *corev1.Pod\n\tSet(sandboxID string, pod *corev1.Pod) error\n\tFindSandboxID(name, namespace string) (string, error)\n\tSetupInformer(ctx context.Context, kubeClient kubernetes.Interface, nodeName string, needsUpdate needsUpdateFunc) listersv1.PodLister\n}\n\nvar _ podCacheInterface = &podCache{}\n\ntype podCache struct {\n\tpods map[string]*corev1.Pod\n\tsync.RWMutex\n\tupdateEvent updateEventFunc\n}\n\nfunc 
newPodCache(updateEvent updateEventFunc) *podCache {\n\n\tc := &podCache{\n\t\tpods:        make(map[string]*corev1.Pod),\n\t\tupdateEvent: updateEvent,\n\t}\n\treturn c\n}\n\nfunc (c *podCache) SetupInformer(ctx context.Context, kubeClient kubernetes.Interface, nodeName string, needsUpdate needsUpdateFunc) listersv1.PodLister {\n\t// get the pod informer from the default factory\n\t// add a field selector to narrow down our results\n\tfieldSelector := fields.OneTermEqualSelector(nodeNameKeyIndex, nodeName).String()\n\tfactory := informers.NewSharedInformerFactoryWithOptions(kubeClient, time.Hour*24, informers.WithTweakListOptions(func(opts *metav1.ListOptions) {\n\t\topts.FieldSelector = fieldSelector\n\t}))\n\tinformer := factory.Core().V1().Pods().Informer()\n\n\t// add an indexer so that our field selector by node name will work\n\tinformer.AddIndexers(cache.Indexers{ // nolint: errcheck\n\t\tnodeNameKeyIndex: func(obj interface{}) ([]string, error) {\n\t\t\t// this is essentially exactly what the node lifecycle controller uses as well\n\t\t\tpod, ok := obj.(*corev1.Pod)\n\t\t\tif !ok {\n\t\t\t\treturn []string{}, nil\n\t\t\t}\n\t\t\tif len(pod.Spec.NodeName) == 0 {\n\t\t\t\treturn []string{}, nil\n\t\t\t}\n\t\t\treturn []string{pod.Spec.NodeName}, nil\n\t\t},\n\t})\n\tinformer.AddEventHandler(cache.ResourceEventHandlerFuncs{\n\t\t// we only subscribe to pod update events\n\t\tUpdateFunc: func(prev, obj interface{}) {\n\t\t\tprevPod, ok := prev.(*corev1.Pod)\n\t\t\tif !ok {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tpod, ok := obj.(*corev1.Pod)\n\t\t\tif !ok {\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tif pod.Spec.NodeName != nodeName {\n\t\t\t\t// TODO: unit tests are hitting this\n\t\t\t\t// the added indexer and the FieldSelector which are added to the lister through the factory options\n\t\t\t\t// should prevent this code path from ever being hit.\n\t\t\t\t// This might be a shortcoming of the fake clientset. 
We have run into limitations before.\n\t\t\t\tzap.L().Debug(\"K8sMonitor: informer: received pod update event which does not belong to this node\", zap.String(\"podNodeName\", pod.Spec.NodeName), zap.String(\"nodeName\", nodeName))\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tif pod.Spec.HostNetwork {\n\t\t\t\tzap.L().Debug(\"K8sMonitor: informer: skipping host network pods\", zap.String(\"podName\", pod.GetName()),\n\t\t\t\t\tzap.String(\"podNamespace\", pod.GetNamespace()),\n\t\t\t\t\tzap.String(\"nodeName\", pod.Spec.NodeName))\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tupdateInternal := func(p *corev1.Pod) (string, error) {\n\t\t\t\t// find the sandbox for this pod\n\t\t\t\tsandboxID, err := c.FindSandboxID(p.GetName(), p.GetNamespace())\n\t\t\t\tif err != nil {\n\t\t\t\t\t// this can only happen if we were wrongly monitoring a pod which we shouldn't have\n\t\t\t\t\t// it's not the end of the world, but something might be up, let's log a debug message\n\t\t\t\t\t// The problem with this log message is: when a pod first starts up, it is bound to receive\n\t\t\t\t\t// update events. However, the kubelet has not started the pod yet, so we are not interested\n\t\t\t\t\t// in the event yet. There is unfortunately no way for us to distinguish between the two\n\t\t\t\t\tzap.L().Debug(\n\t\t\t\t\t\t\"K8sMonitor: informer: sandbox for pod not found in cache. 
Will not update the processing unit\",\n\t\t\t\t\t\tzap.String(\"podName\", p.GetName()),\n\t\t\t\t\t\tzap.String(\"podNamespace\", p.GetNamespace()),\n\t\t\t\t\t\tzap.String(\"nodeName\", p.Spec.NodeName),\n\t\t\t\t\t)\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\t// update it in our internal state\n\t\t\t\tif err := c.Set(sandboxID, p.DeepCopy()); err != nil {\n\t\t\t\t\tzap.L().Error(\n\t\t\t\t\t\t\"K8sMonitor: informer: failed to update pod in cache\",\n\t\t\t\t\t\tzap.String(\"sandboxID\", sandboxID),\n\t\t\t\t\t\tzap.String(\"podName\", p.GetName()),\n\t\t\t\t\t\tzap.String(\"podNamespace\", p.GetNamespace()),\n\t\t\t\t\t\tzap.String(\"nodeName\", p.Spec.NodeName),\n\t\t\t\t\t)\n\t\t\t\t\treturn \"\", err\n\t\t\t\t}\n\t\t\t\treturn sandboxID, nil\n\t\t\t}\n\n\t\t\t// now send the event to the policy engine if\n\t\t\t// 1. there is an update on the pod labels.\n\t\t\tif needsUpdate(prevPod, pod) {\n\t\t\t\tgo func(ctx context.Context, p *corev1.Pod) {\n\t\t\t\t\tsandboxID, err := updateInternal(p)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\t// now send the update event\n\t\t\t\t\tif err := c.updateEvent(ctx, sandboxID); err != nil {\n\t\t\t\t\t\tzap.L().Error(\n\t\t\t\t\t\t\t\"K8sMonitor: informer: failed to send update event to policy engine\",\n\t\t\t\t\t\t\tzap.String(\"sandboxID\", sandboxID),\n\t\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t\t)\n\t\t\t\t\t}\n\t\t\t\t}(ctx, pod)\n\t\t\t} else {\n\t\t\t\tzap.L().Debug(\n\t\t\t\t\t\"K8sMonitor: informer: no update event necessary\",\n\t\t\t\t\tzap.String(\"podName\", pod.GetName()),\n\t\t\t\t\tzap.String(\"podNamespace\", pod.GetNamespace()),\n\t\t\t\t)\n\n\t\t\t\t// try to update the internal state anyway\n\t\t\t\t// this is technically not required at the moment\n\t\t\t\t// but we never know when it might be needed\n\t\t\t\tgo updateInternal(pod) // nolint\n\t\t\t}\n\t\t},\n\t})\n\n\t// now start the informer\n\tgo informer.Run(ctx.Done())\n\n\t// wait for the caches to sync before we 
return\n\t// if this fails, we can print a log, but this is not a fatal error\n\tif !cache.WaitForNamedCacheSync(\"pods\", ctx.Done(), informer.HasSynced) {\n\t\tzap.L().Warn(\"K8sMonitor: setupInformer: waiting for caches timed out\")\n\t}\n\n\t// return with a lister of the cache of this informer\n\treturn listersv1.NewPodLister(informer.GetIndexer())\n}\n\nfunc (c *podCache) Get(sandboxID string) *corev1.Pod {\n\tif c == nil {\n\t\treturn nil\n\t}\n\tc.RLock()\n\tdefer c.RUnlock()\n\tif c.pods == nil {\n\t\treturn nil\n\t}\n\tp, ok := c.pods[sandboxID]\n\tif !ok {\n\t\treturn nil\n\t}\n\treturn p.DeepCopy()\n}\n\n// FindSandboxID returns the sandbox ID of the pod that matches name and namespace.\n// It returns an error in all other cases - also if the sandbox is not in the cache.\nfunc (c *podCache) FindSandboxID(name, namespace string) (string, error) {\n\tif c == nil {\n\t\treturn \"\", errCacheUninitialized\n\t}\n\tif c.pods == nil {\n\t\treturn \"\", errCacheUninitialized\n\t}\n\tif name == \"\" {\n\t\treturn \"\", errPodNameEmpty\n\t}\n\tif namespace == \"\" {\n\t\treturn \"\", errPodNamespaceEmpty\n\t}\n\tc.RLock()\n\tdefer c.RUnlock()\n\tfor sandboxID, pod := range c.pods {\n\t\tif pod.GetName() == name && pod.GetNamespace() == namespace {\n\t\t\treturn sandboxID, nil\n\t\t}\n\t}\n\treturn \"\", errSandboxNotFound\n}\n\nfunc (c *podCache) Set(sandboxID string, pod *corev1.Pod) error {\n\tif c == nil {\n\t\treturn errCacheUninitialized\n\t}\n\tif sandboxID == \"\" {\n\t\treturn errSandboxEmpty\n\t}\n\tif pod == nil {\n\t\treturn errPodNil\n\t}\n\tc.Lock()\n\tdefer c.Unlock()\n\tif c.pods == nil {\n\t\treturn errCacheUninitialized\n\t}\n\tc.pods[sandboxID] = pod\n\treturn nil\n}\n\nfunc (c *podCache) Delete(sandboxID string) {\n\tif c == nil {\n\t\treturn\n\t}\n\tc.Lock()\n\tdefer c.Unlock()\n\tdelete(c.pods, sandboxID)\n}\n"
  },
  {
    "path": "monitor/internal/k8s/pod_cache_test.go",
"content": "package k8smonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/client-go/kubernetes\"\n\t\"k8s.io/client-go/kubernetes/fake\"\n\t//kubernetesTesting \"k8s.io/client-go/testing\"\n)\n\nfunc Test_podCache_Delete(t *testing.T) {\n\tupdateEvent := func(context.Context, string) error {\n\t\treturn nil\n\t}\n\n\ttests := []struct {\n\t\tname      string\n\t\tc         *podCache\n\t\tsandboxID string\n\t}{\n\t\t{\n\t\t\tname:      \"cache uninitialized\",\n\t\t\tc:         nil,\n\t\t\tsandboxID: \"does-not-matter\",\n\t\t},\n\t\t{\n\t\t\tname:      \"cache initialized\",\n\t\t\tc:         newPodCache(updateEvent),\n\t\t\tsandboxID: \"does-not-matter\",\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\ttt.c.Delete(tt.sandboxID)\n\t\t})\n\t}\n}\n\nfunc Test_podCache_Set(t *testing.T) {\n\tupdateEvent := func(context.Context, string) error {\n\t\treturn nil\n\t}\n\ttype args struct {\n\t\tsandboxID string\n\t\tpod       *corev1.Pod\n\t}\n\ttests := []struct {\n\t\tname         string\n\t\tc            *podCache\n\t\targs         args\n\t\twantErr      bool\n\t\twantErrError error\n\t}{\n\t\t{\n\t\t\tname:         \"cache uninitialized\",\n\t\t\tc:            nil,\n\t\t\twantErr:      true,\n\t\t\twantErrError: errCacheUninitialized,\n\t\t},\n\t\t{\n\t\t\tname:         \"cache has uninitialized map\",\n\t\t\tc:            &podCache{},\n\t\t\twantErr:      true,\n\t\t\twantErrError: errCacheUninitialized,\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"does-not-matter\",\n\t\t\t\tpod:       &corev1.Pod{},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:         \"no sandboxID\",\n\t\t\tc:            newPodCache(updateEvent),\n\t\t\twantErr:      true,\n\t\t\twantErrError: errSandboxEmpty,\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"\",\n\t\t\t\tpod:       
&corev1.Pod{},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:         \"pod is nil\",\n\t\t\tc:            newPodCache(updateEvent),\n\t\t\twantErr:      true,\n\t\t\twantErrError: errPodNil,\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"does-not-matter\",\n\t\t\t\tpod:       nil,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:    \"successful update entry\",\n\t\t\tc:       newPodCache(updateEvent),\n\t\t\twantErr: false,\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"does-not-matter\",\n\t\t\t\tpod:       &corev1.Pod{},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\terr := tt.c.Set(tt.args.sandboxID, tt.args.pod)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"podCache.Set() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tif tt.wantErr {\n\t\t\t\tif err != tt.wantErrError {\n\t\t\t\t\tt.Errorf(\"podCache.Set() error = %v, wantErrError %v\", err, tt.wantErrError)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_podCache_Get(t *testing.T) {\n\tupdateEvent := func(context.Context, string) error {\n\t\treturn nil\n\t}\n\tcacheWithEntry := newPodCache(updateEvent)\n\tif err := cacheWithEntry.Set(\"entry\", &corev1.Pod{}); err != nil {\n\t\tpanic(err)\n\t}\n\ttests := []struct {\n\t\tname      string\n\t\tsandboxID string\n\t\tc         *podCache\n\t\twant      *corev1.Pod\n\t}{\n\t\t{\n\t\t\tname: \"uninitialized podCache\",\n\t\t\tc:    nil,\n\t\t\twant: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"uninitialized map in podCache\",\n\t\t\tc:    &podCache{},\n\t\t\twant: nil,\n\t\t},\n\t\t{\n\t\t\tname:      \"entry does not exist\",\n\t\t\tc:         newPodCache(updateEvent),\n\t\t\tsandboxID: \"does-not-exist\",\n\t\t\twant:      nil,\n\t\t},\n\t\t{\n\t\t\tname:      \"entry exists\",\n\t\t\tc:         cacheWithEntry,\n\t\t\tsandboxID: \"entry\",\n\t\t\twant:      &corev1.Pod{},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := tt.c.Get(tt.sandboxID); !reflect.DeepEqual(got, 
tt.want) {\n\t\t\t\tt.Errorf(\"podCache.Get() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_podCache_FindSandboxID(t *testing.T) {\n\tupdateEvent := func(context.Context, string) error {\n\t\treturn nil\n\t}\n\tcacheWithEntry := newPodCache(updateEvent)\n\tif err := cacheWithEntry.Set(\"entry\", &corev1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      \"my-pod\",\n\t\t\tNamespace: \"default\",\n\t\t},\n\t}); err != nil {\n\t\tpanic(err)\n\t}\n\ttype args struct {\n\t\tname      string\n\t\tnamespace string\n\t}\n\ttests := []struct {\n\t\tname         string\n\t\tc            *podCache\n\t\targs         args\n\t\twant         string\n\t\twantErr      bool\n\t\twantErrError error\n\t}{\n\t\t{\n\t\t\tname:         \"cache uninitialized\",\n\t\t\tc:            nil,\n\t\t\twantErr:      true,\n\t\t\twantErrError: errCacheUninitialized,\n\t\t},\n\t\t{\n\t\t\tname:         \"pods uninitialized\",\n\t\t\tc:            &podCache{},\n\t\t\twantErr:      true,\n\t\t\twantErrError: errCacheUninitialized,\n\t\t},\n\t\t{\n\t\t\tname: \"pod name empty\",\n\t\t\tc:    newPodCache(updateEvent),\n\t\t\targs: args{\n\t\t\t\tname:      \"\",\n\t\t\t\tnamespace: \"default\",\n\t\t\t},\n\t\t\twantErr:      true,\n\t\t\twantErrError: errPodNameEmpty,\n\t\t},\n\t\t{\n\t\t\tname: \"pod namespace empty\",\n\t\t\tc:    newPodCache(updateEvent),\n\t\t\targs: args{\n\t\t\t\tname:      \"my-pod\",\n\t\t\t\tnamespace: \"\",\n\t\t\t},\n\t\t\twantErr:      true,\n\t\t\twantErrError: errPodNamespaceEmpty,\n\t\t},\n\t\t{\n\t\t\tname: \"sandbox not found\",\n\t\t\tc:    newPodCache(updateEvent),\n\t\t\targs: args{\n\t\t\t\tname:      \"my-pod\",\n\t\t\t\tnamespace: \"default\",\n\t\t\t},\n\t\t\twantErr:      true,\n\t\t\twantErrError: errSandboxNotFound,\n\t\t},\n\t\t{\n\t\t\tname: \"sandbox found\",\n\t\t\tc:    cacheWithEntry,\n\t\t\targs: args{\n\t\t\t\tname:      \"my-pod\",\n\t\t\t\tnamespace: \"default\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t\twant:    
\"entry\",\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot, err := tt.c.FindSandboxID(tt.args.name, tt.args.namespace)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"podCache.FindSandboxID() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif got != tt.want {\n\t\t\t\tt.Errorf(\"podCache.FindSandboxID() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t\tif tt.wantErr {\n\t\t\t\tif err != tt.wantErrError {\n\t\t\t\t\tt.Errorf(\"podCache.FindSandboxID() error = %v, wantErrError %v\", err, tt.wantErrError)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype unitTestUpdateEvent interface {\n\tf() updateEventFunc\n\twait()\n\tcalled() bool\n}\ntype unitTestUpdateEventHandler struct {\n\tsync.RWMutex\n\twg        sync.WaitGroup\n\twgCounter int\n\twasCalled bool\n\terr       error\n}\n\nfunc (h *unitTestUpdateEventHandler) updateEvent(context.Context, string) error {\n\th.Lock()\n\tdefer h.Unlock()\n\th.wasCalled = true\n\tif h.wgCounter > 0 {\n\t\th.wgCounter--\n\t}\n\tif h.wgCounter >= 0 {\n\t\th.wg.Done()\n\t}\n\treturn h.err\n}\n\nfunc (h *unitTestUpdateEventHandler) f() updateEventFunc {\n\treturn h.updateEvent\n}\n\nfunc (h *unitTestUpdateEventHandler) wait() {\n\th.wg.Wait()\n}\n\nfunc (h *unitTestUpdateEventHandler) called() bool {\n\th.RLock()\n\tdefer h.RUnlock()\n\treturn h.wasCalled\n}\n\nfunc newUnitTestUpdateEventHandler(n int, err error) unitTestUpdateEvent {\n\th := &unitTestUpdateEventHandler{\n\t\terr:       err,\n\t\twgCounter: n,\n\t}\n\th.wg.Add(n)\n\treturn h\n}\n\nfunc Test_podCache_SetupInformer(t *testing.T) {\n\tpodTemplate := &corev1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      \"my-pod\",\n\t\t\tNamespace: \"default\",\n\t\t},\n\t\tSpec: corev1.PodSpec{\n\t\t\tNodeName: \"test\",\n\t\t},\n\t}\n\trunning := corev1.ContainerStateRunning{}\n\tpending := corev1.ContainerStateWaiting{}\n\n\thostpodTemplate := &corev1.Pod{\n\t\tObjectMeta: 
metav1.ObjectMeta{\n\t\t\tName:      \"my-host-pod\",\n\t\t\tNamespace: \"default\",\n\t\t},\n\t\tSpec: corev1.PodSpec{\n\t\t\tNodeName:    \"test\",\n\t\t\tHostNetwork: true,\n\t\t},\n\t\tStatus: corev1.PodStatus{\n\t\t\tPhase: corev1.PodRunning,\n\t\t\tInitContainerStatuses: []corev1.ContainerStatus{\n\t\t\t\t{\n\t\t\t\t\tContainerID: \"testing://containerID\",\n\t\t\t\t\tReady:       true,\n\t\t\t\t\tState: corev1.ContainerState{\n\t\t\t\t\t\tWaiting: &pending,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tContainerStatuses: []corev1.ContainerStatus{\n\t\t\t\t{\n\t\t\t\t\tContainerID: \"broken-container-id-needs-to-be-skipped\",\n\t\t\t\t\tReady:       true,\n\t\t\t\t\tState: corev1.ContainerState{\n\t\t\t\t\t\tWaiting: &pending,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\tupdateHostPodTemplate := hostpodTemplate.DeepCopy()\n\tupdateHostPodTemplate.Status.InitContainerStatuses[0].State.Running = &running\n\tupdateHostPodTemplate2 := updateHostPodTemplate.DeepCopy()\n\tupdateHostPodTemplate2.Labels = map[string]string{\n\t\t\"a\": \"b\",\n\t}\n\n\tupdatedPodTemplate := podTemplate.DeepCopy()\n\tupdatedPodTemplate.Labels = map[string]string{\n\t\t\"a\": \"b\",\n\t}\n\tupdatedPodTemplate2 := updatedPodTemplate.DeepCopy()\n\tupdatedPodTemplate2.Annotations = map[string]string{\n\t\t\"annotated\": \"\",\n\t}\n\n\tuntrackedPodOnSameHostTemplate := &corev1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      \"untracked-same-host\",\n\t\t\tNamespace: \"default\",\n\t\t},\n\t\tSpec: corev1.PodSpec{\n\t\t\tNodeName: \"test\",\n\t\t},\n\t}\n\tuntrackedPodOnDifferentHostTemplate := &corev1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      \"untracked-different-host\",\n\t\t\tNamespace: \"default\",\n\t\t},\n\t\tSpec: corev1.PodSpec{\n\t\t\tNodeName: \"different\",\n\t\t},\n\t}\n\n\tc := 
fake.NewSimpleClientset(\n\t\tpodTemplate.DeepCopy(),\n\t\tuntrackedPodOnSameHostTemplate.DeepCopy(),\n\t\tuntrackedPodOnDifferentHostTemplate.DeepCopy(),\n\t\thostpodTemplate.DeepCopy(),\n\t)\n\n\ttype fields struct {\n\t\tpods map[string]*corev1.Pod\n\t}\n\ttype args struct {\n\t\tctx         context.Context\n\t\tkubeClient  kubernetes.Interface\n\t\tnodeName    string\n\t\tneedsUpdate needsUpdateFunc\n\t}\n\ttests := []struct {\n\t\tname                string\n\t\tupdateEventHandler  unitTestUpdateEvent\n\t\tfields              fields\n\t\targs                args\n\t\taction              func(*testing.T, *podCache)\n\t\texpectedUpdateEvent bool\n\t\texpectedPods        map[string]*corev1.Pod\n\t}{\n\t\t{\n\t\t\tname:               \"update to a pod which we have in cache which requires update event\",\n\t\t\tupdateEventHandler: newUnitTestUpdateEventHandler(1, fmt.Errorf(\"increase coverage\")),\n\t\t\tfields: fields{\n\t\t\t\tpods: map[string]*corev1.Pod{\n\t\t\t\t\t\"entry\": podTemplate.DeepCopy(),\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tctx:         context.Background(),\n\t\t\t\tkubeClient:  c,\n\t\t\t\tnodeName:    \"test\",\n\t\t\t\tneedsUpdate: defaultNeedsUpdate,\n\t\t\t},\n\t\t\taction: func(_ *testing.T, _ *podCache) {\n\t\t\t\t_, err := c.CoreV1().Pods(\"default\").Update(context.Background(), updatedPodTemplate.DeepCopy(), metav1.UpdateOptions{})\n\t\t\t\tif err != nil {\n\t\t\t\t\tpanic(err)\n\t\t\t\t}\n\t\t\t},\n\t\t\texpectedUpdateEvent: true,\n\t\t\texpectedPods: map[string]*corev1.Pod{\n\t\t\t\t\"entry\": updatedPodTemplate.DeepCopy(),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:               \"update to a pod which we have in cache which does not require an update event\",\n\t\t\tupdateEventHandler: newUnitTestUpdateEventHandler(0, nil),\n\t\t\tfields: fields{\n\t\t\t\tpods: map[string]*corev1.Pod{\n\t\t\t\t\t\"entry\": podTemplate.DeepCopy(),\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tctx:         
context.Background(),\n\t\t\t\tkubeClient:  c,\n\t\t\t\tnodeName:    \"test\",\n\t\t\t\tneedsUpdate: defaultNeedsUpdate,\n\t\t\t},\n\t\t\taction: func(_ *testing.T, _ *podCache) {\n\t\t\t\t_, err := c.CoreV1().Pods(\"default\").Update(context.Background(), updatedPodTemplate2.DeepCopy(), metav1.UpdateOptions{})\n\t\t\t\tif err != nil {\n\t\t\t\t\tpanic(err)\n\t\t\t\t}\n\t\t\t\ttime.Sleep(time.Millisecond * 100)\n\t\t\t},\n\t\t\texpectedUpdateEvent: false,\n\t\t\texpectedPods: map[string]*corev1.Pod{\n\t\t\t\t\"entry\": updatedPodTemplate2.DeepCopy(),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:               \"update to a pod from different host which we do not track\",\n\t\t\tupdateEventHandler: newUnitTestUpdateEventHandler(0, nil),\n\t\t\tfields: fields{\n\t\t\t\tpods: map[string]*corev1.Pod{\n\t\t\t\t\t\"entry\": podTemplate.DeepCopy(),\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tctx:         context.Background(),\n\t\t\t\tkubeClient:  c,\n\t\t\t\tnodeName:    \"test\",\n\t\t\t\tneedsUpdate: defaultNeedsUpdate,\n\t\t\t},\n\t\t\taction: func(_ *testing.T, _ *podCache) {\n\t\t\t\tupdated := untrackedPodOnDifferentHostTemplate.DeepCopy()\n\t\t\t\tupdated.Labels = map[string]string{\n\t\t\t\t\t\"update\": \"\",\n\t\t\t\t}\n\t\t\t\t_, err := c.CoreV1().Pods(\"default\").Update(context.Background(), updated.DeepCopy(), metav1.UpdateOptions{})\n\t\t\t\tif err != nil {\n\t\t\t\t\tpanic(err)\n\t\t\t\t}\n\t\t\t},\n\t\t\texpectedUpdateEvent: false,\n\t\t\texpectedPods: map[string]*corev1.Pod{\n\t\t\t\t\"entry\": podTemplate.DeepCopy(),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:               \"update to a pod from the same host which we do not track\",\n\t\t\tupdateEventHandler: newUnitTestUpdateEventHandler(0, nil),\n\t\t\tfields: fields{\n\t\t\t\tpods: map[string]*corev1.Pod{\n\t\t\t\t\t\"entry\": podTemplate.DeepCopy(),\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tctx:         context.Background(),\n\t\t\t\tkubeClient:  c,\n\t\t\t\tnodeName:    
\"test\",\n\t\t\t\tneedsUpdate: defaultNeedsUpdate,\n\t\t\t},\n\t\t\taction: func(_ *testing.T, _ *podCache) {\n\t\t\t\tupdated := untrackedPodOnSameHostTemplate.DeepCopy()\n\t\t\t\tupdated.Labels = map[string]string{\n\t\t\t\t\t\"update\": \"\",\n\t\t\t\t}\n\t\t\t\t_, err := c.CoreV1().Pods(\"default\").Update(context.Background(), updated.DeepCopy(), metav1.UpdateOptions{})\n\t\t\t\tif err != nil {\n\t\t\t\t\tpanic(err)\n\t\t\t\t}\n\t\t\t},\n\t\t\texpectedUpdateEvent: false,\n\t\t\texpectedPods: map[string]*corev1.Pod{\n\t\t\t\t\"entry\": podTemplate.DeepCopy(),\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tc := &podCache{\n\t\t\t\tpods:        tt.fields.pods,\n\t\t\t\tupdateEvent: tt.updateEventHandler.f(),\n\t\t\t}\n\t\t\tctx, cancel := context.WithCancel(tt.args.ctx)\n\t\t\tc.SetupInformer(ctx, tt.args.kubeClient, tt.args.nodeName, tt.args.needsUpdate)\n\t\t\ttt.action(t, c)\n\t\t\ttt.updateEventHandler.wait()\n\t\t\tc.RLock()\n\t\t\tif !reflect.DeepEqual(c.pods, tt.expectedPods) {\n\t\t\t\tt.Errorf(\"c.pods = %v, want %v\", c.pods, tt.expectedPods)\n\t\t\t}\n\t\t\tc.RUnlock()\n\t\t\tif tt.expectedUpdateEvent != tt.updateEventHandler.called() {\n\t\t\t\tt.Errorf(\"updateEventHandler.called() = %v, want %v\", tt.updateEventHandler.called(), tt.expectedUpdateEvent)\n\t\t\t}\n\t\t\tcancel()\n\t\t})\n\t}\n}\n\nfunc Test_defaultNeedsUpdate(t *testing.T) {\n\ttype args struct {\n\t\tprev *corev1.Pod\n\t\tobj  *corev1.Pod\n\t}\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t\twant bool\n\t}{\n\t\t{\n\t\t\tname: \"labels are both nil\",\n\t\t\targs: args{\n\t\t\t\tprev: &corev1.Pod{},\n\t\t\t\tobj:  &corev1.Pod{},\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"labels are empty\",\n\t\t\targs: args{\n\t\t\t\tprev: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tobj: &corev1.Pod{\n\t\t\t\t\tObjectMeta: 
metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"labels are the same\",\n\t\t\targs: args{\n\t\t\t\tprev: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tobj: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"labels are the same, but one has annotations\",\n\t\t\targs: args{\n\t\t\t\tprev: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tobj: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t\tAnnotations: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t\t{\n\t\t\tname: \"labels are nil in prev, but set on new\",\n\t\t\targs: args{\n\t\t\t\tprev: &corev1.Pod{},\n\t\t\t\tobj: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"labels are empty in prev, but set on new\",\n\t\t\targs: args{\n\t\t\t\tprev: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tobj: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"labels have same key but different 
value\",\n\t\t\targs: args{\n\t\t\t\tprev: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"a\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tobj: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"prev has one label more\",\n\t\t\targs: args{\n\t\t\t\tprev: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t\t\"b\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tobj: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"prev has different labels\",\n\t\t\targs: args{\n\t\t\t\tprev: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"b\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tobj: &corev1.Pod{\n\t\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := defaultNeedsUpdate(tt.args.prev, tt.args.obj); got != tt.want {\n\t\t\t\tt.Errorf(\"defaultNeedsUpdate() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/k8s/runtime_cache.go",
    "content": "package k8smonitor\n\nimport (\n\t\"context\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\nvar (\n\t// defaultLoopWait defines how often we loop over the entries to discover dead containers\n\tdefaultLoopWait = time.Second * 5\n)\n\ntype runtimeCacheInterface interface {\n\tDelete(sandboxID string)\n\tGet(sandboxID string) policy.RuntimeReader\n\tSet(sandboxID string, runtime policy.RuntimeReader) error\n}\n\nvar _ runtimeCacheInterface = &runtimeCache{}\n\ntype runtimeCache struct {\n\tsync.RWMutex\n\truntimes  map[string]runtimeCacheEntry\n\tloopWait  time.Duration\n\tstopEvent stopEventFunc\n}\n\ntype runtimeCacheEntry struct {\n\truntime policy.RuntimeReader\n\trunning bool\n}\n\nfunc newRuntimeCache(ctx context.Context, stopEvent stopEventFunc) *runtimeCache {\n\tc := &runtimeCache{\n\t\truntimes:  make(map[string]runtimeCacheEntry),\n\t\tloopWait:  defaultLoopWait,\n\t\tstopEvent: stopEvent,\n\t}\n\tif c.loopWait > 0 {\n\t\tgo c.loop(ctx)\n\t}\n\treturn c\n}\n\nfunc makeSnapshot(m map[string]runtimeCacheEntry) map[string]policy.RuntimeReader {\n\tsnap := make(map[string]policy.RuntimeReader, len(m))\n\tfor k, v := range m {\n\t\tif v.running {\n\t\t\tsnap[k] = v.runtime\n\t\t}\n\t}\n\treturn snap\n}\n\n// loop is very awkward: it implements a runtime poller that checks if all runtimes\n// are actually still running. 
If not, it sends a stop event.\n// NOTE: this must be deprecated once we have hooked into the OCI runtime hooks in the bundle!\nfunc (c *runtimeCache) loop(ctx context.Context) {\nloop:\n\tfor {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\tbreak loop\n\t\tcase <-time.After(c.loopWait):\n\t\t\tc.RLock()\n\t\t\tif len(c.runtimes) > 0 {\n\t\t\t\t// take a snapshot\n\t\t\t\tsnap := makeSnapshot(c.runtimes)\n\t\t\t\tc.RUnlock()\n\t\t\t\t// and process them\n\t\t\t\tc.processRuntimes(ctx, snap)\n\t\t\t} else {\n\t\t\t\tc.RUnlock()\n\t\t\t}\n\t\t}\n\t}\n}\n\n// processRuntimes takes a snapshot of the runtimeCache, checks if the process is running, and sends a stop event if not\nfunc (c *runtimeCache) processRuntimes(ctx context.Context, snap map[string]policy.RuntimeReader) {\n\tfor id, runtime := range snap {\n\t\tpid := runtime.Pid()\n\t\tif pid > 0 {\n\t\t\tif running, err := sandboxIsRunning(pid); !running {\n\t\t\t\t// if there has been error checking, just continue\n\t\t\t\tif err != nil {\n\t\t\t\t\tzap.L().Error(\"K8sMonitor: runtime poller: failed to check if sandbox is still running\",\n\t\t\t\t\t\tzap.String(\"sandboxID\", id),\n\t\t\t\t\t\tzap.Int(\"sandboxPid\", pid),\n\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t)\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tzap.L().Debug(\"K8sMonitor: runtime poller: sandbox container must have stopped\", zap.String(\"sandboxID\", id), zap.Int(\"sandboxPid\", pid), zap.Error(err))\n\n\t\t\t\t// update the entry\n\t\t\t\tc.Lock()\n\t\t\t\tif _, ok := c.runtimes[id]; ok {\n\t\t\t\t\tc.runtimes[id] = runtimeCacheEntry{\n\t\t\t\t\t\truntime: runtime,\n\t\t\t\t\t\trunning: false,\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tc.Unlock()\n\n\t\t\t\t// fire away a stop event to the policy engine\n\t\t\t\t// log an error as every caller should do, but continue normally\n\t\t\t\t// there is nothing we can do about the error\n\t\t\t\tgo func(ctx context.Context, sandboxID string) {\n\t\t\t\t\tif err := c.stopEvent(ctx, sandboxID); err != nil 
{\n\t\t\t\t\t\tzap.L().Error(\"K8sMonitor: runtime poller: failed to send stop event to policy engine\", zap.String(\"sandboxID\", sandboxID), zap.Error(err))\n\t\t\t\t\t}\n\t\t\t\t}(ctx, id)\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc (c *runtimeCache) Get(sandboxID string) policy.RuntimeReader {\n\tif c == nil {\n\t\treturn nil\n\t}\n\tc.RLock()\n\tdefer c.RUnlock()\n\tif c.runtimes == nil {\n\t\treturn nil\n\t}\n\tr, ok := c.runtimes[sandboxID]\n\tif !ok {\n\t\treturn nil\n\t}\n\t// TODO: should return a clone, not a pointer\n\treturn r.runtime\n}\n\nfunc (c *runtimeCache) Set(sandboxID string, runtime policy.RuntimeReader) error {\n\tif c == nil {\n\t\treturn errCacheUninitialized\n\t}\n\tif sandboxID == \"\" {\n\t\treturn errSandboxEmpty\n\t}\n\tif runtime == nil {\n\t\treturn errRuntimeNil\n\t}\n\tc.Lock()\n\tdefer c.Unlock()\n\tif c.runtimes == nil {\n\t\treturn errCacheUninitialized\n\t}\n\tc.runtimes[sandboxID] = runtimeCacheEntry{\n\t\truntime: runtime,\n\t\trunning: true,\n\t}\n\treturn nil\n}\n\nfunc (c *runtimeCache) Delete(sandboxID string) {\n\tif c == nil {\n\t\treturn\n\t}\n\tc.Lock()\n\tdefer c.Unlock()\n\tdelete(c.runtimes, sandboxID)\n}\n"
  },
  {
    "path": "monitor/internal/k8s/runtime_cache_linux.go",
    "content": "// +build linux\n\npackage k8smonitor\n\nimport (\n\t\"syscall\"\n)\n\n// syscallKill points to syscall.Kill and can be overwritten in unit tests\nvar syscallKill func(pid int, sig syscall.Signal) (err error) = syscall.Kill\n\nfunc sandboxIsRunning(pid int) (bool, error) {\n\tif err := syscallKill(pid, syscall.Signal(0)); err != nil {\n\t\t// the expected error is ESRCH: The process or process group does not exist.\n\t\tif err != syscall.ESRCH {\n\t\t\treturn false, err\n\t\t}\n\n\t\t// this is a successful check that the process is dead\n\t\t// and therefore the sandbox is not running anymore\n\t\treturn false, nil\n\t}\n\n\t// otherwise it means that the process is not dead and is still running\n\treturn true, nil\n}\n"
  },
  {
    "path": "monitor/internal/k8s/runtime_cache_test.go",
    "content": "// +build linux\n\npackage k8smonitor\n\n// TODO: make compatible with Windows\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"sync\"\n\t\"syscall\"\n\t\"testing\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\nfunc Test_runtimeCache_Delete(t *testing.T) {\n\tstopEvent := func(context.Context, string) error {\n\t\treturn nil\n\t}\n\n\t// override globals for unit tests\n\toldDefaultLoopWait := defaultLoopWait\n\toldSyscallKill := syscallKill\n\tdefer func() {\n\t\tdefaultLoopWait = oldDefaultLoopWait\n\t\tsyscallKill = oldSyscallKill\n\t}()\n\tdefaultLoopWait = time.Duration(0)\n\tsyscallKill = func(int, syscall.Signal) (err error) {\n\t\treturn nil\n\t}\n\n\ttests := []struct {\n\t\tname      string\n\t\tc         *runtimeCache\n\t\tsandboxID string\n\t}{\n\t\t{\n\t\t\tname:      \"cache uninitialized\",\n\t\t\tc:         nil,\n\t\t\tsandboxID: \"does-not-matter\",\n\t\t},\n\t\t{\n\t\t\tname:      \"cache initialized\",\n\t\t\tc:         newRuntimeCache(context.TODO(), stopEvent),\n\t\t\tsandboxID: \"does-not-mater\",\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\ttt.c.Delete(tt.sandboxID)\n\t\t})\n\t}\n}\n\nfunc Test_runtimeCache_Set(t *testing.T) {\n\tstopEvent := func(context.Context, string) error {\n\t\treturn nil\n\t}\n\n\t// override globals for unit tests\n\toldDefaultLoopWait := defaultLoopWait\n\toldSyscallKill := syscallKill\n\tdefer func() {\n\t\tdefaultLoopWait = oldDefaultLoopWait\n\t\tsyscallKill = oldSyscallKill\n\t}()\n\tdefaultLoopWait = time.Duration(0)\n\tsyscallKill = func(int, syscall.Signal) (err error) {\n\t\treturn nil\n\t}\n\n\ttype args struct {\n\t\tsandboxID string\n\t\truntime   policy.RuntimeReader\n\t}\n\ttests := []struct {\n\t\tname         string\n\t\tc            *runtimeCache\n\t\targs         args\n\t\twantErr      bool\n\t\twantErrError error\n\t}{\n\t\t{\n\t\t\tname:         \"cache 
uninitialized\",\n\t\t\tc:            nil,\n\t\t\twantErr:      true,\n\t\t\twantErrError: errCacheUninitialized,\n\t\t},\n\t\t{\n\t\t\tname:         \"cache has unintialized map\",\n\t\t\tc:            &runtimeCache{},\n\t\t\twantErr:      true,\n\t\t\twantErrError: errCacheUninitialized,\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"does-not-matter\",\n\t\t\t\truntime:   policy.NewPURuntimeWithDefaults(),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:         \"no sandboxID\",\n\t\t\tc:            newRuntimeCache(context.TODO(), stopEvent),\n\t\t\twantErr:      true,\n\t\t\twantErrError: errSandboxEmpty,\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"\",\n\t\t\t\truntime:   policy.NewPURuntimeWithDefaults(),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:         \"runtime is nil\",\n\t\t\tc:            newRuntimeCache(context.TODO(), stopEvent),\n\t\t\twantErr:      true,\n\t\t\twantErrError: errRuntimeNil,\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"does-not-matter\",\n\t\t\t\truntime:   nil,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:    \"successful update entry\",\n\t\t\tc:       newRuntimeCache(context.TODO(), stopEvent),\n\t\t\twantErr: false,\n\t\t\targs: args{\n\t\t\t\tsandboxID: \"does-not-matter\",\n\t\t\t\truntime:   policy.NewPURuntimeWithDefaults(),\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\terr := tt.c.Set(tt.args.sandboxID, tt.args.runtime)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"runtimeCache.Set() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t\tif tt.wantErr {\n\t\t\t\tif err != tt.wantErrError {\n\t\t\t\t\tt.Errorf(\"runtimeCache.Set() error = %v, wantErrError %v\", err, tt.wantErrError)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_runtimeCache_Get(t *testing.T) {\n\tstopEvent := func(context.Context, string) error {\n\t\treturn nil\n\t}\n\n\t// override globals for unit tests\n\toldDefaultLoopWait := defaultLoopWait\n\toldSyscallKill := syscallKill\n\tdefer func() {\n\t\tdefaultLoopWait = 
oldDefaultLoopWait\n\t\tsyscallKill = oldSyscallKill\n\t}()\n\tdefaultLoopWait = time.Duration(0)\n\tsyscallKill = func(int, syscall.Signal) (err error) {\n\t\treturn nil\n\t}\n\n\tcacheWithEntry := newRuntimeCache(context.TODO(), stopEvent)\n\tif err := cacheWithEntry.Set(\"entry\", policy.NewPURuntimeWithDefaults()); err != nil {\n\t\tpanic(err)\n\t}\n\ttests := []struct {\n\t\tname      string\n\t\tsandboxID string\n\t\tc         *runtimeCache\n\t\twant      policy.RuntimeReader\n\t}{\n\t\t{\n\t\t\tname: \"uninitialized runtimeCache\",\n\t\t\tc:    nil,\n\t\t\twant: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"uninitialized map in runtimeCache\",\n\t\t\tc:    &runtimeCache{},\n\t\t\twant: nil,\n\t\t},\n\t\t{\n\t\t\tname:      \"entry does not exist\",\n\t\t\tc:         newRuntimeCache(context.TODO(), stopEvent),\n\t\t\tsandboxID: \"does-not-exist\",\n\t\t\twant:      nil,\n\t\t},\n\t\t{\n\t\t\tname:      \"entry exists\",\n\t\t\tc:         cacheWithEntry,\n\t\t\tsandboxID: \"entry\",\n\t\t\twant:      policy.NewPURuntimeWithDefaults(),\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := tt.c.Get(tt.sandboxID); !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"runtimeCache.Get() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_makeSnapshot(t *testing.T) {\n\ttype args struct {\n\t\tm map[string]runtimeCacheEntry\n\t}\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t\twant map[string]policy.RuntimeReader\n\t}{\n\t\t{\n\t\t\tname: \"empty\",\n\t\t\targs: args{\n\t\t\t\tm: map[string]runtimeCacheEntry{},\n\t\t\t},\n\t\t\twant: map[string]policy.RuntimeReader{},\n\t\t},\n\t\t{\n\t\t\tname: \"not-running entry\",\n\t\t\targs: args{\n\t\t\t\tm: map[string]runtimeCacheEntry{\n\t\t\t\t\t\"entry\": {\n\t\t\t\t\t\truntime: policy.NewPURuntimeWithDefaults(),\n\t\t\t\t\t\trunning: false,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: map[string]policy.RuntimeReader{},\n\t\t},\n\t\t{\n\t\t\tname: 
\"running entry\",\n\t\t\targs: args{\n\t\t\t\tm: map[string]runtimeCacheEntry{\n\t\t\t\t\t\"entry\": {\n\t\t\t\t\t\truntime: policy.NewPURuntimeWithDefaults(),\n\t\t\t\t\t\trunning: true,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant: map[string]policy.RuntimeReader{\n\t\t\t\t\"entry\": policy.NewPURuntimeWithDefaults(),\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := makeSnapshot(tt.args.m); !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"makeSnapshot() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype unitTestStopEvent interface {\n\tf() stopEventFunc\n\twait()\n\tcalled() bool\n}\ntype unitTestStopEventHandler struct {\n\tsync.RWMutex\n\twg        sync.WaitGroup\n\twgCounter int\n\twasCalled bool\n\terr       error\n}\n\nfunc (h *unitTestStopEventHandler) stopEvent(context.Context, string) error {\n\th.Lock()\n\tdefer h.Unlock()\n\th.wasCalled = true\n\tif h.wgCounter > 0 {\n\t\th.wgCounter--\n\t}\n\tif h.wgCounter >= 0 {\n\t\th.wg.Done()\n\t}\n\treturn h.err\n}\n\nfunc (h *unitTestStopEventHandler) f() stopEventFunc {\n\treturn h.stopEvent\n}\n\nfunc (h *unitTestStopEventHandler) wait() {\n\th.wg.Wait()\n}\n\nfunc (h *unitTestStopEventHandler) called() bool {\n\th.RLock()\n\tdefer h.RUnlock()\n\treturn h.wasCalled\n}\n\nfunc newUnitTestStopEventHandler(n int, err error) unitTestStopEvent {\n\th := &unitTestStopEventHandler{\n\t\terr:       err,\n\t\twgCounter: n,\n\t}\n\th.wg.Add(n)\n\treturn h\n}\n\nfunc Test_runtimeCache_processRuntimes(t *testing.T) {\n\t// override globals for unit tests\n\toldDefaultLoopWait := defaultLoopWait\n\toldSyscallKill := syscallKill\n\tdefer func() {\n\t\tdefaultLoopWait = oldDefaultLoopWait\n\t\tsyscallKill = oldSyscallKill\n\t}()\n\tdefaultLoopWait = time.Duration(0)\n\tsyscallKill = func(int, syscall.Signal) (err error) {\n\t\treturn nil\n\t}\n\n\ttype fields struct {\n\t\truntimes map[string]runtimeCacheEntry\n\t}\n\ttype args struct 
{\n\t\tctx  context.Context\n\t\tsnap map[string]policy.RuntimeReader\n\t}\n\n\truntime := policy.NewPURuntime(\"entry\", 42, \"\", nil, nil, common.ContainerPU, policy.None, nil)\n\ttests := []struct {\n\t\tname              string\n\t\tsyscallKill       func(int, syscall.Signal) error\n\t\tstopEventHandler  unitTestStopEvent\n\t\tfields            fields\n\t\targs              args\n\t\texpectedStopEvent bool\n\t\texpectedRuntimes  map[string]runtimeCacheEntry\n\t}{\n\t\t{\n\t\t\tname: \"process still running\",\n\t\t\tsyscallKill: func(int, syscall.Signal) error {\n\t\t\t\treturn nil\n\t\t\t},\n\t\t\tstopEventHandler: newUnitTestStopEventHandler(0, nil),\n\t\t\tfields: fields{\n\t\t\t\truntimes: map[string]runtimeCacheEntry{\n\t\t\t\t\t\"entry\": {\n\t\t\t\t\t\truntime: runtime,\n\t\t\t\t\t\trunning: true,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tctx: context.Background(),\n\t\t\t\tsnap: map[string]policy.RuntimeReader{\n\t\t\t\t\t\"entry\": runtime,\n\t\t\t\t},\n\t\t\t},\n\t\t\texpectedStopEvent: false,\n\t\t\texpectedRuntimes: map[string]runtimeCacheEntry{\n\t\t\t\t\"entry\": {\n\t\t\t\t\truntime: runtime,\n\t\t\t\t\trunning: true,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"syscall returns unexpected error\",\n\t\t\tsyscallKill: func(int, syscall.Signal) error {\n\t\t\t\treturn fmt.Errorf(\"unexpected error\")\n\t\t\t},\n\t\t\tstopEventHandler: newUnitTestStopEventHandler(0, nil),\n\t\t\tfields: fields{\n\t\t\t\truntimes: map[string]runtimeCacheEntry{\n\t\t\t\t\t\"entry\": {\n\t\t\t\t\t\truntime: runtime,\n\t\t\t\t\t\trunning: true,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tctx: context.Background(),\n\t\t\t\tsnap: map[string]policy.RuntimeReader{\n\t\t\t\t\t\"entry\": runtime,\n\t\t\t\t},\n\t\t\t},\n\t\t\texpectedStopEvent: false,\n\t\t\texpectedRuntimes: map[string]runtimeCacheEntry{\n\t\t\t\t\"entry\": {\n\t\t\t\t\truntime: runtime,\n\t\t\t\t\trunning: 
true,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"process not running anymore\",\n\t\t\tsyscallKill: func(int, syscall.Signal) error {\n\t\t\t\treturn syscall.ESRCH\n\t\t\t},\n\t\t\tstopEventHandler: newUnitTestStopEventHandler(1, fmt.Errorf(\"more test coverage\")),\n\t\t\tfields: fields{\n\t\t\t\truntimes: map[string]runtimeCacheEntry{\n\t\t\t\t\t\"entry\": {\n\t\t\t\t\t\truntime: runtime,\n\t\t\t\t\t\trunning: true,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tctx: context.Background(),\n\t\t\t\tsnap: map[string]policy.RuntimeReader{\n\t\t\t\t\t\"entry\": runtime,\n\t\t\t\t},\n\t\t\t},\n\t\t\texpectedStopEvent: true,\n\t\t\texpectedRuntimes: map[string]runtimeCacheEntry{\n\t\t\t\t\"entry\": {\n\t\t\t\t\truntime: runtime,\n\t\t\t\t\trunning: false,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tsyscallKill = tt.syscallKill\n\t\t\tc := &runtimeCache{\n\t\t\t\truntimes:  tt.fields.runtimes,\n\t\t\t\tstopEvent: tt.stopEventHandler.f(),\n\t\t\t}\n\t\t\tctx, cancel := context.WithCancel(tt.args.ctx)\n\t\t\tdefer cancel()\n\t\t\tc.processRuntimes(ctx, tt.args.snap)\n\t\t\ttt.stopEventHandler.wait()\n\t\t\tif !reflect.DeepEqual(c.runtimes, tt.expectedRuntimes) {\n\t\t\t\tt.Errorf(\"c.runtimes = %v, want %v\", c.runtimes, tt.expectedRuntimes)\n\t\t\t}\n\t\t\tif tt.expectedStopEvent != tt.stopEventHandler.called() {\n\t\t\t\tt.Errorf(\"stopEventHandler.called() = %v, want %v\", tt.stopEventHandler.called(), tt.expectedStopEvent)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_runtimeCache_loop(t *testing.T) {\n\tstopEventHandler := newUnitTestStopEventHandler(1, nil)\n\n\t// override globals for unit tests\n\toldDefaultLoopWait := defaultLoopWait\n\toldSyscallKill := syscallKill\n\tdefer func() {\n\t\tdefaultLoopWait = oldDefaultLoopWait\n\t\tsyscallKill = oldSyscallKill\n\t}()\n\tdefaultLoopWait = time.Duration(1)\n\tsyscallKill = func(int, syscall.Signal) error {\n\t\treturn 
syscall.ESRCH\n\t}\n\n\ttests := []struct {\n\t\tname             string\n\t\tstopEventHandler unitTestStopEvent\n\t}{\n\t\t{\n\t\t\tname:             \"successful loop\",\n\t\t\tstopEventHandler: stopEventHandler,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// this starts the loop already\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tc := newRuntimeCache(ctx, tt.stopEventHandler.f())\n\n\t\t\t// TODO: to get that last inch of coverage :) not sure how else to get that\n\t\t\ttime.Sleep(time.Millisecond * 10)\n\n\t\t\t// add a runtime\n\t\t\truntime := policy.NewPURuntime(\"entry\", 42, \"\", nil, nil, common.ContainerPU, policy.None, nil)\n\t\t\tc.Set(\"entry\", runtime) // nolint: errcheck\n\t\t\tc.RLock()\n\t\t\tif c.runtimes[\"entry\"].running != true { // nolint\n\t\t\t\tt.Errorf(\"entry is not marked as running\")\n\t\t\t}\n\t\t\tc.RUnlock()\n\n\t\t\t// wait until the stop event was called\n\t\t\ttt.stopEventHandler.wait()\n\t\t\tcancel()\n\n\t\t\tc.RLock()\n\t\t\tif c.runtimes[\"entry\"].running != false { // nolint\n\t\t\t\tt.Errorf(\"entry is still marked as running\")\n\t\t\t}\n\t\t\tc.RUnlock()\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/k8s/runtime_cache_unsupported.go",
    "content": "// +build !linux,!windows\n\npackage k8smonitor\n\nimport (\n\t\"errors\"\n)\n\nfunc sandboxIsRunning(pid int) (bool, error) {\n\treturn false, errors.New(\"unsupported platform\")\n}\n"
  },
  {
    "path": "monitor/internal/k8s/runtime_cache_windows.go",
    "content": "// +build windows\n\npackage k8smonitor\n\nfunc sandboxIsRunning(pid int) (bool, error) {\n\t// TODO: implement\n\treturn true, nil\n}\n"
  },
  {
    "path": "monitor/internal/k8s/testdata/kubeconfig",
    "content": "apiVersion: v1\nclusters:\n- cluster:\n    server: https://localhost:9443\n  name: apiserver\ncontexts:\n- context:\n    cluster: apiserver\n    user: apiserver\n  name: apiserver\ncurrent-context: apiserver\nkind: Config\npreferences: {}\nusers:\n- name: apiserver\n  user: {}"
  },
  {
    "path": "monitor/internal/kubernetes/DEPRECATED.txt",
    "content": "This monitor has been deprecated in favour of the 'pod' monitor.\n"
  },
  {
    "path": "monitor/internal/kubernetes/cache.go",
    "content": "// +build !windows\n\npackage kubernetesmonitor\n\nimport (\n\t\"sync\"\n\n\t\"go.aporeto.io/trireme-lib/policy\"\n)\n\n// puidCacheEntry is a Kubernetes entry based on Docker as a key.\n// This entry keeps track of the DockerMonitor properties that cannot be queried later on (such as the runtime)\ntype puidCacheEntry struct {\n\t// podID is the reference to the Kubernetes pod that this container refers to\n\tkubeIdentifier string\n\n\t// The latest reference to the runtime as received from DockerMonitor\n\tdockerRuntime policy.RuntimeReader\n\n\t// The latest reference to the runtime as received from DockerMonitor\n\tkubernetesRuntime policy.RuntimeReader\n}\n\n// podCacheEntry is a Kubernetes entry based on a Pod as Key. The main goal here is to keep a mapping to all\n// existing Dockers PUIDs implementing this pod (as there might be multiple)\ntype podCacheEntry struct {\n\t// puIDs us a map containing a link to all the containers currently known to be part of that pod.\n\tpuIDs map[string]bool\n}\n\n// Cache is a cache implementation specific to KubernetesMonitor.\n// puidCache is centered on Docker and podCache is centered on Kubernetes\ntype cache struct {\n\t// popuidCache keeps a mapping between a PUID and the corresponding puidCacheEntry.\n\tpuidCache map[string]*puidCacheEntry\n\n\t// podCache keeps a mapping between a POD/Namespace name and the corresponding podCacheEntry.\n\tpodCache map[string]*podCacheEntry\n\n\t// Lock for the whole cache\n\tsync.RWMutex\n}\n\n// NewCache initialize a cache\nfunc newCache() *cache {\n\treturn &cache{\n\t\tpuidCache: map[string]*puidCacheEntry{},\n\t\tpodCache:  map[string]*podCacheEntry{},\n\t}\n}\n\nfunc kubePodIdentifier(podName string, podNamespace string) string {\n\treturn podNamespace + \"/\" + podName\n}\n\n// updatePUIDCache updates the cache with an entry coming from a container perspective\nfunc (c *cache) updatePUIDCache(podNamespace string, podName string, puID string, dockerRuntime 
policy.RuntimeReader, kubernetesRuntime policy.RuntimeReader) {\n\tif podNamespace == \"\" || podName == \"\" || puID == \"\" {\n\t\treturn\n\t}\n\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tkubeIdentifier := kubePodIdentifier(podName, podNamespace)\n\n\tpuidEntry, ok := c.puidCache[puID]\n\tif !ok {\n\t\tpuidEntry = &puidCacheEntry{}\n\t\tc.puidCache[puID] = puidEntry\n\t}\n\tpuidEntry.kubeIdentifier = kubeIdentifier\n\tpuidEntry.dockerRuntime = dockerRuntime\n\tpuidEntry.kubernetesRuntime = kubernetesRuntime\n\n\tpodEntry, ok := c.podCache[kubeIdentifier]\n\tif !ok {\n\t\tpodEntry = &podCacheEntry{}\n\t\tpodEntry.puIDs = map[string]bool{}\n\t\tc.podCache[kubeIdentifier] = podEntry\n\t}\n\tpodEntry.puIDs[puID] = true\n\n}\n\n// deletePUIDCache deletes puid corresponding entries from the cache.\nfunc (c *cache) deletePUIDCache(puID string) {\n\tc.Lock()\n\tdefer c.Unlock()\n\n\t// Remove from pod cache.\n\tpuidEntry, ok := c.puidCache[puID]\n\tif !ok {\n\t\treturn\n\t}\n\tkubeIdentifier := puidEntry.kubeIdentifier\n\n\tpodEntry, ok := c.podCache[kubeIdentifier]\n\tif !ok {\n\t\treturn\n\t}\n\n\tdelete(podEntry.puIDs, puID)\n\n\t// if no more containers in the pod, delete the podEntry.\n\tif len(podEntry.puIDs) == 0 {\n\t\tdelete(c.podCache, kubeIdentifier)\n\t}\n\n\t// delete entry in puidcache\n\tdelete(c.puidCache, puID)\n}\n\n// getOrCreatePodFromCache locks the cache in order to return the pod cache entry if found, or create it if not found\nfunc (c *cache) getPUIDsbyPod(podNamespace string, podName string) []string {\n\tc.RLock()\n\tdefer c.RUnlock()\n\n\tkubeIdentifier := kubePodIdentifier(podName, podNamespace)\n\tpodEntry, ok := c.podCache[kubeIdentifier]\n\tif !ok {\n\t\treturn []string{}\n\t}\n\n\treturn keysFromMap(podEntry.puIDs)\n}\n\n// getRuntimeByPUID locks the cache in order to return the pod cache entry if found, or create it if not found\nfunc (c *cache) getDockerRuntimeByPUID(puid string) policy.RuntimeReader {\n\tc.RLock()\n\tdefer 
c.RUnlock()\n\n\tpuidEntry, ok := c.puidCache[puid]\n\tif !ok {\n\t\treturn nil\n\t}\n\n\treturn puidEntry.dockerRuntime\n}\n\n// getKubernetesRuntimeByPUID locks the cache in order to return the Kubernetes runtime for the given PUID, or nil if not found.\nfunc (c *cache) getKubernetesRuntimeByPUID(puid string) policy.RuntimeReader {\n\tc.RLock()\n\tdefer c.RUnlock()\n\n\tpuidEntry, ok := c.puidCache[puid]\n\tif !ok {\n\t\treturn nil\n\t}\n\n\treturn puidEntry.kubernetesRuntime\n}\n\n// deletePodEntry locks the cache in order to delete the pod cache entry.\nfunc (c *cache) deletePodEntry(podNamespace string, podName string) {\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tkubeIdentifier := kubePodIdentifier(podName, podNamespace)\n\n\tdelete(c.podCache, kubeIdentifier)\n}\n\n// deletePUIDEntry locks the cache in order to delete the puid from the puid cache.\nfunc (c *cache) deletePUIDEntry(puid string) {\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tdelete(c.puidCache, puid)\n}\n\nfunc keysFromMap(m map[string]bool) []string {\n\tkeys := make([]string, len(m))\n\n\ti := 0\n\tfor k := range m {\n\t\tkeys[i] = k\n\t\ti++\n\t}\n\n\treturn keys\n}\n"
  },
  {
    "path": "monitor/internal/kubernetes/cache_test.go",
    "content": "// +build !windows\n\npackage kubernetesmonitor\n\nimport (\n\t\"reflect\"\n\t\"sync\"\n\t\"testing\"\n\n\t\"go.aporeto.io/trireme-lib/policy\"\n)\n\nfunc Test_cache_updatePUIDCache(t *testing.T) {\n\n\t// Pregenerating a couple fake runtimes\n\truntime1 := policy.NewPURuntimeWithDefaults()\n\truntime1.SetPid(1)\n\truntime2 := policy.NewPURuntimeWithDefaults()\n\truntime2.SetPid(2)\n\truntime3 := policy.NewPURuntimeWithDefaults()\n\truntime3.SetPid(3)\n\n\ttype fields struct {\n\t\tpuidCache map[string]*puidCacheEntry\n\t\tpodCache  map[string]*podCacheEntry\n\t\tRWMutex   sync.RWMutex\n\t}\n\ttype args struct {\n\t\tpodNamespace      string\n\t\tpodName           string\n\t\tpuID              string\n\t\tdockerRuntime     policy.RuntimeReader\n\t\tkubernetesRuntime policy.RuntimeReader\n\t}\n\ttests := []struct {\n\t\tname         string\n\t\tfields       *fields\n\t\tfieldsResult *fields\n\t\targs         *args\n\t}{\n\t\t{\n\t\t\tname: \"test empty all\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{},\n\t\t\t\tpodCache:  map[string]*podCacheEntry{},\n\t\t\t},\n\t\t\tfieldsResult: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{},\n\t\t\t\tpodCache:  map[string]*podCacheEntry{},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpodNamespace:      \"\",\n\t\t\t\tpodName:           \"\",\n\t\t\t\tpuID:              \"\",\n\t\t\t\tdockerRuntime:     policy.NewPURuntimeWithDefaults(),\n\t\t\t\tkubernetesRuntime: policy.NewPURuntimeWithDefaults(),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"test empty NS\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{},\n\t\t\t\tpodCache:  map[string]*podCacheEntry{},\n\t\t\t},\n\t\t\tfieldsResult: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{},\n\t\t\t\tpodCache:  map[string]*podCacheEntry{},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpodNamespace:      \"\",\n\t\t\t\tpodName:           \"xcvxcv\",\n\t\t\t\tpuID:              \"xcvxcv\",\n\t\t\t\tdockerRuntime:     
policy.NewPURuntimeWithDefaults(),\n\t\t\t\tkubernetesRuntime: policy.NewPURuntimeWithDefaults(),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"test empty Name\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{},\n\t\t\t\tpodCache:  map[string]*podCacheEntry{},\n\t\t\t},\n\t\t\tfieldsResult: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{},\n\t\t\t\tpodCache:  map[string]*podCacheEntry{},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpodNamespace:      \"xcvxcv\",\n\t\t\t\tpodName:           \"\",\n\t\t\t\tpuID:              \"xcvxcv\",\n\t\t\t\tdockerRuntime:     policy.NewPURuntimeWithDefaults(),\n\t\t\t\tkubernetesRuntime: policy.NewPURuntimeWithDefaults(),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"test empty PUID\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{},\n\t\t\t\tpodCache:  map[string]*podCacheEntry{},\n\t\t\t},\n\t\t\tfieldsResult: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{},\n\t\t\t\tpodCache:  map[string]*podCacheEntry{},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpodNamespace:      \"xcvxcv\",\n\t\t\t\tpodName:           \"xcvxcv\",\n\t\t\t\tpuID:              \"\",\n\t\t\t\tdockerRuntime:     policy.NewPURuntimeWithDefaults(),\n\t\t\t\tkubernetesRuntime: policy.NewPURuntimeWithDefaults(),\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"test normal behavior\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{},\n\t\t\t\tpodCache:  map[string]*podCacheEntry{},\n\t\t\t},\n\t\t\tfieldsResult: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\t\"123456\": {\n\t\t\t\t\t\tkubeIdentifier:    \"namespace/name\",\n\t\t\t\t\t\tdockerRuntime:     runtime1,\n\t\t\t\t\t\tkubernetesRuntime: runtime2,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\t\"namespace/name\": {\n\t\t\t\t\t\tpuIDs: map[string]bool{\n\t\t\t\t\t\t\t\"123456\": true,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpodNamespace:      
\"namespace\",\n\t\t\t\tpodName:           \"name\",\n\t\t\t\tpuID:              \"123456\",\n\t\t\t\tdockerRuntime:     runtime1,\n\t\t\t\tkubernetesRuntime: runtime2,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"test additive behavior\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\t\"123456\": {\n\t\t\t\t\t\tkubeIdentifier:    \"namespace/name\",\n\t\t\t\t\t\tdockerRuntime:     runtime1,\n\t\t\t\t\t\tkubernetesRuntime: runtime2,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\t\"namespace/name\": {\n\t\t\t\t\t\tpuIDs: map[string]bool{\n\t\t\t\t\t\t\t\"123456\": true,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tfieldsResult: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\t\"123456\": {\n\t\t\t\t\t\tkubeIdentifier:    \"namespace/name\",\n\t\t\t\t\t\tdockerRuntime:     runtime1,\n\t\t\t\t\t\tkubernetesRuntime: runtime2,\n\t\t\t\t\t},\n\t\t\t\t\t\"abcdef\": {\n\t\t\t\t\t\tkubeIdentifier:    \"namespace2/name2\",\n\t\t\t\t\t\tdockerRuntime:     runtime3,\n\t\t\t\t\t\tkubernetesRuntime: runtime2,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\t\"namespace/name\": {\n\t\t\t\t\t\tpuIDs: map[string]bool{\n\t\t\t\t\t\t\t\"123456\": true,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\t\"namespace2/name2\": {\n\t\t\t\t\t\tpuIDs: map[string]bool{\n\t\t\t\t\t\t\t\"abcdef\": true,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpodNamespace:      \"namespace2\",\n\t\t\t\tpodName:           \"name2\",\n\t\t\t\tpuID:              \"abcdef\",\n\t\t\t\tdockerRuntime:     runtime3,\n\t\t\t\tkubernetesRuntime: runtime2,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"test additive same pod\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\t\"123456\": {\n\t\t\t\t\t\tkubeIdentifier:    \"namespace/name\",\n\t\t\t\t\t\tdockerRuntime:     runtime1,\n\t\t\t\t\t\tkubernetesRuntime: 
runtime2,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\t\"namespace/name\": {\n\t\t\t\t\t\tpuIDs: map[string]bool{\n\t\t\t\t\t\t\t\"123456\": true,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tfieldsResult: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\t\"123456\": {\n\t\t\t\t\t\tkubeIdentifier:    \"namespace/name\",\n\t\t\t\t\t\tdockerRuntime:     runtime1,\n\t\t\t\t\t\tkubernetesRuntime: runtime2,\n\t\t\t\t\t},\n\t\t\t\t\t\"abcdef\": {\n\t\t\t\t\t\tkubeIdentifier:    \"namespace/name\",\n\t\t\t\t\t\tdockerRuntime:     runtime3,\n\t\t\t\t\t\tkubernetesRuntime: runtime2,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\t\"namespace/name\": {\n\t\t\t\t\t\tpuIDs: map[string]bool{\n\t\t\t\t\t\t\t\"123456\": true,\n\t\t\t\t\t\t\t\"abcdef\": true,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpodNamespace:      \"namespace\",\n\t\t\t\tpodName:           \"name\",\n\t\t\t\tpuID:              \"abcdef\",\n\t\t\t\tdockerRuntime:     runtime3,\n\t\t\t\tkubernetesRuntime: runtime2,\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tc := &cache{\n\t\t\t\tpuidCache: tt.fields.puidCache,\n\t\t\t\tpodCache:  tt.fields.podCache,\n\t\t\t}\n\t\t\tc.updatePUIDCache(tt.args.podNamespace, tt.args.podName, tt.args.puID, tt.args.dockerRuntime, tt.args.kubernetesRuntime)\n\t\t\tif !reflect.DeepEqual(c.puidCache, tt.fieldsResult.puidCache) {\n\t\t\t\tt.Errorf(\"updatePUIDCache() field. got %v, want %v\", c.puidCache, tt.fieldsResult.puidCache)\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(c.podCache, tt.fieldsResult.podCache) {\n\t\t\t\tt.Errorf(\"updatePUIDCache() field. 
got %v, want %v\", c.podCache, tt.fieldsResult.podCache)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_cache_getPUIDsbyPod(t *testing.T) {\n\tpuid1 := \"12350\"\n\tpod1 := \"test/test5\"\n\tpuidEntry1 := &puidCacheEntry{\n\t\tkubeIdentifier: pod1,\n\t}\n\tpodEntry1 := &podCacheEntry{\n\t\tpuIDs: map[string]bool{\n\t\t\tpuid1: true,\n\t\t},\n\t}\n\n\ttype fields struct {\n\t\tpuidCache map[string]*puidCacheEntry\n\t\tpodCache  map[string]*podCacheEntry\n\t\tRWMutex   sync.RWMutex\n\t}\n\ttype args struct {\n\t\tpodNamespace string\n\t\tpodName      string\n\t}\n\ttests := []struct {\n\t\tname   string\n\t\tfields *fields\n\t\targs   *args\n\t\twant   []string\n\t}{\n\t\t{\n\t\t\tname: \"simple get\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\tpuid1: puidEntry1,\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\tpod1: podEntry1,\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpodName:      \"test5\",\n\t\t\t\tpodNamespace: \"test\",\n\t\t\t},\n\t\t\twant: []string{puid1},\n\t\t},\n\t\t{\n\t\t\tname: \"non existing\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\tpuid1: puidEntry1,\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\tpod1: podEntry1,\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpodName:      \"test1\",\n\t\t\t\tpodNamespace: \"test\",\n\t\t\t},\n\t\t\twant: []string{},\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tc := &cache{\n\t\t\t\tpuidCache: tt.fields.puidCache,\n\t\t\t\tpodCache:  tt.fields.podCache,\n\t\t\t}\n\t\t\tif got := c.getPUIDsbyPod(tt.args.podNamespace, tt.args.podName); !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"cache.getPUIDsbyPod() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_cache_getDockerRuntimeByPUID(t *testing.T) {\n\n\tpuid1 := \"12346\"\n\tpod1 := \"test/test1\"\n\tcontainerRuntime := 
policy.NewPURuntimeWithDefaults()\n\tcontainerRuntime.SetPid(123)\n\tpuidEntry1 := &puidCacheEntry{\n\t\tkubeIdentifier: pod1,\n\t\tdockerRuntime:  containerRuntime,\n\t}\n\tpodEntry1 := &podCacheEntry{\n\t\tpuIDs: map[string]bool{\n\t\t\tpuid1: true,\n\t\t},\n\t}\n\n\ttype fields struct {\n\t\tpuidCache map[string]*puidCacheEntry\n\t\tpodCache  map[string]*podCacheEntry\n\t\tRWMutex   sync.RWMutex\n\t}\n\ttype args struct {\n\t\tpuid string\n\t}\n\ttests := []struct {\n\t\tname   string\n\t\tfields *fields\n\t\targs   *args\n\t\twant   policy.RuntimeReader\n\t}{\n\t\t{\n\t\t\tname: \"simple get\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\tpuid1: puidEntry1,\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\tpod1: podEntry1,\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpuid: puid1,\n\t\t\t},\n\t\t\twant: containerRuntime,\n\t\t},\n\t\t{\n\t\t\tname: \"empty get\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\tpuid1: puidEntry1,\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\tpod1: podEntry1,\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpuid: \"123123\",\n\t\t\t},\n\t\t\twant: nil,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tc := &cache{\n\t\t\t\tpuidCache: tt.fields.puidCache,\n\t\t\t\tpodCache:  tt.fields.podCache,\n\t\t\t}\n\t\t\tif got := c.getDockerRuntimeByPUID(tt.args.puid); !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"cache.getDockerRuntimeByPUID() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_cache_getKubernetesRuntimeByPUID(t *testing.T) {\n\n\tpuid1 := \"12347\"\n\tpod1 := \"test/test2\"\n\tcontainerRuntime := policy.NewPURuntimeWithDefaults()\n\tcontainerRuntime.SetPid(123)\n\tpuidEntry1 := &puidCacheEntry{\n\t\tkubeIdentifier:    pod1,\n\t\tkubernetesRuntime: containerRuntime,\n\t}\n\tpodEntry1 := &podCacheEntry{\n\t\tpuIDs: 
map[string]bool{\n\t\t\tpuid1: true,\n\t\t},\n\t}\n\n\ttype fields struct {\n\t\tpuidCache map[string]*puidCacheEntry\n\t\tpodCache  map[string]*podCacheEntry\n\t\tRWMutex   sync.RWMutex\n\t}\n\ttype args struct {\n\t\tpuid string\n\t}\n\ttests := []struct {\n\t\tname   string\n\t\tfields *fields\n\t\targs   *args\n\t\twant   policy.RuntimeReader\n\t}{\n\t\t{\n\t\t\tname: \"simple get\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\tpuid1: puidEntry1,\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\tpod1: podEntry1,\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpuid: puid1,\n\t\t\t},\n\t\t\twant: containerRuntime,\n\t\t},\n\t\t{\n\t\t\tname: \"empty get\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\tpuid1: puidEntry1,\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\tpod1: podEntry1,\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpuid: \"123123\",\n\t\t\t},\n\t\t\twant: nil,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tc := &cache{\n\t\t\t\tpuidCache: tt.fields.puidCache,\n\t\t\t\tpodCache:  tt.fields.podCache,\n\t\t\t}\n\t\t\tif got := c.getKubernetesRuntimeByPUID(tt.args.puid); !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"cache.getKubernetesRuntimeByPUID() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_cache_deletePodEntry(t *testing.T) {\n\n\tpuid1 := \"12348\"\n\tpod1 := \"test/test3\"\n\tcontainerRuntime := policy.NewPURuntimeWithDefaults()\n\tcontainerRuntime.SetPid(123)\n\tpuidEntry1 := &puidCacheEntry{\n\t\tkubeIdentifier:    pod1,\n\t\tkubernetesRuntime: containerRuntime,\n\t}\n\tpodEntry1 := &podCacheEntry{\n\t\tpuIDs: map[string]bool{\n\t\t\tpuid1: true,\n\t\t},\n\t}\n\n\ttype fields struct {\n\t\tpuidCache map[string]*puidCacheEntry\n\t\tpodCache  map[string]*podCacheEntry\n\t\tRWMutex   sync.RWMutex\n\t}\n\ttype args struct {\n\t\tpodNamespace 
string\n\t\tpodName      string\n\t}\n\ttests := []struct {\n\t\tname   string\n\t\tfields *fields\n\t\targs   *args\n\t\twant1  map[string]*puidCacheEntry\n\t\twant2  map[string]*podCacheEntry\n\t}{\n\t\t{\n\t\t\tname: \"simple delete\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\tpuid1: puidEntry1,\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\tpod1: podEntry1,\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpodName:      \"test3\",\n\t\t\t\tpodNamespace: \"test\",\n\t\t\t},\n\t\t\twant1: map[string]*puidCacheEntry{\n\t\t\t\tpuid1: puidEntry1,\n\t\t\t},\n\t\t\twant2: map[string]*podCacheEntry{},\n\t\t},\n\t\t{\n\t\t\tname: \"non existing delete\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\tpuid1: puidEntry1,\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\tpod1: podEntry1,\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpodName:      \"test2\",\n\t\t\t\tpodNamespace: \"test\",\n\t\t\t},\n\t\t\twant1: map[string]*puidCacheEntry{\n\t\t\t\tpuid1: puidEntry1,\n\t\t\t},\n\t\t\twant2: map[string]*podCacheEntry{\n\t\t\t\tpod1: podEntry1,\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tc := &cache{\n\t\t\t\tpuidCache: tt.fields.puidCache,\n\t\t\t\tpodCache:  tt.fields.podCache,\n\t\t\t}\n\t\t\tc.deletePodEntry(tt.args.podNamespace, tt.args.podName)\n\n\t\t\tif got := tt.fields.puidCache; !reflect.DeepEqual(got, tt.want1) {\n\t\t\t\tt.Errorf(\"after cache.deleteByPod = %v, want %v\", got, tt.want1)\n\t\t\t}\n\t\t\tif got := tt.fields.podCache; !reflect.DeepEqual(got, tt.want2) {\n\t\t\t\tt.Errorf(\"after cache.deleteByPod = %v, want %v\", got, tt.want2)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_cache_deletePUIDEntry(t *testing.T) {\n\n\tpuid1 := \"12349\"\n\tpod1 := \"test/test4\"\n\tcontainerRuntime := policy.NewPURuntimeWithDefaults()\n\tcontainerRuntime.SetPid(123)\n\tpuidEntry1 := 
&puidCacheEntry{\n\t\tkubeIdentifier:    pod1,\n\t\tkubernetesRuntime: containerRuntime,\n\t}\n\tpodEntry1 := &podCacheEntry{\n\t\tpuIDs: map[string]bool{\n\t\t\tpuid1: true,\n\t\t},\n\t}\n\n\ttype fields struct {\n\t\tpuidCache map[string]*puidCacheEntry\n\t\tpodCache  map[string]*podCacheEntry\n\t\tRWMutex   sync.RWMutex\n\t}\n\ttype args struct {\n\t\tpuid string\n\t}\n\ttests := []struct {\n\t\tname   string\n\t\tfields *fields\n\t\targs   *args\n\t\twant1  map[string]*puidCacheEntry\n\t\twant2  map[string]*podCacheEntry\n\t}{\n\t\t{\n\t\t\tname: \"simple delete\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\tpuid1: puidEntry1,\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\tpod1: podEntry1,\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpuid: puid1,\n\t\t\t},\n\t\t\twant1: map[string]*puidCacheEntry{},\n\t\t\twant2: map[string]*podCacheEntry{\n\t\t\t\tpod1: podEntry1,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"non existing delete\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\tpuid1: puidEntry1,\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\tpod1: podEntry1,\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpuid: \"123123\",\n\t\t\t},\n\t\t\twant1: map[string]*puidCacheEntry{\n\t\t\t\tpuid1: puidEntry1,\n\t\t\t},\n\t\t\twant2: map[string]*podCacheEntry{\n\t\t\t\tpod1: podEntry1,\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tc := &cache{\n\t\t\t\tpuidCache: tt.fields.puidCache,\n\t\t\t\tpodCache:  tt.fields.podCache,\n\t\t\t}\n\t\t\tc.deletePUIDEntry(tt.args.puid)\n\n\t\t\tif got := tt.fields.puidCache; !reflect.DeepEqual(got, tt.want1) {\n\t\t\t\tt.Errorf(\"after cache.deleteByPod = %v, want %v\", got, tt.want1)\n\t\t\t}\n\t\t\tif got := tt.fields.podCache; !reflect.DeepEqual(got, tt.want2) {\n\t\t\t\tt.Errorf(\"after cache.deleteByPod = %v, want %v\", got, 
tt.want2)\n\t\t\t}\n\n\t\t})\n\t}\n}\n\nfunc Test_cache_deletePUIDCache(t *testing.T) {\n\n\tpuid1 := \"12349\"\n\tpod1 := \"test/test4\"\n\tcontainerRuntime := policy.NewPURuntimeWithDefaults()\n\tcontainerRuntime.SetPid(123)\n\tpuidEntry1 := &puidCacheEntry{\n\t\tkubeIdentifier:    pod1,\n\t\tkubernetesRuntime: containerRuntime,\n\t}\n\tpodEntry1 := &podCacheEntry{\n\t\tpuIDs: map[string]bool{\n\t\t\tpuid1: true,\n\t\t},\n\t}\n\n\ttype fields struct {\n\t\tpuidCache map[string]*puidCacheEntry\n\t\tpodCache  map[string]*podCacheEntry\n\t\tRWMutex   sync.RWMutex\n\t}\n\ttype args struct {\n\t\tpuID string\n\t}\n\ttests := []struct {\n\t\tname   string\n\t\tfields *fields\n\t\targs   *args\n\t\twant1  map[string]*puidCacheEntry\n\t\twant2  map[string]*podCacheEntry\n\t}{\n\t\t{\n\t\t\tname:   \"Delete on empty cache\",\n\t\t\tfields: &fields{},\n\t\t\targs: &args{\n\t\t\t\tpuID: \"1234\",\n\t\t\t},\n\t\t\twant1: nil,\n\t\t\twant2: nil,\n\t\t},\n\t\t{\n\t\t\tname: \"Deleting a non existent puid\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\tpuid1: puidEntry1,\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\tpod1: podEntry1,\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpuID: \"1234\",\n\t\t\t},\n\t\t\twant1: map[string]*puidCacheEntry{\n\t\t\t\tpuid1: puidEntry1,\n\t\t\t},\n\t\t\twant2: map[string]*podCacheEntry{\n\t\t\t\tpod1: podEntry1,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"Normal case\",\n\t\t\tfields: &fields{\n\t\t\t\tpuidCache: map[string]*puidCacheEntry{\n\t\t\t\t\tpuid1: puidEntry1,\n\t\t\t\t},\n\t\t\t\tpodCache: map[string]*podCacheEntry{\n\t\t\t\t\tpod1: podEntry1,\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: &args{\n\t\t\t\tpuID: \"12349\",\n\t\t\t},\n\t\t\twant1: map[string]*puidCacheEntry{},\n\t\t\twant2: map[string]*podCacheEntry{},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tc := &cache{\n\t\t\t\tpuidCache: tt.fields.puidCache,\n\t\t\t\tpodCache:  
tt.fields.podCache,\n\t\t\t}\n\t\t\tc.deletePUIDCache(tt.args.puID)\n\t\t\tif got := tt.fields.puidCache; !reflect.DeepEqual(got, tt.want1) {\n\t\t\t\tt.Errorf(\"after cache.deleteByPod = %v, want %v\", got, tt.want1)\n\t\t\t}\n\t\t\tif got := tt.fields.podCache; !reflect.DeepEqual(got, tt.want2) {\n\t\t\t\tt.Errorf(\"after cache.deleteByPod = %v, want %v\", got, tt.want2)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/kubernetes/client.go",
    "content": "// +build !windows\n\npackage kubernetesmonitor\n\nimport (\n\t\"fmt\"\n\n\t\"go.uber.org/zap\"\n\tapi \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/fields\"\n\t\"k8s.io/apimachinery/pkg/runtime\"\n\t\"k8s.io/client-go/kubernetes\"\n\tkubecache \"k8s.io/client-go/tools/cache\"\n\t\"k8s.io/client-go/tools/clientcmd\"\n)\n\n// NewKubeClient Generate and initialize a Kubernetes client based on the parameter kubeconfig\nfunc NewKubeClient(kubeconfig string) (*kubernetes.Clientset, error) {\n\tconfig, err := clientcmd.BuildConfigFromFlags(\"\", kubeconfig)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"Error Building config from Kubeconfig: %v\", err)\n\t}\n\n\treturn kubernetes.NewForConfig(config)\n}\n\n// CreateResourceController creates a controller for a specific ressource and namespace.\n// The parameter function will be called on Add/Delete/Update events\nfunc CreateResourceController(client kubecache.Getter, resource string, namespace string, apiStruct runtime.Object, selector fields.Selector,\n\taddFunc func(addedApiStruct interface{}), deleteFunc func(deletedApiStruct interface{}), updateFunc func(oldApiStruct, updatedApiStruct interface{})) (kubecache.Store, kubecache.Controller) {\n\n\thandlers := kubecache.ResourceEventHandlerFuncs{\n\t\tAddFunc:    addFunc,\n\t\tDeleteFunc: deleteFunc,\n\t\tUpdateFunc: updateFunc,\n\t}\n\n\tlistWatch := kubecache.NewListWatchFromClient(client, resource, namespace, selector)\n\tstore, controller := kubecache.NewInformer(listWatch, apiStruct, 0, handlers)\n\treturn store, controller\n}\n\n// CreateLocalPodController creates a controller specifically for Pods.\nfunc (m *KubernetesMonitor) CreateLocalPodController(namespace string,\n\taddFunc func(addedApiStruct *api.Pod) error, deleteFunc func(deletedApiStruct *api.Pod) error, updateFunc func(oldApiStruct, updatedApiStruct *api.Pod) error) (kubecache.Store, kubecache.Controller) {\n\n\treturn 
CreateResourceController(m.kubeClient.CoreV1().RESTClient(), \"pods\", namespace, &api.Pod{}, m.localNodeSelector(),\n\t\tfunc(addedApiStruct interface{}) {\n\t\t\tif err := addFunc(addedApiStruct.(*api.Pod)); err != nil {\n\t\t\t\tzap.L().Error(\"Error while handling Add Pod\", zap.Error(err))\n\t\t\t}\n\t\t},\n\t\tfunc(deletedApiStruct interface{}) {\n\t\t\tif err := deleteFunc(deletedApiStruct.(*api.Pod)); err != nil {\n\t\t\t\tzap.L().Error(\"Error while handling Delete Pod\", zap.Error(err))\n\t\t\t}\n\t\t},\n\t\tfunc(oldApiStruct, updatedApiStruct interface{}) {\n\t\t\tif err := updateFunc(oldApiStruct.(*api.Pod), updatedApiStruct.(*api.Pod)); err != nil {\n\t\t\t\tzap.L().Error(\"Error while handling Update Pod\", zap.Error(err))\n\t\t\t}\n\t\t})\n}\n\nfunc (m *KubernetesMonitor) localNodeSelector() fields.Selector {\n\treturn fields.Set(map[string]string{\n\t\t\"spec.nodeName\": m.localNode,\n\t}).AsSelector()\n}\n\n// Pod returns the full pod object.\nfunc (m *KubernetesMonitor) Pod(podName string, namespace string) (*api.Pod, error) {\n\ttargetPod, err := m.kubeClient.CoreV1().Pods(namespace).Get(podName, metav1.GetOptions{})\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error getting Kubernetes labels & IP for pod %v : %v \", podName, err)\n\t}\n\treturn targetPod, nil\n}\n"
  },
  {
    "path": "monitor/internal/kubernetes/client_test.go",
    "content": "// +build !windows\n\npackage kubernetesmonitor\n\nimport (\n\t\"reflect\"\n\t\"testing\"\n\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\tdockermonitor \"go.aporeto.io/trireme-lib/monitor/internal/docker\"\n\tapi \"k8s.io/api/core/v1\"\n\tkubefields \"k8s.io/apimachinery/pkg/fields\"\n\t\"k8s.io/client-go/kubernetes\"\n\tkubefake \"k8s.io/client-go/kubernetes/fake\"\n\tkubecache \"k8s.io/client-go/tools/cache\"\n)\n\nfunc TestNewKubeClient(t *testing.T) {\n\ttype args struct {\n\t\tkubeconfig string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twant    *kubernetes.Clientset\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"test1\",\n\t\t\targs: args{\n\t\t\t\tkubeconfig: \"/tmp/abcd\",\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot, err := NewKubeClient(tt.args.kubeconfig)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"NewKubeClient() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"NewKubeClient() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestKubernetesMonitor_Pod(t *testing.T) {\n\n\tpod1 := &api.Pod{}\n\tpod1.SetName(\"pod1\")\n\tpod1.SetNamespace(\"beer\")\n\n\ttype fields struct {\n\t\tdockerMonitor       *dockermonitor.DockerMonitor\n\t\tkubeClient          kubernetes.Interface\n\t\tlocalNode           string\n\t\thandlers            *config.ProcessorConfig\n\t\tcache               *cache\n\t\tkubernetesExtractor extractors.KubernetesMetadataExtractorType\n\t\tpodStore            kubecache.Store\n\t\tpodController       kubecache.Controller\n\t\tpodControllerStop   chan struct{}\n\t\tenableHostPods      bool\n\t}\n\ttype args struct {\n\t\tpodName   string\n\t\tnamespace string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tfields  
fields\n\t\targs    args\n\t\twant    *api.Pod\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"Query existing pod\",\n\t\t\tfields: fields{\n\t\t\t\tkubeClient: kubefake.NewSimpleClientset(pod1),\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tpodName:   \"pod1\",\n\t\t\t\tnamespace: \"beer\",\n\t\t\t},\n\t\t\twant:    pod1,\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Query non existing pod\",\n\t\t\tfields: fields{\n\t\t\t\tkubeClient: kubefake.NewSimpleClientset(pod1),\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tpodName:   \"pod2\",\n\t\t\t\tnamespace: \"beer\",\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tm := &KubernetesMonitor{\n\t\t\t\tdockerMonitor:       tt.fields.dockerMonitor,\n\t\t\t\tkubeClient:          tt.fields.kubeClient,\n\t\t\t\tlocalNode:           tt.fields.localNode,\n\t\t\t\thandlers:            tt.fields.handlers,\n\t\t\t\tcache:               tt.fields.cache,\n\t\t\t\tkubernetesExtractor: tt.fields.kubernetesExtractor,\n\t\t\t\tpodStore:            tt.fields.podStore,\n\t\t\t\tpodController:       tt.fields.podController,\n\t\t\t\tpodControllerStop:   tt.fields.podControllerStop,\n\t\t\t\tenableHostPods:      tt.fields.enableHostPods,\n\t\t\t}\n\t\t\tgot, err := m.Pod(tt.args.podName, tt.args.namespace)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"KubernetesMonitor.Pod() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"KubernetesMonitor.Pod() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestKubernetesMonitor_localNodeSelector(t *testing.T) {\n\ttype fields struct {\n\t\tdockerMonitor       *dockermonitor.DockerMonitor\n\t\tkubeClient          kubernetes.Interface\n\t\tlocalNode           string\n\t\thandlers            *config.ProcessorConfig\n\t\tcache               *cache\n\t\tkubernetesExtractor 
extractors.KubernetesMetadataExtractorType\n\t\tpodStore            kubecache.Store\n\t\tpodController       kubecache.Controller\n\t\tpodControllerStop   chan struct{}\n\t\tenableHostPods      bool\n\t}\n\ttests := []struct {\n\t\tname   string\n\t\tfields fields\n\t\twant   kubefields.Selector\n\t}{\n\t\t{\n\t\t\tname: \"Normal string\",\n\t\t\tfields: fields{\n\t\t\t\tlocalNode: \"abc\",\n\t\t\t},\n\t\t\twant: kubefields.Set(map[string]string{\n\t\t\t\t\"spec.nodeName\": \"abc\",\n\t\t\t}).AsSelector(),\n\t\t},\n\t\t{\n\t\t\tname: \"Empty string\",\n\t\t\tfields: fields{\n\t\t\t\tlocalNode: \"\",\n\t\t\t},\n\t\t\twant: kubefields.Set(map[string]string{\n\t\t\t\t\"spec.nodeName\": \"\",\n\t\t\t}).AsSelector(),\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tm := &KubernetesMonitor{\n\t\t\t\tdockerMonitor:       tt.fields.dockerMonitor,\n\t\t\t\tkubeClient:          tt.fields.kubeClient,\n\t\t\t\tlocalNode:           tt.fields.localNode,\n\t\t\t\thandlers:            tt.fields.handlers,\n\t\t\t\tcache:               tt.fields.cache,\n\t\t\t\tkubernetesExtractor: tt.fields.kubernetesExtractor,\n\t\t\t\tpodStore:            tt.fields.podStore,\n\t\t\t\tpodController:       tt.fields.podController,\n\t\t\t\tpodControllerStop:   tt.fields.podControllerStop,\n\t\t\t\tenableHostPods:      tt.fields.enableHostPods,\n\t\t\t}\n\t\t\tif got := m.localNodeSelector(); !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"KubernetesMonitor.localNodeSelector() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/kubernetes/config.go",
    "content": "package kubernetesmonitor\n\nimport (\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\tdockerMonitor \"go.aporeto.io/trireme-lib/monitor/internal/docker\"\n)\n\n// Config is the config for the Kubernetes monitor\ntype Config struct { // nolint\n\tDockerConfig dockerMonitor.Config\n\n\tKubeconfig     string\n\tNodename       string\n\tEnableHostPods bool\n\n\tKubernetesExtractor extractors.KubernetesMetadataExtractorType\n\tDockerExtractor     extractors.DockerMetadataExtractor\n}\n\n// DefaultConfig provides a default configuration\nfunc DefaultConfig() *Config {\n\treturn &Config{\n\t\tKubernetesExtractor: extractors.DefaultKubernetesMetadataExtractor,\n\t\tDockerExtractor:     extractors.DefaultMetadataExtractor,\n\t\tEnableHostPods:      false,\n\t\tKubeconfig:          \"\",\n\t\tNodename:            \"\",\n\t}\n}\n\n// SetupDefaultConfig adds defaults to a partial configuration\nfunc SetupDefaultConfig(kubernetesConfig *Config) *Config {\n\treturn kubernetesConfig\n}\n"
  },
  {
    "path": "monitor/internal/kubernetes/handler.go",
    "content": "// +build !windows\n\npackage kubernetesmonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/monitor/constants\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n\tapi \"k8s.io/api/core/v1\"\n)\n\n// General logic for handling logic fron the DockerMonitor ss the following:\n// The only interesting event is the Start and Die event. All the other events are ignored\n\n// Those events are then put together with the Pod events received from the Kubernetes API.\n// Once both are received and are consistent, the Pod get activated.\n\n// HandlePUEvent is called by all monitors when a PU event is generated. The implementer\n// is responsible to update all components by explicitly adding a new PU.\n// Specifically for Kubernetes, The monitor handles the downstream events from Docker.\nfunc (m *KubernetesMonitor) HandlePUEvent(ctx context.Context, puID string, event common.Event, dockerRuntime policy.RuntimeReader) error {\n\tzap.L().Debug(\"dockermonitor event\", zap.String(\"puID\", puID), zap.String(\"eventType\", string(event)))\n\n\tvar kubernetesRuntime policy.RuntimeReader\n\n\t// If the event coming from DockerMonitor is start or create, we will get a meaningful PURuntime from\n\t// DockerMonitor. 
We can use it and combine it with the pod information from the Kubernetes API.\n\tif event == common.EventStart || event == common.EventCreate {\n\n\t\t// We check first if this is a Kubernetes-managed container\n\t\tpodNamespace, podName, err := getKubernetesInformation(dockerRuntime)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t// We get the information for that specific POD from the Kubernetes API\n\t\tpod, err := m.getPod(podNamespace, podName)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// The KubernetesMetadataExtractor combines the information coming from Docker (runtime)\n\t\t// and from Kube (pod) in order to create a KubernetesRuntime.\n\t\t// The managedContainer parameter defines whether this container should be ignored.\n\t\tvar managedContainer bool\n\t\tkubernetesRuntime, managedContainer, err = m.kubernetesExtractor(dockerRuntime, pod)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"error while processing Kubernetes pod %s/%s for container %s: %s\", podNamespace, podName, puID, err)\n\t\t}\n\n\t\t// Unmanaged containers are simply ignored. No policy is associated.\n\t\tif !managedContainer {\n\t\t\t// For an unmanaged container, check if it uses host networking and add the process to the cgroup.\n\t\t\tzap.L().Debug(\"unmanaged Kubernetes container on create or start\", zap.String(\"puID\", puID), zap.String(\"podNamespace\", podNamespace), zap.String(\"podName\", podName))\n\t\t\treturn m.decorateRuntime(puID, dockerRuntime, event, podName, podNamespace)\n\t\t}\n\n\t\t// We keep the cache up to date for future queries\n\t\tm.cache.updatePUIDCache(podNamespace, podName, puID, dockerRuntime, kubernetesRuntime)\n\t} else {\n\n\t\t// We check if this PUID was previously managed. 
We only send the event upstream to the resolver if it was managed on create or start.\n\t\tkubernetesRuntime = m.cache.getKubernetesRuntimeByPUID(puID)\n\t\tif kubernetesRuntime == nil {\n\t\t\tzap.L().Debug(\"unmanaged Kubernetes container\", zap.String(\"puID\", puID), zap.String(\"event\", string(event)))\n\t\t\treturn nil\n\t\t}\n\t}\n\n\tif event == common.EventDestroy {\n\t\t// Time to kill the cache entry\n\t\tm.cache.deletePUIDCache(puID)\n\t}\n\n\t// The event is then sent to the upstream policyResolver\n\tif err := m.handlers.Policy.HandlePUEvent(ctx, puID, event, kubernetesRuntime); err != nil {\n\t\tzap.L().Error(\"Unable to resolve policy for puid\", zap.String(\"puID\", puID), zap.Error(err))\n\t\treturn fmt.Errorf(\"unable to resolve policy for puid:%s\", puID)\n\t}\n\n\tif dockerRuntime.PUType() == common.LinuxProcessPU {\n\t\treturn m.decorateRuntime(puID, dockerRuntime, event, \"\", \"\")\n\t}\n\n\treturn nil\n}\n\n// RefreshPUs is used to resend an update event to the upstream Policy Resolver in case an update is needed.\nfunc (m *KubernetesMonitor) RefreshPUs(ctx context.Context, pod *api.Pod) error {\n\tif pod == nil {\n\t\treturn fmt.Errorf(\"pod is nil\")\n\t}\n\n\tpodNamespace := pod.GetNamespace()\n\tpodName := pod.GetName()\n\n\tpuIDs := m.cache.getPUIDsbyPod(podNamespace, podName)\n\n\tfor _, puid := range puIDs {\n\t\tdockerRuntime := m.cache.getDockerRuntimeByPUID(puid)\n\t\tif dockerRuntime == nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tkubernetesRuntime, managedContainer, err := m.kubernetesExtractor(dockerRuntime, pod)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"error while processing Kubernetes pod %s/%s for container %s: %s\", podNamespace, podName, puid, err)\n\t\t}\n\n\t\t// Unmanaged containers are simply ignored. 
It should not get this far for an unmanaged container anyway.\n\t\tif !managedContainer {\n\t\t\tzap.L().Debug(\"unmanaged Kubernetes container\", zap.String(\"puID\", puid), zap.String(\"podNamespace\", podNamespace), zap.String(\"podName\", podName))\n\t\t\tcontinue\n\t\t}\n\n\t\t// We keep the cache up to date for future queries\n\t\tm.cache.updatePUIDCache(podNamespace, podName, puid, dockerRuntime, kubernetesRuntime)\n\n\t\tif err := m.handlers.Policy.HandlePUEvent(ctx, puid, common.EventUpdate, kubernetesRuntime); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// getKubernetesInformation returns the namespace and name from a standard Docker runtime, if the Docker container is associated with Kubernetes at all\nfunc getKubernetesInformation(runtime policy.RuntimeReader) (string, string, error) {\n\tpodNamespace, ok := runtime.Tag(KubernetesPodNamespaceIdentifier)\n\tif !ok {\n\t\treturn \"\", \"\", fmt.Errorf(\"error getting Kubernetes Pod namespace\")\n\t}\n\tpodName, ok := runtime.Tag(KubernetesPodNameIdentifier)\n\tif !ok {\n\t\treturn \"\", \"\", fmt.Errorf(\"error getting Kubernetes Pod name\")\n\t}\n\n\treturn podNamespace, podName, nil\n}\n\n// decorateRuntime decorates the docker runtime with the puid of the pause container.\nfunc (m *KubernetesMonitor) decorateRuntime(puID string, runtimeInfo policy.RuntimeReader, event common.Event,\n\tpodName, podNamespace string) (err error) {\n\n\t// Do nothing on events other than the start event.\n\tif event != common.EventStart {\n\t\treturn nil\n\t}\n\n\tpuRuntime, ok := runtimeInfo.(*policy.PURuntime)\n\tif !ok {\n\t\tzap.L().Error(\"Found invalid runtime for puid\", zap.String(\"puid\", puID))\n\t\treturn fmt.Errorf(\"invalid runtime for puid:%s\", puID)\n\t}\n\n\textensions := policy.ExtendedMap{}\n\n\t// pause container with host net set to true.\n\tif runtimeInfo.PUType() == common.LinuxProcessPU {\n\t\textensions[constants.DockerHostMode] = 
\"true\"\n\t\textensions[constants.DockerHostPUID] = puID\n\t\toptions := puRuntime.Options()\n\t\toptions.PolicyExtensions = extensions\n\t\toptions.AutoPort = true\n\n\t\t// set Options on docker runtime.\n\t\tpuRuntime.SetOptions(options)\n\t\treturn nil\n\t}\n\n\tpausePUID := \"\"\n\tpuIDs := m.cache.getPUIDsbyPod(podNamespace, podName)\n\t// get the puid of the pause container.\n\tfor _, id := range puIDs {\n\t\trtm := m.cache.getDockerRuntimeByPUID(id)\n\t\tif rtm == nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tif isPodInfraContainer(rtm) && rtm.PUType() == common.LinuxProcessPU {\n\t\t\tpausePUID = id\n\t\t\tbreak\n\t\t}\n\n\t\t// if the pause container is not host net container, nothing to do.\n\t\tif isPodInfraContainer(rtm) {\n\t\t\treturn nil\n\t\t}\n\t}\n\n\textensions[constants.DockerHostPUID] = pausePUID\n\toptions := puRuntime.Options()\n\toptions.PolicyExtensions = extensions\n\t// set Options on docker runtime.\n\tpuRuntime.SetOptions(options)\n\n\treturn nil\n}\n\n// isPodInfraContainer returns true if the runtime represents the infra container for the POD\nfunc isPodInfraContainer(runtime policy.RuntimeReader) bool {\n\t// The Infra container can be found by checking env. variable.\n\ttagContent, ok := runtime.Tag(KubernetesContainerNameIdentifier)\n\n\treturn ok && tagContent == KubernetesInfraContainerName\n}\n"
  },
  {
    "path": "monitor/internal/kubernetes/handler_test.go",
    "content": "// +build !windows\n\npackage kubernetesmonitor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"testing\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\tdockermonitor \"go.aporeto.io/trireme-lib/monitor/internal/docker\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\tapi \"k8s.io/api/core/v1\"\n\t\"k8s.io/client-go/kubernetes\"\n\tkubefake \"k8s.io/client-go/kubernetes/fake\"\n\tkubecache \"k8s.io/client-go/tools/cache\"\n)\n\nfunc Test_getKubernetesInformation(t *testing.T) {\n\n\tpuRuntimeWithTags := func(tags map[string]string) *policy.PURuntime {\n\t\tpuRuntime := policy.NewPURuntimeWithDefaults()\n\t\tpuRuntime.SetTags(policy.NewTagStoreFromMap(tags))\n\t\treturn puRuntime\n\t}\n\n\ttype args struct {\n\t\truntime policy.RuntimeReader\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twant    string\n\t\twant1   string\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname:    \"no Kubernetes Information\",\n\t\t\targs:    args{runtime: policy.NewPURuntimeWithDefaults()},\n\t\t\twant:    \"\",\n\t\t\twant1:   \"\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"both present\",\n\t\t\targs: args{runtime: puRuntimeWithTags(map[string]string{\n\t\t\t\tKubernetesPodNamespaceIdentifier: \"a\",\n\t\t\t\tKubernetesPodNameIdentifier:      \"b\",\n\t\t\t},\n\t\t\t),\n\t\t\t},\n\t\t\twant:    \"a\",\n\t\t\twant1:   \"b\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"both present. NamespaceIdentifier empty\",\n\t\t\targs: args{runtime: puRuntimeWithTags(map[string]string{\n\t\t\t\tKubernetesPodNamespaceIdentifier: \"\",\n\t\t\t\tKubernetesPodNameIdentifier:      \"b\",\n\t\t\t},\n\t\t\t),\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\twant1:   \"b\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"both present. 
Name empty\",\n\t\t\targs: args{runtime: puRuntimeWithTags(map[string]string{\n\t\t\t\tKubernetesPodNamespaceIdentifier: \"a\",\n\t\t\t\tKubernetesPodNameIdentifier:      \"\",\n\t\t\t},\n\t\t\t),\n\t\t\t},\n\t\t\twant:    \"a\",\n\t\t\twant1:   \"\",\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Namespace missing\",\n\t\t\targs: args{runtime: puRuntimeWithTags(map[string]string{\n\t\t\t\tKubernetesPodNameIdentifier: \"b\",\n\t\t\t},\n\t\t\t),\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\twant1:   \"\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Name missing\",\n\t\t\targs: args{runtime: puRuntimeWithTags(map[string]string{\n\t\t\t\tKubernetesPodNamespaceIdentifier: \"a\",\n\t\t\t},\n\t\t\t),\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\twant1:   \"\",\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot, got1, err := getKubernetesInformation(tt.args.runtime)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"getKubernetesInformation() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif got != tt.want {\n\t\t\t\tt.Errorf(\"getKubernetesInformation() got = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t\tif got1 != tt.want1 {\n\t\t\t\tt.Errorf(\"getKubernetesInformation() got1 = %v, want %v\", got1, tt.want1)\n\t\t\t}\n\t\t})\n\t}\n}\n\ntype mockHandler struct{}\n\nfunc (m *mockHandler) HandlePUEvent(ctx context.Context, puID string, event common.Event, runtime policy.RuntimeReader) error {\n\treturn nil\n}\n\ntype mockErrHandler struct{}\n\nfunc (m *mockErrHandler) HandlePUEvent(ctx context.Context, puID string, event common.Event, runtime policy.RuntimeReader) error {\n\treturn errors.New(\"Dummy error\")\n}\n\nfunc TestKubernetesMonitor_HandlePUEvent(t *testing.T) {\n\n\tpod1 := &api.Pod{}\n\tpod1.SetName(\"pod1\")\n\tpod1.SetNamespace(\"beer\")\n\n\tpod1Runtime := 
policy.NewPURuntimeWithDefaults()\n\tpod1Runtime.SetTags(policy.NewTagStoreFromMap(map[string]string{\n\t\tKubernetesPodNamespaceIdentifier: \"beer\",\n\t\tKubernetesPodNameIdentifier:      \"pod1\",\n\t}))\n\n\thostContainerRuntime := func() *policy.PURuntime {\n\n\t\tpur := policy.NewPURuntime(\"\", 1, \"\", nil, nil, common.LinuxProcessPU, nil)\n\t\tpur.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: \"100\",\n\t\t})\n\t\tpur.SetTags(policy.NewTagStoreFromMap(map[string]string{\n\t\t\tKubernetesPodNamespaceIdentifier: \"beer\",\n\t\t\tKubernetesPodNameIdentifier:      \"pod1\",\n\t\t}))\n\t\treturn pur\n\t}\n\n\tkubernetesExtractorUnmanaged := func(runtime policy.RuntimeReader, pod *api.Pod) (*policy.PURuntime, bool, error) {\n\t\toriginalRuntime, ok := runtime.(*policy.PURuntime)\n\t\tif !ok {\n\t\t\treturn nil, false, fmt.Errorf(\"Error casting puruntime\")\n\t\t}\n\n\t\tnewRuntime := originalRuntime.Clone()\n\n\t\treturn newRuntime, false, nil\n\t}\n\n\tkubernetesExtractorErrored := func(runtime policy.RuntimeReader, pod *api.Pod) (*policy.PURuntime, bool, error) {\n\t\toriginalRuntime, ok := runtime.(*policy.PURuntime)\n\t\tif !ok {\n\t\t\treturn nil, false, fmt.Errorf(\"Error casting puruntime\")\n\t\t}\n\n\t\tnewRuntime := originalRuntime.Clone()\n\n\t\treturn newRuntime, false, fmt.Errorf(\"Previsible error\")\n\t}\n\n\tkubernetesExtractorManaged := func(runtime policy.RuntimeReader, pod *api.Pod) (*policy.PURuntime, bool, error) {\n\t\toriginalRuntime, ok := runtime.(*policy.PURuntime)\n\t\tif !ok {\n\t\t\treturn nil, false, fmt.Errorf(\"Error casting puruntime\")\n\t\t}\n\n\t\tnewRuntime := originalRuntime.Clone()\n\n\t\treturn newRuntime, true, nil\n\t}\n\n\ttype fields struct {\n\t\tdockerMonitor       *dockermonitor.DockerMonitor\n\t\tkubeClient          kubernetes.Interface\n\t\tlocalNode           string\n\t\thandlers            *config.ProcessorConfig\n\t\tcache               *cache\n\t\tkubernetesExtractor 
extractors.KubernetesMetadataExtractorType\n\t\tpodStore            kubecache.Store\n\t\tpodController       kubecache.Controller\n\t\tpodControllerStop   chan struct{}\n\t\tenableHostPods      bool\n\t}\n\ttype args struct {\n\t\tctx           context.Context\n\t\tpuID          string\n\t\tevent         common.Event\n\t\tdockerRuntime policy.RuntimeReader\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tfields  fields\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname:   \"empty dockerruntime on create\",\n\t\t\tfields: fields{},\n\t\t\targs: args{\n\t\t\t\tevent:         common.EventCreate,\n\t\t\t\tdockerRuntime: policy.NewPURuntimeWithDefaults(),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:   \"empty dockerruntime on start\",\n\t\t\tfields: fields{},\n\t\t\targs: args{\n\t\t\t\tevent:         common.EventStart,\n\t\t\t\tdockerRuntime: policy.NewPURuntimeWithDefaults(),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Extractor with Unmanaged PU\",\n\t\t\tfields: fields{\n\t\t\t\tkubeClient:          kubefake.NewSimpleClientset(pod1),\n\t\t\t\tkubernetesExtractor: kubernetesExtractorUnmanaged,\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tevent:         common.EventCreate,\n\t\t\t\tdockerRuntime: pod1Runtime,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Extractor with Errored output\",\n\t\t\tfields: fields{\n\t\t\t\tkubeClient:          kubefake.NewSimpleClientset(pod1),\n\t\t\t\tkubernetesExtractor: kubernetesExtractorErrored,\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tevent:         common.EventCreate,\n\t\t\t\tdockerRuntime: pod1Runtime,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Extractor with managed PU\",\n\t\t\tfields: fields{\n\t\t\t\tkubeClient:          kubefake.NewSimpleClientset(pod1),\n\t\t\t\tkubernetesExtractor: kubernetesExtractorManaged,\n\t\t\t\tcache:               newCache(),\n\t\t\t\thandlers: &config.ProcessorConfig{\n\t\t\t\t\tPolicy: 
&mockHandler{},\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tevent:         common.EventCreate,\n\t\t\t\tdockerRuntime: pod1Runtime,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Destroy not in cache\",\n\t\t\tfields: fields{\n\t\t\t\tkubeClient:          kubefake.NewSimpleClientset(pod1),\n\t\t\t\tkubernetesExtractor: kubernetesExtractorManaged,\n\t\t\t\tcache:               newCache(),\n\t\t\t\thandlers: &config.ProcessorConfig{\n\t\t\t\t\tPolicy: &mockHandler{},\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tevent:         common.EventDestroy,\n\t\t\t\tdockerRuntime: pod1Runtime,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Activate host network pu\",\n\t\t\tfields: fields{\n\t\t\t\tkubeClient:          kubefake.NewSimpleClientset(pod1),\n\t\t\t\tkubernetesExtractor: kubernetesExtractorManaged,\n\t\t\t\tcache:               newCache(),\n\t\t\t\thandlers: &config.ProcessorConfig{\n\t\t\t\t\tPolicy: &mockHandler{},\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tevent:         common.EventStart,\n\t\t\t\tdockerRuntime: hostContainerRuntime(),\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Non infra containers in a pod with host net\",\n\t\t\tfields: fields{\n\t\t\t\tkubeClient:          kubefake.NewSimpleClientset(pod1),\n\t\t\t\tkubernetesExtractor: kubernetesExtractorUnmanaged,\n\t\t\t\tcache:               newCache(),\n\t\t\t\thandlers: &config.ProcessorConfig{\n\t\t\t\t\tPolicy: &mockHandler{},\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tevent:         common.EventStart,\n\t\t\t\tdockerRuntime: pod1Runtime,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Activate host network pu and policy engine fails\",\n\t\t\tfields: fields{\n\t\t\t\tkubeClient:          kubefake.NewSimpleClientset(pod1),\n\t\t\t\tkubernetesExtractor: kubernetesExtractorManaged,\n\t\t\t\tcache:               newCache(),\n\t\t\t\thandlers: &config.ProcessorConfig{\n\t\t\t\t\tPolicy: 
&mockErrHandler{},\n\t\t\t\t},\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tevent:         common.EventStart,\n\t\t\t\tdockerRuntime: hostContainerRuntime(),\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tm := &KubernetesMonitor{\n\t\t\t\tdockerMonitor:       tt.fields.dockerMonitor,\n\t\t\t\tkubeClient:          tt.fields.kubeClient,\n\t\t\t\tlocalNode:           tt.fields.localNode,\n\t\t\t\thandlers:            tt.fields.handlers,\n\t\t\t\tcache:               tt.fields.cache,\n\t\t\t\tkubernetesExtractor: tt.fields.kubernetesExtractor,\n\t\t\t\tpodStore:            tt.fields.podStore,\n\t\t\t\tpodController:       tt.fields.podController,\n\t\t\t\tpodControllerStop:   tt.fields.podControllerStop,\n\t\t\t\tenableHostPods:      tt.fields.enableHostPods,\n\t\t\t}\n\t\t\tif err := m.HandlePUEvent(tt.args.ctx, tt.args.puID, tt.args.event, tt.args.dockerRuntime); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"KubernetesMonitor.HandlePUEvent() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestKubernetesMonitor_RefreshPUs(t *testing.T) {\n\ttype fields struct {\n\t\tdockerMonitor       *dockermonitor.DockerMonitor\n\t\tkubeClient          kubernetes.Interface\n\t\tlocalNode           string\n\t\thandlers            *config.ProcessorConfig\n\t\tcache               *cache\n\t\tkubernetesExtractor extractors.KubernetesMetadataExtractorType\n\t\tpodStore            kubecache.Store\n\t\tpodController       kubecache.Controller\n\t\tpodControllerStop   chan struct{}\n\t\tenableHostPods      bool\n\t}\n\ttype args struct {\n\t\tctx context.Context\n\t\tpod *api.Pod\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tfields  fields\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname:   \"empty pod\",\n\t\t\tfields: fields{},\n\t\t\targs: args{\n\t\t\t\tpod: nil,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t 
*testing.T) {\n\t\t\tm := &KubernetesMonitor{\n\t\t\t\tdockerMonitor:       tt.fields.dockerMonitor,\n\t\t\t\tkubeClient:          tt.fields.kubeClient,\n\t\t\t\tlocalNode:           tt.fields.localNode,\n\t\t\t\thandlers:            tt.fields.handlers,\n\t\t\t\tcache:               tt.fields.cache,\n\t\t\t\tkubernetesExtractor: tt.fields.kubernetesExtractor,\n\t\t\t\tpodStore:            tt.fields.podStore,\n\t\t\t\tpodController:       tt.fields.podController,\n\t\t\t\tpodControllerStop:   tt.fields.podControllerStop,\n\t\t\t\tenableHostPods:      tt.fields.enableHostPods,\n\t\t\t}\n\t\t\tif err := m.RefreshPUs(tt.args.ctx, tt.args.pod); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"KubernetesMonitor.RefreshPUs() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_isPodInfraContainer(t *testing.T) {\n\ttype args struct {\n\t\truntime policy.RuntimeReader\n\t}\n\n\tpuRuntimeWithTags := func(tags map[string]string) *policy.PURuntime {\n\t\tpuRuntime := policy.NewPURuntimeWithDefaults()\n\t\tpuRuntime.SetTags(policy.NewTagStoreFromMap(tags))\n\t\treturn puRuntime\n\t}\n\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t\twant bool\n\t}{\n\t\t{\n\t\t\tname: \"Test if runtime has kubernetes infra pod tags\",\n\t\t\targs: args{\n\t\t\t\truntime: puRuntimeWithTags(map[string]string{\n\t\t\t\t\t\"@usr:io.kubernetes.container.name\": \"POD\",\n\t\t\t\t},\n\t\t\t\t),\n\t\t\t},\n\t\t\twant: true,\n\t\t},\n\t\t{\n\t\t\tname: \"Test if runtime does not have kubernetes infra pod tags\",\n\t\t\targs: args{\n\t\t\t\truntime: puRuntimeWithTags(map[string]string{\n\t\t\t\t\t\"key\": \"value\",\n\t\t\t\t},\n\t\t\t\t),\n\t\t\t},\n\t\t\twant: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif got := isPodInfraContainer(tt.args.runtime); got != tt.want {\n\t\t\t\tt.Errorf(\"isPodInfraContainer() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc 
TestKubernetesMonitor_decorateRuntime(t *testing.T) {\n\n\tpur1 := policy.NewPURuntime(\"\", 1, \"\", nil, nil, common.LinuxProcessPU, nil)\n\tpur2 := policy.NewPURuntime(\"\", 1, \"\", nil, nil, common.ContainerPU, nil)\n\n\thostpuID := \"1234\"\n\tpuID := \"12345\"\n\n\tpodName := \"abcd\"\n\tpodNamespace := \"abcd\"\n\ttestCache := newCache()\n\ttestCache.updatePUIDCache(\"abcd\", \"abcd\", \"1234\", pur1, nil)\n\n\ttype fields struct {\n\t\tdockerMonitor       *dockermonitor.DockerMonitor\n\t\tkubeClient          kubernetes.Interface\n\t\tlocalNode           string\n\t\thandlers            *config.ProcessorConfig\n\t\tcache               *cache\n\t\tkubernetesExtractor extractors.KubernetesMetadataExtractorType\n\t\tpodStore            kubecache.Store\n\t\tpodController       kubecache.Controller\n\t\tpodControllerStop   chan struct{}\n\t\tenableHostPods      bool\n\t}\n\ttype args struct {\n\t\tpuID         string\n\t\truntimeInfo  policy.RuntimeReader\n\t\tevent        common.Event\n\t\tpodName      string\n\t\tpodNamespace string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tfields  fields\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname:   \"Check no op behavior on non start events\",\n\t\t\tfields: fields{},\n\t\t\targs: args{\n\t\t\t\tpuID:  \"123\",\n\t\t\t\tevent: common.EventCreate,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname:   \"Invalid Runtime\",\n\t\t\tfields: fields{},\n\t\t\targs: args{\n\t\t\t\tevent: common.EventStart,\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname:   \"Decorate runtime for pu activated as linux process\",\n\t\t\tfields: fields{},\n\t\t\targs: args{\n\t\t\t\tevent:        common.EventStart,\n\t\t\t\tpodName:      podName,\n\t\t\t\tpodNamespace: podNamespace,\n\t\t\t\truntimeInfo:  pur1,\n\t\t\t\tpuID:         hostpuID,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"Decorate runtime for pu activated as container process\",\n\t\t\tfields: fields{\n\t\t\t\tcache: 
testCache,\n\t\t\t},\n\t\t\targs: args{\n\t\t\t\tevent:        common.EventStart,\n\t\t\t\tpodName:      podName,\n\t\t\t\tpodNamespace: podNamespace,\n\t\t\t\truntimeInfo:  pur2,\n\t\t\t\tpuID:         puID,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tm := &KubernetesMonitor{\n\t\t\t\tdockerMonitor:       tt.fields.dockerMonitor,\n\t\t\t\tkubeClient:          tt.fields.kubeClient,\n\t\t\t\tlocalNode:           tt.fields.localNode,\n\t\t\t\thandlers:            tt.fields.handlers,\n\t\t\t\tcache:               tt.fields.cache,\n\t\t\t\tkubernetesExtractor: tt.fields.kubernetesExtractor,\n\t\t\t\tpodStore:            tt.fields.podStore,\n\t\t\t\tpodController:       tt.fields.podController,\n\t\t\t\tpodControllerStop:   tt.fields.podControllerStop,\n\t\t\t\tenableHostPods:      tt.fields.enableHostPods,\n\t\t\t}\n\t\t\tif err := m.decorateRuntime(tt.args.puID, tt.args.runtimeInfo, tt.args.event, tt.args.podName, tt.args.podNamespace); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"KubernetesMonitor.decorateRuntime() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/kubernetes/kubernetes.go",
    "content": "// +build !windows\n\npackage kubernetesmonitor\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\tapi \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/labels\"\n\tkubecache \"k8s.io/client-go/tools/cache\"\n)\n\n// KubernetesPodNameIdentifier is the label used by Docker for the K8S pod name.\nconst KubernetesPodNameIdentifier = \"@usr:io.kubernetes.pod.name\"\n\n// KubernetesPodNamespaceIdentifier is the label used by Docker for the K8S namespace.\nconst KubernetesPodNamespaceIdentifier = \"@usr:io.kubernetes.pod.namespace\"\n\n// KubernetesContainerNameIdentifier is the label used by Docker for the K8S container name.\nconst KubernetesContainerNameIdentifier = \"@usr:io.kubernetes.container.name\"\n\n// KubernetesInfraContainerName is the name of the infra POD.\nconst KubernetesInfraContainerName = \"POD\"\n\n// UpstreamNameIdentifier is the identifier used to identify the nane on the resulting PU\nconst UpstreamNameIdentifier = \"k8s:name\"\n\n// UpstreamNamespaceIdentifier is the identifier used to identify the nanespace on the resulting PU\nconst UpstreamNamespaceIdentifier = \"k8s:namespace\"\n\nfunc (m *KubernetesMonitor) addPod(addedPod *api.Pod) error {\n\tzap.L().Debug(\"pod added event\", zap.String(\"name\", addedPod.GetName()), zap.String(\"namespace\", addedPod.GetNamespace()))\n\n\t// This event is not needed as the trigger is the  DockerMonitor event\n\t// The pod obejct is cached in order to reuse it and avoid an API request possibly laster on\n\n\treturn nil\n}\n\nfunc (m *KubernetesMonitor) deletePod(deletedPod *api.Pod) error {\n\tzap.L().Debug(\"pod deleted event\", zap.String(\"name\", deletedPod.GetName()), zap.String(\"namespace\", deletedPod.GetNamespace()))\n\n\treturn nil\n}\n\nfunc (m *KubernetesMonitor) updatePod(oldPod, updatedPod *api.Pod) error {\n\tzap.L().Debug(\"pod modified event\", zap.String(\"name\", updatedPod.GetName()), zap.String(\"namespace\", updatedPod.GetNamespace()))\n\n\tif 
!isPolicyUpdateNeeded(oldPod, updatedPod) {\n\t\tzap.L().Debug(\"no modified labels for Pod\", zap.String(\"name\", updatedPod.GetName()), zap.String(\"namespace\", updatedPod.GetNamespace()))\n\t\treturn nil\n\t}\n\n\t// This event requires sending the Runtime upstream again.\n\t// TODO: Use propagated context\n\treturn m.RefreshPUs(context.TODO(), updatedPod)\n}\n\nfunc (m *KubernetesMonitor) getPod(podNamespace, podName string) (*api.Pod, error) {\n\tzap.L().Debug(\"no pod cached, querying Kubernetes API\")\n\n\t// TODO: Use cached Kube Store (from a shared informer)\n\treturn m.Pod(podName, podNamespace)\n}\n\nfunc isPolicyUpdateNeeded(oldPod, newPod *api.Pod) bool {\n\tif oldPod.Status.PodIP != newPod.Status.PodIP {\n\t\treturn true\n\t}\n\tif !labels.Equals(oldPod.GetLabels(), newPod.GetLabels()) {\n\t\treturn true\n\t}\n\treturn false\n}\n\n// hasSynced sends an event on the sync chan when the attached controller has finished syncing.\nfunc hasSynced(sync chan struct{}, controller kubecache.Controller) {\n\tfor {\n\t\tif controller.HasSynced() {\n\t\t\tsync <- struct{}{}\n\t\t\treturn\n\t\t}\n\t\t<-time.After(100 * time.Millisecond)\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/kubernetes/monitor.go",
    "content": "// +build !windows\n\npackage kubernetesmonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"go.aporeto.io/trireme-lib/collector\"\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\tdockermonitor \"go.aporeto.io/trireme-lib/monitor/internal/docker\"\n\t\"go.aporeto.io/trireme-lib/monitor/registerer\"\n\t\"go.uber.org/zap\"\n\t\"k8s.io/client-go/kubernetes\"\n\tkubecache \"k8s.io/client-go/tools/cache\"\n)\n\n// KubernetesMonitor implements a monitor that sends pod events upstream\n// It is implemented as a filter on the standard DockerMonitor.\n// It gets all the PU events from the DockerMonitor and if the container is the POD container from Kubernetes,\n// It connects to the Kubernetes API and adds the tags that are coming from Kuberntes that cannot be found\ntype KubernetesMonitor struct {\n\tdockerMonitor       *dockermonitor.DockerMonitor\n\tkubeClient          kubernetes.Interface\n\tlocalNode           string\n\thandlers            *config.ProcessorConfig\n\tcache               *cache\n\tkubernetesExtractor extractors.KubernetesMetadataExtractorType\n\n\tpodStore          kubecache.Store\n\tpodController     kubecache.Controller\n\tpodControllerStop chan struct{}\n\n\tenableHostPods bool\n}\n\n// New returns a new kubernetes monitor.\nfunc New() *KubernetesMonitor {\n\tkubeMonitor := &KubernetesMonitor{}\n\tkubeMonitor.cache = newCache()\n\n\treturn kubeMonitor\n}\n\n// SetupConfig provides a configuration to implmentations. 
Every implementation\n// can have its own config type.\nfunc (m *KubernetesMonitor) SetupConfig(registerer registerer.Registerer, cfg interface{}) error {\n\n\tdefaultConfig := DefaultConfig()\n\n\tif cfg == nil {\n\t\tcfg = defaultConfig\n\t}\n\n\tkubernetesconfig, ok := cfg.(*Config)\n\tif !ok {\n\t\treturn fmt.Errorf(\"Invalid configuration specified\")\n\t}\n\n\tkubernetesconfig = SetupDefaultConfig(kubernetesconfig)\n\n\tprocessorConfig := &config.ProcessorConfig{\n\t\tPolicy:    m,\n\t\tCollector: collector.NewDefaultCollector(),\n\t}\n\n\t// As the Kubernetes monitor depends on the DockerMonitor, we set up the Docker monitor first\n\tdockerMon := dockermonitor.New()\n\tdockerMon.SetupHandlers(processorConfig)\n\n\tdockerConfig := dockermonitor.DefaultConfig()\n\tdockerConfig.EventMetadataExtractor = kubernetesconfig.DockerExtractor\n\n\t// we use the default config for now\n\tif err := dockerMon.SetupConfig(nil, dockerConfig); err != nil {\n\t\treturn fmt.Errorf(\"docker monitor instantiation error: %s\", err.Error())\n\t}\n\n\tm.dockerMonitor = dockerMon\n\n\t// Setting up Kubernetes\n\tm.localNode = kubernetesconfig.Nodename\n\tkubeClient, err := NewKubeClient(kubernetesconfig.Kubeconfig)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"kubernetes client instantiation error: %s\", err.Error())\n\t}\n\tm.kubeClient = kubeClient\n\n\tm.enableHostPods = kubernetesconfig.EnableHostPods\n\tm.kubernetesExtractor = kubernetesconfig.KubernetesExtractor\n\n\tm.podStore, m.podController = m.CreateLocalPodController(\"\",\n\t\tm.addPod,\n\t\tm.deletePod,\n\t\tm.updatePod)\n\n\tm.podControllerStop = make(chan struct{})\n\n\tzap.L().Warn(\"Using deprecated Kubernetes Monitor\")\n\n\treturn nil\n}\n\n// Run starts the monitor.\nfunc (m *KubernetesMonitor) Run(ctx context.Context) error {\n\tif m.kubeClient == nil {\n\t\treturn fmt.Errorf(\"kubernetes client is not initialized correctly\")\n\t}\n\n\t// TODO. 
Pass the channel directly to the Kubernetes library\n\tgo m.podController.Run(ctx.Done())\n\tinitialPodSync := make(chan struct{})\n\tgo hasSynced(initialPodSync, m.podController)\n\t<-initialPodSync\n\n\treturn m.dockerMonitor.Run(ctx)\n}\n\n// SetupHandlers sets up handlers for monitors to invoke for various events such as\n// processing unit events and synchronization events. This will be called before Start()\n// by the consumer of the monitor.\nfunc (m *KubernetesMonitor) SetupHandlers(c *config.ProcessorConfig) {\n\tm.handlers = c\n}\n\n// Resync asks the monitor to do a resync.\nfunc (m *KubernetesMonitor) Resync(ctx context.Context) error {\n\treturn m.dockerMonitor.Resync(ctx)\n}\n"
  },
  {
    "path": "monitor/internal/kubernetes/monitor_test.go",
    "content": "// +build !windows\n\npackage kubernetesmonitor\n\nimport (\n\t\"testing\"\n\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\tdockermonitor \"go.aporeto.io/trireme-lib/monitor/internal/docker\"\n\t\"go.aporeto.io/trireme-lib/monitor/registerer\"\n\t\"k8s.io/client-go/kubernetes\"\n\tkubecache \"k8s.io/client-go/tools/cache\"\n)\n\nfunc TestKubernetesMonitor_SetupConfig(t *testing.T) {\n\n\ttype fields struct {\n\t\tdockerMonitor       *dockermonitor.DockerMonitor\n\t\tkubeClient          kubernetes.Interface\n\t\tlocalNode           string\n\t\thandlers            *config.ProcessorConfig\n\t\tcache               *cache\n\t\tkubernetesExtractor extractors.KubernetesMetadataExtractorType\n\t\tpodStore            kubecache.Store\n\t\tpodController       kubecache.Controller\n\t\tpodControllerStop   chan struct{}\n\t\tenableHostPods      bool\n\t}\n\ttype args struct {\n\t\tregisterer registerer.Registerer\n\t\tcfg        interface{}\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tfields  fields\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname:   \"random config\",\n\t\t\tfields: fields{},\n\t\t\targs: args{\n\t\t\t\tregisterer: nil,\n\t\t\t\tcfg:        \"123\",\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tm := &KubernetesMonitor{\n\t\t\t\tdockerMonitor:       tt.fields.dockerMonitor,\n\t\t\t\tkubeClient:          tt.fields.kubeClient,\n\t\t\t\tlocalNode:           tt.fields.localNode,\n\t\t\t\thandlers:            tt.fields.handlers,\n\t\t\t\tcache:               tt.fields.cache,\n\t\t\t\tkubernetesExtractor: tt.fields.kubernetesExtractor,\n\t\t\t\tpodStore:            tt.fields.podStore,\n\t\t\t\tpodController:       tt.fields.podController,\n\t\t\t\tpodControllerStop:   tt.fields.podControllerStop,\n\t\t\t\tenableHostPods:      tt.fields.enableHostPods,\n\t\t\t}\n\t\t\tif err := 
m.SetupConfig(tt.args.registerer, tt.args.cfg); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"KubernetesMonitor.SetupConfig() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/linux/config.go",
    "content": "package linuxmonitor\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n)\n\n// Config is the configuration options to start a CNI monitor\ntype Config struct {\n\tEventMetadataExtractor extractors.EventMetadataExtractor\n\tStoredPath             string\n\tReleasePath            string\n\tHost                   bool\n}\n\n// DefaultConfig provides a default configuration\nfunc DefaultConfig(host bool) *Config {\n\n\treturn &Config{\n\t\tEventMetadataExtractor: extractors.DefaultHostMetadataExtractor,\n\t\tReleasePath:            \"\",\n\t\tStoredPath:             common.TriremeCgroupPath,\n\t\tHost:                   host,\n\t}\n}\n\n// SetupDefaultConfig adds defaults to a partial configuration\nfunc SetupDefaultConfig(linuxConfig *Config) *Config {\n\n\tdefaultConfig := DefaultConfig(linuxConfig.Host)\n\n\tif linuxConfig.ReleasePath == \"\" {\n\t\tlinuxConfig.ReleasePath = defaultConfig.ReleasePath\n\t}\n\n\tif linuxConfig.EventMetadataExtractor == nil {\n\t\tlinuxConfig.EventMetadataExtractor = defaultConfig.EventMetadataExtractor\n\t}\n\n\tif linuxConfig.StoredPath == \"\" {\n\t\tlinuxConfig.StoredPath = common.TriremeCgroupPath\n\t}\n\n\treturn linuxConfig\n}\n"
  },
  {
    "path": "monitor/internal/linux/monitor.go",
    "content": "package linuxmonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"regexp\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/registerer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cgnetcls\"\n)\n\n// LinuxMonitor captures all the monitor processor information\n// It implements the EventProcessor interface of the rpc monitor\ntype LinuxMonitor struct {\n\tproc *linuxProcessor\n}\n\n// New returns a new implementation of a monitor\nfunc New(context.Context) *LinuxMonitor {\n\n\treturn &LinuxMonitor{\n\t\tproc: &linuxProcessor{},\n\t}\n}\n\n// Run implements the Implementation interface\nfunc (l *LinuxMonitor) Run(ctx context.Context) error {\n\n\tif err := l.proc.config.IsComplete(); err != nil {\n\t\treturn fmt.Errorf(\"linux %t: %s\", l.proc.host, err)\n\t}\n\n\treturn l.Resync(ctx)\n}\n\n// SetupConfig provides a configuration to implementations. Every implementation\n// can have its own config type.\nfunc (l *LinuxMonitor) SetupConfig(registerer registerer.Registerer, cfg interface{}) error {\n\n\tif cfg == nil {\n\t\tcfg = DefaultConfig(false)\n\t}\n\n\tlinuxConfig, ok := cfg.(*Config)\n\tif !ok {\n\t\treturn fmt.Errorf(\"Invalid configuration specified\")\n\t}\n\n\tif registerer != nil {\n\t\tif err := registerer.RegisterProcessor(common.HostNetworkPU, l.proc); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif err := registerer.RegisterProcessor(common.HostPU, l.proc); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif err := registerer.RegisterProcessor(common.LinuxProcessPU, l.proc); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// Setup defaults\n\tlinuxConfig = SetupDefaultConfig(linuxConfig)\n\n\t// Setup config\n\tl.proc.host = linuxConfig.Host\n\n\tl.proc.netcls = cgnetcls.NewCgroupNetController(common.TriremeCgroupPath, linuxConfig.ReleasePath)\n\n\tl.proc.regStart = 
regexp.MustCompile(\"^[a-zA-Z0-9_]{1,11}$\")\n\tl.proc.regStop = regexp.MustCompile(\"^/trireme/[a-zA-Z0-9_]{1,11}$\")\n\n\tl.proc.metadataExtractor = linuxConfig.EventMetadataExtractor\n\tif l.proc.metadataExtractor == nil {\n\t\treturn fmt.Errorf(\"Unable to setup a metadata extractor\")\n\t}\n\n\treturn nil\n}\n\n// SetupHandlers sets up handlers for monitors to invoke for various events such as\n// processing unit events and synchronization events. This will be called before Start()\n// by the consumer of the monitor\nfunc (l *LinuxMonitor) SetupHandlers(m *config.ProcessorConfig) {\n\n\tl.proc.config = m\n}\n\n// Resync instructs the monitor to do a resync.\nfunc (l *LinuxMonitor) Resync(ctx context.Context) error {\n\treturn l.proc.Resync(ctx, nil)\n}\n"
  },
  {
    "path": "monitor/internal/linux/processor.go",
    "content": "package linuxmonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/buildflags\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cgnetcls\"\n\t\"go.uber.org/zap\"\n)\n\nvar ignoreNames = map[string]*struct{}{\n\t\"cgroup.clone_children\": nil,\n\t\"cgroup.procs\":          nil,\n\t\"net_cls.classid\":       nil,\n\t\"net_prio.ifpriomap\":    nil,\n\t\"net_prio.prioidx\":      nil,\n\t\"notify_on_release\":     nil,\n\t\"tasks\":                 nil,\n}\n\n// linuxProcessor captures all the monitor processor information\n// It implements the EventProcessor interface of the rpc monitor\ntype linuxProcessor struct {\n\thost              bool\n\tconfig            *config.ProcessorConfig\n\tmetadataExtractor extractors.EventMetadataExtractor\n\tnetcls            cgnetcls.Cgroupnetcls\n\tregStart          *regexp.Regexp\n\tregStop           *regexp.Regexp\n\tsync.Mutex\n}\n\nfunc baseName(name, separator string) string {\n\n\tlastseparator := strings.LastIndex(name, separator)\n\tif len(name) <= lastseparator {\n\t\treturn \"\"\n\t}\n\treturn name[lastseparator+1:]\n}\n\n// Create handles create events\nfunc (l *linuxProcessor) Create(ctx context.Context, eventInfo *common.EventInfo) error {\n\t// This should never be called for Linux Processes\n\treturn fmt.Errorf(\"Use start directly for Linux processes. Create not supported\")\n}\n\n// Start handles start events\nfunc (l *linuxProcessor) Start(ctx context.Context, eventInfo *common.EventInfo) error {\n\n\t// Validate the PUID format. 
Additional validations TODO\n\tif !l.regStart.Match([]byte(eventInfo.PUID)) {\n\t\treturn fmt.Errorf(\"invalid pu id: %s\", eventInfo.PUID)\n\t}\n\n\t// Normalize to a nativeID context. This will become the key for any recoveries\n\t// and it is a one-way function.\n\tnativeID, err := l.generateContextID(eventInfo)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tprocesses, err := l.netcls.ListCgroupProcesses(nativeID)\n\tif err == nil && len(processes) != 0 {\n\t\t// This PU already exists; we are getting a duplicate event\n\t\tzap.L().Debug(\"Duplicate start event for the same PU\", zap.String(\"PUID\", nativeID))\n\t\tif err = l.netcls.AddProcess(nativeID, int(eventInfo.PID)); err != nil {\n\t\t\tif derr := l.netcls.DeleteCgroup(nativeID); derr != nil {\n\t\t\t\tzap.L().Warn(\"Failed to clean cgroup\", zap.Error(derr))\n\t\t\t}\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t}\n\n\t// Extract the metadata and create the runtime\n\truntime, err := l.metadataExtractor(eventInfo)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// We need to send a create event to the policy engine.\n\tif err = l.config.Policy.HandlePUEvent(ctx, nativeID, common.EventCreate, runtime); err != nil {\n\t\treturn fmt.Errorf(\"Unable to create PU: %s\", err)\n\t}\n\n\t// We can now send a start event to the policy engine\n\tif err = l.config.Policy.HandlePUEvent(ctx, nativeID, common.EventStart, runtime); err != nil {\n\t\treturn fmt.Errorf(\"Unable to start PU: %s\", err)\n\t}\n\n\tl.Lock()\n\t// We can now program cgroups and everything else.\n\tif eventInfo.HostService {\n\t\terr = l.processHostServiceStart(eventInfo, runtime)\n\t} else {\n\t\terr = l.processLinuxServiceStart(nativeID, eventInfo, runtime)\n\t}\n\tl.Unlock()\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Failed to program cgroups: %s\", err)\n\t}\n\n\t// Send the event to the collector.\n\tl.config.Collector.CollectContainerEvent(&collector.ContainerRecord{\n\t\tContextID: eventInfo.PUID,\n\t\tIPAddress: runtime.IPAddresses(),\n\t\tTags:  
    runtime.Tags(),\n\t\tEvent:     collector.ContainerStart,\n\t})\n\n\treturn nil\n}\n\n// Stop handles a stop event\nfunc (l *linuxProcessor) Stop(ctx context.Context, event *common.EventInfo) error {\n\n\tpuID, err := l.generateContextID(event)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tprocesses, err := l.netcls.ListCgroupProcesses(puID)\n\tif err == nil && len(processes) != 0 {\n\t\tzap.L().Debug(\"Received Bogus Stop\", zap.Int(\"Num Processes\", len(processes)), zap.Error(err))\n\t\treturn nil\n\t}\n\n\tif puID == \"/trireme\" {\n\t\treturn nil\n\t}\n\n\truntime := policy.NewPURuntimeWithDefaults()\n\truntime.SetPUType(event.PUType)\n\n\treturn l.config.Policy.HandlePUEvent(ctx, puID, common.EventStop, runtime)\n}\n\n// Destroy handles a destroy event\nfunc (l *linuxProcessor) Destroy(ctx context.Context, eventInfo *common.EventInfo) error {\n\n\tpuID, err := l.generateContextID(eventInfo)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif puID == \"/trireme\" {\n\t\tpuID = strings.TrimLeft(puID, \"/\")\n\t\tl.netcls.Deletebasepath(puID)\n\t\treturn nil\n\t}\n\n\truntime := policy.NewPURuntimeWithDefaults()\n\truntime.SetPUType(eventInfo.PUType)\n\n\t// Send the event upstream\n\tif err := l.config.Policy.HandlePUEvent(ctx, puID, common.EventDestroy, runtime); err != nil {\n\t\tzap.L().Warn(\"Unable to clean trireme \",\n\t\t\tzap.String(\"puID\", puID),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tl.Lock()\n\tdefer l.Unlock()\n\n\tif eventInfo.HostService {\n\t\t// For network only pus, we do not program cgroups and hence should not clean it.\n\t\t// Cleaning this could result in removal of root cgroup that was configured for\n\t\t// true host mode pu.\n\t\tif eventInfo.NetworkOnlyTraffic {\n\t\t\treturn nil\n\t\t}\n\n\t\tif err := l.netcls.AssignRootMark(0); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to write to net_cls.classid file for new cgroup: %s\", err)\n\t\t}\n\t}\n\n\tpuID = baseName(puID, \"/\")\n\n\t//let us remove the cgroup files now\n\tif err 
:= l.netcls.DeleteCgroup(puID); err != nil {\n\t\tzap.L().Warn(\"Failed to clean netcls group\",\n\t\t\tzap.String(\"puID\", puID),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\treturn nil\n}\n\n// Pause handles a pause event\nfunc (l *linuxProcessor) Pause(ctx context.Context, eventInfo *common.EventInfo) error {\n\n\tpuID, err := l.generateContextID(eventInfo)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"unable to generate context id: %s\", err)\n\t}\n\n\treturn l.config.Policy.HandlePUEvent(ctx, puID, common.EventPause, nil)\n}\n\nfunc (l *linuxProcessor) resyncHostService(ctx context.Context, e *common.EventInfo) error {\n\n\truntime, err := l.metadataExtractor(e)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tnativeID, err := l.generateContextID(e)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif err = l.config.Policy.HandlePUEvent(ctx, nativeID, common.EventStart, runtime); err != nil {\n\t\treturn fmt.Errorf(\"Unable to start PU: %s\", err)\n\t}\n\n\treturn l.processHostServiceStart(e, runtime)\n}\n\n// Resync resyncs with all the existing services that were there before we start\nfunc (l *linuxProcessor) Resync(ctx context.Context, e *common.EventInfo) error {\n\t// This lock is not strictly necessary here\n\tl.config.ResyncLock.RLock()\n\tdefer l.config.ResyncLock.RUnlock()\n\tif e != nil {\n\t\t// If it's a host service, then use the pu from eventInfo\n\t\t// The code block below assumes that the pu is already created\n\t\tif e.HostService {\n\t\t\treturn l.resyncHostService(ctx, e)\n\t\t}\n\t}\n\n\tcgroups := l.netcls.ListAllCgroups(\"\")\n\tfor _, cgroup := range cgroups {\n\n\t\tif _, ok := ignoreNames[cgroup]; ok {\n\t\t\tcontinue\n\t\t}\n\n\t\t// List all the cgroup processes. If it's empty, we can remove it.\n\t\tprocs, err := l.netcls.ListCgroupProcesses(cgroup)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\t// All processes in cgroup have died. 
Let's clean up.\n\t\tif len(procs) == 0 {\n\t\t\tif err := l.netcls.DeleteCgroup(cgroup); err != nil {\n\t\t\t\tzap.L().Warn(\"Failed to delete cgroup\",\n\t\t\t\t\tzap.String(\"cgroup\", cgroup),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\truntime := policy.NewPURuntimeWithDefaults()\n\t\tpuType := common.LinuxProcessPU\n\n\t\truntime.SetPUType(puType)\n\t\truntime.SetOptions(policy.OptionsType{\n\t\t\tCgroupMark: strconv.FormatUint(cgnetcls.MarkVal(), 10),\n\t\t\tCgroupName: cgroup,\n\t\t})\n\n\t\t// Processes are still alive. We should enforce policy.\n\t\tif err := l.config.Policy.HandlePUEvent(ctx, cgroup, common.EventStart, runtime); err != nil {\n\t\t\tzap.L().Error(\"Failed to restart cgroup control\", zap.String(\"cgroup ID\", cgroup), zap.Error(err))\n\t\t}\n\n\t\tif err := l.processLinuxServiceStart(cgroup, nil, runtime); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// generateContextID creates the puID from the event information\nfunc (l *linuxProcessor) generateContextID(eventInfo *common.EventInfo) (string, error) {\n\n\tpuID := eventInfo.PUID\n\tif eventInfo.Cgroup == \"\" {\n\t\treturn puID, nil\n\t}\n\n\tif !l.regStop.Match([]byte(eventInfo.Cgroup)) {\n\t\treturn \"\", fmt.Errorf(\"invalid pu id: %s\", eventInfo.Cgroup)\n\t}\n\n\tpuID = baseName(eventInfo.Cgroup, \"/\")\n\n\treturn puID, nil\n}\n\nfunc (l *linuxProcessor) processLinuxServiceStart(nativeID string, event *common.EventInfo, runtimeInfo *policy.PURuntime) error {\n\n\t// It is okay to launch this, so let us create a cgroup for it\n\tif err := l.netcls.Creategroup(nativeID); err != nil {\n\t\treturn err\n\t}\n\n\tmarkval := runtimeInfo.Options().CgroupMark\n\tif markval == \"\" {\n\t\tif derr := l.netcls.DeleteCgroup(nativeID); derr != nil {\n\t\t\tzap.L().Warn(\"Failed to clean cgroup\", zap.Error(derr))\n\t\t}\n\t\treturn fmt.Errorf(\"mark value %s not found\", markval)\n\t}\n\n\tmark, _ := strconv.ParseUint(markval, 10, 32)\n\tif err 
:= l.netcls.AssignMark(nativeID, mark); err != nil {\n\t\tif derr := l.netcls.DeleteCgroup(nativeID); derr != nil {\n\t\t\tzap.L().Warn(\"Failed to clean cgroup\", zap.Error(derr))\n\t\t}\n\t\treturn err\n\t}\n\n\tif event != nil {\n\t\tif err := l.netcls.AddProcess(nativeID, int(event.PID)); err != nil {\n\t\t\tif derr := l.netcls.DeleteCgroup(nativeID); derr != nil {\n\t\t\t\tzap.L().Warn(\"Failed to clean cgroup\", zap.Error(derr))\n\t\t\t}\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (l *linuxProcessor) processHostServiceStart(event *common.EventInfo, runtimeInfo *policy.PURuntime) error {\n\n\tif event.NetworkOnlyTraffic || buildflags.IsLegacyKernel() {\n\t\treturn nil\n\t}\n\n\tmarkval := runtimeInfo.Options().CgroupMark\n\tmark, _ := strconv.ParseUint(markval, 10, 32)\n\n\treturn l.netcls.AssignRootMark(mark)\n}\n"
  },
  {
    "path": "monitor/internal/linux/processor_test.go",
    "content": "// +build !windows\n\npackage linuxmonitor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"sync\"\n\t\"testing\"\n\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy/mockpolicy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cgnetcls/mockcgnetcls\"\n)\n\nfunc testLinuxProcessor(puHandler policy.Resolver) *linuxProcessor {\n\tl := New(context.Background())\n\tl.SetupHandlers(&config.ProcessorConfig{\n\t\tCollector:  &collector.DefaultCollector{},\n\t\tPolicy:     puHandler,\n\t\tResyncLock: &sync.RWMutex{},\n\t})\n\tif err := l.SetupConfig(nil, &Config{\n\t\tEventMetadataExtractor: extractors.DefaultHostMetadataExtractor,\n\t\tStoredPath:             \"/tmp\",\n\t\tReleasePath:            \"./\",\n\t}); err != nil {\n\t\treturn nil\n\t}\n\treturn l.proc\n}\n\nfunc TestCreate(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a valid processor\", t, func() {\n\t\tpuHandler := mockpolicy.NewMockResolver(ctrl)\n\n\t\tp := testLinuxProcessor(puHandler)\n\n\t\tConvey(\"When I try a create event with invalid PU ID, \", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID: \"/@#$@\",\n\t\t\t}\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\terr := p.Create(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a create event that is valid\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID: \"1234\",\n\t\t\t}\n\t\t\tConvey(\"I should get an error - create not supported\", func() {\n\t\t\t\terr := p.Create(context.Background(), event)\n\t\t\t\tSo(err, 
ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestStop(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a valid processor\", t, func() {\n\t\tpuHandler := mockpolicy.NewMockResolver(ctrl)\n\n\t\tp := testLinuxProcessor(puHandler)\n\t\tmockcls := mockcgnetcls.NewMockCgroupnetcls(ctrl)\n\t\tp.netcls = mockcls\n\n\t\tConvey(\"When I get a stop event that is valid\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID: \"/trireme/1234\",\n\t\t\t}\n\n\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\t\t\tmockcls.EXPECT().ListCgroupProcesses(\"/trireme/1234\")\n\n\t\t\tConvey(\"I should get the status of the upstream function\", func() {\n\t\t\t\terr := p.Stop(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t})\n}\n\n// TODO: remove nolint\n// nolint\nfunc TestDestroy(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\tdummyPUPath := \"/var/run/trireme/linux/1234\"\n\tioutil.WriteFile(dummyPUPath, []byte{}, 0644) // nolint\n\n\tdefer os.RemoveAll(dummyPUPath) // nolint\n\tConvey(\"Given a valid processor\", t, func() {\n\t\tpuHandler := mockpolicy.NewMockResolver(ctrl)\n\n\t\tp := testLinuxProcessor(puHandler)\n\t\tmockcls := mockcgnetcls.NewMockCgroupnetcls(ctrl)\n\t\tp.netcls = mockcls\n\n\t\tConvey(\"When I get a destroy event that is valid\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID: \"1234\",\n\t\t\t}\n\t\t\tmockcls.EXPECT().DeleteCgroup(gomock.Any()).Return(nil)\n\n\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\t\t\tConvey(\"I should get the status of the upstream function\", func() {\n\t\t\t\terr := p.Destroy(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a destroy event that is valid for hostpu\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID:      
         \"123\",\n\t\t\t\tHostService:        true,\n\t\t\t\tNetworkOnlyTraffic: true,\n\t\t\t}\n\n\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\t\t\tConvey(\"I should get the status of the upstream function\", func() {\n\t\t\t\terr := p.Destroy(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestPause(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a valid processor\", t, func() {\n\t\tpuHandler := mockpolicy.NewMockResolver(ctrl)\n\n\t\tp := testLinuxProcessor(puHandler)\n\n\t\tConvey(\"When I get a pause event that is valid\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID: \"/trireme/1234\",\n\t\t\t}\n\n\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\t\t\tConvey(\"I should get the status of the upstream function\", func() {\n\t\t\t\terr := p.Pause(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestStart(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a valid processor\", t, func() {\n\t\tpuHandler := mockpolicy.NewMockResolver(ctrl)\n\t\tp := testLinuxProcessor(puHandler)\n\n\t\tConvey(\"When I get a start event with no PUID\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID: \"\",\n\t\t\t}\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\terr := p.Start(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a start event that is valid that fails on the generation of PU ID\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName: \"^^^\",\n\t\t\t}\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\terr := p.Start(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a start event that is valid that fails on the 
metadata extractor\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName: \"service\",\n\t\t\t\tTags: []string{\"badtag\"},\n\t\t\t}\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\terr := p.Start(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a start event and the upstream returns an error \", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:      \"PU\",\n\t\t\t\tPID:       1,\n\t\t\t\tPUID:      \"12345\",\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.LinuxProcessPU,\n\t\t\t}\n\t\t\tConvey(\"I should get an error \", func() {\n\t\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(errors.New(\"error\"))\n\t\t\t\terr := p.Start(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a start event and create group fails \", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:      \"PU\",\n\t\t\t\tPID:       1,\n\t\t\t\tPUID:      \"12345\",\n\t\t\t\tEventType: common.EventStop,\n\t\t\t\tPUType:    common.LinuxProcessPU,\n\t\t\t}\n\n\t\t\tmockcls := mockcgnetcls.NewMockCgroupnetcls(ctrl)\n\t\t\tp.netcls = mockcls\n\n\t\t\tConvey(\"I should get an error \", func() {\n\t\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(2).Return(nil)\n\t\t\t\tmockcls.EXPECT().Creategroup(gomock.Any()).Return(errors.New(\"error\"))\n\t\t\t\tmockcls.EXPECT().ListCgroupProcesses(\"12345\")\n\n\t\t\t\terr := p.Start(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a start event and the runtime options don't have a mark value\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:      \"PU\",\n\t\t\t\tPID:       1,\n\t\t\t\tPUID:      \"12345\",\n\t\t\t\tEventType: common.EventStop,\n\t\t\t\tPUType:    common.LinuxProcessPU,\n\t\t\t}\n\n\t\t\tmockcls := 
mockcgnetcls.NewMockCgroupnetcls(ctrl)\n\t\t\tp.netcls = mockcls\n\n\t\t\tConvey(\"I should not get an error \", func() {\n\t\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(2).Return(nil)\n\t\t\t\tmockcls.EXPECT().Creategroup(gomock.Any()).Return(nil)\n\t\t\t\tmockcls.EXPECT().AssignMark(gomock.Any(), gomock.Any()).Return(nil)\n\t\t\t\tmockcls.EXPECT().AddProcess(gomock.Any(), gomock.Any())\n\t\t\t\tmockcls.EXPECT().ListCgroupProcesses(\"12345\")\n\t\t\t\terr := p.Start(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestResync(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a valid processor\", t, func() {\n\t\tpuHandler := mockpolicy.NewMockResolver(ctrl)\n\t\tp := testLinuxProcessor(puHandler)\n\n\t\tConvey(\"When I get a resync event \", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:      \"PU\",\n\t\t\t\tPID:       1,\n\t\t\t\tPUID:      \"12345\",\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.LinuxProcessPU,\n\t\t\t}\n\n\t\t\tmockcls := mockcgnetcls.NewMockCgroupnetcls(ctrl)\n\t\t\tp.netcls = mockcls\n\n\t\t\tConvey(\"I should not get an error \", func() {\n\t\t\t\tmockcls.EXPECT().ListAllCgroups(gomock.Any()).Return([]string{\"cgroup\"})\n\t\t\t\tmockcls.EXPECT().ListCgroupProcesses(gomock.Any()).Return([]string{\"procs\"}, nil)\n\t\t\t\tmockcls.EXPECT().Creategroup(gomock.Any()).Return(nil)\n\t\t\t\tmockcls.EXPECT().AssignMark(gomock.Any(), gomock.Any()).Return(nil)\n\t\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\t\t\t\terr := p.Resync(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a resync event with no cgroup process\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:      \"PU\",\n\t\t\t\tPID:       1,\n\t\t\t\tPUID:      \"12345\",\n\t\t\t\tEventType: 
common.EventStart,\n\t\t\t\tPUType:    common.LinuxProcessPU,\n\t\t\t}\n\n\t\t\tmockcls := mockcgnetcls.NewMockCgroupnetcls(ctrl)\n\t\t\tp.netcls = mockcls\n\n\t\t\tConvey(\"I should not get an error \", func() {\n\t\t\t\tmockcls.EXPECT().ListAllCgroups(gomock.Any()).Return([]string{\"cgroup\"})\n\t\t\t\tmockcls.EXPECT().ListCgroupProcesses(gomock.Any()).Return([]string{}, nil)\n\t\t\t\tmockcls.EXPECT().DeleteCgroup(gomock.Any()).Return(nil)\n\t\t\t\terr := p.Resync(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a resync event for hostservice\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:               \"PU\",\n\t\t\t\tPID:                1,\n\t\t\t\tPUID:               \"12345\",\n\t\t\t\tEventType:          common.EventStart,\n\t\t\t\tHostService:        true,\n\t\t\t\tNetworkOnlyTraffic: true,\n\t\t\t\tPUType:             common.LinuxProcessPU,\n\t\t\t}\n\n\t\t\tConvey(\"I should not get an error \", func() {\n\t\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\t\t\t\terr := p.Resync(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "monitor/internal/pod/config.go",
    "content": "package podmonitor\n\nimport (\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n)\n\n// Config is the config for the Kubernetes monitor\ntype Config struct { // nolint\n\tKubeconfig     string\n\tNodename       string\n\tEnableHostPods bool\n\tWorkers        int\n\n\tMetadataExtractor         extractors.PodMetadataExtractor\n\tNetclsProgrammer          extractors.PodNetclsProgrammer\n\tPidsSetMaxProcsProgrammer extractors.PodPidsSetMaxProcsProgrammer\n\tResetNetcls               extractors.ResetNetclsKubepods\n\tSandboxExtractor          extractors.PodSandboxExtractor\n}\n\n// DefaultConfig provides a default configuration\nfunc DefaultConfig() *Config {\n\treturn &Config{\n\t\tMetadataExtractor:         nil,\n\t\tNetclsProgrammer:          nil,\n\t\tPidsSetMaxProcsProgrammer: nil,\n\t\tResetNetcls:               nil,\n\t\tSandboxExtractor:          nil,\n\t\tEnableHostPods:            false,\n\t\tKubeconfig:                \"\",\n\t\tNodename:                  \"\",\n\t\tWorkers:                   4,\n\t}\n}\n\n// SetupDefaultConfig adds defaults to a partial configuration\nfunc SetupDefaultConfig(kubernetesConfig *Config) *Config {\n\treturn kubernetesConfig\n}\n"
  },
  {
    "path": "monitor/internal/pod/config_test.go",
    "content": "package podmonitor\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestConfig(t *testing.T) {\n\tConvey(\"SetupDefaultConfig should just return the config as is\", t, func() {\n\t\tc := SetupDefaultConfig(&Config{\n\t\t\tKubeconfig:        \"test\",\n\t\t\tNodename:          \"test\",\n\t\t\tEnableHostPods:    true,\n\t\t\tMetadataExtractor: nil,\n\t\t\tNetclsProgrammer:  nil,\n\t\t\tResetNetcls:       nil,\n\t\t\tWorkers:           6,\n\t\t})\n\t\tSo(c.Kubeconfig, ShouldEqual, \"test\")\n\t\tSo(c.Nodename, ShouldEqual, \"test\")\n\t\tSo(c.EnableHostPods, ShouldBeTrue)\n\t\tSo(c.MetadataExtractor, ShouldBeNil)\n\t\tSo(c.NetclsProgrammer, ShouldBeNil)\n\t\tSo(c.ResetNetcls, ShouldBeNil)\n\t\tSo(c.Workers, ShouldEqual, 6)\n\t})\n\n\tConvey(\"DefaultConfig should always return a pointer to a config\", t, func() {\n\t\tc := DefaultConfig()\n\t\tSo(c, ShouldNotBeNil)\n\t})\n}\n"
  },
  {
    "path": "monitor/internal/pod/controller.go",
    "content": "// +build linux !windows\n\npackage podmonitor\n\nimport (\n\t\"context\"\n\terrs \"errors\"\n\t\"time\"\n\n\t\"k8s.io/client-go/tools/record\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/api/errors\"\n\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\t\"sigs.k8s.io/controller-runtime/pkg/controller\"\n\t\"sigs.k8s.io/controller-runtime/pkg/event\"\n\t\"sigs.k8s.io/controller-runtime/pkg/handler\"\n\t\"sigs.k8s.io/controller-runtime/pkg/manager\"\n\t\"sigs.k8s.io/controller-runtime/pkg/reconcile\"\n\t\"sigs.k8s.io/controller-runtime/pkg/source\"\n)\n\nvar (\n\t// ErrHandlePUStartEventFailed is the error sent back if a start event fails\n\tErrHandlePUStartEventFailed = errs.New(\"Aporeto Enforcer start event failed\")\n\n\t// ErrNetnsExtractionMissing is the error when we are missing a PID or netns path after successful metadata extraction\n\tErrNetnsExtractionMissing = errs.New(\"Aporeto Enforcer failed to extract PID or netns path\")\n\n\t// ErrHandlePUStopEventFailed is the error sent back if a stop event fails\n\tErrHandlePUStopEventFailed = errs.New(\"Aporeto Enforcer stop event failed\")\n\n\t// ErrHandlePUDestroyEventFailed is the error sent back if a destroy event fails\n\tErrHandlePUDestroyEventFailed = errs.New(\"Aporeto Enforcer destroy event failed\")\n)\n\n// newReconciler returns a new reconcile.Reconciler\nfunc newReconciler(mgr manager.Manager, handler *config.ProcessorConfig, metadataExtractor extractors.PodMetadataExtractor, netclsProgrammer extractors.PodNetclsProgrammer, sandboxExtractor extractors.PodSandboxExtractor, nodeName string, enableHostPods bool, deleteCh chan<- DeleteEvent, deleteReconcileCh chan<- struct{}, resyncInfo *ResyncInfoChan) *ReconcilePod {\n\treturn &ReconcilePod{\n\t\tclient:           
 mgr.GetClient(),\n\t\trecorder:          mgr.GetRecorder(\"trireme-pod-controller\"),\n\t\thandler:           handler,\n\t\tmetadataExtractor: metadataExtractor,\n\t\tnetclsProgrammer:  netclsProgrammer,\n\t\tsandboxExtractor:  sandboxExtractor,\n\t\tnodeName:          nodeName,\n\t\tenableHostPods:    enableHostPods,\n\t\tdeleteCh:          deleteCh,\n\t\tdeleteReconcileCh: deleteReconcileCh,\n\t\tresyncInfo:        resyncInfo,\n\n\t\t// TODO: should move into configuration\n\t\thandlePUEventTimeout:   60 * time.Second,\n\t\tmetadataExtractTimeout: 10 * time.Second,\n\t\tnetclsProgramTimeout:   10 * time.Second,\n\t}\n}\n\n// addController adds a new Controller to mgr with r as the reconcile.Reconciler\nfunc addController(mgr manager.Manager, r *ReconcilePod, workers int, eventsCh <-chan event.GenericEvent) error {\n\t// Create a new controller\n\tc, err := controller.New(\"trireme-pod-controller\", mgr, controller.Options{\n\t\tReconciler:              r,\n\t\tMaxConcurrentReconciles: workers,\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// we use this mapper in both of our event sources\n\tmapper := &WatchPodMapper{\n\t\tclient:         mgr.GetClient(),\n\t\tnodeName:       r.nodeName,\n\t\tenableHostPods: r.enableHostPods,\n\t}\n\n\t// use our watch pod mapper, which filters pods before we reconcile\n\tif err := c.Watch(\n\t\t&source.Kind{Type: &corev1.Pod{}},\n\t\t&handler.EnqueueRequestsFromMapFunc{ToRequests: mapper},\n\t); err != nil {\n\t\treturn err\n\t}\n\n\t// we pass in a custom channel for events generated by resync\n\treturn c.Watch(\n\t\t&source.Channel{Source: eventsCh},\n\t\t&handler.EnqueueRequestsFromMapFunc{ToRequests: mapper},\n\t)\n}\n\nvar _ reconcile.Reconciler = &ReconcilePod{}\n\n// DeleteEvent is used to send delete events to our event loop which will watch\n// them for real deletion in the Kubernetes API. 
Once an object is gone, we will\n// send down destroy events to trireme.\ntype DeleteEvent struct {\n\tPodUID        string\n\tSandboxID     string\n\tNamespaceName client.ObjectKey\n}\n\n// ReconcilePod reconciles a Pod object\ntype ReconcilePod struct {\n\t// This client, initialized using mgr.GetClient() above, is a split client\n\t// that reads objects from the cache and writes to the apiserver\n\tclient            client.Client\n\trecorder          record.EventRecorder\n\thandler           *config.ProcessorConfig\n\tmetadataExtractor extractors.PodMetadataExtractor\n\tnetclsProgrammer  extractors.PodNetclsProgrammer\n\tsandboxExtractor  extractors.PodSandboxExtractor\n\tnodeName          string\n\tenableHostPods    bool\n\tdeleteCh          chan<- DeleteEvent\n\tdeleteReconcileCh chan<- struct{}\n\tresyncInfo        *ResyncInfoChan\n\n\tmetadataExtractTimeout time.Duration\n\thandlePUEventTimeout   time.Duration\n\tnetclsProgramTimeout   time.Duration\n}\n\nfunc (r *ReconcilePod) resyncHelper(nn string) {\n\tif r.resyncInfo != nil {\n\t\tr.resyncInfo.SendInfo(nn)\n\t}\n}\n\n// Reconcile reads the state of the cluster for a pod object\nfunc (r *ReconcilePod) Reconcile(request reconcile.Request) (reconcile.Result, error) {\n\tctx := context.Background()\n\tnn := request.NamespacedName.String()\n\n\t// we do this very early on:\n\t// whatever happened to the processing of this pod event, we are telling the Resync handler\n\t// that we have seen it. 
Even if we have not sent an event to the policy engine,\n\t// it means that most likely we are okay for an existing PU to be deleted first\n\tdefer r.resyncHelper(nn)\n\n\tvar puID, sandboxID string\n\tvar err error\n\t// Fetch the corresponding pod object.\n\tpod := &corev1.Pod{}\n\tif err := r.client.Get(ctx, request.NamespacedName, pod); err != nil {\n\t\tif errors.IsNotFound(err) {\n\t\t\tr.deleteReconcileCh <- struct{}{}\n\t\t\treturn reconcile.Result{}, nil\n\t\t}\n\t\t// Otherwise, we retry.\n\t\treturn reconcile.Result{}, err\n\t}\n\n\tsandboxID, err = r.sandboxExtractor(ctx, pod)\n\tif err != nil {\n\t\t// Do nothing if we can't find the sandboxID\n\t\tzap.L().Debug(\"Pod reconcile: cannot extract the SandboxID\", zap.String(\"podname\", nn))\n\t}\n\tpuID = string(pod.GetUID())\n\t// abort immediately if this is a HostNetwork pod and we don't want to activate them\n\t// NOTE: this is already done in the mapper; however, this additional check does not hurt\n\tif pod.Spec.HostNetwork && !r.enableHostPods {\n\t\tzap.L().Debug(\"Pod is a HostNetwork pod, but enableHostPods is false\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn))\n\t\treturn reconcile.Result{}, nil\n\t}\n\n\t// it looks like we can miss events for all sorts of unknown reasons;\n\t// if we reconcile though and the pod exists, we definitely know\n\t// that it must go away at some point, so always register it with the delete controller\n\tr.deleteCh <- DeleteEvent{\n\t\tPodUID:        puID,\n\t\tSandboxID:     sandboxID,\n\t\tNamespaceName: request.NamespacedName,\n\t}\n\n\t// try to find out if any of the containers have been started yet\n\t// this is static information on the pod, so we don't need to care about the phase for determining that\n\t// NOTE: This is important because InitContainers are started during the PodPending phase which is\n\t//       what we need to rely on for activation as early as possible\n\tvar started bool\n\tfor _, status := range 
pod.Status.InitContainerStatuses {\n\t\tif status.State.Running != nil {\n\t\t\tstarted = true\n\t\t\tbreak\n\t\t}\n\t}\n\tif !started {\n\t\tfor _, status := range pod.Status.ContainerStatuses {\n\t\t\tif status.State.Running != nil {\n\t\t\t\tstarted = true\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\n\tswitch pod.Status.Phase {\n\tcase corev1.PodPending:\n\t\tfallthrough\n\tcase corev1.PodRunning:\n\t\tzap.L().Debug(\"PodPending / PodRunning\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn), zap.Bool(\"anyContainerStarted\", started))\n\n\t\t// now try to do the metadata extraction\n\t\textractCtx, extractCancel := context.WithTimeout(ctx, r.metadataExtractTimeout)\n\t\tdefer extractCancel()\n\t\tpuRuntime, err := r.metadataExtractor(extractCtx, pod, started)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"failed to extract metadata\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn), zap.Error(err))\n\t\t\tr.recorder.Eventf(pod, \"Warning\", \"PUExtractMetadata\", \"PU '%s' failed to extract metadata: %s\", puID, err.Error())\n\t\t\treturn reconcile.Result{}, err\n\t\t}\n\n\t\t// now create/update the PU\n\t\t// every HandlePUEvent call gets done in this context\n\t\thandlePUCtx, handlePUCancel := context.WithTimeout(ctx, r.handlePUEventTimeout)\n\t\tdefer handlePUCancel()\n\t\tif err := r.handler.Policy.HandlePUEvent(\n\t\t\thandlePUCtx,\n\t\t\tpuID,\n\t\t\tcommon.EventUpdate,\n\t\t\tpuRuntime,\n\t\t); err != nil {\n\t\t\tzap.L().Error(\"failed to handle update event\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn), zap.Error(err))\n\t\t\tr.recorder.Eventf(pod, \"Warning\", \"PUUpdate\", \"failed to handle update event for PU '%s': %s\", puID, err.Error())\n\t\t\t// return reconcile.Result{}, err\n\t\t} else {\n\t\t\tr.recorder.Eventf(pod, \"Normal\", \"PUUpdate\", \"PU '%s' updated successfully\", puID)\n\t\t}\n\n\t\t// NOTE: a pod that is terminating is going to reconcile as well in the PodRunning phase,\n\t\t// however, 
it will have the deletion timestamp set which is an indicator for us that it is\n\t\t// shutting down. It means for us that we don't have to start anything anymore. We can safely stop\n\t\t// the PU when the phase is PodSucceeded/PodFailed. However, we sent an update event above and included\n\t\t// some new tags from the metadata extractor.\n\t\tif pod.DeletionTimestamp != nil {\n\t\t\treturn reconcile.Result{}, nil\n\t\t}\n\t\t// If the pod hasn't started or if there is no sandbox present, requeue.\n\t\tif sandboxID == \"\" || !started {\n\t\t\treturn reconcile.Result{Requeue: true}, nil\n\t\t}\n\t\tif started {\n\t\t\t// if the metadata extractor is missing the PID or nspath, we need to try again\n\t\t\t// we need it for starting the PU. However, only require this if we are not in host network mode.\n\t\t\t// NOTE: this can happen for example if the containers are not in a running state on their own\n\t\t\tif !pod.Spec.HostNetwork && len(puRuntime.NSPath()) == 0 && puRuntime.Pid() == 0 {\n\t\t\t\tzap.L().Error(\"Kubernetes thinks a container is running, however, we failed to extract a PID or NSPath with the metadata extractor. 
Requeueing...\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn))\n\t\t\t\tr.recorder.Eventf(pod, \"Warning\", \"PUStart\", \"PU '%s' failed to extract netns\", puID)\n\t\t\t\treturn reconcile.Result{}, ErrNetnsExtractionMissing\n\t\t\t}\n\n\t\t\t// now start the PU\n\t\t\t// every HandlePUEvent call gets done in this context\n\t\t\thandlePUStartCtx, handlePUStartCancel := context.WithTimeout(ctx, r.handlePUEventTimeout)\n\t\t\tdefer handlePUStartCancel()\n\t\t\tif err := r.handler.Policy.HandlePUEvent(\n\t\t\t\thandlePUStartCtx,\n\t\t\t\tpuID,\n\t\t\t\tcommon.EventStart,\n\t\t\t\tpuRuntime,\n\t\t\t); err != nil {\n\t\t\t\tif policy.IsErrPUAlreadyActivated(err) {\n\t\t\t\t\t// abort early if this PU has already been activated before\n\t\t\t\t\tzap.L().Debug(\"PU has already been activated\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn), zap.Error(err))\n\t\t\t\t} else {\n\t\t\t\t\tzap.L().Error(\"failed to handle start event\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn), zap.Error(err))\n\t\t\t\t\tr.recorder.Eventf(pod, \"Warning\", \"PUStart\", \"PU '%s' failed to start: %s\", puID, err.Error())\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tr.recorder.Eventf(pod, \"Normal\", \"PUStart\", \"PU '%s' started successfully\", puID)\n\t\t\t}\n\n\t\t\t// if this is a host network pod, we need to program the net_cls cgroup\n\t\t\tif pod.Spec.HostNetwork {\n\t\t\t\tnetclsProgramCtx, netclsProgramCancel := context.WithTimeout(ctx, r.netclsProgramTimeout)\n\t\t\t\tdefer netclsProgramCancel()\n\t\t\t\tif err := r.netclsProgrammer(netclsProgramCtx, pod, puRuntime); err != nil {\n\t\t\t\t\tif extractors.IsErrNetclsAlreadyProgrammed(err) {\n\t\t\t\t\t\tzap.L().Debug(\"net_cls cgroup has already been programmed previously\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn), zap.Error(err))\n\t\t\t\t\t} else if extractors.IsErrNoHostNetworkPod(err) {\n\t\t\t\t\t\tzap.L().Error(\"net_cls cgroup programmer told us that this is 
no host network pod.\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn), zap.Error(err))\n\t\t\t\t\t} else {\n\t\t\t\t\t\tzap.L().Error(\"failed to program net_cls cgroup of pod\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn), zap.Error(err))\n\t\t\t\t\t\tr.recorder.Eventf(pod, \"Warning\", \"PUStart\", \"Host Network PU '%s' failed to program its net_cls cgroups: %s\", puID, err.Error())\n\t\t\t\t\t\treturn reconcile.Result{}, err\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tzap.L().Debug(\"net_cls cgroup has been successfully programmed for trireme\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn))\n\t\t\t\t\tr.recorder.Eventf(pod, \"Normal\", \"PUStart\", \"Host Network PU '%s' has successfully programmed its net_cls cgroups\", puID)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn reconcile.Result{}, nil\n\n\tcase corev1.PodSucceeded:\n\t\tfallthrough\n\tcase corev1.PodFailed:\n\t\tzap.L().Debug(\"PodSucceeded / PodFailed\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn))\n\t\t// do metadata extraction regardless of them being stopped\n\t\t//\n\t\t// there is the edge case that the enforcer is starting up and we encounter the pod for the first time\n\t\t// in stopped state, so we have to do metadata extraction here as well\n\t\textractCtx, extractCancel := context.WithTimeout(ctx, r.metadataExtractTimeout)\n\t\tdefer extractCancel()\n\t\tpuRuntime, err := r.metadataExtractor(extractCtx, pod, started)\n\t\tif err != nil {\n\t\t\tzap.L().Error(\"failed to extract metadata\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn), zap.Error(err))\n\t\t\tr.recorder.Eventf(pod, \"Warning\", \"PUExtractMetadata\", \"PU '%s' failed to extract metadata: %s\", puID, err.Error())\n\t\t\treturn reconcile.Result{}, err\n\t\t}\n\n\t\t// every HandlePUEvent call gets done in this context\n\t\thandlePUCtx, handlePUCancel := context.WithTimeout(ctx, r.handlePUEventTimeout)\n\t\tdefer handlePUCancel()\n\n\t\tif err := 
r.handler.Policy.HandlePUEvent(\n\t\t\thandlePUCtx,\n\t\t\tpuID,\n\t\t\tcommon.EventUpdate,\n\t\t\tpuRuntime,\n\t\t); err != nil {\n\t\t\tzap.L().Error(\"failed to handle update event\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn), zap.Error(err))\n\t\t\tr.recorder.Eventf(pod, \"Warning\", \"PUUpdate\", \"failed to handle update event for PU '%s': %s\", puID, err.Error())\n\t\t\t// return reconcile.Result{}, err\n\t\t} else {\n\t\t\tr.recorder.Eventf(pod, \"Normal\", \"PUUpdate\", \"PU '%s' updated successfully\", puID)\n\t\t}\n\n\t\tif err := r.handler.Policy.HandlePUEvent(\n\t\t\thandlePUCtx,\n\t\t\tpuID,\n\t\t\tcommon.EventStop,\n\t\t\tpuRuntime,\n\t\t); err != nil {\n\t\t\tzap.L().Error(\"failed to handle stop event\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn), zap.Error(err))\n\t\t\tr.recorder.Eventf(pod, \"Warning\", \"PUStop\", \"PU '%s' failed to stop: %s\", puID, err.Error())\n\t\t} else {\n\t\t\tr.recorder.Eventf(pod, \"Normal\", \"PUStop\", \"PU '%s' has been successfully stopped\", puID)\n\t\t}\n\n\t\t// we don't need to reconcile\n\t\t// sending the stop event is enough\n\t\treturn reconcile.Result{}, nil\n\n\tcase corev1.PodUnknown:\n\t\tzap.L().Error(\"pod is in unknown state\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn))\n\n\t\t// we don't need to retry, there is nothing *we* can do about it to fix this\n\t\treturn reconcile.Result{}, nil\n\tdefault:\n\t\tzap.L().Error(\"unknown pod phase\", zap.String(\"puID\", puID), zap.String(\"namespacedName\", nn), zap.String(\"podPhase\", string(pod.Status.Phase)))\n\n\t\t// we don't need to retry, there is nothing *we* can do about it to fix this\n\t\treturn reconcile.Result{}, nil\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/pod/controller_test.go",
    "content": "// +build linux\n\npackage podmonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"testing\"\n\t\"time\"\n\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.aporeto.io/trireme-lib/policy/mockpolicy\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/runtime\"\n\t\"k8s.io/apimachinery/pkg/types\"\n\n\tfakeclient \"sigs.k8s.io/controller-runtime/pkg/client/fake\"\n\t\"sigs.k8s.io/controller-runtime/pkg/reconcile\"\n)\n\n// TODO: should be a mock, but how to create it? we don't even vendor in tireme-lib\ntype fakeRecorder struct{}\n\nfunc (r *fakeRecorder) Event(object runtime.Object, eventtype, reason, message string) {\n}\nfunc (r *fakeRecorder) Eventf(object runtime.Object, eventtype, reason, messageFmt string, args ...interface{}) {\n}\nfunc (r *fakeRecorder) PastEventf(object runtime.Object, timestamp metav1.Time, eventtype, reason, messageFmt string, args ...interface{}) {\n}\nfunc (r *fakeRecorder) AnnotatedEventf(object runtime.Object, annotations map[string]string, eventtype, reason, messageFmt string, args ...interface{}) {\n}\n\nfunc TestController(t *testing.T) {\n\tConvey(\"Given a reconciler\", t, func() {\n\t\tctrl := gomock.NewController(t)\n\t\tdefer ctrl.Finish()\n\t\t// ctx := context.TODO()\n\n\t\tfailure := fmt.Errorf(\"fail hard\")\n\n\t\tpod1 := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"pod1\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t\tUID:       types.UID(\"default/pod1\"),\n\t\t\t},\n\t\t}\n\t\tpod2 := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"pod2\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t\tUID:       types.UID(\"default/pod2\"),\n\t\t\t},\n\t\t\tSpec: 
corev1.PodSpec{\n\t\t\t\tHostNetwork: true,\n\t\t\t},\n\t\t}\n\t\tpod3 := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:              \"pod3\",\n\t\t\t\tNamespace:         \"default\",\n\t\t\t\tUID:               types.UID(\"default/pod3\"),\n\t\t\t\tDeletionTimestamp: &metav1.Time{Time: time.Now()},\n\t\t\t},\n\t\t\tStatus: corev1.PodStatus{\n\t\t\t\tPhase: corev1.PodRunning,\n\t\t\t},\n\t\t}\n\t\tpodUnknown := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"unknown\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t\tUID:       types.UID(\"default/unknown\"),\n\t\t\t},\n\t\t\tStatus: corev1.PodStatus{\n\t\t\t\tPhase: corev1.PodUnknown,\n\t\t\t},\n\t\t}\n\t\tpodUnrecognized := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"unrecognized\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t\tUID:       types.UID(\"default/unrecognized\"),\n\t\t\t},\n\t\t\tStatus: corev1.PodStatus{\n\t\t\t\tPhase: corev1.PodPhase(\"not-really-a-pod-phase\"),\n\t\t\t},\n\t\t}\n\t\tpodFailed := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"failed\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t\tUID:       types.UID(\"default/failed\"),\n\t\t\t},\n\t\t\tStatus: corev1.PodStatus{\n\t\t\t\tPhase: corev1.PodFailed,\n\t\t\t},\n\t\t}\n\t\tpodSucceeded := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"succeeded\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t\tUID:       types.UID(\"default/succeeded\"),\n\t\t\t},\n\t\t\tStatus: corev1.PodStatus{\n\t\t\t\tPhase: corev1.PodSucceeded,\n\t\t\t},\n\t\t}\n\t\tpodPending := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"pending\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t\tUID:       types.UID(\"default/pending\"),\n\t\t\t},\n\t\t\tStatus: corev1.PodStatus{\n\t\t\t\tPhase: corev1.PodPending,\n\t\t\t},\n\t\t}\n\t\tpodPendingAndStarted := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      
\"pendingAndStarted\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t\tUID:       types.UID(\"default/pendingAndStarted\"),\n\t\t\t},\n\t\t\tStatus: corev1.PodStatus{\n\t\t\t\tPhase: corev1.PodPending,\n\t\t\t\tInitContainerStatuses: []corev1.ContainerStatus{\n\t\t\t\t\t{\n\t\t\t\t\t\tState: corev1.ContainerState{\n\t\t\t\t\t\t\tRunning: &corev1.ContainerStateRunning{\n\t\t\t\t\t\t\t\tStartedAt: metav1.Time{Time: time.Now()},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t\tpodRunningNotStarted := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"runningNotStarted\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t\tUID:       types.UID(\"default/runningNotStarted\"),\n\t\t\t},\n\t\t\tStatus: corev1.PodStatus{\n\t\t\t\tPhase: corev1.PodRunning,\n\t\t\t},\n\t\t}\n\t\tpodRunning := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"running\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t\tUID:       types.UID(\"default/running\"),\n\t\t\t},\n\t\t\tStatus: corev1.PodStatus{\n\t\t\t\tPhase: corev1.PodRunning,\n\t\t\t\tInitContainerStatuses: []corev1.ContainerStatus{\n\t\t\t\t\t{\n\t\t\t\t\t\tState: corev1.ContainerState{\n\t\t\t\t\t\t\tTerminated: &corev1.ContainerStateTerminated{\n\t\t\t\t\t\t\t\tExitCode: 0,\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tContainerStatuses: []corev1.ContainerStatus{\n\t\t\t\t\t{\n\t\t\t\t\t\tState: corev1.ContainerState{\n\t\t\t\t\t\t\tRunning: &corev1.ContainerStateRunning{\n\t\t\t\t\t\t\t\tStartedAt: metav1.Time{Time: time.Now()},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t\tpodRunningHostNetwork := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"runningHostNetwork\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t\tUID:       types.UID(\"default/runningHostNetwork\"),\n\t\t\t},\n\t\t\tSpec: corev1.PodSpec{\n\t\t\t\tHostNetwork: true,\n\t\t\t},\n\t\t\tStatus: corev1.PodStatus{\n\t\t\t\tPhase: 
 corev1.PodRunning,\n\t\t\t\tContainerStatuses: []corev1.ContainerStatus{\n\t\t\t\t\t{\n\t\t\t\t\t\tState: corev1.ContainerState{\n\t\t\t\t\t\t\tRunning: &corev1.ContainerStateRunning{\n\t\t\t\t\t\t\t\tStartedAt: metav1.Time{Time: time.Now()},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t}\n\t\tc := fakeclient.NewFakeClient(pod1, pod2, pod3, podUnknown, podUnrecognized, podSucceeded, podFailed, podPending, podPendingAndStarted, podRunningNotStarted, podRunning, podRunningHostNetwork)\n\n\t\thandler := mockpolicy.NewMockResolver(ctrl)\n\n\t\tmetadataExtractor := func(ctx context.Context, p *corev1.Pod, extractNetns bool) (*policy.PURuntime, error) {\n\t\t\treturn nil, nil\n\t\t}\n\t\tnetclsProgrammer := func(context.Context, *corev1.Pod, policy.RuntimeReader) error {\n\t\t\treturn nil\n\t\t}\n\n\t\tsandboxExtractor := func(context.Context, *corev1.Pod) (string, error) {\n\t\t\treturn \"\", nil\n\t\t}\n\n\t\tsandboxID := \"test\"\n\t\t// we will only send all delete events in this test, we are not going to handle them\n\t\tdeleteCh := make(chan DeleteEvent, 1000)\n\t\tdeleteReconcileCh := make(chan struct{}, 1000)\n\n\t\tpc := &config.ProcessorConfig{\n\t\t\tPolicy: handler,\n\t\t}\n\n\t\tr := &ReconcilePod{\n\t\t\tclient:            c,\n\t\t\trecorder:          &fakeRecorder{},\n\t\t\thandler:           pc,\n\t\t\tmetadataExtractor: metadataExtractor,\n\t\t\tnetclsProgrammer:  netclsProgrammer,\n\t\t\tsandboxExtractor:  sandboxExtractor,\n\t\t\tnodeName:          \"testing-node\",\n\t\t\tenableHostPods:    true,\n\t\t\tdeleteCh:          deleteCh,\n\t\t\tdeleteReconcileCh: deleteReconcileCh,\n\t\t\tresyncInfo:        NewResyncInfoChan(),\n\n\t\t\t// taken from original file\n\t\t\thandlePUEventTimeout:   5 * time.Second,\n\t\t\tmetadataExtractTimeout: 3 * time.Second,\n\t\t\tnetclsProgramTimeout:   2 * time.Second,\n\t\t}\n\n\t\tConvey(\"a non-existing pod should trigger a destroy event without any error\", func() 
{\n\t\t\t//handler.EXPECT().HandlePUEvent(gomock.Any(), \"b/a\", common.EventDestroy, gomock.Any()).Return(nil).Times(1)\n\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"a\", Namespace: \"b\"}})\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"a non-existing pod should trigger a destroy event, and *not* fail if it cannot handle the destroy\", func() {\n\t\t\t//handler.EXPECT().HandlePUEvent(gomock.Any(), \"b/a\", common.EventDestroy, gomock.Any()).Return(fmt.Errorf(\"stopping failed\")).Times(1)\n\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"a\", Namespace: \"b\"}})\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"an existing pod with HostNetwork=true, but host pod activation disabled, should silently return\", func() {\n\t\t\tr.enableHostPods = false\n\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"pod2\", Namespace: \"default\"}})\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"a pod which is terminating should update metadata and silently return\", func() {\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/pod3\", common.EventUpdate, gomock.Any()).Return(nil).Times(1)\n\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"pod3\", Namespace: \"default\"}})\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"a pod which is in PodUnknown state should silently return\", func() {\n\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"unknown\", Namespace: \"default\"}})\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"a pod which has an unrecognized pod phase should silently return\", func() {\n\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"unrecognized\", Namespace: \"default\"}})\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"a pod which is in PodSucceeded or PodFailed state should try to 
stop the PU\", func() {\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/failed\", common.EventUpdate, gomock.Any()).Return(fmt.Errorf(\"update failed\")).Times(1)\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/failed\", common.EventStop, gomock.Any()).Return(fmt.Errorf(\"stop failed\")).Times(1)\n\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"failed\", Namespace: \"default\"}})\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/succeeded\", common.EventUpdate, gomock.Any()).Return(nil).Times(1)\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/succeeded\", common.EventStop, gomock.Any()).Return(nil).Times(1)\n\t\t\t_, err = r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"succeeded\", Namespace: \"default\"}})\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/succeeded\", common.EventUpdate, gomock.Any()).Return(policy.ErrPUNotFound(\"default/succeeded\", nil)).Times(1)\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/succeeded\", common.EventStop, gomock.Any()).Return(policy.ErrPUNotFound(\"default/succeeded\", nil)).Times(1)\n\t\t\t_, err = r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"succeeded\", Namespace: \"default\"}})\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tConvey(\"and retry if metadata extraction fails\", func() {\n\t\t\t\tr.metadataExtractor = func(ctx context.Context, p *corev1.Pod, extractNetns bool) (*policy.PURuntime, error) {\n\t\t\t\t\treturn nil, failure\n\t\t\t\t}\n\t\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"succeeded\", Namespace: \"default\"}})\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\tSo(err, ShouldEqual, failure)\n\t\t\t})\n\n\t\t\tReset(func() {\n\t\t\t\tr.metadataExtractor = metadataExtractor\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"a pod in pending state should update 
or create a PU if it does not already exist\", func() {\n\t\t\t// metadata extractor needs to change tags in order to provoke an update call\n\t\t\tr.metadataExtractor = func(ctx context.Context, p *corev1.Pod, extractNetns bool) (*policy.PURuntime, error) {\n\t\t\t\tru := policy.NewPURuntimeWithDefaults()\n\t\t\t\tru.SetTags(policy.NewTagStoreFromMap(map[string]string{\"exists\": \"exists\", \"a\": \"b\"}))\n\t\t\t\treturn ru, nil\n\t\t\t}\n\n\t\t\t// update works\n\t\t\texistingRuntime := policy.NewPURuntimeWithDefaults()\n\t\t\texistingRuntime.SetTags(policy.NewTagStoreFromMap(map[string]string{\"exists\": \"exists\"}))\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/pending\", common.EventUpdate, gomock.Any()).Return(nil).Times(1)\n\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"pending\", Namespace: \"default\"}})\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t// update fails hard\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/pending\", common.EventUpdate, gomock.Any()).Return(failure).Times(1)\n\t\t\t_, err = r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"pending\", Namespace: \"default\"}})\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t// PU does not exist, but create fails hard\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/pending\", common.EventUpdate, gomock.Any()).Return(policy.ErrPUNotFound(\"default/pending\", nil)).Times(1)\n\t\t\t_, err = r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"pending\", Namespace: \"default\"}})\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t// PU does not exist, but create succeeds\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/pending\", common.EventUpdate, gomock.Any()).Return(policy.ErrPUNotFound(\"default/pending\", nil)).Times(1)\n\t\t\t_, err = r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"pending\", Namespace: \"default\"}})\n\t\t\tSo(err, 
ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"a pod in pending state which has an init container started, should silently return if everything could be started\", func() {\n\t\t\tr.metadataExtractor = func(ctx context.Context, p *corev1.Pod, extractNetns bool) (*policy.PURuntime, error) {\n\t\t\t\treturn policy.NewPURuntime(\"default/pendingAndStarted\", 42, \"\", nil, nil, common.ContainerPU, nil), nil\n\t\t\t}\n\t\t\tr.sandboxExtractor = func(context.Context, *corev1.Pod) (string, error) {\n\t\t\t\treturn sandboxID, nil\n\t\t\t}\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/pendingAndStarted\", common.EventUpdate, gomock.Any()).Return(nil).Times(1)\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/pendingAndStarted\", common.EventStart, gomock.Any()).Return(nil).Times(1)\n\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"pendingAndStarted\", Namespace: \"default\"}})\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"a pod in running state should silently return if no containers have been started yet\", func() {\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/runningNotStarted\", common.EventUpdate, gomock.Any()).Return(nil).Times(1)\n\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"runningNotStarted\", Namespace: \"default\"}})\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"a pod in running state\", func() {\n\t\t\tConvey(\"should retry if metadata extraction fails\", func() {\n\t\t\t\tr.metadataExtractor = func(ctx context.Context, p *corev1.Pod, extractNetns bool) (*policy.PURuntime, error) {\n\t\t\t\t\treturn nil, failure\n\t\t\t\t}\n\t\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"running\", Namespace: \"default\"}})\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\tSo(err, ShouldEqual, failure)\n\t\t\t})\n\t\t\tConvey(\"should retry if metadata extraction succeeded, but no PID nor netns path were found 
and this is not a hostnetwork pod\", func() {\n\t\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/running\", common.EventUpdate, gomock.Any()).Return(nil).Times(1)\n\t\t\t\tr.metadataExtractor = func(ctx context.Context, p *corev1.Pod, extractNetns bool) (*policy.PURuntime, error) {\n\t\t\t\t\treturn policy.NewPURuntimeWithDefaults(), nil\n\t\t\t\t}\n\t\t\t\tr.sandboxExtractor = func(context.Context, *corev1.Pod) (string, error) {\n\t\t\t\t\treturn sandboxID, nil\n\t\t\t\t}\n\t\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"running\", Namespace: \"default\"}})\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\tSo(err, ShouldEqual, ErrNetnsExtractionMissing)\n\t\t\t})\n\t\t\tConvey(\"should *not* fail if metadata and PID/netnspath extraction succeeded, but the Start PU event fails\", func() {\n\t\t\t\tr.metadataExtractor = func(ctx context.Context, p *corev1.Pod, extractNetns bool) (*policy.PURuntime, error) {\n\t\t\t\t\treturn policy.NewPURuntime(\"default/running\", 42, \"\", nil, nil, common.ContainerPU, nil), nil\n\t\t\t\t}\n\t\t\t\tr.sandboxExtractor = func(context.Context, *corev1.Pod) (string, error) {\n\t\t\t\t\treturn sandboxID, nil\n\t\t\t\t}\n\t\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/running\", common.EventUpdate, gomock.Any()).Return(nil).Times(1)\n\t\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/running\", common.EventStart, gomock.Any()).Return(failure).Times(1)\n\t\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"running\", Namespace: \"default\"}})\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t\tConvey(\"should return silently if metadata and PID/netnspath extraction succeeded, but the PU has already been activated\", func() {\n\t\t\t\tr.metadataExtractor = func(ctx context.Context, p *corev1.Pod, extractNetns bool) (*policy.PURuntime, error) {\n\t\t\t\t\treturn policy.NewPURuntime(\"default/running\", 42, \"\", nil, nil, 
common.ContainerPU, nil), nil\n\t\t\t\t}\n\t\t\t\tr.sandboxExtractor = func(context.Context, *corev1.Pod) (string, error) {\n\t\t\t\t\treturn sandboxID, nil\n\t\t\t\t}\n\t\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/running\", common.EventUpdate, gomock.Any()).Return(nil).Times(1)\n\t\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/running\", common.EventStart, gomock.Any()).Return(policy.ErrPUAlreadyActivated(\"default/running\", nil)).Times(1)\n\t\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"running\", Namespace: \"default\"}})\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t\tConvey(\"should return silently if metadata and PID/netnspath extraction succeeded, and the PU could be successfully activated\", func() {\n\t\t\t\tr.metadataExtractor = func(ctx context.Context, p *corev1.Pod, extractNetns bool) (*policy.PURuntime, error) {\n\t\t\t\t\treturn policy.NewPURuntime(\"default/running\", 42, \"\", nil, nil, common.ContainerPU, nil), nil\n\t\t\t\t}\n\t\t\t\tr.sandboxExtractor = func(context.Context, *corev1.Pod) (string, error) {\n\t\t\t\t\treturn sandboxID, nil\n\t\t\t\t}\n\t\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/running\", common.EventUpdate, gomock.Any()).Return(nil).Times(1)\n\t\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/running\", common.EventStart, gomock.Any()).Return(nil).Times(1)\n\t\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"running\", Namespace: \"default\"}})\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"a HostNetwork=true pod should try to start the PU and try to program the netcls cgroup\", func() {\n\t\t\tr.metadataExtractor = func(ctx context.Context, p *corev1.Pod, extractNetns bool) (*policy.PURuntime, error) {\n\t\t\t\treturn policy.NewPURuntime(\"default/runningHostNetwork\", 0, \"\", nil, nil, common.LinuxProcessPU, nil), 
nil\n\t\t\t}\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/runningHostNetwork\", common.EventUpdate, gomock.Any()).Return(nil).Times(1)\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), \"default/runningHostNetwork\", common.EventStart, gomock.Any()).Return(nil).AnyTimes()\n\t\t\tConvey(\"and succeed if metadata extraction succeeded, and netcls cgroup programming succeeded\", func() {\n\t\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"runningHostNetwork\", Namespace: \"default\"}})\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t\tConvey(\"and succeed if metadata extraction succeeded, and netcls cgroup programming failed with netcls already programmed\", func() {\n\t\t\t\tr.netclsProgrammer = func(context.Context, *corev1.Pod, policy.RuntimeReader) error {\n\t\t\t\t\treturn extractors.ErrNetclsAlreadyProgrammed(\"mark\")\n\t\t\t\t}\n\t\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"runningHostNetwork\", Namespace: \"default\"}})\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t\tConvey(\"and return silently if metadata extraction succeeded, but netcls cgroup programming discovered that this pod is not a host network pod (cannot recover)\", func() {\n\t\t\t\tr.netclsProgrammer = func(context.Context, *corev1.Pod, policy.RuntimeReader) error {\n\t\t\t\t\treturn extractors.ErrNoHostNetworkPod\n\t\t\t\t}\n\t\t\t\t_, err := r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"runningHostNetwork\", Namespace: \"default\"}})\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t\tConvey(\"should fail if metadata extraction succeeded, but netcls cgroup programming fails\", func() {\n\t\t\t\tr.netclsProgrammer = func(context.Context, *corev1.Pod, policy.RuntimeReader) error {\n\t\t\t\t\treturn failure\n\t\t\t\t}\n\t\t\t\tr.sandboxExtractor = func(context.Context, *corev1.Pod) (string, error) {\n\t\t\t\t\treturn sandboxID, nil\n\t\t\t\t}\n\t\t\t\t_, err := 
r.Reconcile(reconcile.Request{NamespacedName: types.NamespacedName{Name: \"runningHostNetwork\", Namespace: \"default\"}})\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\tSo(err, ShouldEqual, failure)\n\t\t\t})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "monitor/internal/pod/delete_controller.go",
    "content": "// +build !windows\n\npackage podmonitor\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/api/errors\"\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\t\"sigs.k8s.io/controller-runtime/pkg/event\"\n)\n\n// deleteControllerReconcileFunc is the reconciler function signature for the DeleteController\ntype deleteControllerReconcileFunc func(context.Context, client.Client, string, *config.ProcessorConfig, time.Duration, map[string]DeleteObject, extractors.PodSandboxExtractor, chan event.GenericEvent)\n\n// DeleteController is responsible for cleaning up after Kubernetes because we\n// are missing our native ID on the last reconcile event where the pod has already\n// been deleted. This is also more reliable because we are filling this controller\n// with events starting from the time when we first see a deletion timestamp on a pod.\n// It pretty much facilitates the work of a finalizer without needing a finalizer and\n// also only kicking in once a pod has *really* been deleted.\ntype DeleteController struct {\n\tclient   client.Client\n\tnodeName string\n\thandler  *config.ProcessorConfig\n\n\tdeleteCh           chan DeleteEvent\n\treconcileCh        chan struct{}\n\treconcileFunc      deleteControllerReconcileFunc\n\ttickerPeriod       time.Duration\n\titemProcessTimeout time.Duration\n\tsandboxExtractor   extractors.PodSandboxExtractor\n\teventsCh           chan event.GenericEvent\n}\n\n// DeleteObject is the obj used to store in the event map.\ntype DeleteObject struct {\n\tpodUID    string\n\tsandboxID string\n\tpodName   client.ObjectKey\n}\n\n// NewDeleteController creates a new DeleteController.\nfunc NewDeleteController(c client.Client, nodeName string, pc *config.ProcessorConfig, 
sandboxExtractor extractors.PodSandboxExtractor, eventsCh chan event.GenericEvent) *DeleteController {\n\treturn &DeleteController{\n\t\tclient:             c,\n\t\tnodeName:           nodeName,\n\t\thandler:            pc,\n\t\tdeleteCh:           make(chan DeleteEvent, 1000),\n\t\treconcileCh:        make(chan struct{}),\n\t\treconcileFunc:      deleteControllerReconcile,\n\t\ttickerPeriod:       5 * time.Second,\n\t\titemProcessTimeout: 30 * time.Second,\n\t\tsandboxExtractor:   sandboxExtractor,\n\t\teventsCh:           eventsCh,\n\t}\n}\n\n// GetDeleteCh returns the delete channel on which to queue delete events\nfunc (c *DeleteController) GetDeleteCh() chan<- DeleteEvent {\n\treturn c.deleteCh\n}\n\n// GetReconcileCh returns the channel on which to notify the controller about an immediate reconcile event\nfunc (c *DeleteController) GetReconcileCh() chan<- struct{} {\n\treturn c.reconcileCh\n}\n\n// Start implements the Runnable interface\nfunc (c *DeleteController) Start(z <-chan struct{}) error {\n\tbackgroundCtx := context.Background()\n\tt := time.NewTicker(c.tickerPeriod)\n\tm := make(map[string]DeleteObject)\n\n\t// the poor man's controller loop\n\tfor {\n\t\tselect {\n\t\tcase ev := <-c.deleteCh:\n\t\t\tobj := DeleteObject{podUID: ev.PodUID, sandboxID: ev.SandboxID, podName: ev.NamespaceName}\n\t\t\t// do not update the map here; insert only if not present.\n\t\t\tif _, ok := m[ev.PodUID]; !ok {\n\t\t\t\tm[ev.PodUID] = obj\n\t\t\t}\n\t\tcase <-c.reconcileCh:\n\t\t\tc.reconcileFunc(backgroundCtx, c.client, c.nodeName, c.handler, c.itemProcessTimeout, m, c.sandboxExtractor, c.eventsCh)\n\t\tcase <-t.C:\n\t\t\tc.reconcileFunc(backgroundCtx, c.client, c.nodeName, c.handler, c.itemProcessTimeout, m, c.sandboxExtractor, c.eventsCh)\n\t\tcase <-z:\n\t\t\tt.Stop()\n\t\t\treturn nil\n\t\t}\n\t}\n}\n\n// deleteControllerReconcile is the real reconciler implementation for the DeleteController\nfunc deleteControllerReconcile(backgroundCtx context.Context, c 
client.Client, nodeName string, pc *config.ProcessorConfig, itemProcessTimeout time.Duration, m map[string]DeleteObject, sandboxExtractor extractors.PodSandboxExtractor, eventCh chan event.GenericEvent) {\n\tfor podUID, req := range m {\n\t\tdeleteControllerProcessItem(backgroundCtx, c, nodeName, pc, itemProcessTimeout, m, podUID, req.podName, sandboxExtractor, eventCh)\n\t}\n}\n\nfunc deleteControllerProcessItem(backgroundCtx context.Context, c client.Client, nodeName string, pc *config.ProcessorConfig, itemProcessTimeout time.Duration, m map[string]DeleteObject, podUID string, req client.ObjectKey, sandboxExtractor extractors.PodSandboxExtractor, eventCh chan event.GenericEvent) {\n\tvar ok bool\n\tvar delObj DeleteObject\n\tif delObj, ok = m[podUID]; !ok {\n\t\tzap.L().Warn(\"DeleteController: nativeID not found in delete controller map\", zap.String(\"nativeID\", podUID))\n\t\treturn\n\t}\n\tctx, cancel := context.WithTimeout(backgroundCtx, itemProcessTimeout)\n\tdefer cancel()\n\tpod := &corev1.Pod{}\n\tif err := c.Get(ctx, req, pod); err != nil {\n\t\tif errors.IsNotFound(err) {\n\t\t\t// this is the normal case: a pod is gone\n\t\t\t// so just send a destroy event\n\t\t\tzap.L().Warn(\"DeleteController: the pod is deleted in the cluster, so call the destroy PU\")\n\t\t\tif err := pc.Policy.HandlePUEvent(\n\t\t\t\tctx,\n\t\t\t\tpodUID,\n\t\t\t\tcommon.EventDestroy,\n\t\t\t\tpolicy.NewPURuntimeWithDefaults(),\n\t\t\t); err != nil {\n\t\t\t\t// we don't really care, we just warn\n\t\t\t\tzap.L().Warn(\"DeleteController: Failed to handle destroy event\", zap.String(\"puID\", podUID), zap.String(\"namespacedName\", req.String()), zap.Error(err))\n\t\t\t}\n\t\t\t// we only fire events away, we don't really care about the error anyway\n\t\t\t// it is up to the policy engine to make sense of that\n\t\t\tdelete(m, podUID)\n\t\t} else {\n\t\t\t// we don't really care, we just warn\n\t\t\tzap.L().Warn(\"DeleteController: Failed to get pod from Kubernetes API\", 
zap.String(\"puID\", podUID), zap.String(\"namespacedName\", req.String()), zap.Error(err))\n\t\t}\n\t\treturn\n\t}\n\n\t// For StatefulSets we need to account for another special case: pods that move between nodes *keep* the same UID, so they won't fit the check below.\n\t// However, we can simply double-check the node name in the same way that we already filter events in the watcher/monitor\n\tif pod.Spec.NodeName != nodeName {\n\t\tzap.L().Warn(\"DeleteController: the pod is now on a different node, send destroy event and delete the cache\", zap.String(\"puID\", podUID), zap.String(\"namespacedName\", req.String()), zap.String(\"podNodeName\", pod.Spec.NodeName), zap.String(\"nodeName\", nodeName))\n\t\tif err := pc.Policy.HandlePUEvent(\n\t\t\tctx,\n\t\t\tpodUID,\n\t\t\tcommon.EventDestroy,\n\t\t\tpolicy.NewPURuntimeWithDefaults(),\n\t\t); err != nil {\n\t\t\t// we don't really care, we just warn\n\t\t\tzap.L().Warn(\"DeleteController: Failed to handle destroy event\", zap.String(\"puID\", podUID), zap.String(\"namespacedName\", req.String()), zap.Error(err))\n\t\t}\n\t\t// we only fire events away, we don't really care about the error anyway\n\t\t// it is up to the policy engine to make sense of that\n\t\tdelete(m, podUID)\n\t\treturn\n\t}\n\n\t// the edge case: a pod with the same namespaced name came up and we have missed a delete event\n\t// this means that this pod belongs to a different PU and must live, therefore we try to delete the old one\n\n\t// the following code also takes care of any restarts in the Pod; the restarts can be caused by either\n\t// the sandbox getting killed or all the containers restarting due to a crash or kill.\n\n\t// Now destroy the PU only in the following cases:\n\t// 1. Simple case: if the pod UIDs do not match, then go ahead and destroy the PU.\n\t// 2. 
When the pod UIDs match, then do the following:\n\t//\t\t2.a Get the current SandboxID from the pod.\n\t// \t\t2.b Get the sandboxID from the map.\n\t// \t\t2.c If the sandbox IDs differ, then send the destroy event for the old (map) sandboxID.\n\n\t// 1st case: if the pod UIDs do not match, then just call the destroy PU event and delete the map entry with the old key.\n\tif string(pod.UID) != delObj.podUID {\n\n\t\tzap.L().Warn(\"DeleteController: Pod does not have expected native ID, we must have missed an event and the same pod was recreated. Trying to destroy PU\", zap.String(\"puID\", podUID), zap.String(\"namespacedName\", req.String()), zap.String(\"podUID\", string(pod.GetUID())))\n\t\tif err := pc.Policy.HandlePUEvent(\n\t\t\tctx,\n\t\t\tpodUID,\n\t\t\tcommon.EventDestroy,\n\t\t\tpolicy.NewPURuntimeWithDefaults(),\n\t\t); err != nil {\n\t\t\t// we don't really care, we just warn\n\t\t\tzap.L().Warn(\"DeleteController: Failed to handle destroy event\", zap.String(\"puID\", podUID), zap.String(\"namespacedName\", req.String()), zap.Error(err))\n\t\t}\n\t\t// we only fire events away, we don't really care about the error anyway\n\t\t// it is up to the policy engine to make sense of that\n\t\tdelete(m, podUID)\n\t\treturn\n\t}\n\n\t// now the 2nd case, when the pod UIDs match\n\tif string(pod.UID) == delObj.podUID {\n\t\tzap.L().Debug(\"DeleteController: the pod UID Match happened for\", zap.String(\"podName:\", req.String()), zap.String(\"podUID\", string(pod.UID)))\n\t\t// 2a get the current sandboxID\n\t\tif sandboxExtractor == nil {\n\t\t\treturn\n\t\t}\n\t\tcurrentSandboxID, err := sandboxExtractor(ctx, pod)\n\t\tif err != nil {\n\t\t\tzap.L().Debug(\"DeleteController: cannot extract the SandboxID, return\", zap.String(\"namespacedName\", req.String()), zap.String(\"podUID\", string(pod.GetUID())))\n\t\t\treturn\n\t\t}\n\t\t// update the map with the sandboxID\n\t\t// here we update the map only if the sandboxID has not been extracted.\n\t\t// The extraction of 
the sandboxID, if missed by the main controller, is handled here: we update the map below.\n\t\tif delObj.sandboxID == \"\" {\n\t\t\tdelObj = DeleteObject{podUID: podUID, sandboxID: currentSandboxID, podName: req}\n\t\t\tm[podUID] = delObj\n\t\t}\n\t\t// 2b get the pod/old sandboxID\n\t\toldSandboxID := delObj.sandboxID\n\n\t\tzap.L().Debug(\"DeleteController:\", zap.String(\" the sandboxID, curr:\", currentSandboxID), zap.String(\" old sandboxID: \", oldSandboxID))\n\t\t// 2c compare the oldSandboxID and currentSandboxID; if they differ then destroy the PU\n\t\tif oldSandboxID != currentSandboxID {\n\t\t\tzap.L().Warn(\"DeleteController: Pod SandboxID differ. Trying to destroy PU\", zap.String(\"namespacedName\", req.String()), zap.String(\"currentSandboxID\", currentSandboxID), zap.String(\"oldSandboxID\", oldSandboxID))\n\t\t\tif err := pc.Policy.HandlePUEvent(\n\t\t\t\tctx,\n\t\t\t\tpodUID,\n\t\t\t\tcommon.EventDestroy,\n\t\t\t\tpolicy.NewPURuntimeWithDefaults(),\n\t\t\t); err != nil {\n\t\t\t\t// we don't really care, we just warn\n\t\t\t\tzap.L().Warn(\"DeleteController: Failed to handle destroy event\", zap.String(\"puID\", podUID), zap.String(\"namespacedName\", req.String()), zap.Error(err))\n\t\t\t}\n\t\t\t// we only fire events away, we don't really care about the error anyway\n\t\t\t// it is up to the policy engine to make sense of that\n\t\t\tdelete(m, podUID)\n\t\t\tzap.L().Warn(\"DeleteController: PU destroyed, now send event for the pod-controller to reconcile\", zap.String(\" podName: \", req.String()))\n\t\t\t// below we send an event to the main pod-controller to reconcile again and to create a PU if it is not created yet.\n\t\t\teventCh <- event.GenericEvent{\n\t\t\t\tObject: pod,\n\t\t\t\tMeta:   pod.GetObjectMeta(),\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/pod/delete_controller_test.go",
    "content": "// +build !windows\n\npackage podmonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/trireme-lib/policy/mockpolicy\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"k8s.io/apimachinery/pkg/types\"\n\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\tfakeclient \"sigs.k8s.io/controller-runtime/pkg/client/fake\"\n\t\"sigs.k8s.io/controller-runtime/pkg/event\"\n)\n\nfunc TestDeleteControllerFunctionality(t *testing.T) {\n\tConvey(\"Given a fake controller-runtime client and a mock policy resolver\", t, func() {\n\t\tctrl := gomock.NewController(t)\n\t\tdefer ctrl.Finish()\n\n\t\tnodeName := \"test1\"\n\t\tpod1 := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"pod1\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t\tUID:       types.UID(\"aaaa\"),\n\t\t\t},\n\t\t\tSpec: corev1.PodSpec{\n\t\t\t\tNodeName: nodeName,\n\t\t\t},\n\t\t}\n\t\tc := fakeclient.NewFakeClient(pod1)\n\t\teventsCh := make(chan event.GenericEvent)\n\t\tgo func() {\n\t\t\tfor {\n\t\t\t\t<-eventsCh\n\t\t\t}\n\t\t}()\n\t\thandler := mockpolicy.NewMockResolver(ctrl)\n\n\t\tpc := &config.ProcessorConfig{\n\t\t\tPolicy: handler,\n\t\t}\n\n\t\tctx := context.Background()\n\t\titemProcessTimeout := 5 * time.Second\n\n\t\tfailure := fmt.Errorf(\"failure\")\n\n\t\tConvey(\"then no destroy events should be sent if there is nothing in the state right now\", func() {\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), common.EventDestroy, gomock.Any()).Return(nil).Times(0)\n\t\t\tm := make(map[string]DeleteObject)\n\t\t\tdeleteControllerReconcile(ctx, c, nodeName, pc, itemProcessTimeout, m, nil, eventsCh)\n\t\t\tSo(m, 
ShouldBeEmpty)\n\t\t})\n\n\t\tConvey(\"then *no* destroy events should be sent if the pod with the same namespaced name and UID still exists in the Kubernetes API\", func() {\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), common.EventDestroy, gomock.Any()).Return(nil).Times(0)\n\t\t\tm := make(map[string]DeleteObject)\n\t\t\tnn := client.ObjectKey{\n\t\t\t\tName:      \"pod1\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t}\n\t\t\tm[\"aaaa\"] = DeleteObject{podUID: \"aaaa\", sandboxID: \"\", podName: nn}\n\t\t\tdeleteControllerReconcile(ctx, c, nodeName, pc, itemProcessTimeout, m, nil, eventsCh)\n\t\t\tSo(m, ShouldHaveLength, 1)\n\t\t})\n\n\t\tConvey(\"then a destroy event should be sent if the pod with the same namespaced name and UID still exists in the Kubernetes API, but is now on a different node\", func() {\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), common.EventDestroy, gomock.Any()).Return(nil).Times(1)\n\t\t\tm := make(map[string]DeleteObject)\n\t\t\tnn := client.ObjectKey{\n\t\t\t\tName:      \"pod1\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t}\n\t\t\tm[\"aaaa\"] = DeleteObject{podUID: \"aaaa\", sandboxID: \"\", podName: nn}\n\t\t\tdeleteControllerReconcile(ctx, c, \"test2\", pc, itemProcessTimeout, m, nil, eventsCh)\n\t\t\tSo(m, ShouldBeEmpty)\n\t\t})\n\n\t\tConvey(\"then a destroy event should be sent if the pod with the same namespaced name but *different* UID exists in the Kubernetes API\", func() {\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), common.EventDestroy, gomock.Any()).Return(nil).Times(1)\n\t\t\tm := make(map[string]DeleteObject)\n\t\t\tnn := client.ObjectKey{\n\t\t\t\tName:      \"pod1\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t}\n\t\t\tm[\"bbbb\"] = DeleteObject{podUID: \"\", sandboxID: \"\", podName: nn}\n\t\t\tdeleteControllerReconcile(ctx, c, nodeName, pc, itemProcessTimeout, m, nil, eventsCh)\n\t\t\tSo(m, ShouldBeEmpty)\n\t\t})\n\n\t\tConvey(\"then a destroy event should be sent 
if the pod does not exist in the Kubernetes API\", func() {\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), common.EventDestroy, gomock.Any()).Return(nil).Times(1)\n\t\t\tm := make(map[string]DeleteObject)\n\n\t\t\tnn := client.ObjectKey{\n\t\t\t\tName:      \"pod2\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t}\n\t\t\tm[\"aaaa\"] = DeleteObject{podUID: \"\", sandboxID: \"\", podName: nn}\n\t\t\tdeleteControllerReconcile(ctx, c, nodeName, pc, itemProcessTimeout, m, nil, eventsCh)\n\t\t\tSo(m, ShouldBeEmpty)\n\t\t})\n\n\t\tConvey(\"then a destroy event should be sent if the pod with the same namespaced name but *different* UID exists in the Kubernetes API, and it should still be removed from the map if it fails\", func() {\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), common.EventDestroy, gomock.Any()).Return(failure).Times(1)\n\t\t\tm := make(map[string]DeleteObject)\n\n\t\t\tnn := client.ObjectKey{\n\t\t\t\tName:      \"pod1\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t}\n\t\t\tm[\"bbbb\"] = DeleteObject{podUID: \"\", sandboxID: \"\", podName: nn}\n\t\t\tdeleteControllerReconcile(ctx, c, nodeName, pc, itemProcessTimeout, m, nil, eventsCh)\n\t\t\tSo(m, ShouldBeEmpty)\n\t\t})\n\n\t\tConvey(\"then a destroy event should be sent if the pod does not exist in the Kubernetes API, and it should still be removed from the map if it fails\", func() {\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), common.EventDestroy, gomock.Any()).Return(failure).Times(1)\n\t\t\tm := make(map[string]DeleteObject)\n\n\t\t\tnn := client.ObjectKey{\n\t\t\t\tName:      \"pod2\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t}\n\t\t\tm[\"aaaa\"] = DeleteObject{podUID: \"\", sandboxID: \"\", podName: nn}\n\t\t\tdeleteControllerReconcile(ctx, c, nodeName, pc, itemProcessTimeout, m, nil, eventsCh)\n\t\t\tSo(m, ShouldBeEmpty)\n\t\t})\n\n\t\tsandboxExtractor := func(context.Context, *corev1.Pod) (string, error) {\n\t\t\treturn \"different\", 
nil\n\t\t}\n\n\t\tConvey(\"then a destroy event should be sent if the pod exists in the Kubernetes API, but the sandbox has changed\", func() {\n\t\t\thandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), common.EventDestroy, gomock.Any()).Return(nil).Times(1)\n\t\t\tm := make(map[string]DeleteObject)\n\n\t\t\tnn := client.ObjectKey{\n\t\t\t\tName:      \"pod1\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t}\n\t\t\tm[\"aaaa\"] = DeleteObject{podUID: \"aaaa\", sandboxID: \"sandbox\", podName: nn}\n\t\t\tdeleteControllerReconcile(ctx, c, nodeName, pc, itemProcessTimeout, m, sandboxExtractor, eventsCh)\n\t\t\tSo(m, ShouldBeEmpty)\n\t\t})\n\t})\n}\n\nfunc TestDeleteController(t *testing.T) {\n\tConvey(\"Given a delete controller\", t, func() {\n\t\tz := make(chan struct{})\n\n\t\tnodeName := \"test1\"\n\t\ttestMap := make(map[string]DeleteObject)\n\t\teventsCh := make(chan event.GenericEvent)\n\t\tgo func() {\n\t\t\t<-eventsCh\n\t\t}()\n\t\t//nolint:unparam\n\t\treconcileFunc := func(ctx context.Context, c client.Client, nodeName string, pc *config.ProcessorConfig, t time.Duration, m map[string]DeleteObject, s extractors.PodSandboxExtractor, eventsCh chan event.GenericEvent) {\n\t\t\tfor k, v := range m {\n\t\t\t\ttestMap[k] = v\n\t\t\t}\n\t\t}\n\n\t\tdc := NewDeleteController(nil, nodeName, nil, nil, eventsCh)\n\t\tdc.deleteCh = make(chan DeleteEvent)\n\t\tdc.reconcileCh = make(chan struct{})\n\t\tdc.tickerPeriod = 1 * time.Second\n\t\tdc.itemProcessTimeout = 1 * time.Second\n\t\tdc.reconcileFunc = reconcileFunc\n\n\t\tConvey(\"it should be able to receive delete events, and access them during a reconcile\", func() {\n\t\t\tev := DeleteEvent{\n\t\t\t\tPodUID: \"aaaa\",\n\t\t\t\tNamespaceName: client.ObjectKey{\n\t\t\t\t\tName:      \"pod1\",\n\t\t\t\t\tNamespace: \"default\",\n\t\t\t\t},\n\t\t\t}\n\t\t\texp := DeleteObject{\n\t\t\t\tpodUID:    \"aaaa\",\n\t\t\t\tsandboxID: \"\",\n\t\t\t\tpodName: client.ObjectKey{\n\t\t\t\t\tName:      \"pod1\",\n\t\t\t\t\tNamespace: 
\"default\",\n\t\t\t\t},\n\t\t\t}\n\t\t\tgo func() {\n\t\t\t\tdc.GetDeleteCh() <- ev\n\t\t\t\tdc.GetReconcileCh() <- struct{}{}\n\t\t\t\tclose(z)\n\t\t\t}()\n\n\t\t\terr := dc.Start(z)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(testMap, ShouldContainKey, ev.PodUID)\n\t\t\tSo(testMap[ev.PodUID], ShouldResemble, exp)\n\t\t})\n\n\t\tConvey(\"it should be able to receive delete events, and access them during a reconcile that was triggered through the ticker\", func() {\n\t\t\tev := DeleteEvent{\n\t\t\t\tPodUID: \"aaaa\",\n\t\t\t\tNamespaceName: client.ObjectKey{\n\t\t\t\t\tName:      \"pod1\",\n\t\t\t\t\tNamespace: \"default\",\n\t\t\t\t},\n\t\t\t}\n\t\t\texp := DeleteObject{\n\t\t\t\tpodUID:    \"aaaa\",\n\t\t\t\tsandboxID: \"\",\n\t\t\t\tpodName: client.ObjectKey{\n\t\t\t\t\tName:      \"pod1\",\n\t\t\t\t\tNamespace: \"default\",\n\t\t\t\t},\n\t\t\t}\n\t\t\tgo func() {\n\t\t\t\tdc.GetDeleteCh() <- ev\n\t\t\t\t// sleeping for twice the ticker period should always trigger the reconcile\n\t\t\t\ttime.Sleep(dc.tickerPeriod * 2)\n\t\t\t\tclose(z)\n\t\t\t}()\n\n\t\t\terr := dc.Start(z)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(testMap, ShouldContainKey, ev.PodUID)\n\t\t\tSo(testMap[ev.PodUID], ShouldResemble, exp)\n\t\t})\n\n\t\tReset(func() {\n\t\t\ttestMap = make(map[string]DeleteObject)\n\t\t\tdc = &DeleteController{\n\t\t\t\tclient:             nil,\n\t\t\t\thandler:            nil,\n\t\t\t\tdeleteCh:           make(chan DeleteEvent),\n\t\t\t\treconcileCh:        make(chan struct{}),\n\t\t\t\ttickerPeriod:       1 * time.Second,\n\t\t\t\titemProcessTimeout: 1 * time.Second,\n\t\t\t\treconcileFunc:      reconcileFunc,\n\t\t\t}\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "monitor/internal/pod/mockcache_test.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: sigs.k8s.io/controller-runtime/pkg/cache (interfaces: Cache)\n\n// Package podmonitor is a generated GoMock package.\npackage podmonitor\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n\tschema \"k8s.io/apimachinery/pkg/runtime/schema\"\n\ttypes \"k8s.io/apimachinery/pkg/types\"\n\tcache \"k8s.io/client-go/tools/cache\"\n\tclient \"sigs.k8s.io/controller-runtime/pkg/client\"\n)\n\n// MockCache is a mock of Cache interface\n// nolint\ntype MockCache struct {\n\tctrl     *gomock.Controller\n\trecorder *MockCacheMockRecorder\n}\n\n// MockCacheMockRecorder is the mock recorder for MockCache\n// nolint\ntype MockCacheMockRecorder struct {\n\tmock *MockCache\n}\n\n// NewMockCache creates a new mock instance\n// nolint\nfunc NewMockCache(ctrl *gomock.Controller) *MockCache {\n\tmock := &MockCache{ctrl: ctrl}\n\tmock.recorder = &MockCacheMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockCache) EXPECT() *MockCacheMockRecorder {\n\treturn m.recorder\n}\n\n// Get mocks base method\n// nolint\nfunc (m *MockCache) Get(arg0 context.Context, arg1 types.NamespacedName, arg2 runtime.Object) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Get\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Get indicates an expected call of Get\n// nolint\nfunc (mr *MockCacheMockRecorder) Get(arg0, arg1, arg2 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Get\", reflect.TypeOf((*MockCache)(nil).Get), arg0, arg1, arg2)\n}\n\n// GetInformer mocks base method\n// nolint\nfunc (m *MockCache) GetInformer(arg0 runtime.Object) (cache.SharedIndexInformer, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetInformer\", arg0)\n\tret0, _ := 
ret[0].(cache.SharedIndexInformer)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// GetInformer indicates an expected call of GetInformer\n// nolint\nfunc (mr *MockCacheMockRecorder) GetInformer(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetInformer\", reflect.TypeOf((*MockCache)(nil).GetInformer), arg0)\n}\n\n// GetInformerForKind mocks base method\n// nolint\nfunc (m *MockCache) GetInformerForKind(arg0 schema.GroupVersionKind) (cache.SharedIndexInformer, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetInformerForKind\", arg0)\n\tret0, _ := ret[0].(cache.SharedIndexInformer)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// GetInformerForKind indicates an expected call of GetInformerForKind\n// nolint\nfunc (mr *MockCacheMockRecorder) GetInformerForKind(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetInformerForKind\", reflect.TypeOf((*MockCache)(nil).GetInformerForKind), arg0)\n}\n\n// IndexField mocks base method\n// nolint\nfunc (m *MockCache) IndexField(arg0 runtime.Object, arg1 string, arg2 client.IndexerFunc) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"IndexField\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// IndexField indicates an expected call of IndexField\n// nolint\nfunc (mr *MockCacheMockRecorder) IndexField(arg0, arg1, arg2 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"IndexField\", reflect.TypeOf((*MockCache)(nil).IndexField), arg0, arg1, arg2)\n}\n\n// List mocks base method\n// nolint\nfunc (m *MockCache) List(arg0 context.Context, arg1 *client.ListOptions, arg2 runtime.Object) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"List\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// List indicates an expected call of List\n// 
nolint\nfunc (mr *MockCacheMockRecorder) List(arg0, arg1, arg2 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"List\", reflect.TypeOf((*MockCache)(nil).List), arg0, arg1, arg2)\n}\n\n// Start mocks base method\n// nolint\nfunc (m *MockCache) Start(arg0 <-chan struct{}) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Start\", arg0)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Start indicates an expected call of Start\n// nolint\nfunc (mr *MockCacheMockRecorder) Start(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Start\", reflect.TypeOf((*MockCache)(nil).Start), arg0)\n}\n\n// WaitForCacheSync mocks base method\n// nolint\nfunc (m *MockCache) WaitForCacheSync(arg0 <-chan struct{}) bool {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"WaitForCacheSync\", arg0)\n\tret0, _ := ret[0].(bool)\n\treturn ret0\n}\n\n// WaitForCacheSync indicates an expected call of WaitForCacheSync\n// nolint\nfunc (mr *MockCacheMockRecorder) WaitForCacheSync(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"WaitForCacheSync\", reflect.TypeOf((*MockCache)(nil).WaitForCacheSync), arg0)\n}\n"
  },
  {
    "path": "monitor/internal/pod/mockclient_test.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: sigs.k8s.io/controller-runtime/pkg/client (interfaces: Client)\n\n// Package podmonitor is a generated GoMock package.\npackage podmonitor\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n\ttypes \"k8s.io/apimachinery/pkg/types\"\n\tclient \"sigs.k8s.io/controller-runtime/pkg/client\"\n)\n\n// MockClient is a mock of Client interface\n// nolint\ntype MockClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockClientMockRecorder\n}\n\n// MockClientMockRecorder is the mock recorder for MockClient\n// nolint\ntype MockClientMockRecorder struct {\n\tmock *MockClient\n}\n\n// NewMockClient creates a new mock instance\n// nolint\nfunc NewMockClient(ctrl *gomock.Controller) *MockClient {\n\tmock := &MockClient{ctrl: ctrl}\n\tmock.recorder = &MockClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockClient) EXPECT() *MockClientMockRecorder {\n\treturn m.recorder\n}\n\n// Create mocks base method\n// nolint\nfunc (m *MockClient) Create(arg0 context.Context, arg1 runtime.Object) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Create\", arg0, arg1)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Create indicates an expected call of Create\n// nolint\nfunc (mr *MockClientMockRecorder) Create(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Create\", reflect.TypeOf((*MockClient)(nil).Create), arg0, arg1)\n}\n\n// Delete mocks base method\n// nolint\nfunc (m *MockClient) Delete(arg0 context.Context, arg1 runtime.Object, arg2 ...client.DeleteOptionFunc) error {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"Delete\", 
varargs...)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Delete indicates an expected call of Delete\n// nolint\nfunc (mr *MockClientMockRecorder) Delete(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Delete\", reflect.TypeOf((*MockClient)(nil).Delete), varargs...)\n}\n\n// Get mocks base method\n// nolint\nfunc (m *MockClient) Get(arg0 context.Context, arg1 types.NamespacedName, arg2 runtime.Object) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Get\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Get indicates an expected call of Get\n// nolint\nfunc (mr *MockClientMockRecorder) Get(arg0, arg1, arg2 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Get\", reflect.TypeOf((*MockClient)(nil).Get), arg0, arg1, arg2)\n}\n\n// List mocks base method\n// nolint\nfunc (m *MockClient) List(arg0 context.Context, arg1 *client.ListOptions, arg2 runtime.Object) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"List\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// List indicates an expected call of List\n// nolint\nfunc (mr *MockClientMockRecorder) List(arg0, arg1, arg2 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"List\", reflect.TypeOf((*MockClient)(nil).List), arg0, arg1, arg2)\n}\n\n// Status mocks base method\n// nolint\nfunc (m *MockClient) Status() client.StatusWriter {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Status\")\n\tret0, _ := ret[0].(client.StatusWriter)\n\treturn ret0\n}\n\n// Status indicates an expected call of Status\n// nolint\nfunc (mr *MockClientMockRecorder) Status() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Status\", 
reflect.TypeOf((*MockClient)(nil).Status))\n}\n\n// Update mocks base method\n// nolint\nfunc (m *MockClient) Update(arg0 context.Context, arg1 runtime.Object) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Update\", arg0, arg1)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Update indicates an expected call of Update\n// nolint\nfunc (mr *MockClientMockRecorder) Update(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Update\", reflect.TypeOf((*MockClient)(nil).Update), arg0, arg1)\n}\n"
  },
  {
    "path": "monitor/internal/pod/mockinformer_test.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: k8s.io/client-go/tools/cache (interfaces: SharedIndexInformer)\n\n// Package podmonitor is a generated GoMock package.\npackage podmonitor\n\nimport (\n\treflect \"reflect\"\n\ttime \"time\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tcache \"k8s.io/client-go/tools/cache\"\n)\n\n// MockSharedIndexInformer is a mock of SharedIndexInformer interface\n// nolint\ntype MockSharedIndexInformer struct {\n\tctrl     *gomock.Controller\n\trecorder *MockSharedIndexInformerMockRecorder\n}\n\n// MockSharedIndexInformerMockRecorder is the mock recorder for MockSharedIndexInformer\n// nolint\ntype MockSharedIndexInformerMockRecorder struct {\n\tmock *MockSharedIndexInformer\n}\n\n// NewMockSharedIndexInformer creates a new mock instance\n// nolint\nfunc NewMockSharedIndexInformer(ctrl *gomock.Controller) *MockSharedIndexInformer {\n\tmock := &MockSharedIndexInformer{ctrl: ctrl}\n\tmock.recorder = &MockSharedIndexInformerMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockSharedIndexInformer) EXPECT() *MockSharedIndexInformerMockRecorder {\n\treturn m.recorder\n}\n\n// AddEventHandler mocks base method\n// nolint\nfunc (m *MockSharedIndexInformer) AddEventHandler(arg0 cache.ResourceEventHandler) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"AddEventHandler\", arg0)\n}\n\n// AddEventHandler indicates an expected call of AddEventHandler\n// nolint\nfunc (mr *MockSharedIndexInformerMockRecorder) AddEventHandler(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AddEventHandler\", reflect.TypeOf((*MockSharedIndexInformer)(nil).AddEventHandler), arg0)\n}\n\n// AddEventHandlerWithResyncPeriod mocks base method\n// nolint\nfunc (m *MockSharedIndexInformer) AddEventHandlerWithResyncPeriod(arg0 cache.ResourceEventHandler, arg1 time.Duration) 
{\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"AddEventHandlerWithResyncPeriod\", arg0, arg1)\n}\n\n// AddEventHandlerWithResyncPeriod indicates an expected call of AddEventHandlerWithResyncPeriod\n// nolint\nfunc (mr *MockSharedIndexInformerMockRecorder) AddEventHandlerWithResyncPeriod(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AddEventHandlerWithResyncPeriod\", reflect.TypeOf((*MockSharedIndexInformer)(nil).AddEventHandlerWithResyncPeriod), arg0, arg1)\n}\n\n// AddIndexers mocks base method\n// nolint\nfunc (m *MockSharedIndexInformer) AddIndexers(arg0 cache.Indexers) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"AddIndexers\", arg0)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// AddIndexers indicates an expected call of AddIndexers\n// nolint\nfunc (mr *MockSharedIndexInformerMockRecorder) AddIndexers(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AddIndexers\", reflect.TypeOf((*MockSharedIndexInformer)(nil).AddIndexers), arg0)\n}\n\n// GetController mocks base method\n// nolint\nfunc (m *MockSharedIndexInformer) GetController() cache.Controller {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetController\")\n\tret0, _ := ret[0].(cache.Controller)\n\treturn ret0\n}\n\n// GetController indicates an expected call of GetController\n// nolint\nfunc (mr *MockSharedIndexInformerMockRecorder) GetController() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetController\", reflect.TypeOf((*MockSharedIndexInformer)(nil).GetController))\n}\n\n// GetIndexer mocks base method\n// nolint\nfunc (m *MockSharedIndexInformer) GetIndexer() cache.Indexer {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetIndexer\")\n\tret0, _ := ret[0].(cache.Indexer)\n\treturn ret0\n}\n\n// GetIndexer indicates an expected call of GetIndexer\n// nolint\nfunc (mr 
*MockSharedIndexInformerMockRecorder) GetIndexer() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetIndexer\", reflect.TypeOf((*MockSharedIndexInformer)(nil).GetIndexer))\n}\n\n// GetStore mocks base method\n// nolint\nfunc (m *MockSharedIndexInformer) GetStore() cache.Store {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetStore\")\n\tret0, _ := ret[0].(cache.Store)\n\treturn ret0\n}\n\n// GetStore indicates an expected call of GetStore\n// nolint\nfunc (mr *MockSharedIndexInformerMockRecorder) GetStore() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetStore\", reflect.TypeOf((*MockSharedIndexInformer)(nil).GetStore))\n}\n\n// HasSynced mocks base method\n// nolint\nfunc (m *MockSharedIndexInformer) HasSynced() bool {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"HasSynced\")\n\tret0, _ := ret[0].(bool)\n\treturn ret0\n}\n\n// HasSynced indicates an expected call of HasSynced\n// nolint\nfunc (mr *MockSharedIndexInformerMockRecorder) HasSynced() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"HasSynced\", reflect.TypeOf((*MockSharedIndexInformer)(nil).HasSynced))\n}\n\n// LastSyncResourceVersion mocks base method\n// nolint\nfunc (m *MockSharedIndexInformer) LastSyncResourceVersion() string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"LastSyncResourceVersion\")\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// LastSyncResourceVersion indicates an expected call of LastSyncResourceVersion\n// nolint\nfunc (mr *MockSharedIndexInformerMockRecorder) LastSyncResourceVersion() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"LastSyncResourceVersion\", reflect.TypeOf((*MockSharedIndexInformer)(nil).LastSyncResourceVersion))\n}\n\n// Run mocks base method\n// nolint\nfunc (m *MockSharedIndexInformer) Run(arg0 <-chan struct{}) 
{\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"Run\", arg0)\n}\n\n// Run indicates an expected call of Run\n// nolint\nfunc (mr *MockSharedIndexInformerMockRecorder) Run(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Run\", reflect.TypeOf((*MockSharedIndexInformer)(nil).Run), arg0)\n}\n"
  },
  {
    "path": "monitor/internal/pod/mockmanager_test.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: sigs.k8s.io/controller-runtime/pkg/manager (interfaces: Manager)\n\n// Package podmonitor is a generated GoMock package.\npackage podmonitor\n\nimport (\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tmeta \"k8s.io/apimachinery/pkg/api/meta\"\n\truntime \"k8s.io/apimachinery/pkg/runtime\"\n\trest \"k8s.io/client-go/rest\"\n\trecord \"k8s.io/client-go/tools/record\"\n\tcache \"sigs.k8s.io/controller-runtime/pkg/cache\"\n\tclient \"sigs.k8s.io/controller-runtime/pkg/client\"\n\tmanager \"sigs.k8s.io/controller-runtime/pkg/manager\"\n\ttypes \"sigs.k8s.io/controller-runtime/pkg/webhook/admission/types\"\n)\n\n// MockManager is a mock of Manager interface\n// nolint\ntype MockManager struct {\n\tctrl     *gomock.Controller\n\trecorder *MockManagerMockRecorder\n}\n\n// MockManagerMockRecorder is the mock recorder for MockManager\n// nolint\ntype MockManagerMockRecorder struct {\n\tmock *MockManager\n}\n\n// NewMockManager creates a new mock instance\n// nolint\nfunc NewMockManager(ctrl *gomock.Controller) *MockManager {\n\tmock := &MockManager{ctrl: ctrl}\n\tmock.recorder = &MockManagerMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockManager) EXPECT() *MockManagerMockRecorder {\n\treturn m.recorder\n}\n\n// Add mocks base method\n// nolint\nfunc (m *MockManager) Add(arg0 manager.Runnable) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Add\", arg0)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Add indicates an expected call of Add\n// nolint\nfunc (mr *MockManagerMockRecorder) Add(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Add\", reflect.TypeOf((*MockManager)(nil).Add), arg0)\n}\n\n// GetAdmissionDecoder mocks base method\n// nolint\nfunc (m *MockManager) GetAdmissionDecoder() types.Decoder 
{\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetAdmissionDecoder\")\n\tret0, _ := ret[0].(types.Decoder)\n\treturn ret0\n}\n\n// GetAdmissionDecoder indicates an expected call of GetAdmissionDecoder\n// nolint\nfunc (mr *MockManagerMockRecorder) GetAdmissionDecoder() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetAdmissionDecoder\", reflect.TypeOf((*MockManager)(nil).GetAdmissionDecoder))\n}\n\n// GetCache mocks base method\n// nolint\nfunc (m *MockManager) GetCache() cache.Cache {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetCache\")\n\tret0, _ := ret[0].(cache.Cache)\n\treturn ret0\n}\n\n// GetCache indicates an expected call of GetCache\n// nolint\nfunc (mr *MockManagerMockRecorder) GetCache() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetCache\", reflect.TypeOf((*MockManager)(nil).GetCache))\n}\n\n// GetClient mocks base method\n// nolint\nfunc (m *MockManager) GetClient() client.Client {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetClient\")\n\tret0, _ := ret[0].(client.Client)\n\treturn ret0\n}\n\n// GetClient indicates an expected call of GetClient\n// nolint\nfunc (mr *MockManagerMockRecorder) GetClient() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetClient\", reflect.TypeOf((*MockManager)(nil).GetClient))\n}\n\n// GetConfig mocks base method\n// nolint\nfunc (m *MockManager) GetConfig() *rest.Config {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetConfig\")\n\tret0, _ := ret[0].(*rest.Config)\n\treturn ret0\n}\n\n// GetConfig indicates an expected call of GetConfig\n// nolint\nfunc (mr *MockManagerMockRecorder) GetConfig() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetConfig\", reflect.TypeOf((*MockManager)(nil).GetConfig))\n}\n\n// GetFieldIndexer mocks base method\n// nolint\nfunc (m *MockManager) 
GetFieldIndexer() client.FieldIndexer {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetFieldIndexer\")\n\tret0, _ := ret[0].(client.FieldIndexer)\n\treturn ret0\n}\n\n// GetFieldIndexer indicates an expected call of GetFieldIndexer\n// nolint\nfunc (mr *MockManagerMockRecorder) GetFieldIndexer() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetFieldIndexer\", reflect.TypeOf((*MockManager)(nil).GetFieldIndexer))\n}\n\n// GetRESTMapper mocks base method\n// nolint\nfunc (m *MockManager) GetRESTMapper() meta.RESTMapper {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetRESTMapper\")\n\tret0, _ := ret[0].(meta.RESTMapper)\n\treturn ret0\n}\n\n// GetRESTMapper indicates an expected call of GetRESTMapper\n// nolint\nfunc (mr *MockManagerMockRecorder) GetRESTMapper() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetRESTMapper\", reflect.TypeOf((*MockManager)(nil).GetRESTMapper))\n}\n\n// GetRecorder mocks base method\n// nolint\nfunc (m *MockManager) GetRecorder(arg0 string) record.EventRecorder {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetRecorder\", arg0)\n\tret0, _ := ret[0].(record.EventRecorder)\n\treturn ret0\n}\n\n// GetRecorder indicates an expected call of GetRecorder\n// nolint\nfunc (mr *MockManagerMockRecorder) GetRecorder(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"GetRecorder\", reflect.TypeOf((*MockManager)(nil).GetRecorder), arg0)\n}\n\n// GetScheme mocks base method\n// nolint\nfunc (m *MockManager) GetScheme() *runtime.Scheme {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"GetScheme\")\n\tret0, _ := ret[0].(*runtime.Scheme)\n\treturn ret0\n}\n\n// GetScheme indicates an expected call of GetScheme\n// nolint\nfunc (mr *MockManagerMockRecorder) GetScheme() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, 
\"GetScheme\", reflect.TypeOf((*MockManager)(nil).GetScheme))\n}\n\n// SetFields mocks base method\n// nolint\nfunc (m *MockManager) SetFields(arg0 interface{}) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SetFields\", arg0)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SetFields indicates an expected call of SetFields\n// nolint\nfunc (mr *MockManagerMockRecorder) SetFields(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SetFields\", reflect.TypeOf((*MockManager)(nil).SetFields), arg0)\n}\n\n// Start mocks base method\n// nolint\nfunc (m *MockManager) Start(arg0 <-chan struct{}) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Start\", arg0)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Start indicates an expected call of Start\n// nolint\nfunc (mr *MockManagerMockRecorder) Start(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Start\", reflect.TypeOf((*MockManager)(nil).Start), arg0)\n}\n"
  },
  {
    "path": "monitor/internal/pod/mockzapcore_test.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: go.uber.org/zap/zapcore (interfaces: Core)\n\n// Package podmonitor is a generated GoMock package.\npackage podmonitor\n\nimport (\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tzapcore \"go.uber.org/zap/zapcore\"\n)\n\n// MockCore is a mock of Core interface\n// nolint\ntype MockCore struct {\n\tctrl     *gomock.Controller\n\trecorder *MockCoreMockRecorder\n}\n\n// MockCoreMockRecorder is the mock recorder for MockCore\n// nolint\ntype MockCoreMockRecorder struct {\n\tmock *MockCore\n}\n\n// NewMockCore creates a new mock instance\n// nolint\nfunc NewMockCore(ctrl *gomock.Controller) *MockCore {\n\tmock := &MockCore{ctrl: ctrl}\n\tmock.recorder = &MockCoreMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockCore) EXPECT() *MockCoreMockRecorder {\n\treturn m.recorder\n}\n\n// Check mocks base method\n// nolint\nfunc (m *MockCore) Check(arg0 zapcore.Entry, arg1 *zapcore.CheckedEntry) *zapcore.CheckedEntry {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Check\", arg0, arg1)\n\tret0, _ := ret[0].(*zapcore.CheckedEntry)\n\treturn ret0\n}\n\n// Check indicates an expected call of Check\n// nolint\nfunc (mr *MockCoreMockRecorder) Check(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Check\", reflect.TypeOf((*MockCore)(nil).Check), arg0, arg1)\n}\n\n// Enabled mocks base method\n// nolint\nfunc (m *MockCore) Enabled(arg0 zapcore.Level) bool {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Enabled\", arg0)\n\tret0, _ := ret[0].(bool)\n\treturn ret0\n}\n\n// Enabled indicates an expected call of Enabled\n// nolint\nfunc (mr *MockCoreMockRecorder) Enabled(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Enabled\", 
reflect.TypeOf((*MockCore)(nil).Enabled), arg0)\n}\n\n// Sync mocks base method\n// nolint\nfunc (m *MockCore) Sync() error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Sync\")\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Sync indicates an expected call of Sync\n// nolint\nfunc (mr *MockCoreMockRecorder) Sync() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Sync\", reflect.TypeOf((*MockCore)(nil).Sync))\n}\n\n// With mocks base method\n// nolint\nfunc (m *MockCore) With(arg0 []zapcore.Field) zapcore.Core {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"With\", arg0)\n\tret0, _ := ret[0].(zapcore.Core)\n\treturn ret0\n}\n\n// With indicates an expected call of With\n// nolint\nfunc (mr *MockCoreMockRecorder) With(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"With\", reflect.TypeOf((*MockCore)(nil).With), arg0)\n}\n\n// Write mocks base method\n// nolint\nfunc (m *MockCore) Write(arg0 zapcore.Entry, arg1 []zapcore.Field) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Write\", arg0, arg1)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Write indicates an expected call of Write\n// nolint\nfunc (mr *MockCoreMockRecorder) Write(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Write\", reflect.TypeOf((*MockCore)(nil).Write), arg0, arg1)\n}\n"
  },
  {
    "path": "monitor/internal/pod/monitor.go",
    "content": "// +build linux !windows\n\npackage podmonitor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"time\"\n\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/trireme-lib/monitor/registerer\"\n\n\t\"k8s.io/client-go/rest\"\n\t\"k8s.io/client-go/tools/clientcmd\"\n\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\t\"sigs.k8s.io/controller-runtime/pkg/event\"\n\t\"sigs.k8s.io/controller-runtime/pkg/manager\"\n\n\t\"go.uber.org/zap\"\n)\n\n// PodMonitor implements a monitor that sends pod events upstream\n// It is implemented as a filter on the standard DockerMonitor.\n// It gets all the PU events from the DockerMonitor and if the container is the POD container from Kubernetes,\n// It connects to the Kubernetes API and adds the tags that are coming from Kuberntes that cannot be found\ntype PodMonitor struct {\n\tlocalNode                 string\n\thandlers                  *config.ProcessorConfig\n\tmetadataExtractor         extractors.PodMetadataExtractor\n\tnetclsProgrammer          extractors.PodNetclsProgrammer\n\tpidsSetMaxProcsProgrammer extractors.PodPidsSetMaxProcsProgrammer\n\tresetNetcls               extractors.ResetNetclsKubepods\n\tsandboxExtractor          extractors.PodSandboxExtractor\n\tenableHostPods            bool\n\tworkers                   int\n\tkubeCfg                   *rest.Config\n\tkubeClient                client.Client\n\teventsCh                  chan event.GenericEvent\n\tresyncInfo                *ResyncInfoChan\n}\n\n// New returns a new kubernetes monitor.\nfunc New() *PodMonitor {\n\tpodMonitor := &PodMonitor{\n\t\teventsCh:   make(chan event.GenericEvent),\n\t\tresyncInfo: NewResyncInfoChan(),\n\t}\n\n\treturn podMonitor\n}\n\n// SetupConfig provides a configuration to implmentations. 
Every implementation\n// can have its own config type.\nfunc (m *PodMonitor) SetupConfig(_ registerer.Registerer, cfg interface{}) error {\n\n\tdefaultConfig := DefaultConfig()\n\n\tif cfg == nil {\n\t\tcfg = defaultConfig\n\t}\n\n\tkubernetesconfig, ok := cfg.(*Config)\n\tif !ok {\n\t\treturn fmt.Errorf(\"Invalid configuration specified (type '%T')\", cfg)\n\t}\n\n\tkubernetesconfig = SetupDefaultConfig(kubernetesconfig)\n\n\t// build the Kubernetes config\n\tvar kubeCfg *rest.Config\n\tif len(kubernetesconfig.Kubeconfig) > 0 {\n\t\tvar err error\n\t\tkubeCfg, err = clientcmd.BuildConfigFromFlags(\"\", kubernetesconfig.Kubeconfig)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t} else {\n\t\tvar err error\n\t\tkubeCfg, err = rest.InClusterConfig()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif kubernetesconfig.MetadataExtractor == nil {\n\t\treturn fmt.Errorf(\"missing metadata extractor\")\n\t}\n\n\tif kubernetesconfig.NetclsProgrammer == nil {\n\t\treturn fmt.Errorf(\"missing net_cls programmer\")\n\t}\n\n\tif kubernetesconfig.ResetNetcls == nil {\n\t\treturn fmt.Errorf(\"missing reset net_cls implementation\")\n\t}\n\tif kubernetesconfig.SandboxExtractor == nil {\n\t\treturn fmt.Errorf(\"missing SandboxExtractor implementation\")\n\t}\n\tif kubernetesconfig.Workers < 1 {\n\t\treturn fmt.Errorf(\"number of Kubernetes monitor workers must be at least 1\")\n\t}\n\t// Setting up Kubernetes\n\tm.kubeCfg = kubeCfg\n\tm.localNode = kubernetesconfig.Nodename\n\tm.enableHostPods = kubernetesconfig.EnableHostPods\n\tm.metadataExtractor = kubernetesconfig.MetadataExtractor\n\tm.netclsProgrammer = kubernetesconfig.NetclsProgrammer\n\tm.pidsSetMaxProcsProgrammer = kubernetesconfig.PidsSetMaxProcsProgrammer\n\tm.sandboxExtractor = kubernetesconfig.SandboxExtractor\n\tm.resetNetcls = kubernetesconfig.ResetNetcls\n\tm.workers = kubernetesconfig.Workers\n\n\treturn nil\n}\n\n// Run starts the monitor.\nfunc (m *PodMonitor) Run(ctx context.Context) error {\n\tif 
m.kubeCfg == nil {\n\t\treturn errors.New(\"pod: missing kubeconfig\")\n\t}\n\n\tif err := m.handlers.IsComplete(); err != nil {\n\t\treturn fmt.Errorf(\"pod: handlers are not complete: %s\", err.Error())\n\t}\n\n\t// ensure the net_cls reset runs\n\t// NOTE: we also call this during resync, however, that is not called at startup (we call ResyncWithAllPods instead before we return)\n\tif m.resetNetcls == nil {\n\t\treturn errors.New(\"pod: missing net_cls reset implementation\")\n\t}\n\tif err := m.resetNetcls(ctx); err != nil {\n\t\treturn fmt.Errorf(\"pod: failed to reset net_cls cgroups: %s\", err.Error())\n\t}\n\n\t// starts the manager in the background and returns once it is running\n\t// NOTE: This will block until the Kubernetes manager and all controllers are up. All errors are handled within the function\n\tm.startManager(ctx)\n\n\t// call ResyncWithAllPods before we return from here\n\t// this will block until every pod at this point in time has seen at least one `Reconcile` call\n\t// we do this so that we build up our internal PU cache in the policy engine,\n\t// so that when we remove stale pods on startup, we don't remove them and create them again\n\tif err := ResyncWithAllPods(ctx, m.kubeClient, m.resyncInfo, m.eventsCh, m.localNode); err != nil {\n\t\tzap.L().Warn(\"Pod resync failed\", zap.Error(err))\n\t}\n\treturn nil\n}\n\n// SetupHandlers sets up handlers for monitors to invoke for various events such as\n// processing unit events and synchronization events. 
This will be called before Start()\n// by the consumer of the monitor\nfunc (m *PodMonitor) SetupHandlers(c *config.ProcessorConfig) {\n\tm.handlers = c\n}\n\n// Resync requests the monitor to do a resync.\nfunc (m *PodMonitor) Resync(ctx context.Context) error {\n\tif m.resetNetcls != nil {\n\t\tif err := m.resetNetcls(ctx); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif m.kubeClient == nil {\n\t\treturn errors.New(\"pod: client has not been initialized yet\")\n\t}\n\n\treturn ResyncWithAllPods(ctx, m.kubeClient, m.resyncInfo, m.eventsCh, m.localNode)\n}\n\nconst (\n\tstartupWarningMessage = \"pod: the Kubernetes controller did not start within the last 5s. Waiting...\"\n)\n\nvar (\n\tretrySleep          = time.Second * 3\n\twarningMessageSleep = time.Second * 5\n\twarningTimeout      = time.Second * 5\n\tmanagerNew          = manager.New\n)\n\nfunc (m *PodMonitor) startManager(ctx context.Context) {\n\tvar mgr manager.Manager\n\n\tstartTimestamp := time.Now()\n\tcontrollerStarted := make(chan struct{})\n\n\tgo func() {\n\t\t// manager.New already contacts the Kubernetes API\n\t\tfor {\n\t\t\tvar err error\n\t\t\tmgr, err = managerNew(m.kubeCfg, manager.Options{})\n\t\t\tif err != nil {\n\t\t\t\tzap.L().Error(\"pod: new manager instantiation failed. Retrying in 3s...\", zap.Error(err))\n\t\t\t\ttime.Sleep(retrySleep)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\n\t\t// Create the delete event controller first\n\t\tdc := NewDeleteController(mgr.GetClient(), m.localNode, m.handlers, m.sandboxExtractor, m.eventsCh)\n\t\tfor {\n\t\t\tif err := mgr.Add(dc); err != nil {\n\t\t\t\tzap.L().Error(\"pod: adding delete controller failed. 
Retrying in 3s...\", zap.Error(err))\n\t\t\t\ttime.Sleep(retrySleep)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\n\t\t// Create the main controller for the monitor\n\t\tfor {\n\t\t\tif err := addController(\n\t\t\t\tmgr,\n\t\t\t\tnewReconciler(mgr, m.handlers, m.metadataExtractor, m.netclsProgrammer, m.sandboxExtractor, m.localNode, m.enableHostPods, dc.GetDeleteCh(), dc.GetReconcileCh(), m.resyncInfo),\n\t\t\t\tm.workers,\n\t\t\t\tm.eventsCh,\n\t\t\t); err != nil {\n\t\t\t\tzap.L().Error(\"pod: adding main monitor controller failed. Retrying in 3s...\", zap.Error(err))\n\t\t\t\ttime.Sleep(retrySleep)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\n\t\tfor {\n\t\t\tif err := mgr.Add(&runnable{ch: controllerStarted}); err != nil {\n\t\t\t\tzap.L().Error(\"pod: adding side controller failed. Retrying in 3s...\", zap.Error(err))\n\t\t\t\ttime.Sleep(retrySleep)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\n\t\t// starting the manager is a bit awkward:\n\t\t// - it does not use contexts\n\t\t// - we pass in a fake signal handler channel\n\t\t// - we start another go routine which waits for the context to be cancelled\n\t\t//   and closes that channel if that is the case\n\n\t\tfor {\n\t\t\tif err := mgr.Start(ctx.Done()); err != nil {\n\t\t\t\tzap.L().Error(\"pod: manager start failed. 
Retrying in 3s...\", zap.Error(err))\n\t\t\t\ttime.Sleep(retrySleep)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\t}()\n\nwaitLoop:\n\tfor {\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\tbreak waitLoop\n\t\tcase <-time.After(warningMessageSleep):\n\t\t\t// we give everything 5 seconds to report back before we issue a warning\n\t\t\tzap.L().Warn(startupWarningMessage)\n\t\tcase <-controllerStarted:\n\t\t\tm.kubeClient = mgr.GetClient()\n\t\t\tt := time.Since(startTimestamp)\n\t\t\tif t > warningTimeout {\n\t\t\t\tzap.L().Warn(\"pod: controller startup finished, but took longer than expected\", zap.Duration(\"duration\", t))\n\t\t\t} else {\n\t\t\t\tzap.L().Debug(\"pod: controller startup finished\", zap.Duration(\"duration\", t))\n\t\t\t}\n\t\t\tbreak waitLoop\n\t\t}\n\t}\n}\n\ntype runnable struct {\n\tch chan struct{}\n}\n\nfunc (r *runnable) Start(z <-chan struct{}) error {\n\t// close the indicator channel which means that the manager has been started successfully\n\tclose(r.ch)\n\n\t// stay up and running, the manager needs that\n\t<-z\n\treturn nil\n}\n"
  },
  {
    "path": "monitor/internal/pod/monitor_test.go",
    "content": "// +build linux !windows\n\npackage podmonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"math\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/golang/mock/gomock\"\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.aporeto.io/trireme-lib/policy/mockpolicy\"\n\t\"go.uber.org/zap\"\n\tzapcore \"go.uber.org/zap/zapcore\"\n\tcorev1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/runtime\"\n\t\"k8s.io/client-go/kubernetes/scheme\"\n\t\"k8s.io/client-go/rest\"\n\tcache \"k8s.io/client-go/tools/cache\"\n\t\"sigs.k8s.io/controller-runtime/pkg/controller\"\n\t\"sigs.k8s.io/controller-runtime/pkg/manager\"\n\t\"sigs.k8s.io/controller-runtime/pkg/runtime/inject\"\n)\n\nfunc createNewPodMonitor() *PodMonitor {\n\tm := New()\n\tmockError := fmt.Errorf(\"mockerror: overwrite function with your own mock before using\")\n\tmonitorConfig := DefaultConfig()\n\tmonitorConfig.Kubeconfig = \"testdata/kubeconfig\"\n\tmonitorConfig.MetadataExtractor = func(context.Context, *corev1.Pod, bool) (*policy.PURuntime, error) {\n\t\treturn nil, mockError\n\t}\n\tmonitorConfig.NetclsProgrammer = func(context.Context, *corev1.Pod, policy.RuntimeReader) error {\n\t\treturn mockError\n\t}\n\tmonitorConfig.PidsSetMaxProcsProgrammer = func(ctx context.Context, pod *corev1.Pod, maxProcs int) error {\n\t\treturn mockError\n\t}\n\tmonitorConfig.ResetNetcls = func(context.Context) error {\n\t\treturn mockError\n\t}\n\tmonitorConfig.SandboxExtractor = func(context.Context, *corev1.Pod) (string, error) {\n\t\treturn \"\", mockError\n\t}\n\n\tif err := m.SetupConfig(nil, monitorConfig); err != nil {\n\t\tpanic(err)\n\t}\n\treturn m\n}\n\nfunc isKubernetesController() gomock.Matcher {\n\treturn &controllerMatcher{}\n}\n\ntype controllerMatcher struct{}\n\nvar _ gomock.Matcher = &controllerMatcher{}\n\n// Matches returns whether x is a match.\nfunc (m *controllerMatcher) Matches(x interface{}) bool {\n\t_, ok := 
x.(controller.Controller)\n\treturn ok\n}\n\n// String describes what the matcher matches.\nfunc (m *controllerMatcher) String() string {\n\treturn \"is not a Kubernetes controller\"\n}\n\nconst durationKey = \"duration\"\n\nfunc TestPodMonitor_startManager(t *testing.T) {\n\torigLogger := zap.L()\n\t// reset logger after this test completes\n\tdefer func() {\n\t\tzap.ReplaceGlobals(origLogger)\n\t}()\n\n\t// overwrite globals\n\tretrySleep = time.Duration(0)\n\twarningMessageSleep = time.Millisecond * 300\n\twarningTimeout = time.Millisecond * 300\n\n\t// use this like:\n\t//   managerNew = managerNewTest(mgr, nil)\n\tmanagerNewTest := func(mgr *MockManager, err error) func(*rest.Config, manager.Options) (manager.Manager, error) {\n\t\treturn func(*rest.Config, manager.Options) (manager.Manager, error) {\n\t\t\treturn mgr, err\n\t\t}\n\t}\n\n\tm := createNewPodMonitor()\n\n\ttests := []struct {\n\t\tname           string\n\t\tm              *PodMonitor\n\t\texpect         func(*testing.T, *gomock.Controller, context.Context, context.CancelFunc)\n\t\twantKubeClient bool\n\t}{\n\t\t{\n\t\t\tname:           \"successful startup without any errors in the expected timeframe\",\n\t\t\tm:              m,\n\t\t\twantKubeClient: true,\n\t\t\texpect: func(t *testing.T, ctrl *gomock.Controller, ctx context.Context, cancel context.CancelFunc) {\n\t\t\t\tmgr := NewMockManager(ctrl)\n\t\t\t\tmanagerNew = managerNewTest(mgr, nil)\n\t\t\t\tc := NewMockClient(ctrl)\n\t\t\t\tcch := NewMockCache(ctrl)\n\t\t\t\tinf := NewMockSharedIndexInformer(ctrl)\n\n\t\t\t\t// this is our version of a mocked SetFields function\n\t\t\t\tvar sf func(i interface{}) error\n\t\t\t\tsf = func(i interface{}) error {\n\t\t\t\t\tif _, err := inject.InjectorInto(sf, i); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := inject.SchemeInto(scheme.Scheme, i); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := inject.CacheInto(cch, i); err != nil 
{\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := inject.StopChannelInto(ctx.Done(), i); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\n\t\t\t\t// delete controller\n\t\t\t\tmgr.EXPECT().Add(gomock.AssignableToTypeOf(&DeleteController{})).Times(1).Return(nil)\n\t\t\t\tmgr.EXPECT().GetClient().Times(1).Return(c)\n\n\t\t\t\t// main controller\n\t\t\t\t// newReconciler calls these\n\t\t\t\tmgr.EXPECT().GetClient().Times(1).Return(c)\n\t\t\t\tmgr.EXPECT().GetRecorder(\"trireme-pod-controller\").Times(1).Return(nil)\n\t\t\t\t// addController calls controller.New which calls these\n\t\t\t\tmgr.EXPECT().SetFields(gomock.AssignableToTypeOf(&ReconcilePod{})).Times(1).DoAndReturn(sf)\n\t\t\t\tmgr.EXPECT().GetCache().Times(1).Return(cch)\n\t\t\t\tmgr.EXPECT().GetConfig().Times(1).Return(nil)\n\t\t\t\tmgr.EXPECT().GetScheme().Times(1).Return(scheme.Scheme)\n\t\t\t\tmgr.EXPECT().GetClient().Times(2).Return(c) // once inside of controller.New and once by us\n\t\t\t\tmgr.EXPECT().GetRecorder(\"trireme-pod-controller\").Times(1).Return(nil)\n\t\t\t\tmgr.EXPECT().Add(isKubernetesController()).Times(1).DoAndReturn(func(run manager.Runnable) error {\n\t\t\t\t\treturn sf(run)\n\t\t\t\t})\n\t\t\t\t// these are called by our c.Watch statement for registering our Pod event source\n\t\t\t\t// NOTE: this will also call Start on the informer already! 
This is the reason why the mgr.Start which\n\t\t\t\t//       waits for the caches to be filled will already download a fresh list of all the pods!\n\t\t\t\tcch.EXPECT().GetInformer(gomock.AssignableToTypeOf(&corev1.Pod{})).Times(1).DoAndReturn(func(arg0 runtime.Object) (cache.SharedIndexInformer, error) {\n\t\t\t\t\treturn inf, nil\n\t\t\t\t})\n\t\t\t\tinf.EXPECT().AddEventHandler(gomock.Any()).Times(1)\n\n\t\t\t\t// monitoring/side controller\n\t\t\t\tvar r manager.Runnable\n\t\t\t\tmgr.EXPECT().Add(gomock.AssignableToTypeOf(&runnable{})).DoAndReturn(func(run manager.Runnable) error {\n\t\t\t\t\tr = run\n\t\t\t\t\treturn nil\n\t\t\t\t}).Times(1)\n\n\t\t\t\t// the manager start needs to at least start the monitoring controller for the right behaviour in our code\n\t\t\t\tmgr.EXPECT().Start(gomock.Any()).DoAndReturn(func(z <-chan struct{}) error {\n\t\t\t\t\tgo r.Start(z) //nolint\n\t\t\t\t\treturn nil\n\t\t\t\t}).Times(1)\n\n\t\t\t\t// after start, we call GetClient as well to assign it to the monitor\n\t\t\t\tmgr.EXPECT().GetClient().Times(1).Return(c)\n\n\t\t\t\t// on successful startup, we only expect one debug message at the end\n\t\t\t\t// we setup everything here to ensure that *only* this log will appear\n\t\t\t\t// we are additionally testing if the logic of the if condition worked\n\t\t\t\tzc := NewMockCore(ctrl)\n\t\t\t\tlogger := zap.New(zc)\n\t\t\t\tzap.ReplaceGlobals(logger)\n\t\t\t\tzc.EXPECT().Enabled(zapcore.DebugLevel).Times(1).Return(true)\n\t\t\t\tzc.EXPECT().Check(gomock.Any(), gomock.Any()).Times(1).DoAndReturn(func(ent zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {\n\t\t\t\t\treturn ce.AddCore(ent, zc)\n\t\t\t\t})\n\t\t\t\tzc.EXPECT().Write(gomock.Any(), gomock.Any()).Times(1).DoAndReturn(func(ent zapcore.Entry, fields []zapcore.Field) error {\n\t\t\t\t\texpectedLogMessage := \"pod: controller startup finished\"\n\t\t\t\t\tif !strings.HasPrefix(ent.Message, expectedLogMessage) {\n\t\t\t\t\t\tt.Errorf(\"expectedLogMessage = 
'%s', ent.Message = '%s'\", expectedLogMessage, ent.Message)\n\t\t\t\t\t\treturn nil\n\t\t\t\t\t}\n\t\t\t\t\tvar foundDuration bool\n\t\t\t\t\tfor _, field := range fields {\n\t\t\t\t\t\tif field.Key == durationKey {\n\t\t\t\t\t\t\tfoundDuration = true\n\t\t\t\t\t\t\tif field.Type != zapcore.DurationType {\n\t\t\t\t\t\t\t\tt.Errorf(\"duration field of log message is not DurationType (8), but %v\", field.Type)\n\t\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\td := time.Duration(field.Integer)\n\t\t\t\t\t\t\tif d > warningTimeout {\n\t\t\t\t\t\t\t\tt.Errorf(\"startup time (%s) surpassed the warningTimeout (%s), but printed it as debug log instead of warning\", d, warningTimeout)\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif !foundDuration {\n\t\t\t\t\t\tt.Errorf(\"did not find debug log message which has test duration field\")\n\t\t\t\t\t}\n\t\t\t\t\treturn nil\n\t\t\t\t})\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:           \"successful startup without any errors taking longer than expected\",\n\t\t\tm:              m,\n\t\t\twantKubeClient: true,\n\t\t\texpect: func(t *testing.T, ctrl *gomock.Controller, ctx context.Context, cancel context.CancelFunc) {\n\t\t\t\tmgr := NewMockManager(ctrl)\n\t\t\t\tmanagerNew = managerNewTest(mgr, nil)\n\t\t\t\tc := NewMockClient(ctrl)\n\t\t\t\tcch := NewMockCache(ctrl)\n\t\t\t\tinf := NewMockSharedIndexInformer(ctrl)\n\n\t\t\t\t// this is our version of a mocked SetFields function\n\t\t\t\tvar sf func(i interface{}) error\n\t\t\t\tsf = func(i interface{}) error {\n\t\t\t\t\tif _, err := inject.InjectorInto(sf, i); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := inject.SchemeInto(scheme.Scheme, i); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := inject.CacheInto(cch, i); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := inject.StopChannelInto(ctx.Done(), i); err != nil {\n\t\t\t\t\t\treturn 
err\n\t\t\t\t\t}\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\n\t\t\t\t// delete controller\n\t\t\t\tmgr.EXPECT().Add(gomock.AssignableToTypeOf(&DeleteController{})).Times(1).Return(nil)\n\t\t\t\tmgr.EXPECT().GetClient().Times(1).Return(c)\n\n\t\t\t\t// main controller\n\t\t\t\t// newReconciler calls these\n\t\t\t\tmgr.EXPECT().GetClient().Times(1).Return(c)\n\t\t\t\tmgr.EXPECT().GetRecorder(\"trireme-pod-controller\").Times(1).Return(nil)\n\t\t\t\t// addController calls controller.New which calls these\n\t\t\t\tmgr.EXPECT().SetFields(gomock.AssignableToTypeOf(&ReconcilePod{})).Times(1).DoAndReturn(sf)\n\t\t\t\tmgr.EXPECT().GetCache().Times(1).Return(cch)\n\t\t\t\tmgr.EXPECT().GetConfig().Times(1).Return(nil)\n\t\t\t\tmgr.EXPECT().GetScheme().Times(1).Return(scheme.Scheme)\n\t\t\t\tmgr.EXPECT().GetClient().Times(2).Return(c) // once inside of controller.New and once by us\n\t\t\t\tmgr.EXPECT().GetRecorder(\"trireme-pod-controller\").Times(1).Return(nil)\n\t\t\t\tmgr.EXPECT().Add(isKubernetesController()).Times(1).DoAndReturn(func(run manager.Runnable) error {\n\t\t\t\t\treturn sf(run)\n\t\t\t\t})\n\t\t\t\t// these are called by our c.Watch statement for registering our Pod event source\n\t\t\t\t// NOTE: this will also call Start on the informer already! 
This is the reason why the mgr.Start which\n\t\t\t\t//       waits for the caches to be filled will already download a fresh list of all the pods!\n\t\t\t\tcch.EXPECT().GetInformer(gomock.AssignableToTypeOf(&corev1.Pod{})).Times(1).DoAndReturn(func(arg0 runtime.Object) (cache.SharedIndexInformer, error) {\n\t\t\t\t\treturn inf, nil\n\t\t\t\t})\n\t\t\t\tinf.EXPECT().AddEventHandler(gomock.Any()).Times(1)\n\n\t\t\t\t// monitoring/side controller\n\t\t\t\tvar r manager.Runnable\n\t\t\t\tmgr.EXPECT().Add(gomock.AssignableToTypeOf(&runnable{})).DoAndReturn(func(run manager.Runnable) error {\n\t\t\t\t\tr = run\n\t\t\t\t\treturn nil\n\t\t\t\t}).Times(1)\n\n\t\t\t\t// the manager start needs to at least start the monitoring controller for the right behaviour in our code\n\t\t\t\tmgr.EXPECT().Start(gomock.Any()).DoAndReturn(func(z <-chan struct{}) error {\n\t\t\t\t\t// delay the startup by the warningMessage or warningTimeout messages depending on which one is longer\n\t\t\t\t\ttime.Sleep(time.Duration(math.Max(float64(warningTimeout), float64(warningMessageSleep))))\n\t\t\t\t\tgo r.Start(z) //nolint\n\t\t\t\t\treturn nil\n\t\t\t\t}).Times(1)\n\n\t\t\t\t// after start, we call GetClient as well to assign it to the monitor\n\t\t\t\tmgr.EXPECT().GetClient().Times(1).Return(c)\n\n\t\t\t\t// on successful startup, we only expect one debug message at the end\n\t\t\t\t// we setup everything here to ensure that *only* this log will appear\n\t\t\t\t// we are additionally testing if the logic of the if condition worked\n\t\t\t\tzc := NewMockCore(ctrl)\n\t\t\t\tlogger := zap.New(zc)\n\t\t\t\tzap.ReplaceGlobals(logger)\n\t\t\t\tzc.EXPECT().Enabled(zapcore.WarnLevel).Times(2).Return(true)\n\t\t\t\tzc.EXPECT().Check(gomock.Any(), gomock.Any()).Times(2).DoAndReturn(func(ent zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {\n\t\t\t\t\treturn ce.AddCore(ent, zc)\n\t\t\t\t})\n\t\t\t\texpectedLogMessages := []string{\n\t\t\t\t\t\"pod: the Kubernetes controller did not start 
within the last 5s. Waiting...\",\n\t\t\t\t\t\"pod: controller startup finished, but took longer than expected\",\n\t\t\t\t}\n\t\t\t\tzc.EXPECT().Write(gomock.Any(), gomock.Any()).Times(2).DoAndReturn(func(ent zapcore.Entry, fields []zapcore.Field) error {\n\t\t\t\t\tvar found bool\n\t\t\t\t\tfor _, expectedLogMessage := range expectedLogMessages {\n\t\t\t\t\t\tif ent.Message == expectedLogMessage {\n\t\t\t\t\t\t\tfound = true\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif !found {\n\t\t\t\t\t\tt.Errorf(\"expectedLogMessages = '%s', ent.Message = '%s'\", expectedLogMessages, ent.Message)\n\t\t\t\t\t}\n\n\t\t\t\t\t// this is where we expect the duration field\n\t\t\t\t\tif ent.Message == \"pod: controller startup finished, but took longer than expected\" {\n\t\t\t\t\t\tvar foundDuration bool\n\t\t\t\t\t\tfor _, field := range fields {\n\t\t\t\t\t\t\tif field.Key == durationKey {\n\t\t\t\t\t\t\t\tfoundDuration = true\n\t\t\t\t\t\t\t\tif field.Type != zapcore.DurationType {\n\t\t\t\t\t\t\t\t\tt.Errorf(\"duration field of log message is not DurationType (8), but %v\", field.Type)\n\t\t\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\td := time.Duration(field.Integer)\n\t\t\t\t\t\t\t\tif d < warningTimeout {\n\t\t\t\t\t\t\t\t\tt.Errorf(\"startup time (%s) is below the warningTimeout (%s), but it was printed as warning log instead of debug\", d, warningTimeout)\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif !foundDuration {\n\t\t\t\t\t\t\tt.Errorf(\"did not find warning log message which has test duration field\")\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\treturn nil\n\t\t\t\t})\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:           \"successful startup with an error once for all actions\",\n\t\t\tm:              m,\n\t\t\twantKubeClient: true,\n\t\t\texpect: func(t *testing.T, ctrl *gomock.Controller, ctx context.Context, cancel context.CancelFunc) {\n\t\t\t\tmgr := NewMockManager(ctrl)\n\t\t\t\tvar 
managerNewErrored bool\n\t\t\t\tmanagerNew = func(*rest.Config, manager.Options) (manager.Manager, error) {\n\t\t\t\t\tif !managerNewErrored {\n\t\t\t\t\t\tmanagerNewErrored = true\n\t\t\t\t\t\treturn nil, fmt.Errorf(\"errored\")\n\t\t\t\t\t}\n\t\t\t\t\treturn mgr, nil\n\t\t\t\t}\n\t\t\t\tc := NewMockClient(ctrl)\n\t\t\t\tcch := NewMockCache(ctrl)\n\t\t\t\tinf := NewMockSharedIndexInformer(ctrl)\n\n\t\t\t\t// this is our version of a mocked SetFields function\n\t\t\t\tvar sf func(i interface{}) error\n\t\t\t\tsf = func(i interface{}) error {\n\t\t\t\t\tif _, err := inject.InjectorInto(sf, i); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := inject.SchemeInto(scheme.Scheme, i); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := inject.CacheInto(cch, i); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := inject.StopChannelInto(ctx.Done(), i); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\n\t\t\t\t// delete controller\n\t\t\t\tvar deleteControllerErrored bool\n\t\t\t\tmgr.EXPECT().Add(gomock.AssignableToTypeOf(&DeleteController{})).Times(2).DoAndReturn(func(run manager.Runnable) error {\n\t\t\t\t\tif !deleteControllerErrored {\n\t\t\t\t\t\tdeleteControllerErrored = true\n\t\t\t\t\t\treturn fmt.Errorf(\"errored\")\n\t\t\t\t\t}\n\t\t\t\t\treturn nil\n\t\t\t\t})\n\t\t\t\tmgr.EXPECT().GetClient().Times(1).Return(c)\n\n\t\t\t\t// main controller\n\t\t\t\t// newReconciler calls these\n\t\t\t\tmgr.EXPECT().GetClient().Times(2).Return(c)\n\t\t\t\tmgr.EXPECT().GetRecorder(\"trireme-pod-controller\").Times(2).Return(nil)\n\t\t\t\t// addController calls controller.New which calls 
these\n\t\t\t\tmgr.EXPECT().SetFields(gomock.AssignableToTypeOf(&ReconcilePod{})).Times(2).DoAndReturn(sf)\n\t\t\t\tmgr.EXPECT().GetCache().Times(2).Return(cch)\n\t\t\t\tmgr.EXPECT().GetConfig().Times(2).Return(nil)\n\t\t\t\tmgr.EXPECT().GetScheme().Times(2).Return(scheme.Scheme)\n\t\t\t\tmgr.EXPECT().GetClient().Times(3).Return(c) // twice in controller.New because of the failure, and one time after that (that is after mgr.Add actually)\n\t\t\t\tmgr.EXPECT().GetRecorder(\"trireme-pod-controller\").Times(2).Return(nil)\n\t\t\t\tvar mainControllerErrored bool\n\t\t\t\tmgr.EXPECT().Add(isKubernetesController()).Times(2).DoAndReturn(func(run manager.Runnable) error {\n\t\t\t\t\tif !mainControllerErrored {\n\t\t\t\t\t\tmainControllerErrored = true\n\t\t\t\t\t\treturn fmt.Errorf(\"errored\")\n\t\t\t\t\t}\n\t\t\t\t\treturn sf(run)\n\t\t\t\t})\n\t\t\t\t// these are called by our c.Watch statement for registering our Pod event source\n\t\t\t\t// NOTE: this will also call Start on the informer already! 
This is the reason why the mgr.Start which\n\t\t\t\t//       waits for the caches to be filled will already download a fresh list of all the pods!\n\t\t\t\tcch.EXPECT().GetInformer(gomock.AssignableToTypeOf(&corev1.Pod{})).Times(1).DoAndReturn(func(arg0 runtime.Object) (cache.SharedIndexInformer, error) {\n\t\t\t\t\treturn inf, nil\n\t\t\t\t})\n\t\t\t\tinf.EXPECT().AddEventHandler(gomock.Any()).Times(1)\n\n\t\t\t\t// monitoring/side controller\n\t\t\t\tvar r manager.Runnable\n\t\t\t\tvar sideControllerErrored bool\n\t\t\t\tmgr.EXPECT().Add(gomock.AssignableToTypeOf(&runnable{})).DoAndReturn(func(run manager.Runnable) error {\n\t\t\t\t\tif !sideControllerErrored {\n\t\t\t\t\t\tsideControllerErrored = true\n\t\t\t\t\t\treturn fmt.Errorf(\"errored\")\n\t\t\t\t\t}\n\t\t\t\t\tr = run\n\t\t\t\t\treturn nil\n\t\t\t\t}).Times(2)\n\n\t\t\t\t// the manager start needs to at least start the monitoring controller for the right behaviour in our code\n\t\t\t\tvar managerStartErrored bool\n\t\t\t\tmgr.EXPECT().Start(gomock.Any()).DoAndReturn(func(z <-chan struct{}) error {\n\t\t\t\t\tif !managerStartErrored {\n\t\t\t\t\t\tmanagerStartErrored = true\n\t\t\t\t\t\treturn fmt.Errorf(\"errored\")\n\t\t\t\t\t}\n\t\t\t\t\tgo r.Start(z) //nolint\n\t\t\t\t\treturn nil\n\t\t\t\t}).Times(2)\n\n\t\t\t\t// after start, we call GetClient as well to assign it to the monitor\n\t\t\t\tmgr.EXPECT().GetClient().Times(1).Return(c)\n\n\t\t\t\t// on successful startup, we only expect one debug message at the end\n\t\t\t\t// we setup everything here to ensure that *only* this log will appear\n\t\t\t\t// we are additionally testing if the logic of the if condition worked\n\t\t\t\tzc := NewMockCore(ctrl)\n\t\t\t\tlogger := zap.New(zc)\n\t\t\t\tzap.ReplaceGlobals(logger)\n\t\t\t\tzc.EXPECT().Enabled(zapcore.DebugLevel).Times(1).Return(true)\n\t\t\t\tzc.EXPECT().Enabled(zapcore.ErrorLevel).Times(5).Return(true)\n\t\t\t\tzc.EXPECT().Check(gomock.Any(), gomock.Any()).Times(6).DoAndReturn(func(ent 
zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {\n\t\t\t\t\treturn ce.AddCore(ent, zc)\n\t\t\t\t})\n\t\t\t\texpectedLogMessages := []string{\n\t\t\t\t\t\"pod: new manager instantiation failed. Retrying in 3s...\",\n\t\t\t\t\t\"pod: adding delete controller failed. Retrying in 3s...\",\n\t\t\t\t\t\"pod: adding main monitor controller failed. Retrying in 3s...\",\n\t\t\t\t\t\"pod: adding side controller failed. Retrying in 3s...\",\n\t\t\t\t\t\"pod: manager start failed. Retrying in 3s...\",\n\t\t\t\t\t\"pod: controller startup finished\",\n\t\t\t\t}\n\t\t\t\tzc.EXPECT().Write(gomock.Any(), gomock.Any()).Times(6).DoAndReturn(func(ent zapcore.Entry, fields []zapcore.Field) error {\n\t\t\t\t\tvar found bool\n\t\t\t\t\tfor _, expectedLogMessage := range expectedLogMessages {\n\t\t\t\t\t\tif ent.Message == expectedLogMessage {\n\t\t\t\t\t\t\tfound = true\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif !found {\n\t\t\t\t\t\tt.Errorf(\"expectedLogMessages = '%s', ent.Message = '%s'\", expectedLogMessages, ent.Message)\n\t\t\t\t\t}\n\n\t\t\t\t\t// this is where we expect the duration field\n\t\t\t\t\tif ent.Message == \"pod: controller startup finished\" {\n\t\t\t\t\t\tvar foundDuration bool\n\t\t\t\t\t\tfor _, field := range fields {\n\t\t\t\t\t\t\tif field.Key == durationKey {\n\t\t\t\t\t\t\t\tfoundDuration = true\n\t\t\t\t\t\t\t\tif field.Type != zapcore.DurationType {\n\t\t\t\t\t\t\t\t\tt.Errorf(\"duration field of log message is not DurationType (8), but %v\", field.Type)\n\t\t\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\td := time.Duration(field.Integer)\n\t\t\t\t\t\t\t\tif d > warningTimeout {\n\t\t\t\t\t\t\t\t\tt.Errorf(\"startup time (%s) surpassed the warningTimeout (%s), but printed it as debug log instead of warning\", d, warningTimeout)\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif !foundDuration {\n\t\t\t\t\t\t\tt.Errorf(\"did not find debug log message which has test duration 
field\")\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\treturn nil\n\t\t\t\t})\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname:           \"context gets cancelled\",\n\t\t\tm:              m,\n\t\t\twantKubeClient: false,\n\t\t\texpect: func(t *testing.T, ctrl *gomock.Controller, ctx context.Context, cancel context.CancelFunc) {\n\t\t\t\tmanagerNew = managerNewTest(nil, fmt.Errorf(\"error\"))\n\t\t\t\tzc := NewMockCore(ctrl)\n\t\t\t\tlogger := zap.New(zc)\n\t\t\t\tzap.ReplaceGlobals(logger)\n\t\t\t\tzc.EXPECT().Enabled(zapcore.ErrorLevel).AnyTimes().Return(false)\n\t\t\t\tcancel()\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// create a mock controller per test run to track mocked calls\n\t\t\t// call expect() to register and prepare for the side effects of the functions\n\t\t\t// always nil the kubeClient for every call\n\t\t\tctx, cancel := context.WithCancel(context.Background())\n\t\t\tctrl := gomock.NewController(t)\n\t\t\ttt.expect(t, ctrl, ctx, cancel)\n\t\t\ttt.m.kubeClient = nil\n\n\t\t\t// probably paranoid: this ensures that nothing in the tested function actually calls out to the policy engine yet (ctrl.Finish() would catch those)\n\t\t\thandler := mockpolicy.NewMockResolver(ctrl)\n\t\t\tpc := &config.ProcessorConfig{\n\t\t\t\tPolicy: handler,\n\t\t\t}\n\t\t\ttt.m.SetupHandlers(pc)\n\n\t\t\t// now execute the mocked test\n\t\t\ttt.m.startManager(ctx)\n\n\t\t\t// do the kubeclient check\n\t\t\tif tt.wantKubeClient && tt.m.kubeClient == nil {\n\t\t\t\tt.Errorf(\"PodMonitor.startManager() kubeClient = %v, wantKubeClient %v\", tt.m.kubeClient, tt.wantKubeClient)\n\t\t\t}\n\t\t\tif !tt.wantKubeClient && tt.m.kubeClient != nil {\n\t\t\t\tt.Errorf(\"PodMonitor.startManager() kubeClient = %v, wantKubeClient %v\", tt.m.kubeClient, tt.wantKubeClient)\n\t\t\t}\n\n\t\t\t// call Finish on every test run to ensure the calls add up per test\n\t\t\t// this is essentially the real check of all the test conditions as the whole 
function is side-effecting only\n\t\t\tctrl.Finish()\n\n\t\t\t// last but not least, call cancel() so that all mocked routines which were depending on this context stop for sure\n\t\t\tcancel()\n\t\t})\n\t}\n}\n\nfunc TestPodMonitor_Resync(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tctx := context.Background()\n\n\tc := NewMockClient(ctrl)\n\tm := createNewPodMonitor()\n\thandler := mockpolicy.NewMockResolver(ctrl)\n\tpc := &config.ProcessorConfig{\n\t\tPolicy: handler,\n\t}\n\tm.SetupHandlers(pc)\n\n\ttype args struct {\n\t\tctx context.Context\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tm       *PodMonitor\n\t\texpect  func(t *testing.T, m *PodMonitor)\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"resync fails with a failing reset netcls\",\n\t\t\tm:    m,\n\t\t\targs: args{\n\t\t\t\tctx: ctx,\n\t\t\t},\n\t\t\texpect: func(t *testing.T, m *PodMonitor) {\n\t\t\t\tm.kubeClient = c\n\t\t\t\tm.resetNetcls = func(context.Context) error {\n\t\t\t\t\treturn fmt.Errorf(\"resync error\")\n\t\t\t\t}\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"resync fails with a missing kubeclient\",\n\t\t\tm:    m,\n\t\t\targs: args{\n\t\t\t\tctx: ctx,\n\t\t\t},\n\t\t\texpect: func(t *testing.T, m *PodMonitor) {\n\t\t\t\tm.kubeClient = nil\n\t\t\t\tm.resetNetcls = func(context.Context) error {\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"successful call to ResyncWithAllPods\",\n\t\t\tm:    m,\n\t\t\targs: args{\n\t\t\t\tctx: ctx,\n\t\t\t},\n\t\t\texpect: func(t *testing.T, m *PodMonitor) {\n\t\t\t\tm.kubeClient = c\n\t\t\t\tm.resetNetcls = func(context.Context) error {\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\t\t\t\tc.EXPECT().List(gomock.Any(), gomock.Any(), gomock.Any()).Times(1).Return(nil)\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t\t// nothing more to test, the heavy lifting is done in ResyncWithAllPods\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t 
*testing.T) {\n\t\t\ttt.expect(t, tt.m)\n\t\t\tif err := tt.m.Resync(tt.args.ctx); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"PodMonitor.Resync() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/pod/resync.go",
    "content": "// +build !windows\n\npackage podmonitor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"sync\"\n\t\"time\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\t\"sigs.k8s.io/controller-runtime/pkg/event\"\n\n\t\"go.uber.org/zap\"\n)\n\n// ResyncWithAllPods is called from the implemented Resync. It lists all pods\n// and fires them down the event source (the generic event channel).\n// It blocks until every pod at the time of calling has gone through `Reconcile` at least once.\nfunc ResyncWithAllPods(ctx context.Context, c client.Client, i *ResyncInfoChan, evCh chan<- event.GenericEvent, nodeName string) error {\n\tzap.L().Debug(\"Pod resync: starting to resync all pods\")\n\tif c == nil {\n\t\treturn errors.New(\"pod: no client available\")\n\t}\n\n\tif evCh == nil {\n\t\treturn errors.New(\"pod: no event source available\")\n\t}\n\n\tif i == nil {\n\t\treturn errors.New(\"pod: no resync info channel available\")\n\t}\n\n\tlist := &corev1.PodList{}\n\tif err := c.List(ctx, &client.ListOptions{}, list); err != nil {\n\t\treturn fmt.Errorf(\"pod: %s\", err.Error())\n\t}\n\n\t// build a map of pods that we will expect to turn true\n\tm := make(map[string]bool)\n\tfor _, pod := range list.Items {\n\t\tif pod.Spec.NodeName != nodeName {\n\t\t\tcontinue\n\t\t}\n\t\tpodName := pod.GetName()\n\t\tpodNamespace := pod.GetNamespace()\n\t\tif podName != \"\" && podNamespace != \"\" {\n\t\t\tm[fmt.Sprintf(\"%s/%s\", podNamespace, podName)] = false\n\t\t}\n\t}\n\tzap.L().Debug(\"Pod resync: pods that need to be resynced\", zap.Any(\"pods\", m))\n\n\t// Request that the controller reports to us from now on\n\ti.EnableNeedsInfo()\n\n\t// fire away events to the controller\n\tfor _, pod := range list.Items {\n\t\tif pod.Spec.NodeName != nodeName {\n\t\t\tcontinue\n\t\t}\n\t\tp := pod.DeepCopy()\n\t\tevCh <- event.GenericEvent{\n\t\t\tMeta:   p.GetObjectMeta(),\n\t\t\tObject: p,\n\t\t}\n\t}\n\n\t// now wait 
for all pods to have reported back\n\tbegin := time.Now()\nwaitLoop:\n\tfor {\n\t\tif time.Since(begin) > (time.Second * 60) {\n\t\t\tzap.L().Warn(\"Pod resync: failed to reconcile on all pods. Unblocking now anyway.\")\n\t\t\tbreak waitLoop\n\t\t}\n\n\t\tselect {\n\t\tcase info := <-*i.GetInfoCh():\n\t\t\tif _, ok := m[info]; ok {\n\t\t\t\tzap.L().Debug(\"Pod resync: pod that is part of the resync\", zap.String(\"pod\", info))\n\t\t\t\tm[info] = true\n\t\t\t} else {\n\t\t\t\tzap.L().Debug(\"Pod resync: *not* a pod that is part of the resync\", zap.String(\"pod\", info))\n\t\t\t}\n\t\tcase <-time.After(time.Second * 5):\n\t\t\tzap.L().Debug(\"Pod resync: timeout waiting for pod reconcile\")\n\t\t}\n\n\t\t// now check if we can abort already\n\t\tfor _, v := range m {\n\t\t\tif !v {\n\t\t\t\tcontinue waitLoop\n\t\t\t}\n\t\t}\n\t\tbreak waitLoop\n\t}\n\ti.DisableNeedsInfo()\n\tzap.L().Debug(\"Pod resync: finished resyncing all pods\")\n\n\treturn nil\n}\n\n// ResyncInfoChan is used to report back from the controller on which pods it has processed.\n// It allows the Resync of the monitor to block and wait until a list has been processed.\ntype ResyncInfoChan struct {\n\tm  sync.RWMutex\n\tb  bool\n\tch chan string\n}\n\n// NewResyncInfoChan creates a new ResyncInfoChan\nfunc NewResyncInfoChan() *ResyncInfoChan {\n\treturn &ResyncInfoChan{\n\t\tch: make(chan string, 100),\n\t}\n}\n\n// EnableNeedsInfo enables the need for sending info\nfunc (r *ResyncInfoChan) EnableNeedsInfo() {\n\tr.m.Lock()\n\tdefer r.m.Unlock()\n\tr.b = true\n}\n\n// DisableNeedsInfo disables the need for sending info\nfunc (r *ResyncInfoChan) DisableNeedsInfo() {\n\tr.m.Lock()\n\tdefer r.m.Unlock()\n\tr.b = false\n}\n\n// NeedsInfo returns if there is a need for sending info\nfunc (r *ResyncInfoChan) NeedsInfo() bool {\n\tr.m.RLock()\n\tdefer r.m.RUnlock()\n\treturn r.b\n}\n\n// SendInfo will make the info available through an internal channel\nfunc (r *ResyncInfoChan) SendInfo(info string) 
{\n\tr.m.RLock()\n\tdefer r.m.RUnlock()\n\tif r.b {\n\t\tr.ch <- info\n\t}\n}\n\n// GetInfoCh returns the channel\nfunc (r *ResyncInfoChan) GetInfoCh() *chan string {\n\tr.m.RLock()\n\tdefer r.m.RUnlock()\n\treturn &r.ch\n}\n"
  },
  {
    "path": "monitor/internal/pod/resync_test.go",
    "content": "// +build !windows\n\npackage podmonitor\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\n\tfakeclient \"sigs.k8s.io/controller-runtime/pkg/client/fake\"\n\t\"sigs.k8s.io/controller-runtime/pkg/event\"\n)\n\nfunc TestResyncWithAllPods(t *testing.T) {\n\tConvey(\"Given a client, two pods and an event channel\", t, func() {\n\t\tctx := context.TODO()\n\t\tevCh := make(chan event.GenericEvent, 100)\n\t\tresyncInfo := NewResyncInfoChan()\n\t\tpod1 := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"pod1\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t},\n\t\t\tSpec: corev1.PodSpec{\n\t\t\t\tNodeName: \"node\",\n\t\t\t},\n\t\t}\n\t\tpod2 := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName:      \"pod2\",\n\t\t\t\tNamespace: \"default\",\n\t\t\t},\n\t\t\tSpec: corev1.PodSpec{\n\t\t\t\tNodeName: \"node\",\n\t\t\t},\n\t\t}\n\t\tc := fakeclient.NewFakeClient(pod1, pod2)\n\n\t\tConvey(\"resync should fail if there is no client\", func() {\n\t\t\terr := ResyncWithAllPods(ctx, nil, resyncInfo, evCh, \"node\")\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), ShouldEqual, \"pod: no client available\")\n\t\t})\n\n\t\tConvey(\"resync should fail if there is no event channel\", func() {\n\t\t\terr := ResyncWithAllPods(ctx, c, resyncInfo, nil, \"node\")\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tSo(err.Error(), ShouldEqual, \"pod: no event source available\")\n\t\t})\n\n\t\tConvey(\"resync should successfully send messages with all pods\", func() {\n\t\t\tresyncInfo.EnableNeedsInfo()\n\t\t\tresyncInfo.SendInfo(\"default/pod1\")\n\t\t\tresyncInfo.SendInfo(\"default/pod2\")\n\t\t\terr := ResyncWithAllPods(ctx, c, resyncInfo, evCh, \"node\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tallPods := []string{\"pod1\", \"pod2\"}\n\t\t\tcollectedPods := []string{}\n\n\t\t\tobj1 := <-evCh\n\t\t\tSo(obj1.Meta.GetName(), 
ShouldBeIn, allPods)\n\t\t\tcollectedPods = append(collectedPods, obj1.Meta.GetName())\n\t\t\tobj2 := <-evCh\n\t\t\tSo(obj2.Meta.GetName(), ShouldBeIn, allPods)\n\t\t\tcollectedPods = append(collectedPods, obj2.Meta.GetName())\n\t\t\tSo(\"pod1\", ShouldBeIn, collectedPods)\n\t\t\tSo(\"pod2\", ShouldBeIn, collectedPods)\n\t\t\tSo(obj1.Meta.GetName(), ShouldNotEqual, obj2.Meta.GetName())\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "monitor/internal/pod/testdata/kubeconfig",
    "content": "apiVersion: v1\nclusters:\n- cluster:\n    server: https://localhost:9443\n  name: apiserver\ncontexts:\n- context:\n    cluster: apiserver\n    user: apiserver\n  name: apiserver\ncurrent-context: apiserver\nkind: Config\npreferences: {}\nusers:\n- name: apiserver\n  user: {}"
  },
  {
    "path": "monitor/internal/pod/watcher.go",
    "content": "// +build !windows\n\npackage podmonitor\n\nimport (\n\tcorev1 \"k8s.io/api/core/v1\"\n\t\"k8s.io/apimachinery/pkg/types\"\n\t\"sigs.k8s.io/controller-runtime/pkg/client\"\n\t\"sigs.k8s.io/controller-runtime/pkg/handler\"\n\t\"sigs.k8s.io/controller-runtime/pkg/reconcile\"\n)\n\n// WatchPodMapper determines if we want to reconcile on a pod event. There are two limitations:\n// - the pod must be scheduled on a matching nodeName\n// - if the pod requests host networking, only reconcile if we want to enable host pods\ntype WatchPodMapper struct {\n\tclient         client.Client\n\tnodeName       string\n\tenableHostPods bool\n}\n\n// Map implements the handler.Mapper interface to emit reconciles for corev1.Pods. It effectively\n// filters the pods by looking for a matching nodeName and filters them out if host networking is requested,\n// but we don't want to enable those.\nfunc (w *WatchPodMapper) Map(obj handler.MapObject) []reconcile.Request {\n\tpod, ok := obj.Object.(*corev1.Pod)\n\tif !ok {\n\t\treturn nil\n\t}\n\n\tif pod.Spec.NodeName != w.nodeName {\n\t\treturn nil\n\t}\n\n\tif pod.Spec.HostNetwork && !w.enableHostPods {\n\t\treturn nil\n\t}\n\n\treturn []reconcile.Request{\n\t\t{\n\t\t\tNamespacedName: types.NamespacedName{\n\t\t\t\tName:      pod.Name,\n\t\t\t\tNamespace: pod.Namespace,\n\t\t\t},\n\t\t},\n\t}\n}\n"
  },
  {
    "path": "monitor/internal/pod/watcher_test.go",
    "content": "// +build !windows\n\npackage podmonitor\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\n\tcorev1 \"k8s.io/api/core/v1\"\n\tmetav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\"\n\t\"sigs.k8s.io/controller-runtime/pkg/handler\"\n)\n\nfunc TestWatchPodMapper(t *testing.T) {\n\tConvey(\"Given a watch pod mapper and a pod\", t, func() {\n\t\tm := WatchPodMapper{\n\t\t\t// client is currently not in use for this mapper\n\t\t\tclient:         nil,\n\t\t\tnodeName:       \"testing-node\",\n\t\t\tenableHostPods: true,\n\t\t}\n\t\tp := &corev1.Pod{\n\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\tName: \"p1\",\n\t\t\t},\n\t\t\tSpec: corev1.PodSpec{\n\t\t\t\tNodeName: \"testing-node\",\n\t\t\t},\n\t\t}\n\t\tConvey(\"do not reconcile if the object is not a pod\", func() {\n\t\t\tsvc := &corev1.Service{\n\t\t\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\t\t\tName: \"service\",\n\t\t\t\t},\n\t\t\t}\n\t\t\treqs := m.Map(handler.MapObject{Meta: svc.GetObjectMeta(), Object: svc})\n\t\t\tSo(reqs, ShouldHaveLength, 0)\n\t\t})\n\t\tConvey(\"do not reconcile if the node name does not match\", func() {\n\t\t\tp.Spec.NodeName = \"wrong\"\n\t\t\treqs := m.Map(handler.MapObject{Meta: p.GetObjectMeta(), Object: p})\n\t\t\tSo(reqs, ShouldHaveLength, 0)\n\t\t})\n\t\tConvey(\"do not reconcile if enabling host pods is not enabled, but the pod has HostNetwork set to true\", func() {\n\t\t\tm.enableHostPods = false\n\t\t\tp.Spec.HostNetwork = true\n\t\t\treqs := m.Map(handler.MapObject{Meta: p.GetObjectMeta(), Object: p})\n\t\t\tSo(reqs, ShouldHaveLength, 0)\n\t\t})\n\t\tConvey(\"reconcile if the node name matches\", func() {\n\t\t\treqs := m.Map(handler.MapObject{Meta: p.GetObjectMeta(), Object: p})\n\t\t\tSo(reqs, ShouldHaveLength, 1)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "monitor/internal/uid/config.go",
    "content": "package uidmonitor\n\nimport (\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n)\n\n// Config holds the configuration options to start a uid monitor\ntype Config struct {\n\tEventMetadataExtractor extractors.EventMetadataExtractor\n\tStoredPath             string\n\tReleasePath            string\n}\n\n// DefaultConfig provides default configuration for uid monitor\nfunc DefaultConfig() *Config {\n\n\treturn &Config{\n\t\tEventMetadataExtractor: extractors.UIDMetadataExtractor,\n\t\tStoredPath:             \"/var/run/trireme_uid\",\n\t\tReleasePath:            \"\",\n\t}\n}\n\n// SetupDefaultConfig adds defaults to a partial configuration\nfunc SetupDefaultConfig(uidConfig *Config) *Config {\n\n\tdefaultConfig := DefaultConfig()\n\n\tif uidConfig.ReleasePath == \"\" {\n\t\tuidConfig.ReleasePath = defaultConfig.ReleasePath\n\t}\n\tif uidConfig.StoredPath == \"\" {\n\t\tuidConfig.StoredPath = defaultConfig.StoredPath\n\t}\n\tif uidConfig.EventMetadataExtractor == nil {\n\t\tuidConfig.EventMetadataExtractor = defaultConfig.EventMetadataExtractor\n\t}\n\n\treturn uidConfig\n}\n"
  },
  {
    "path": "monitor/internal/uid/monitor.go",
    "content": "package uidmonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"regexp\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/monitor/registerer\"\n\t\"go.aporeto.io/trireme-lib/utils/cache\"\n\t\"go.aporeto.io/trireme-lib/utils/cgnetcls\"\n)\n\n// UIDMonitor captures all the monitor processor information for a UIDLoginPU\n// It implements the EventProcessor interface of the rpc monitor\ntype UIDMonitor struct {\n\tproc *uidProcessor\n}\n\n// New returns a new monitor implementation\nfunc New() *UIDMonitor {\n\n\treturn &UIDMonitor{\n\t\tproc: &uidProcessor{},\n\t}\n}\n\n// Run implements the Implementation interface\nfunc (u *UIDMonitor) Run(ctx context.Context) error {\n\n\tif err := u.proc.config.IsComplete(); err != nil {\n\t\treturn fmt.Errorf(\"uid: %s\", err)\n\t}\n\n\treturn u.Resync(ctx)\n}\n\n// SetupConfig provides a configuration to implementations. Every implementation\n// can have its own config type.\nfunc (u *UIDMonitor) SetupConfig(registerer registerer.Registerer, cfg interface{}) error {\n\n\tdefaultConfig := DefaultConfig()\n\tif cfg == nil {\n\t\tcfg = defaultConfig\n\t}\n\n\tuidConfig, ok := cfg.(*Config)\n\tif !ok {\n\t\treturn fmt.Errorf(\"Invalid configuration specified\")\n\t}\n\n\tif registerer != nil {\n\t\tif err := registerer.RegisterProcessor(common.UIDLoginPU, u.proc); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// Setup defaults\n\tuidConfig = SetupDefaultConfig(uidConfig)\n\n\t// Setup config\n\tu.proc.netcls = cgnetcls.NewCgroupNetController(common.TriremeUIDCgroupPath, uidConfig.ReleasePath)\n\tu.proc.regStart = regexp.MustCompile(\"^[a-zA-Z0-9_]{1,11}$\")\n\tu.proc.regStop = regexp.MustCompile(\"^/trireme/[a-zA-Z0-9_]{1,11}$\")\n\tu.proc.putoPidMap = cache.NewCache(\"putoPidMap\")\n\tu.proc.pidToPU = cache.NewCache(\"pidToPU\")\n\tu.proc.metadataExtractor = uidConfig.EventMetadataExtractor\n\tif u.proc.metadataExtractor == nil 
{\n\t\treturn fmt.Errorf(\"Unable to setup a metadata extractor\")\n\t}\n\n\treturn nil\n}\n\n// SetupHandlers sets up handlers for monitors to invoke for various events such as\n// processing unit events and synchronization events. This will be called before Start()\n// by the consumer of the monitor\nfunc (u *UIDMonitor) SetupHandlers(m *config.ProcessorConfig) {\n\n\tu.proc.config = m\n}\n\n// Resync asks the monitor to do a resync\nfunc (u *UIDMonitor) Resync(ctx context.Context) error {\n\treturn u.proc.Resync(ctx, nil)\n}\n"
  },
  {
    "path": "monitor/internal/uid/processor.go",
    "content": "package uidmonitor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"go.aporeto.io/trireme-lib/collector\"\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/trireme-lib/policy\"\n\t\"go.aporeto.io/trireme-lib/utils/cache\"\n\t\"go.aporeto.io/trireme-lib/utils/cgnetcls\"\n\t\"go.uber.org/zap\"\n)\n\nvar ignoreNames = map[string]*struct{}{\n\t\"cgroup.clone_children\": nil,\n\t\"cgroup.procs\":          nil,\n\t\"net_cls.classid\":       nil,\n\t\"net_prio.ifpriomap\":    nil,\n\t\"net_prio.prioidx\":      nil,\n\t\"notify_on_release\":     nil,\n\t\"tasks\":                 nil,\n}\n\n// uidProcessor captures all the monitor processor information for a UIDLoginPU\n// It implements the EventProcessor interface of the rpc monitor\ntype uidProcessor struct {\n\tconfig            *config.ProcessorConfig\n\tmetadataExtractor extractors.EventMetadataExtractor\n\tnetcls            cgnetcls.Cgroupnetcls\n\tregStart          *regexp.Regexp\n\tregStop           *regexp.Regexp\n\tputoPidMap        *cache.Cache\n\tpidToPU           *cache.Cache\n\tsync.Mutex\n}\n\nconst (\n\ttriremeBaseCgroup = \"/trireme\"\n)\n\n// puToPidEntry represents an entry to puToPidMap\ntype puToPidEntry struct {\n\tpidlist            map[int32]bool\n\tInfo               *policy.PURuntime\n\tpublishedContextID string\n}\n\n// Start handles start events\nfunc (u *uidProcessor) Start(ctx context.Context, eventInfo *common.EventInfo) error {\n\n\treturn u.createAndStart(ctx, eventInfo, false)\n}\n\n// Stop handles a stop event and destroy as well. 
Destroy does nothing for the uid monitor\nfunc (u *uidProcessor) Stop(ctx context.Context, eventInfo *common.EventInfo) error {\n\n\tpuID := eventInfo.PUID\n\n\tif puID == triremeBaseCgroup {\n\t\tu.netcls.Deletebasepath(puID)\n\t\treturn nil\n\t}\n\n\tu.Lock()\n\tdefer u.Unlock()\n\n\t// Take the PID part of the user/pid PUID\n\tvar pid string\n\tuserID := eventInfo.PUID\n\tparts := strings.SplitN(puID, \"/\", 2)\n\tif len(parts) == 2 {\n\t\tuserID = parts[0]\n\t\tpid = parts[1]\n\t}\n\n\tif len(pid) > 0 {\n\t\t// Delete the cgroup for that pid\n\t\tif err := u.netcls.DeleteCgroup(puID); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tif pidlist, err := u.putoPidMap.Get(userID); err == nil {\n\t\t\tpidCxt := pidlist.(*puToPidEntry)\n\n\t\t\tiPid, err := strconv.Atoi(pid)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\t// Clean pid from both caches\n\t\t\tdelete(pidCxt.pidlist, int32(iPid))\n\n\t\t\tif err = u.pidToPU.Remove(int32(iPid)); err != nil {\n\t\t\t\tzap.L().Warn(\"Failed to remove entry in the cache\", zap.Error(err), zap.String(\"stopped pid\", pid))\n\t\t\t}\n\t\t}\n\t\treturn nil\n\t}\n\n\truntime := policy.NewPURuntimeWithDefaults()\n\truntime.SetPUType(common.UIDLoginPU)\n\n\t// Since all the PIDs of the user are gone, we can delete the user context.\n\tif err := u.config.Policy.HandlePUEvent(ctx, userID, common.EventStop, runtime); err != nil {\n\t\tzap.L().Warn(\"Failed to stop trireme PU\",\n\t\t\tzap.String(\"puID\", puID),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tif err := u.config.Policy.HandlePUEvent(ctx, userID, common.EventDestroy, runtime); err != nil {\n\t\tzap.L().Warn(\"Failed to destroy trireme PU\",\n\t\t\tzap.String(\"puID\", puID),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\n\tif err := u.putoPidMap.Remove(userID); err != nil {\n\t\tzap.L().Warn(\"Failed to remove entry in the cache\", zap.Error(err), zap.String(\"puID\", puID))\n\t}\n\n\treturn u.netcls.DeleteCgroup(strings.TrimRight(userID, \"/\"))\n}\n\n// Create handles 
create events\nfunc (u *uidProcessor) Create(ctx context.Context, eventInfo *common.EventInfo) error {\n\treturn nil\n}\n\n// Destroy handles a destroy event\nfunc (u *uidProcessor) Destroy(ctx context.Context, eventInfo *common.EventInfo) error {\n\t// Destroy is not used for the UIDMonitor, since we destroy when we get a stop.\n\t// This saves some time, as separate Stop/Destroy would be two RPC calls.\n\t// We don't define pause on the uid monitor, so stop is always followed by destroy.\n\treturn nil\n}\n\n// Pause handles a pause event\nfunc (u *uidProcessor) Pause(ctx context.Context, eventInfo *common.EventInfo) error {\n\n\treturn u.config.Policy.HandlePUEvent(ctx, eventInfo.PUID, common.EventPause, nil)\n}\n\n// Resync resyncs with all the existing services that were there before we start\nfunc (u *uidProcessor) Resync(ctx context.Context, e *common.EventInfo) error {\n\n\tuids := u.netcls.ListAllCgroups(\"\")\n\tfor _, uid := range uids {\n\n\t\tif _, ok := ignoreNames[uid]; ok {\n\t\t\tcontinue\n\t\t}\n\n\t\tprocessesOfUID := u.netcls.ListAllCgroups(uid)\n\t\tactivePids := []int32{}\n\n\t\tfor _, pid := range processesOfUID {\n\t\t\tif _, ok := ignoreNames[pid]; ok {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tcgroupPath := uid + \"/\" + pid\n\t\t\tpidlist, _ := u.netcls.ListCgroupProcesses(cgroupPath)\n\t\t\tif len(pidlist) == 0 {\n\t\t\t\tif err := u.netcls.DeleteCgroup(cgroupPath); err != nil {\n\t\t\t\t\tzap.L().Warn(\"Unable to delete cgroup\",\n\t\t\t\t\t\tzap.String(\"cgroup\", cgroupPath),\n\t\t\t\t\t\tzap.Error(err),\n\t\t\t\t\t)\n\t\t\t\t}\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tiPid, _ := strconv.Atoi(pid)\n\t\t\tactivePids = append(activePids, int32(iPid))\n\t\t}\n\n\t\tif len(activePids) == 0 {\n\t\t\tif err := u.netcls.DeleteCgroup(uid); err != nil {\n\t\t\t\tzap.L().Warn(\"Unable to delete cgroup\",\n\t\t\t\t\tzap.String(\"cgroup\", uid),\n\t\t\t\t\tzap.Error(err),\n\t\t\t\t)\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\tevent := &common.EventInfo{\n\t\t\tPID:    
activePids[0],\n\t\t\tPUID:   uid,\n\t\t\tPUType: common.UIDLoginPU,\n\t\t}\n\n\t\tif err := u.createAndStart(ctx, event, true); err != nil {\n\t\t\tzap.L().Error(\"Can not synchronize user\", zap.String(\"user\", uid))\n\t\t}\n\n\t\tfor i := 1; i < len(activePids); i++ {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPID:    activePids[i],\n\t\t\t\tPUID:   uid,\n\t\t\t\tPUType: common.UIDLoginPU,\n\t\t\t}\n\t\t\tif err := u.createAndStart(ctx, event, true); err != nil {\n\t\t\t\tzap.L().Error(\"Can not synchronize user\", zap.String(\"user\", uid))\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (u *uidProcessor) createAndStart(ctx context.Context, eventInfo *common.EventInfo, startOnly bool) error {\n\n\tu.Lock()\n\tdefer u.Unlock()\n\n\tif eventInfo.Name == \"\" {\n\t\teventInfo.Name = eventInfo.PUID\n\t}\n\n\tpuID := eventInfo.PUID\n\tpids, err := u.putoPidMap.Get(puID)\n\tvar runtimeInfo *policy.PURuntime\n\tif err != nil {\n\t\truntimeInfo, err = u.metadataExtractor(eventInfo)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tpublishedContextID := puID\n\t\t// Setup the run time\n\t\tif !startOnly {\n\t\t\tif perr := u.config.Policy.HandlePUEvent(ctx, publishedContextID, common.EventCreate, runtimeInfo); perr != nil {\n\t\t\t\tzap.L().Error(\"Failed to create process\", zap.Error(perr))\n\t\t\t\treturn perr\n\t\t\t}\n\t\t}\n\n\t\tif perr := u.config.Policy.HandlePUEvent(ctx, publishedContextID, common.EventStart, runtimeInfo); perr != nil {\n\t\t\tzap.L().Error(\"Failed to start process\", zap.Error(perr))\n\t\t\treturn perr\n\t\t}\n\n\t\tif err = u.processLinuxServiceStart(puID, eventInfo, runtimeInfo); err != nil {\n\t\t\tzap.L().Error(\"processLinuxServiceStart\", zap.Error(err))\n\t\t\treturn err\n\t\t}\n\n\t\tu.config.Collector.CollectContainerEvent(&collector.ContainerRecord{\n\t\t\tContextID: puID,\n\t\t\tIPAddress: runtimeInfo.IPAddresses(),\n\t\t\tTags:      runtimeInfo.Tags(),\n\t\t\tEvent:     collector.ContainerStart,\n\t\t})\n\n\t\tentry := 
&puToPidEntry{\n\t\t\tInfo:               runtimeInfo,\n\t\t\tpublishedContextID: publishedContextID,\n\t\t\tpidlist:            map[int32]bool{},\n\t\t}\n\n\t\tif err := u.putoPidMap.Add(puID, entry); err != nil {\n\t\t\tzap.L().Warn(\"Failed to add puID/PU in the cache\",\n\t\t\t\tzap.Error(err),\n\t\t\t\tzap.String(\"puID\", puID),\n\t\t\t)\n\t\t}\n\n\t\tpids = entry\n\t}\n\n\tpids.(*puToPidEntry).pidlist[eventInfo.PID] = true\n\tif err := u.pidToPU.Add(eventInfo.PID, eventInfo.PUID); err != nil {\n\t\tzap.L().Warn(\"Failed to add eventInfoPID/eventInfoPUID in the cache\",\n\t\t\tzap.Error(err),\n\t\t\tzap.Int32(\"eventInfo.PID\", eventInfo.PID),\n\t\t\tzap.String(\"eventInfo.PUID\", eventInfo.PUID),\n\t\t)\n\t}\n\n\tpidPath := puID + \"/\" + strconv.Itoa(int(eventInfo.PID))\n\n\treturn u.processLinuxServiceStart(pidPath, eventInfo, pids.(*puToPidEntry).Info)\n\n}\n\nfunc (u *uidProcessor) processLinuxServiceStart(pidName string, event *common.EventInfo, runtimeInfo *policy.PURuntime) error {\n\n\tif err := u.netcls.Creategroup(pidName); err != nil {\n\t\tzap.L().Error(\"Failed to create cgroup for the user\", zap.String(\"user\", pidName), zap.Error(err))\n\t\treturn err\n\t}\n\n\tmarkval := runtimeInfo.Options().CgroupMark\n\tif markval == \"\" {\n\t\tif derr := u.netcls.DeleteCgroup(pidName); derr != nil {\n\t\t\tzap.L().Warn(\"Failed to clean cgroup\", zap.Error(derr))\n\t\t}\n\t\treturn errors.New(\"mark value not found\")\n\t}\n\n\tmark, err := strconv.ParseUint(markval, 10, 32)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif err = u.netcls.AssignMark(pidName, mark); err != nil {\n\t\tif derr := u.netcls.DeleteCgroup(pidName); derr != nil {\n\t\t\tzap.L().Warn(\"Failed to clean cgroup\", zap.Error(derr))\n\t\t}\n\t\treturn err\n\t}\n\n\tif err := u.netcls.AddProcess(pidName, int(event.PID)); err != nil {\n\t\tif derr := u.netcls.DeleteCgroup(pidName); derr != nil {\n\t\t\tzap.L().Warn(\"Failed to clean cgroup\", zap.Error(derr))\n\t\t}\n\t\treturn 
err\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "monitor/internal/windows/config.go",
    "content": "// +build windows\n\npackage windowsmonitor\n\nimport \"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\n// Config is the configuration options to start a CNI monitor\ntype Config struct {\n\tEventMetadataExtractor extractors.EventMetadataExtractor\n\tHost                   bool\n}\n\n// DefaultConfig provides a default configuration\nfunc DefaultConfig(host bool) *Config {\n\treturn &Config{\n\t\tEventMetadataExtractor: extractors.DefaultHostMetadataExtractor,\n\t\tHost:                   host,\n\t}\n}\n\n// SetupDefaultConfig adds defaults to a partial configuration\nfunc SetupDefaultConfig(windowsConfig *Config) *Config {\n\n\tdefaultConfig := DefaultConfig(windowsConfig.Host)\n\tif windowsConfig.EventMetadataExtractor == nil {\n\t\twindowsConfig.EventMetadataExtractor = defaultConfig.EventMetadataExtractor\n\t}\n\treturn windowsConfig\n}\n"
  },
  {
    "path": "monitor/internal/windows/monitor.go",
    "content": "// +build windows\n\npackage windowsmonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"regexp\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/registerer\"\n)\n\n// WindowsMonitor hold state for the windows monitor\ntype WindowsMonitor struct {\n\tproc *windowsProcessor\n}\n\n// New returns a new implmentation of a monitor implmentation\nfunc New(context.Context) *WindowsMonitor {\n\treturn &WindowsMonitor{\n\t\tproc: &windowsProcessor{},\n\t}\n}\n\n// Run implements Implementation interface\nfunc (w *WindowsMonitor) Run(ctx context.Context) error {\n\tif err := w.proc.config.IsComplete(); err != nil {\n\t\treturn fmt.Errorf(\"windows %t: %s\", w.proc.host, err)\n\t}\n\n\treturn w.Resync(ctx)\n\n}\n\n// SetupHandlers sets up handlers for monitors to invoke for various events such as\n// processing unit events and synchronization events. This will be called before Start()\n// by the consumer of the monitor\nfunc (w *WindowsMonitor) SetupHandlers(m *config.ProcessorConfig) {\n\tw.proc.config = m\n}\n\n// SetupConfig sets up the config for the monitor\nfunc (w *WindowsMonitor) SetupConfig(registerer registerer.Registerer, cfg interface{}) error {\n\tif cfg == nil {\n\t\tcfg = DefaultConfig(false)\n\t}\n\twindowsConfig, ok := cfg.(*Config)\n\tif !ok {\n\t\treturn fmt.Errorf(\"Invalid configuration specified\")\n\t}\n\tif registerer != nil {\n\t\tif err := registerer.RegisterProcessor(common.HostPU, w.proc); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif err := registerer.RegisterProcessor(common.HostNetworkPU, w.proc); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif err := registerer.RegisterProcessor(common.WindowsProcessPU, w.proc); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\twindowsConfig = SetupDefaultConfig(windowsConfig)\n\tw.proc.host = windowsConfig.Host\n\tw.proc.regStart = 
regexp.MustCompile(\"^[a-zA-Z0-9_]{1,11}$\")\n\tw.proc.metadataExtractor = windowsConfig.EventMetadataExtractor\n\tif w.proc.metadataExtractor == nil {\n\t\treturn fmt.Errorf(\"Unable to set up a metadata extractor\")\n\t}\n\treturn nil\n}\n\n// Resync instructs the monitor to do a resync.\nfunc (w *WindowsMonitor) Resync(ctx context.Context) error {\n\treturn w.proc.Resync(ctx, nil)\n}\n"
  },
  {
    "path": "monitor/internal/windows/processor.go",
    "content": "// +build windows\n\npackage windowsmonitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"regexp\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.uber.org/zap\"\n)\n\ntype windowsProcessor struct {\n\tregStart          *regexp.Regexp\n\tmetadataExtractor extractors.EventMetadataExtractor\n\tconfig            *config.ProcessorConfig\n\thost              bool\n}\n\n// Start processes PU start events\nfunc (w *windowsProcessor) Start(ctx context.Context, eventInfo *common.EventInfo) error {\n\t// Validate the PUID format. Additional validations TODO\n\tif !w.regStart.Match([]byte(eventInfo.PUID)) {\n\t\treturn fmt.Errorf(\"invalid pu id: %s\", eventInfo.PUID)\n\t}\n\n\t// Normalize to a nativeID context. This will become key for any recoveries\n\t// and it's an one way function.\n\tnativeID := w.generateContextID(eventInfo)\n\t// Extract the metadata and create the runtime\n\truntime, err := w.metadataExtractor(eventInfo)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// We need to send a create event to the policy engine.\n\tif err = w.config.Policy.HandlePUEvent(ctx, nativeID, common.EventCreate, runtime); err != nil {\n\t\treturn fmt.Errorf(\"Unable to create PU: %s\", err)\n\t}\n\n\t// We can now send a start event to the policy engine\n\tif err = w.config.Policy.HandlePUEvent(ctx, nativeID, common.EventStart, runtime); err != nil {\n\t\treturn fmt.Errorf(\"Unable to start PU: %s\", err)\n\t}\n\treturn nil\n}\n\n// Event processes PU stop events\nfunc (w *windowsProcessor) Stop(ctx context.Context, eventInfo *common.EventInfo) error {\n\tpuID := w.generateContextID(eventInfo)\n\truntime := policy.NewPURuntimeWithDefaults()\n\truntime.SetPUType(eventInfo.PUType)\n\n\treturn w.config.Policy.HandlePUEvent(ctx, puID, common.EventStop, runtime)\n}\n\n// Create process a PU 
create event\nfunc (w *windowsProcessor) Create(ctx context.Context, eventInfo *common.EventInfo) error {\n\treturn fmt.Errorf(\"Use start directly for windows processes. Create not supported\")\n}\n\n// Destroy processes a PU destroy event\nfunc (w *windowsProcessor) Destroy(ctx context.Context, eventInfo *common.EventInfo) error {\n\tpuID := w.generateContextID(eventInfo)\n\truntime := policy.NewPURuntimeWithDefaults()\n\truntime.SetPUType(eventInfo.PUType)\n\n\t// Send the event upstream\n\tif err := w.config.Policy.HandlePUEvent(ctx, puID, common.EventDestroy, runtime); err != nil {\n\t\tzap.L().Warn(\"Unable to clean trireme\",\n\t\t\tzap.String(\"puID\", puID),\n\t\t\tzap.Error(err),\n\t\t)\n\t}\n\treturn nil\n}\n\n// Pause processes a pause event\nfunc (w *windowsProcessor) Pause(ctx context.Context, eventInfo *common.EventInfo) error {\n\treturn fmt.Errorf(\"Use start directly for windows processes. Pause not supported\")\n}\n\n// Resync resyncs all PUs handled by this processor\nfunc (w *windowsProcessor) Resync(ctx context.Context, eventInfo *common.EventInfo) error {\n\tif eventInfo != nil {\n\t\t// If it's a host service, then use the pu from eventInfo\n\t\tif eventInfo.HostService {\n\t\t\truntime, err := w.metadataExtractor(eventInfo)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tnativeID := w.generateContextID(eventInfo)\n\t\t\tif err = w.config.Policy.HandlePUEvent(ctx, nativeID, common.EventStart, runtime); err != nil {\n\t\t\t\treturn fmt.Errorf(\"Unable to start PU: %s\", err)\n\t\t\t}\n\t\t\treturn nil\n\t\t}\n\t}\n\n\t// TODO(windows): handle resync of windows process PU later?\n\tzap.L().Debug(\"Resync not handled\")\n\n\treturn nil\n}\n\nfunc (w *windowsProcessor) generateContextID(eventInfo *common.EventInfo) string {\n\tpuID := eventInfo.PUID\n\n\t// TODO(windows): regStop may be used here somewhere in the future?\n\n\treturn puID\n}\n"
  },
  {
    "path": "monitor/internal/windows/processor_test.go",
    "content": "// +build windows\n\npackage windowsmonitor\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"testing\"\n\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy/mockpolicy\"\n)\n\nfunc testWindowsProcessor(puHandler policy.Resolver) *windowsProcessor {\n\tw := New(context.Background())\n\tw.SetupHandlers(&config.ProcessorConfig{\n\t\tCollector: &collector.DefaultCollector{},\n\t\tPolicy:    puHandler,\n\t})\n\tif err := w.SetupConfig(nil, &Config{\n\t\tEventMetadataExtractor: extractors.DefaultHostMetadataExtractor,\n\t}); err != nil {\n\t\treturn nil\n\t}\n\treturn w.proc\n}\n\nfunc TestCreate(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a valid processor\", t, func() {\n\t\tpuHandler := mockpolicy.NewMockResolver(ctrl)\n\n\t\tp := testWindowsProcessor(puHandler)\n\n\t\tConvey(\"When I try a create event with invalid PU ID, \", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID: \"/@#$@\",\n\t\t\t}\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\terr := p.Create(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a create event that is valid\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID: \"1234\",\n\t\t\t}\n\t\t\tConvey(\"I should get an error - create not supported\", func() {\n\t\t\t\terr := p.Create(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestStop(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a valid processor\", t, func() {\n\t\tpuHandler := 
mockpolicy.NewMockResolver(ctrl)\n\n\t\tp := testWindowsProcessor(puHandler)\n\n\t\tConvey(\"When I get a stop event that is valid\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID: \"/trireme/1234\",\n\t\t\t}\n\n\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\n\t\t\tConvey(\"I should get the status of the upstream function\", func() {\n\t\t\t\terr := p.Stop(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t})\n}\n\n// TODO: remove nolint\n// nolint\nfunc TestDestroy(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a valid processor\", t, func() {\n\t\tpuHandler := mockpolicy.NewMockResolver(ctrl)\n\n\t\tp := testWindowsProcessor(puHandler)\n\n\t\tConvey(\"When I get a destroy event that is valid\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID: \"1234\",\n\t\t\t}\n\n\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\t\t\tConvey(\"I should get the status of the upstream function\", func() {\n\t\t\t\terr := p.Destroy(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a destroy event that is valid for hostpu\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID:               \"123\",\n\t\t\t\tHostService:        true,\n\t\t\t\tNetworkOnlyTraffic: true,\n\t\t\t}\n\n\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\t\t\tConvey(\"I should get the status of the upstream function\", func() {\n\t\t\t\terr := p.Destroy(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestPause(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a valid processor\", t, func() {\n\t\tpuHandler := mockpolicy.NewMockResolver(ctrl)\n\n\t\tp := 
testWindowsProcessor(puHandler)\n\n\t\tConvey(\"When I get a pause event that is valid\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID: \"/trireme/1234\",\n\t\t\t}\n\n\t\t\tConvey(\"I should get the status of the upstream function\", func() {\n\t\t\t\terr := p.Pause(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil) // Pause does nothing on Windows\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestStart(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a valid processor\", t, func() {\n\t\tpuHandler := mockpolicy.NewMockResolver(ctrl)\n\t\tp := testWindowsProcessor(puHandler)\n\n\t\tConvey(\"When I get a start event with no PUID\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tPUID: \"\",\n\t\t\t}\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\terr := p.Start(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a start event that is valid that fails on the generation of PU ID\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName: \"^^^\",\n\t\t\t}\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\terr := p.Start(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a start event that is valid that fails on the metadata extractor\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName: \"service\",\n\t\t\t\tTags: []string{\"badtag\"},\n\t\t\t}\n\t\t\tConvey(\"I should get an error\", func() {\n\t\t\t\terr := p.Start(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a start event and the upstream returns an error \", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:      \"PU\",\n\t\t\t\tPID:       1,\n\t\t\t\tPUID:      \"12345\",\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.LinuxProcessPU,\n\t\t\t}\n\t\t\tConvey(\"I should get an error \", func() 
{\n\t\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(errors.New(\"error\"))\n\t\t\t\terr := p.Start(context.Background(), event)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a start event and the runtime options don't have a mark value\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:      \"PU\",\n\t\t\t\tPID:       1,\n\t\t\t\tPUID:      \"12345\",\n\t\t\t\tEventType: common.EventStop,\n\t\t\t\tPUType:    common.LinuxProcessPU,\n\t\t\t}\n\n\t\t\tConvey(\"I should not get an error \", func() {\n\t\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Times(2).Return(nil)\n\t\t\t\terr := p.Start(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestResync(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a valid processor\", t, func() {\n\t\tpuHandler := mockpolicy.NewMockResolver(ctrl)\n\t\tp := testWindowsProcessor(puHandler)\n\n\t\tConvey(\"When I get a resync event \", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:      \"PU\",\n\t\t\t\tPID:       1,\n\t\t\t\tPUID:      \"12345\",\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.LinuxProcessPU,\n\t\t\t}\n\n\t\t\tConvey(\"I should not get an error \", func() {\n\t\t\t\terr := p.Resync(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I get a resync event for hostservice\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tName:               \"PU\",\n\t\t\t\tPID:                1,\n\t\t\t\tPUID:               \"12345\",\n\t\t\t\tEventType:          common.EventStart,\n\t\t\t\tHostService:        true,\n\t\t\t\tNetworkOnlyTraffic: true,\n\t\t\t\tPUType:             common.LinuxProcessPU,\n\t\t\t}\n\n\t\t\tConvey(\"I should not get an error \", func() {\n\t\t\t\tpuHandler.EXPECT().HandlePUEvent(gomock.Any(), 
gomock.Any(), gomock.Any(), gomock.Any()).Return(nil)\n\t\t\t\terr := p.Resync(context.Background(), event)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "monitor/mockmonitor/mockmonitor.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: monitor/interfaces.go\n\n// Package mockmonitor is a generated GoMock package.\npackage mockmonitor\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tconfig \"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\tregisterer \"go.aporeto.io/enforcerd/trireme-lib/monitor/registerer\"\n)\n\n// MockMonitor is a mock of Monitor interface\n// nolint\ntype MockMonitor struct {\n\tctrl     *gomock.Controller\n\trecorder *MockMonitorMockRecorder\n}\n\n// MockMonitorMockRecorder is the mock recorder for MockMonitor\n// nolint\ntype MockMonitorMockRecorder struct {\n\tmock *MockMonitor\n}\n\n// NewMockMonitor creates a new mock instance\n// nolint\nfunc NewMockMonitor(ctrl *gomock.Controller) *MockMonitor {\n\tmock := &MockMonitor{ctrl: ctrl}\n\tmock.recorder = &MockMonitorMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockMonitor) EXPECT() *MockMonitorMockRecorder {\n\treturn m.recorder\n}\n\n// Run mocks base method\n// nolint\nfunc (m *MockMonitor) Run(ctx context.Context) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Run\", ctx)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Run indicates an expected call of Run\n// nolint\nfunc (mr *MockMonitorMockRecorder) Run(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Run\", reflect.TypeOf((*MockMonitor)(nil).Run), ctx)\n}\n\n// UpdateConfiguration mocks base method\n// nolint\nfunc (m *MockMonitor) UpdateConfiguration(ctx context.Context, config *config.MonitorConfig) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UpdateConfiguration\", ctx, config)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UpdateConfiguration indicates an expected call of UpdateConfiguration\n// nolint\nfunc (mr *MockMonitorMockRecorder) 
UpdateConfiguration(ctx, config interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateConfiguration\", reflect.TypeOf((*MockMonitor)(nil).UpdateConfiguration), ctx, config)\n}\n\n// Resync mocks base method\n// nolint\nfunc (m *MockMonitor) Resync(ctx context.Context) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Resync\", ctx)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Resync indicates an expected call of Resync\n// nolint\nfunc (mr *MockMonitorMockRecorder) Resync(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Resync\", reflect.TypeOf((*MockMonitor)(nil).Resync), ctx)\n}\n\n// MockImplementation is a mock of Implementation interface\n// nolint\ntype MockImplementation struct {\n\tctrl     *gomock.Controller\n\trecorder *MockImplementationMockRecorder\n}\n\n// MockImplementationMockRecorder is the mock recorder for MockImplementation\n// nolint\ntype MockImplementationMockRecorder struct {\n\tmock *MockImplementation\n}\n\n// NewMockImplementation creates a new mock instance\n// nolint\nfunc NewMockImplementation(ctrl *gomock.Controller) *MockImplementation {\n\tmock := &MockImplementation{ctrl: ctrl}\n\tmock.recorder = &MockImplementationMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockImplementation) EXPECT() *MockImplementationMockRecorder {\n\treturn m.recorder\n}\n\n// Run mocks base method\n// nolint\nfunc (m *MockImplementation) Run(ctx context.Context) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Run\", ctx)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Run indicates an expected call of Run\n// nolint\nfunc (mr *MockImplementationMockRecorder) Run(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Run\", 
reflect.TypeOf((*MockImplementation)(nil).Run), ctx)\n}\n\n// SetupConfig mocks base method\n// nolint\nfunc (m *MockImplementation) SetupConfig(registerer registerer.Registerer, cfg interface{}) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SetupConfig\", registerer, cfg)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SetupConfig indicates an expected call of SetupConfig\n// nolint\nfunc (mr *MockImplementationMockRecorder) SetupConfig(registerer, cfg interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SetupConfig\", reflect.TypeOf((*MockImplementation)(nil).SetupConfig), registerer, cfg)\n}\n\n// SetupHandlers mocks base method\n// nolint\nfunc (m *MockImplementation) SetupHandlers(c *config.ProcessorConfig) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"SetupHandlers\", c)\n}\n\n// SetupHandlers indicates an expected call of SetupHandlers\n// nolint\nfunc (mr *MockImplementationMockRecorder) SetupHandlers(c interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SetupHandlers\", reflect.TypeOf((*MockImplementation)(nil).SetupHandlers), c)\n}\n\n// Resync mocks base method\n// nolint\nfunc (m *MockImplementation) Resync(ctx context.Context) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Resync\", ctx)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Resync indicates an expected call of Resync\n// nolint\nfunc (mr *MockImplementationMockRecorder) Resync(ctx interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Resync\", reflect.TypeOf((*MockImplementation)(nil).Resync), ctx)\n}\n"
  },
  {
    "path": "monitor/monitor.go",
    "content": "// +build linux !windows\n\npackage monitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\tdockermonitor \"go.aporeto.io/enforcerd/trireme-lib/monitor/internal/docker\"\n\tk8smonitor \"go.aporeto.io/enforcerd/trireme-lib/monitor/internal/k8s\"\n\tlinuxmonitor \"go.aporeto.io/enforcerd/trireme-lib/monitor/internal/linux\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/registerer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/remoteapi/server\"\n\t\"go.uber.org/zap\"\n)\n\ntype monitors struct {\n\tconfig     *config.MonitorConfig\n\tmonitors   map[config.Type]Implementation\n\tregisterer registerer.Registerer\n\tserver     server.APIServer\n}\n\n// NewMonitors instantiates all/any combination of monitors supported.\nfunc NewMonitors(ctx context.Context, opts ...Options) (Monitor, error) {\n\n\tvar err error\n\n\tc := &config.MonitorConfig{\n\t\tMergeTags: []string{},\n\t\tCommon:    &config.ProcessorConfig{},\n\t\tMonitors:  map[config.Type]interface{}{},\n\t}\n\n\tfor _, opt := range opts {\n\t\topt(c)\n\t}\n\n\tif err = c.Common.IsComplete(); err != nil {\n\t\treturn nil, err\n\t}\n\n\tm := &monitors{\n\t\tconfig:   c,\n\t\tmonitors: make(map[config.Type]Implementation),\n\t}\n\n\tm.registerer = registerer.New()\n\n\tm.server, err = server.NewEventServer(common.TriremeSocket, m.registerer)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tfor k, v := range c.Monitors {\n\t\tswitch k {\n\t\tcase config.Docker:\n\t\t\tmon := dockermonitor.New(ctx)\n\t\t\tmon.SetupHandlers(c.Common)\n\t\t\tif err := mon.SetupConfig(nil, v); err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"Docker: %s\", err.Error())\n\t\t\t}\n\t\t\tm.monitors[config.Docker] = mon\n\n\t\tcase config.K8s:\n\t\t\tmon := k8smonitor.New(ctx)\n\t\t\tmon.SetupHandlers(c.Common)\n\t\t\tif err := mon.SetupConfig(nil, v); err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"K8s: %s\", 
err.Error())\n\t\t\t}\n\t\t\tm.monitors[config.K8s] = mon\n\n\t\tcase config.LinuxProcess:\n\t\t\tmon := linuxmonitor.New(ctx)\n\t\t\tmon.SetupHandlers(c.Common)\n\t\t\tif err := mon.SetupConfig(m.registerer, v); err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"Process: %s\", err.Error())\n\t\t\t}\n\t\t\tm.monitors[config.LinuxProcess] = mon\n\n\t\tcase config.LinuxHost:\n\t\t\tmon := linuxmonitor.New(ctx)\n\t\t\tmon.SetupHandlers(c.Common)\n\t\t\tif err := mon.SetupConfig(m.registerer, v); err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"Host: %s\", err.Error())\n\t\t\t}\n\t\t\tm.monitors[config.LinuxHost] = mon\n\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"Unsupported type %d\", k)\n\t\t}\n\t}\n\n\tzap.L().Debug(\"Monitor configuration\", zap.String(\"conf\", m.config.String()))\n\n\treturn m, nil\n}\n\nfunc (m *monitors) Run(ctx context.Context) (err error) {\n\n\tif err = m.server.Run(ctx); err != nil {\n\t\treturn err\n\t}\n\n\tfor _, v := range m.monitors {\n\t\tif err = v.Run(ctx); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// UpdateConfiguration updates the configuration of the monitors.\nfunc (m *monitors) UpdateConfiguration(ctx context.Context, config *config.MonitorConfig) error {\n\t// Monitor configuration cannot change at this time.\n\t// TODO:\n\treturn nil\n}\n\n// Resync resyncs the monitor\nfunc (m *monitors) Resync(ctx context.Context) error {\n\n\tfailure := false\n\tvar errs string\n\n\tfor _, i := range m.monitors {\n\t\tif err := i.Resync(ctx); err != nil {\n\t\t\terrs = errs + err.Error()\n\t\t\tfailure = true\n\t\t}\n\t}\n\n\tif failure {\n\t\treturn fmt.Errorf(\"Monitor resync failed: %s\", errs)\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "monitor/monitor_windows.go",
    "content": "// +build windows\n\npackage monitor\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\twindowsmonitor \"go.aporeto.io/enforcerd/trireme-lib/monitor/internal/windows\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/registerer\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/remoteapi/server\"\n\t\"go.uber.org/zap\"\n)\n\ntype monitors struct {\n\tconfig     *config.MonitorConfig\n\tmonitors   map[config.Type]Implementation\n\tregisterer registerer.Registerer\n\tserver     server.APIServer\n}\n\n// Run starts the monitor.\nfunc (m *monitors) Run(ctx context.Context) (err error) {\n\tif err = m.server.Run(ctx); err != nil {\n\t\treturn err\n\t}\n\n\tfor _, v := range m.monitors {\n\t\tif err = v.Run(ctx); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// UpdateConfiguration updates the configuration of the monitor\nfunc (m *monitors) UpdateConfiguration(ctx context.Context, config *config.MonitorConfig) error {\n\treturn nil\n}\n\n// Resync requests to the monitor to do a resync.\nfunc (m *monitors) Resync(ctx context.Context) error {\n\n\tfailure := false\n\tvar errs string\n\n\tfor _, i := range m.monitors {\n\t\tif err := i.Resync(ctx); err != nil {\n\t\t\terrs = errs + err.Error()\n\t\t\tfailure = true\n\t\t}\n\t}\n\n\tif failure {\n\t\treturn fmt.Errorf(\"Monitor resync failed: %s\", errs)\n\t}\n\n\treturn nil\n}\n\n// NewMonitors instantiates all/any combination of monitors supported.\nfunc NewMonitors(ctx context.Context, opts ...Options) (Monitor, error) {\n\tvar err error\n\n\tc := &config.MonitorConfig{\n\t\tMergeTags: []string{},\n\t\tCommon:    &config.ProcessorConfig{},\n\t\tMonitors:  map[config.Type]interface{}{},\n\t}\n\n\tfor _, opt := range opts {\n\t\topt(c)\n\t}\n\n\tif err = c.Common.IsComplete(); err != nil {\n\t\treturn nil, err\n\t}\n\n\tm := &monitors{\n\t\tconfig:   c,\n\t\tmonitors: 
make(map[config.Type]Implementation),\n\t}\n\n\tm.registerer = registerer.New()\n\n\tm.server, err = server.NewEventServer(common.TriremeSocket, m.registerer)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tfor k, v := range c.Monitors {\n\t\tswitch k {\n\t\tcase config.Windows:\n\t\t\tmon := windowsmonitor.New(ctx)\n\t\t\tmon.SetupHandlers(c.Common)\n\t\t\t// TODO(windows): make a real Windows monitor option rather than using LinuxHost\n\t\t\tif err := mon.SetupConfig(m.registerer, v); err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"Windows: %s\", err.Error())\n\t\t\t}\n\t\t\tm.monitors[config.Windows] = mon\n\t\tcase config.LinuxHost:\n\t\t\tmon := windowsmonitor.New(ctx)\n\t\t\tmon.SetupHandlers(c.Common)\n\t\t\tif err := mon.SetupConfig(m.registerer, v); err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"Host: %s\", err.Error())\n\t\t\t}\n\t\t\tm.monitors[config.LinuxHost] = mon\n\t\tcase config.LinuxProcess:\n\t\t\tmon := windowsmonitor.New(ctx)\n\t\t\tmon.SetupHandlers(c.Common)\n\t\t\tif err := mon.SetupConfig(m.registerer, v); err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"Process: %s\", err.Error())\n\t\t\t}\n\t\t\tm.monitors[config.LinuxProcess] = mon\n\t\t}\n\t}\n\tzap.L().Debug(\"Monitor configuration\", zap.String(\"conf\", m.config.String()))\n\n\treturn m, nil\n\n}\n"
  },
  {
    "path": "monitor/options.go",
    "content": "package monitor\n\nimport (\n\t\"sync\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/collector\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/external\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\tdockermonitor \"go.aporeto.io/enforcerd/trireme-lib/monitor/internal/docker\"\n\tk8smonitor \"go.aporeto.io/enforcerd/trireme-lib/monitor/internal/k8s\"\n\tlinuxmonitor \"go.aporeto.io/enforcerd/trireme-lib/monitor/internal/linux\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/policy\"\n\tcriapi \"k8s.io/cri-api/pkg/apis\"\n)\n\n// Options is provided using functional arguments.\ntype Options func(*config.MonitorConfig)\n\n// DockerMonitorOption is provided using functional arguments.\ntype DockerMonitorOption func(*dockermonitor.Config)\n\n// K8smonitorOption is provided using functional arguments.\ntype K8smonitorOption func(*k8smonitor.Config)\n\n// LinuxMonitorOption is provided using functional arguments.\ntype LinuxMonitorOption func(*linuxmonitor.Config)\n\n// SubOptionMonitorLinuxExtractor provides a way to specify metadata extractor for linux monitors.\nfunc SubOptionMonitorLinuxExtractor(extractor extractors.EventMetadataExtractor) LinuxMonitorOption {\n\treturn func(cfg *linuxmonitor.Config) {\n\t\tcfg.EventMetadataExtractor = extractor\n\t}\n}\n\n// SubOptionMonitorLinuxRealeaseAgentPath specifies the path to release agent programmed in cgroup\nfunc SubOptionMonitorLinuxRealeaseAgentPath(releasePath string) LinuxMonitorOption {\n\treturn func(cfg *linuxmonitor.Config) {\n\t\tcfg.ReleasePath = releasePath\n\t}\n}\n\n// optionMonitorLinux provides a way to add a linux monitor and related configuration to be used with New().\nfunc optionMonitorLinux(\n\thost bool,\n\topts ...LinuxMonitorOption,\n) Options {\n\tlc := linuxmonitor.DefaultConfig(host)\n\t// Collect all docker options\n\tfor _, opt := range opts {\n\t\topt(lc)\n\t}\n\treturn func(cfg *config.MonitorConfig) 
{\n\t\tif host {\n\t\t\tcfg.Monitors[config.LinuxHost] = lc\n\t\t} else {\n\t\t\tcfg.Monitors[config.LinuxProcess] = lc\n\t\t}\n\t}\n}\n\n// OptionMonitorLinuxHost provides a way to add a linux host monitor and related configuration to be used with New().\nfunc OptionMonitorLinuxHost(\n\topts ...LinuxMonitorOption,\n) Options {\n\treturn optionMonitorLinux(true, opts...)\n}\n\n// OptionMonitorLinuxProcess provides a way to add a linux process monitor and related configuration to be used with New().\nfunc OptionMonitorLinuxProcess(\n\topts ...LinuxMonitorOption,\n) Options {\n\treturn optionMonitorLinux(false, opts...)\n}\n\n// SubOptionMonitorDockerExtractor provides a way to specify metadata extractor for docker.\nfunc SubOptionMonitorDockerExtractor(extractor extractors.DockerMetadataExtractor) DockerMonitorOption {\n\treturn func(cfg *dockermonitor.Config) {\n\t\tcfg.EventMetadataExtractor = extractor\n\t}\n}\n\n// SubOptionMonitorDockerSocket provides a way to specify socket info for docker.\nfunc SubOptionMonitorDockerSocket(socketType, socketAddress string) DockerMonitorOption {\n\treturn func(cfg *dockermonitor.Config) {\n\t\tcfg.SocketType = socketType\n\t\tcfg.SocketAddress = socketAddress\n\t}\n}\n\n// SubOptionMonitorDockerFlags provides a way to specify configuration flags info for docker.\nfunc SubOptionMonitorDockerFlags(syncAtStart bool) DockerMonitorOption {\n\treturn func(cfg *dockermonitor.Config) {\n\t\tcfg.SyncAtStart = syncAtStart\n\t}\n}\n\n// SubOptionMonitorDockerDestroyStoppedContainers sets the option to destroy stopped containers.\nfunc SubOptionMonitorDockerDestroyStoppedContainers(f bool) DockerMonitorOption {\n\treturn func(cfg *dockermonitor.Config) {\n\t\tcfg.DestroyStoppedContainers = f\n\t}\n}\n\n// OptionMonitorDocker provides a way to add a docker monitor and related configuration to be used with New().\nfunc OptionMonitorDocker(opts ...DockerMonitorOption) Options {\n\n\tdc := dockermonitor.DefaultConfig()\n\t// Collect all 
docker options\n\tfor _, opt := range opts {\n\t\topt(dc)\n\t}\n\n\treturn func(cfg *config.MonitorConfig) {\n\t\tcfg.Monitors[config.Docker] = dc\n\t}\n}\n\n// OptionMonitorK8s provides a way to add a K8s monitor and related configuration to be used with New().\nfunc OptionMonitorK8s(opts ...K8smonitorOption) Options {\n\tkc := k8smonitor.DefaultConfig()\n\tfor _, opt := range opts {\n\t\topt(kc)\n\t}\n\n\treturn func(cfg *config.MonitorConfig) {\n\t\tcfg.Monitors[config.K8s] = kc\n\t}\n}\n\n// SubOptionMonitorK8sKubeconfig provides a way to specify a kubeconfig to use to connect to Kubernetes.\n// In case of an in-cluter config, leave the kubeconfig field blank\nfunc SubOptionMonitorK8sKubeconfig(kubeconfig string) K8smonitorOption {\n\treturn func(cfg *k8smonitor.Config) {\n\t\tcfg.Kubeconfig = kubeconfig\n\t}\n}\n\n// SubOptionMonitorK8sNodename provides a way to specify the kubernetes node name.\n// This is useful for filtering\nfunc SubOptionMonitorK8sNodename(nodename string) K8smonitorOption {\n\treturn func(cfg *k8smonitor.Config) {\n\t\tcfg.Nodename = nodename\n\t}\n}\n\n// SubOptionMonitorK8sMetadataExtractor provides a way to specify metadata extractor for Kubernetes\nfunc SubOptionMonitorK8sMetadataExtractor(extractor extractors.PodMetadataExtractor) K8smonitorOption {\n\treturn func(cfg *k8smonitor.Config) {\n\t\tcfg.MetadataExtractor = extractor\n\t}\n}\n\n// SubOptionMonitorK8sCRIRuntimeService provides a way to pass through the CRI runtime service\nfunc SubOptionMonitorK8sCRIRuntimeService(criRuntimeService criapi.RuntimeService) K8smonitorOption {\n\treturn func(cfg *k8smonitor.Config) {\n\t\tcfg.CRIRuntimeService = criRuntimeService\n\t}\n}\n\n// OptionMergeTags provides a way to add merge tags to be used with New().\nfunc OptionMergeTags(tags []string) Options {\n\treturn func(cfg *config.MonitorConfig) {\n\t\tcfg.MergeTags = tags\n\t\tcfg.Common.MergeTags = tags\n\t}\n}\n\n// OptionCollector provide a way to add to the monitor the collector 
instance\nfunc OptionCollector(c collector.EventCollector) Options {\n\treturn func(cfg *config.MonitorConfig) {\n\t\tcfg.Common.Collector = c\n\t}\n}\n\n// OptionPolicyResolver provides a way to add to the monitor the policy resolver instance\nfunc OptionPolicyResolver(p policy.Resolver) Options {\n\treturn func(cfg *config.MonitorConfig) {\n\t\tcfg.Common.Policy = p\n\t}\n}\n\n// OptionExternalEventSenders provide a way to add to the monitor the external event senders\nfunc OptionExternalEventSenders(evs []external.ReceiverRegistration) Options {\n\treturn func(cfg *config.MonitorConfig) {\n\t\tcfg.Common.ExternalEventSender = evs\n\t}\n}\n\n// OptionResyncLock provide a shared lock between monitors if the monitor desires to sync with other components during PU resync at startup\nfunc OptionResyncLock(resyncLock *sync.RWMutex) Options {\n\treturn func(cfg *config.MonitorConfig) {\n\t\tcfg.Common.ResyncLock = resyncLock\n\t}\n}\n\n// NewMonitor provides a configuration for monitors.\nfunc NewMonitor(opts ...Options) *config.MonitorConfig {\n\n\tcfg := &config.MonitorConfig{\n\t\tMonitors: make(map[config.Type]interface{}),\n\t}\n\n\tfor _, opt := range opts {\n\t\topt(cfg)\n\t}\n\n\treturn cfg\n}\n"
  },
  {
    "path": "monitor/options_windows.go",
    "content": "// +build windows\n\npackage monitor\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/config\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/extractors\"\n\twindowsmonitor \"go.aporeto.io/enforcerd/trireme-lib/monitor/internal/windows\"\n)\n\n// WindowsMonitorOption is provided using functional arguments\ntype WindowsMonitorOption func(*windowsmonitor.Config)\n\n// OptionMonitorWindows provides a way to add a windows monitor and related configuration to be used with New().\nfunc OptionMonitorWindows(\n\topts ...WindowsMonitorOption,\n) Options {\n\treturn optionMonitorWindows(true, opts...)\n}\n\n// OptionMonitorWindowsProcess provides a way to add a linux process monitor and related configuration to be used with New().\nfunc OptionMonitorWindowsProcess(\n\topts ...WindowsMonitorOption,\n) Options {\n\treturn optionMonitorWindows(false, opts...)\n}\n\n// SubOptionMonitorWindowsExtractor provides a way to specify metadata extractor for linux monitors.\nfunc SubOptionMonitorWindowsExtractor(extractor extractors.EventMetadataExtractor) WindowsMonitorOption {\n\treturn func(cfg *windowsmonitor.Config) {\n\t\tcfg.EventMetadataExtractor = extractor\n\t}\n}\n\n// optionMonitorWindows provides a way to add a windows monitor and related configuration to be used with New().\nfunc optionMonitorWindows(host bool,\n\topts ...WindowsMonitorOption,\n) Options {\n\twc := windowsmonitor.DefaultConfig(host)\n\t// Collect all docker options\n\tfor _, opt := range opts {\n\t\topt(wc)\n\t}\n\treturn func(cfg *config.MonitorConfig) {\n\t\tif host {\n\t\t\tcfg.Monitors[config.LinuxHost] = wc\n\t\t} else {\n\t\t\tcfg.Monitors[config.LinuxProcess] = wc\n\t\t}\n\t}\n}\n\n// SubOptionWindowsHostMode provides a way to add a windows host monitor and related configuration to be used with New().\nfunc SubOptionWindowsHostMode(host bool) WindowsMonitorOption {\n\treturn func(cfg *windowsmonitor.Config) {\n\t\tcfg.Host = host\n\t}\n}\n"
  },
  {
    "path": "monitor/processor/interfaces.go",
    "content": "package processor\n\nimport (\n\t\"context\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\n// Processor is a generic interface that processes monitor events using\n// a normalized event structure.\ntype Processor interface {\n\n\t// Start processes PU start events\n\tStart(ctx context.Context, eventInfo *common.EventInfo) error\n\n\t// Event processes PU stop events\n\tStop(ctx context.Context, eventInfo *common.EventInfo) error\n\n\t// Create process a PU create event\n\tCreate(ctx context.Context, eventInfo *common.EventInfo) error\n\n\t// Event process a PU destroy event\n\tDestroy(ctx context.Context, eventInfo *common.EventInfo) error\n\n\t// Event processes a pause event\n\tPause(ctx context.Context, eventInfo *common.EventInfo) error\n\n\t// Resync resyncs all PUs handled by this processor\n\tResync(ctx context.Context, EventInfo *common.EventInfo) error\n}\n"
  },
  {
    "path": "monitor/processor/mockprocessor/mockprocessor.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: monitor/processor/interfaces.go\n\n// Package mockprocessor is a generated GoMock package.\npackage mockprocessor\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tcommon \"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\n// MockProcessor is a mock of Processor interface\n// nolint\ntype MockProcessor struct {\n\tctrl     *gomock.Controller\n\trecorder *MockProcessorMockRecorder\n}\n\n// MockProcessorMockRecorder is the mock recorder for MockProcessor\n// nolint\ntype MockProcessorMockRecorder struct {\n\tmock *MockProcessor\n}\n\n// NewMockProcessor creates a new mock instance\n// nolint\nfunc NewMockProcessor(ctrl *gomock.Controller) *MockProcessor {\n\tmock := &MockProcessor{ctrl: ctrl}\n\tmock.recorder = &MockProcessorMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockProcessor) EXPECT() *MockProcessorMockRecorder {\n\treturn m.recorder\n}\n\n// Start mocks base method\n// nolint\nfunc (m *MockProcessor) Start(ctx context.Context, eventInfo *common.EventInfo) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Start\", ctx, eventInfo)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Start indicates an expected call of Start\n// nolint\nfunc (mr *MockProcessorMockRecorder) Start(ctx, eventInfo interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Start\", reflect.TypeOf((*MockProcessor)(nil).Start), ctx, eventInfo)\n}\n\n// Stop mocks base method\n// nolint\nfunc (m *MockProcessor) Stop(ctx context.Context, eventInfo *common.EventInfo) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Stop\", ctx, eventInfo)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Stop indicates an expected call of Stop\n// nolint\nfunc (mr *MockProcessorMockRecorder) Stop(ctx, eventInfo 
interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Stop\", reflect.TypeOf((*MockProcessor)(nil).Stop), ctx, eventInfo)\n}\n\n// Create mocks base method\n// nolint\nfunc (m *MockProcessor) Create(ctx context.Context, eventInfo *common.EventInfo) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Create\", ctx, eventInfo)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Create indicates an expected call of Create\n// nolint\nfunc (mr *MockProcessorMockRecorder) Create(ctx, eventInfo interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Create\", reflect.TypeOf((*MockProcessor)(nil).Create), ctx, eventInfo)\n}\n\n// Destroy mocks base method\n// nolint\nfunc (m *MockProcessor) Destroy(ctx context.Context, eventInfo *common.EventInfo) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Destroy\", ctx, eventInfo)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Destroy indicates an expected call of Destroy\n// nolint\nfunc (mr *MockProcessorMockRecorder) Destroy(ctx, eventInfo interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Destroy\", reflect.TypeOf((*MockProcessor)(nil).Destroy), ctx, eventInfo)\n}\n\n// Pause mocks base method\n// nolint\nfunc (m *MockProcessor) Pause(ctx context.Context, eventInfo *common.EventInfo) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Pause\", ctx, eventInfo)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Pause indicates an expected call of Pause\n// nolint\nfunc (mr *MockProcessorMockRecorder) Pause(ctx, eventInfo interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Pause\", reflect.TypeOf((*MockProcessor)(nil).Pause), ctx, eventInfo)\n}\n\n// Resync mocks base method\n// nolint\nfunc (m *MockProcessor) Resync(ctx context.Context, EventInfo *common.EventInfo) 
error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Resync\", ctx, EventInfo)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Resync indicates an expected call of Resync\n// nolint\nfunc (mr *MockProcessorMockRecorder) Resync(ctx, EventInfo interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Resync\", reflect.TypeOf((*MockProcessor)(nil).Resync), ctx, EventInfo)\n}\n"
  },
  {
    "path": "monitor/registerer/interfaces.go",
    "content": "package registerer\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/processor\"\n)\n\n// Registerer inteface allows event processors to register themselves with the event server.\ntype Registerer interface {\n\n\t// Register Processor registers event processors for a certain type of PU\n\tRegisterProcessor(puType common.PUType, p processor.Processor) error\n\n\tGetHandler(puType common.PUType, e common.Event) (common.EventHandler, error)\n}\n"
  },
  {
    "path": "monitor/registerer/registerer.go",
    "content": "package registerer\n\nimport (\n\t\"fmt\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/processor\"\n)\n\n// registerer provides a way for others to register a registerer\ntype registerer struct {\n\thandlers map[common.PUType]map[common.Event]common.EventHandler\n}\n\n// New returns a new registerer\nfunc New() Registerer {\n\n\treturn &registerer{\n\t\thandlers: map[common.PUType]map[common.Event]common.EventHandler{},\n\t}\n}\n\n// RegisterProcessor registers an event processor for a given PUTYpe. Only one\n// processor is allowed for a given PU Type.\nfunc (r *registerer) RegisterProcessor(puType common.PUType, ep processor.Processor) error {\n\n\tif _, ok := r.handlers[puType]; ok {\n\t\treturn fmt.Errorf(\"Processor already registered for this PU type %d \", puType)\n\t}\n\n\tr.handlers[puType] = map[common.Event]common.EventHandler{}\n\n\tr.addHandler(puType, common.EventStart, ep.Start)\n\tr.addHandler(puType, common.EventStop, ep.Stop)\n\tr.addHandler(puType, common.EventCreate, ep.Create)\n\tr.addHandler(puType, common.EventDestroy, ep.Destroy)\n\tr.addHandler(puType, common.EventPause, ep.Pause)\n\tr.addHandler(puType, common.EventResync, ep.Resync)\n\n\treturn nil\n}\n\nfunc (r *registerer) GetHandler(puType common.PUType, eventType common.Event) (common.EventHandler, error) {\n\n\thandlers, ok := r.handlers[puType]\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"PUType %d not registered\", puType)\n\t}\n\n\te, ok := handlers[eventType]\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"PUType %d event type %s not registered\", puType, eventType)\n\t}\n\n\treturn e, nil\n}\n\n// addHandler adds a handler for a puType/event.\nfunc (r *registerer) addHandler(puType common.PUType, event common.Event, handler common.EventHandler) {\n\tr.handlers[puType][event] = handler\n}\n"
  },
  {
    "path": "monitor/registerer/registerer_test.go",
    "content": "package registerer\n\nimport (\n\t\"testing\"\n\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/processor/mockprocessor\"\n)\n\nfunc TestNew(t *testing.T) {\n\tConvey(\"When I create a new registrer\", t, func() {\n\t\tr := New()\n\t\tConvey(\"It should be valid\", func() {\n\t\t\tSo(r, ShouldNotBeNil)\n\t\t\tSo(r.(*registerer).handlers, ShouldNotBeNil)\n\t\t})\n\t})\n}\n\nfunc TestRegisterProcessor(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"When I register a new processor\", t, func() {\n\t\tr := New()\n\n\t\tprocessor := mockprocessor.NewMockProcessor(ctrl)\n\t\terr := r.RegisterProcessor(common.ContainerPU, processor)\n\n\t\tConvey(\"The registration should be succesfull\", func() {\n\t\t\tSo(err, ShouldBeNil)\n\t\t\th, ok := r.(*registerer).handlers[common.ContainerPU]\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(h, ShouldNotBeNil)\n\t\t\tSo(len(h), ShouldEqual, 6)\n\t\t})\n\n\t\tConvey(\"If I ask for the handler, I should ge the right handler\", func() {\n\t\t\t_, err := r.GetHandler(common.ContainerPU, common.EventCreate)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t_, err = r.GetHandler(common.ContainerPU, common.EventStart)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t_, err = r.GetHandler(common.ContainerPU, common.EventDestroy)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t_, err = r.GetHandler(common.ContainerPU, common.EventPause)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"If I ask for the handler with a bad PUType, I should get an error \", func() {\n\t\t\t_, err := r.GetHandler(common.LinuxProcessPU, common.EventCreate)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"If I ask for the handler, with a bad event, I should get an error \", func() {\n\t\t\t_, err := r.GetHandler(common.LinuxProcessPU, common.Event(\"300\"))\n\t\t\tSo(err, 
ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"If I try to register the processor twice, I should get an error \", func() {\n\t\t\terr := r.RegisterProcessor(common.ContainerPU, processor)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t})\n\n}\n"
  },
  {
    "path": "monitor/remoteapi/client/client.go",
    "content": "package client\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"net\"\n\t\"net/http\"\n\t\"time\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\ntype dialContextFunc func(ctx context.Context, network, address string) (net.Conn, error)\n\n// SendRequest sends a request to the remote.\n// TODO: Add retries\nfunc (c *Client) SendRequest(event *common.EventInfo) error {\n\n\thttpc := http.Client{\n\t\tTransport: &http.Transport{\n\t\t\tDialContext:     c.getDialContext(),\n\t\t\tMaxIdleConns:    10,\n\t\t\tIdleConnTimeout: 10 * time.Second,\n\t\t},\n\t}\n\n\tb := new(bytes.Buffer)\n\tif err := json.NewEncoder(b).Encode(event); err != nil {\n\t\treturn fmt.Errorf(\"Unable to encode message: %s\", err)\n\t}\n\n\tresp, err := httpc.Post(\"http://unix\", \"application/json\", b)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer resp.Body.Close() // nolint\n\n\tif resp.StatusCode == http.StatusAccepted {\n\t\treturn nil\n\t}\n\n\terrorBuffer, err := ioutil.ReadAll(resp.Body)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Invalid request: %s\", err)\n\t}\n\n\treturn fmt.Errorf(\"Invalid request : %s\", string(errorBuffer))\n}\n"
  },
  {
    "path": "monitor/remoteapi/client/client_nonwindows.go",
    "content": "// +build !windows\n\npackage client\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n)\n\n// Client is an api client structure.\ntype Client struct {\n\taddr *net.UnixAddr\n}\n\n// NewClient creates a new client.\nfunc NewClient(path string) (*Client, error) {\n\taddr, err := net.ResolveUnixAddr(\"unix\", path)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"invalid address: %s\", err)\n\t}\n\n\treturn &Client{addr: addr}, nil\n}\n\nfunc (c *Client) getDialContext() dialContextFunc {\n\treturn func(_ context.Context, _, _ string) (net.Conn, error) {\n\t\treturn net.DialUnix(\"unix\", nil, c.addr)\n\t}\n}\n"
  },
  {
    "path": "monitor/remoteapi/client/client_windows.go",
    "content": "// +build windows\n\npackage client\n\nimport (\n\t\"context\"\n\t\"net\"\n\n\t\"gopkg.in/natefinch/npipe.v2\"\n)\n\n// Client is an api client structure.\ntype Client struct {\n\tpipeName string\n}\n\n// NewClient creates a new client.\nfunc NewClient(path string) (*Client, error) {\n\treturn &Client{pipeName: `\\\\.\\pipe\\` + path}, nil\n}\n\nfunc (c *Client) getDialContext() dialContextFunc {\n\treturn func(_ context.Context, _, _ string) (net.Conn, error) {\n\t\treturn npipe.Dial(c.pipeName)\n\t}\n}\n"
  },
  {
    "path": "monitor/remoteapi/client/interfaces.go",
    "content": "package client\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\n// APIClient is the interface of the API client\ntype APIClient interface {\n\t// SendRequest will send a request to the server.\n\tSendRequest(event *common.EventInfo) error\n}\n"
  },
  {
    "path": "monitor/remoteapi/client/mockclient/mockclient.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: monitor/remoteapi/client/interfaces.go\n\n// Package mockclient is a generated GoMock package.\npackage mockclient\n\nimport (\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tcommon \"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\n// MockAPIClient is a mock of APIClient interface\n// nolint\ntype MockAPIClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockAPIClientMockRecorder\n}\n\n// MockAPIClientMockRecorder is the mock recorder for MockAPIClient\n// nolint\ntype MockAPIClientMockRecorder struct {\n\tmock *MockAPIClient\n}\n\n// NewMockAPIClient creates a new mock instance\n// nolint\nfunc NewMockAPIClient(ctrl *gomock.Controller) *MockAPIClient {\n\tmock := &MockAPIClient{ctrl: ctrl}\n\tmock.recorder = &MockAPIClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockAPIClient) EXPECT() *MockAPIClientMockRecorder {\n\treturn m.recorder\n}\n\n// SendRequest mocks base method\n// nolint\nfunc (m *MockAPIClient) SendRequest(event *common.EventInfo) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"SendRequest\", event)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// SendRequest indicates an expected call of SendRequest\n// nolint\nfunc (mr *MockAPIClientMockRecorder) SendRequest(event interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SendRequest\", reflect.TypeOf((*MockAPIClient)(nil).SendRequest), event)\n}\n"
  },
  {
    "path": "monitor/remoteapi/server/interfaces.go",
    "content": "package server\n\nimport (\n\t\"context\"\n)\n\n// APIServer is the main interface of the API server.\n// Allows to create mock functions for testing.\ntype APIServer interface {\n\t// Run runs an API server\n\tRun(ctx context.Context) error\n}\n"
  },
  {
    "path": "monitor/remoteapi/server/server.go",
    "content": "package server\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"regexp\"\n\t\"strconv\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/registerer\"\n\t\"go.uber.org/zap\"\n)\n\n// EventServer is a new event server\ntype EventServer struct {\n\tsocketPath string\n\tserver     *http.Server\n\tregisterer registerer.Registerer\n}\n\n// NewEventServer creates a new event server\nfunc NewEventServer(address string, registerer registerer.Registerer) (*EventServer, error) {\n\n\terr := cleanupPipe(address)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &EventServer{\n\t\tsocketPath: address,\n\t\tregisterer: registerer,\n\t}, nil\n}\n\n// ServeHTTP is called for every request.\nfunc (e *EventServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {\n\te.create(w, r)\n}\n\n// Run runs the server. The server will run in the background. It will\n// gracefully die with the provided context.\nfunc (e *EventServer) Run(ctx context.Context) error {\n\n\t// Create the handler\n\te.server = &http.Server{\n\t\tHandler: e,\n\t}\n\n\tlistener, err := e.makePipe()\n\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Start serving HTTP requests in the background\n\tgo e.server.Serve(listener) // nolint\n\n\t// Listen for context cancellation to close the socket.\n\tgo func() {\n\t\t<-ctx.Done()\n\t\tlistener.Close() // nolint\n\t}()\n\n\treturn nil\n}\n\n// create is the main hadler that process and validates the events\n// before calling the actual monitor handlers to process the event.\nfunc (e *EventServer) create(w http.ResponseWriter, r *http.Request) {\n\tevent := &common.EventInfo{}\n\tdefer r.Body.Close() // nolint\n\n\tif err := json.NewDecoder(r.Body).Decode(event); err != nil {\n\t\thttp.Error(w, \"Invalid request\", http.StatusBadRequest)\n\t\treturn\n\t}\n\n\tif err := validateTypes(event); err != nil {\n\t\tzap.L().Error(\"Error in validating types\", 
zap.Error(err), zap.Reflect(\"Event\", event))\n\t\thttp.Error(w, fmt.Sprintf(\"Invalid request fields: %s\", err), http.StatusBadRequest)\n\t\treturn\n\t}\n\n\tif err := validateUser(r, event); err != nil {\n\t\thttp.Error(w, fmt.Sprintf(\"Invalid user to pid mapping found: %s\", err), http.StatusForbidden)\n\t\treturn\n\t}\n\n\tif err := validateEvent(event); err != nil {\n\t\thttp.Error(w, fmt.Sprintf(\"Bad request: %s\", err), http.StatusBadRequest)\n\t\treturn\n\t}\n\n\tif err := e.processEvent(r.Context(), event); err != nil {\n\t\tzap.L().Error(\"Error in processing event\", zap.Error(err), zap.Reflect(\"Event\", event))\n\t\thttp.Error(w, fmt.Sprintf(\"Cannot handle request: %s\", err), http.StatusInternalServerError)\n\t\treturn\n\t}\n\n\tw.WriteHeader(http.StatusAccepted)\n}\n\n// processEvent processes the event by retrieving the right monitor handler.\nfunc (e *EventServer) processEvent(ctx context.Context, eventInfo *common.EventInfo) (err error) {\n\n\tif e.registerer == nil {\n\t\treturn fmt.Errorf(\"No registered handlers\")\n\t}\n\n\tf, err := e.registerer.GetHandler(eventInfo.PUType, eventInfo.EventType)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Handler not found: %s\", err)\n\t}\n\n\treturn f(ctx, eventInfo)\n}\n\n// validateTypes validates the various types and prevents any bad strings.\nfunc validateTypes(event *common.EventInfo) error {\n\n\tregexStrings := regexp.MustCompile(\"^[a-zA-Z0-9_:.$%/-]{0,256}$\")\n\tregexNS := regexp.MustCompile(\"^[a-zA-Z0-9/-]{0,128}$\")\n\tregexCgroup := regexp.MustCompile(\"^/trireme/(uid/){0,1}[a-zA-Z0-9_:.$%]{1,64}$\")\n\n\tif _, ok := common.EventMap[event.EventType]; !ok {\n\t\treturn fmt.Errorf(\"invalid event: %s\", string(event.EventType))\n\t}\n\n\tif event.PUType > common.TransientPU {\n\t\treturn fmt.Errorf(\"invalid pu type %v\", event.PUType)\n\t}\n\n\tif len(event.Cgroup) > 0 && !regexCgroup.Match([]byte(event.Cgroup)) {\n\t\treturn fmt.Errorf(\"Invalid cgroup format: %s\", 
event.Cgroup)\n\t}\n\n\tif !regexNS.Match([]byte(event.NS)) {\n\t\treturn fmt.Errorf(\"Namespace is not of the right format\")\n\t}\n\n\tfor k, v := range event.IPs {\n\t\tif !regexStrings.Match([]byte(k)) {\n\t\t\treturn fmt.Errorf(\"Invalid IP name: %s\", k)\n\t\t}\n\n\t\tif ip := net.ParseIP(v); ip == nil {\n\t\t\treturn fmt.Errorf(\"Invalid IP address: %s\", v)\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// validateEvent validates that this is a reasonable event and\n// fills in default values.\nfunc validateEvent(event *common.EventInfo) error {\n\n\tif event.EventType == common.EventCreate || event.EventType == common.EventStart {\n\t\tif event.HostService {\n\t\t\tif event.NetworkOnlyTraffic {\n\t\t\t\tif event.Name == \"\" {\n\t\t\t\t\treturn fmt.Errorf(\"Service name must be provided and must not be default\")\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tif event.PUID == \"\" {\n\t\t\t\tevent.PUID = strconv.Itoa(int(event.PID))\n\t\t\t}\n\t\t}\n\t}\n\n\tif event.EventType == common.EventStop || event.EventType == common.EventDestroy {\n\t\tregStop := regexp.MustCompile(\"^/trireme/[a-zA-Z0-9_]{1,11}$\")\n\t\tif event.Cgroup != \"\" && !regStop.Match([]byte(event.Cgroup)) {\n\t\t\treturn fmt.Errorf(\"Cgroup is not of the right format\")\n\t\t}\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "monitor/remoteapi/server/server_nonwindows.go",
    "content": "// +build !windows\n\npackage server\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"os\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/shirou/gopsutil/process\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\nfunc cleanupPipe(address string) error {\n\t// Cleanup the socket first.\n\tif _, err := os.Stat(address); err == nil {\n\t\tif err := os.Remove(address); err != nil {\n\t\t\treturn fmt.Errorf(\"Cannot clean up the socket: %s\", err)\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (e *EventServer) makePipe() (net.Listener, error) {\n\t// Start a custom listener\n\taddr, _ := net.ResolveUnixAddr(\"unix\", e.socketPath)\n\tnl, err := net.ListenUnix(\"unix\", addr)\n\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"Unable to start API server: %s\", err)\n\t}\n\n\t// We make the socket accessible to all users of the system.\n\t// TODO: create a trireme group for this\n\tif err := os.Chmod(addr.String(), 0766); err != nil {\n\t\treturn nil, fmt.Errorf(\"Cannot make the socket accessible to all users: %s\", err)\n\t}\n\n\treturn &UIDListener{nl}, nil\n}\n\n// validateUser validates that the originating user is not sending a request\n// for a process that they don't own. 
Root users are allowed to send\n// any event.\nfunc validateUser(r *http.Request, event *common.EventInfo) error {\n\n\t// Find the calling user.\n\tparts := strings.Split(r.RemoteAddr, \":\")\n\tif len(parts) != 3 {\n\t\treturn fmt.Errorf(\"Invalid user context\")\n\t}\n\n\t// Accept all requests from root users\n\tif parts[0] == \"0\" {\n\t\treturn nil\n\t}\n\n\t// The target process must be valid.\n\tp, err := process.NewProcess(event.PID)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Process not found\")\n\t}\n\n\t// The UID of the calling process must match the UID of the target process.\n\tuids, err := p.Uids()\n\tif err != nil {\n\t\treturn fmt.Errorf(\"Unknown user ID\")\n\t}\n\n\tmatch := false\n\tfor _, uid := range uids {\n\t\tif strconv.Itoa(int(uid)) == parts[0] {\n\t\t\tmatch = true\n\t\t}\n\t}\n\n\tif !match {\n\t\treturn fmt.Errorf(\"Invalid user - no access to this process: %+v PARTS: %+v\", event, parts)\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "monitor/remoteapi/server/server_test.go",
    "content": "package server\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"os\"\n\t\"strconv\"\n\t\"testing\"\n\n\t\"github.com/golang/mock/gomock\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/processor/mockprocessor\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/registerer\"\n)\n\nfunc TestNewEventServer(t *testing.T) {\n\tConvey(\"When I create a new server\", t, func() {\n\t\treg := registerer.New()\n\t\ts, err := NewEventServer(\"/tmp/trireme.sock\", reg)\n\t\tConvey(\"The object should be correct\", func() {\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(s, ShouldNotBeNil)\n\t\t\tSo(s.socketPath, ShouldResemble, \"/tmp/trireme.sock\")\n\t\t\tSo(s.registerer, ShouldEqual, reg)\n\t\t})\n\t})\n}\n\nfunc TestValidateUser(t *testing.T) {\n\tConvey(\"When I try to validate a user\", t, func() {\n\n\t\tConvey(\"When I get a bad remote address, it should fail\", func() {\n\t\t\tr := &http.Request{}\n\t\t\tr.RemoteAddr = \"badpath\"\n\t\t\tevent := &common.EventInfo{}\n\n\t\t\terr := validateUser(r, event)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I issue the request as a superuser it should always succeed\", func() {\n\t\t\tr := &http.Request{}\n\t\t\tr.RemoteAddr = \"0:0:1000\"\n\t\t\tevent := &common.EventInfo{}\n\n\t\t\terr := validateUser(r, event)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I issue the request as a regular user with a bad process it should fail\", func() {\n\t\t\tr := &http.Request{}\n\t\t\tr.RemoteAddr = \"1:10:1000\"\n\t\t\tevent := &common.EventInfo{PID: -1}\n\n\t\t\terr := validateUser(r, event)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I issue the request as a regular user to a foreign process\", func() {\n\n\t\t\tmyuid := strconv.Itoa(os.Getuid())\n\t\t\tmygid := strconv.Itoa(os.Getgid())\n\t\t\tmypid := 
int32(os.Getpid())\n\t\t\tmypidstring := strconv.Itoa(int(mypid))\n\n\t\t\tr := &http.Request{}\n\t\t\tr.RemoteAddr = myuid + \":\" + mygid + \":\" + mypidstring\n\t\t\tevent := &common.EventInfo{PID: 0}\n\n\t\t\terr := validateUser(r, event)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"When I issue the request as a regular user with valid pid\", func() {\n\n\t\t\tmyuid := strconv.Itoa(os.Getuid())\n\t\t\tmygid := strconv.Itoa(os.Getgid())\n\t\t\tmypid := int32(os.Getpid())\n\t\t\tmypidstring := strconv.Itoa(int(mypid))\n\n\t\t\tr := &http.Request{}\n\t\t\tr.RemoteAddr = myuid + \":\" + mygid + \":\" + mypidstring\n\t\t\tevent := &common.EventInfo{PID: mypid}\n\n\t\t\terr := validateUser(r, event)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t})\n}\n\nfunc TestValidateTypes(t *testing.T) {\n\tConvey(\"When I validate the types of an event\", t, func() {\n\n\t\tConvey(\"If I have a bad event type it should error.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.Event(123),\n\t\t\t}\n\n\t\t\terr := validateTypes(event)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"If I have a bad PUType it should error.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.PUType(123),\n\t\t\t}\n\n\t\t\terr := validateTypes(event)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"If the event name has utf8 characters and it is NOT UIDPAM PU, it should succeed.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.ContainerPU,\n\t\t\t\tName:      \"utf8-_!@#%&\\\" (*)+.,/$!:;<>=?{}~\",\n\t\t\t}\n\n\t\t\terr := validateTypes(event)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"If the cgroup has bad characters, it should error.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.ContainerPU,\n\t\t\t\tName:      \"Container\",\n\t\t\t\tCgroup:    
\"/potatoes\",\n\t\t\t}\n\n\t\t\terr := validateTypes(event)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"If the namespace path has bad characters, it should error.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.ContainerPU,\n\t\t\t\tName:      \"Container\",\n\t\t\t\tCgroup:    \"/trireme/123\",\n\t\t\t\tNS:        \"!@##$!#\",\n\t\t\t}\n\n\t\t\terr := validateTypes(event)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"If the IPs have a bad name, it should error.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.ContainerPU,\n\t\t\t\tName:      \"Container\",\n\t\t\t\tCgroup:    \"/trireme/123\",\n\t\t\t\tNS:        \"/var/run/docker/netns/6f7287cc342b\",\n\t\t\t\tIPs:       map[string]string{\"^^^\": \"123\"},\n\t\t\t}\n\n\t\t\terr := validateTypes(event)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"If the IP address is bad, it should error.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.ContainerPU,\n\t\t\t\tName:      \"Container\",\n\t\t\t\tCgroup:    \"/trireme/123\",\n\t\t\t\tNS:        \"/var/run/docker/netns/6f7287cc342b\",\n\t\t\t\tIPs:       map[string]string{\"bridge\": \"123\"},\n\t\t\t}\n\n\t\t\terr := validateTypes(event)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"If all the types are correct, it should succeed.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.ContainerPU,\n\t\t\t\tName:      \"Container\",\n\t\t\t\tCgroup:    \"/trireme/123\",\n\t\t\t\tNS:        \"/var/run/docker/netns/6f7287cc342b\",\n\t\t\t\tIPs:       map[string]string{\"bridge\": \"172.17.0.1\"},\n\t\t\t}\n\n\t\t\terr := validateTypes(event)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t})\n}\n\nfunc TestValidateEvent(t *testing.T) {\n\tConvey(\"When I validate events\", t, func() 
{\n\n\t\tConvey(\"If I get a Create with no HostService, and PUID is empty, I should update it.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType:   common.EventCreate,\n\t\t\t\tPID:         1,\n\t\t\t\tHostService: false,\n\t\t\t}\n\n\t\t\terr := validateEvent(event)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(event.PUID, ShouldResemble, \"1\")\n\t\t})\n\n\t\tConvey(\"If I get a Create with no HostService, and PUID is not empty, I should get the right PUID\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType:   common.EventCreate,\n\t\t\t\tPID:         1,\n\t\t\t\tHostService: false,\n\t\t\t\tPUID:        \"mypu\",\n\t\t\t}\n\n\t\t\terr := validateEvent(event)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(event.PUID, ShouldResemble, \"mypu\")\n\t\t})\n\n\t\tConvey(\"If I get a Create with the HostService and no network-only traffic, I should get PUID with the same name\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType:          common.EventCreate,\n\t\t\t\tPID:                1,\n\t\t\t\tHostService:        true,\n\t\t\t\tNetworkOnlyTraffic: false,\n\t\t\t\tPUID:               \"mypu\",\n\t\t\t}\n\n\t\t\terr := validateEvent(event)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(event.PUID, ShouldResemble, \"mypu\")\n\t\t})\n\n\t\tConvey(\"If I get a Create with the HostService and network-only traffic, I should get PUID as my name\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType:          common.EventCreate,\n\t\t\t\tPID:                1,\n\t\t\t\tHostService:        true,\n\t\t\t\tNetworkOnlyTraffic: true,\n\t\t\t\tName:               \"myservice\",\n\t\t\t\tPUID:               \"mypu\",\n\t\t\t}\n\n\t\t\terr := validateEvent(event)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(event.PUID, ShouldResemble, \"mypu\")\n\t\t})\n\n\t\tConvey(\"If I get a Stop event and cgroup is in the right format, it should return nil.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStop,\n\t\t\t\tCgroup:    
\"/trireme/1234\",\n\t\t\t}\n\n\t\t\terr := validateEvent(event)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"If I get a Stop event and cgroup is in the wrong format, it should error.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStop,\n\t\t\t\tCgroup:    \"/potatoes\",\n\t\t\t}\n\n\t\t\terr := validateEvent(event)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t})\n}\n\nfunc TestCreate(t *testing.T) {\n\tctrl := gomock.NewController(t)\n\tdefer ctrl.Finish()\n\n\tConvey(\"Given a new server\", t, func() {\n\t\treg := registerer.New()\n\t\ts, err := NewEventServer(\"/tmp/trireme.sock\", reg)\n\t\tproc := mockprocessor.NewMockProcessor(ctrl)\n\t\tprocerr := reg.RegisterProcessor(common.ContainerPU, proc)\n\t\tSo(procerr, ShouldBeNil)\n\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"Given a valid event, I should get an Accepted response.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.ContainerPU,\n\t\t\t\tPID:       int32(os.Getpid()),\n\t\t\t\tName:      \"Container\",\n\t\t\t\tCgroup:    \"/trireme/123\",\n\t\t\t\tNS:        \"/var/run/docker/netns/6f7287cc342b\",\n\t\t\t\tIPs:       map[string]string{\"bridge\": \"172.17.0.1\"},\n\t\t\t}\n\n\t\t\tproc.EXPECT().Start(gomock.Any(), gomock.Any()).Return(nil)\n\n\t\t\tb := new(bytes.Buffer)\n\t\t\terr := json.NewEncoder(b).Encode(event)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\treq := httptest.NewRequest(\"POST\", \"http://unix\", b)\n\t\t\treq.RemoteAddr = strconv.Itoa(os.Getuid()) + \":\" + strconv.Itoa(os.Getgid()) + \":\" + strconv.Itoa(int(event.PID))\n\t\t\tw := httptest.NewRecorder()\n\t\t\ts.create(w, req)\n\n\t\t\tSo(w.Result().StatusCode, ShouldEqual, http.StatusAccepted)\n\t\t})\n\n\t\tConvey(\"Given bad json, I should get a BadRequest\", func() {\n\n\t\t\tb := new(bytes.Buffer)\n\t\t\tb.WriteString(\"garbage\")\n\n\t\t\treq := httptest.NewRequest(\"POST\", \"http://unix\", b)\n\n\t\t\tw := httptest.NewRecorder()\n\t\t\ts.create(w, 
req)\n\n\t\t\tSo(w.Result().StatusCode, ShouldEqual, http.StatusBadRequest)\n\t\t})\n\n\t\tConvey(\"Given bad event fields, I should get a BadRequest\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.ContainerPU,\n\t\t\t\tPID:       -1,\n\t\t\t\tName:      \"^^^^\",\n\t\t\t\tCgroup:    \"/trireme/123\",\n\t\t\t\tNS:        \"/var/run/docker/netns/6f7287cc342b\",\n\t\t\t\tIPs:       map[string]string{\"bridge\": \"172.17.0.1\", \"ip\": \"thisisnotip\"},\n\t\t\t}\n\n\t\t\tb := new(bytes.Buffer)\n\t\t\terr := json.NewEncoder(b).Encode(event)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\treq := httptest.NewRequest(\"POST\", \"http://unix\", b)\n\t\t\t// req.RemoteAddr = strconv.Itoa(os.Getuid()) + \":\" + strconv.Itoa(os.Getgid()) + \":\" + strconv.Itoa(int(event.PID))\n\t\t\tw := httptest.NewRecorder()\n\t\t\ts.create(w, req)\n\n\t\t\tSo(w.Result().StatusCode, ShouldEqual, http.StatusBadRequest)\n\t\t})\n\n\t\tConvey(\"Given a bad user request, I should get StatusForbidden\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.ContainerPU,\n\t\t\t\tPID:       1,\n\t\t\t\tName:      \"name\",\n\t\t\t\tCgroup:    \"/trireme/123\",\n\t\t\t\tNS:        \"/var/run/docker/netns/6f7287cc342b\",\n\t\t\t\tIPs:       map[string]string{\"bridge\": \"172.17.0.1\"},\n\t\t\t}\n\n\t\t\tb := new(bytes.Buffer)\n\t\t\terr := json.NewEncoder(b).Encode(event)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\treq := httptest.NewRequest(\"POST\", \"http://unix\", b)\n\t\t\treq.RemoteAddr = strconv.Itoa(os.Getuid()) + \":\" + strconv.Itoa(os.Getgid()) + \":\" + strconv.Itoa(int(event.PID))\n\t\t\tw := httptest.NewRecorder()\n\t\t\ts.create(w, req)\n\n\t\t\tSo(w.Result().StatusCode, ShouldEqual, http.StatusForbidden)\n\t\t})\n\n\t\tConvey(\"Given a bad event, I should get a BadRequest\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType:          common.EventStart,\n\t\t\t\tPUType:    
         common.ContainerPU,\n\t\t\t\tPID:                int32(os.Getpid()),\n\t\t\t\tName:               \"\",\n\t\t\t\tCgroup:             \"/trireme/123\",\n\t\t\t\tNS:                 \"/var/run/docker/netns/6f7287cc342b\",\n\t\t\t\tIPs:                map[string]string{\"bridge\": \"172.17.0.1\"},\n\t\t\t\tHostService:        true,\n\t\t\t\tNetworkOnlyTraffic: true,\n\t\t\t}\n\n\t\t\tb := new(bytes.Buffer)\n\t\t\terr := json.NewEncoder(b).Encode(event)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\treq := httptest.NewRequest(\"POST\", \"http://unix\", b)\n\t\t\treq.RemoteAddr = strconv.Itoa(os.Getuid()) + \":\" + strconv.Itoa(os.Getgid()) + \":\" + strconv.Itoa(int(event.PID))\n\t\t\tw := httptest.NewRecorder()\n\t\t\ts.create(w, req)\n\n\t\t\tSo(w.Result().StatusCode, ShouldEqual, http.StatusBadRequest)\n\t\t})\n\n\t\tConvey(\"Given a valid event, where the processor fails, I should get InternalServerError.\", func() {\n\t\t\tevent := &common.EventInfo{\n\t\t\t\tEventType: common.EventStart,\n\t\t\t\tPUType:    common.ContainerPU,\n\t\t\t\tPID:       int32(os.Getpid()),\n\t\t\t\tName:      \"Container\",\n\t\t\t\tCgroup:    \"/trireme/123\",\n\t\t\t\tNS:        \"/var/run/docker/netns/6f7287cc342b\",\n\t\t\t\tIPs:       map[string]string{\"bridge\": \"172.17.0.1\"},\n\t\t\t}\n\n\t\t\tproc.EXPECT().Start(gomock.Any(), gomock.Any()).Return(errors.New(\"some error\"))\n\n\t\t\tb := new(bytes.Buffer)\n\t\t\terr := json.NewEncoder(b).Encode(event)\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\treq := httptest.NewRequest(\"POST\", \"http://unix\", b)\n\t\t\treq.RemoteAddr = strconv.Itoa(os.Getuid()) + \":\" + strconv.Itoa(os.Getgid()) + \":\" + strconv.Itoa(int(event.PID))\n\t\t\tw := httptest.NewRecorder()\n\t\t\ts.create(w, req)\n\n\t\t\tSo(w.Result().StatusCode, ShouldEqual, http.StatusInternalServerError)\n\t\t})\n\n\t})\n\n}\n"
  },
  {
    "path": "monitor/remoteapi/server/server_windows.go",
    "content": "// +build windows\n\npackage server\n\nimport (\n\t\"net\"\n\t\"net/http\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"gopkg.in/natefinch/npipe.v2\"\n)\n\nfunc cleanupPipe(address string) error {\n\t// TODO(windows): anything?\n\treturn nil\n}\n\nfunc (e *EventServer) makePipe() (net.Listener, error) {\n\tpipeName := `\\\\.\\pipe\\` + e.socketPath\n\tpipeListener, err := npipe.Listen(pipeName)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn pipeListener, nil\n}\n\n// TODO(windows): Uids() not impl currently in Windows\nfunc validateUser(r *http.Request, event *common.EventInfo) error {\n\treturn nil\n}\n"
  },
  {
    "path": "monitor/remoteapi/server/uidlistener.go",
    "content": "// +build linux\n\npackage server\n\nimport (\n\t\"net\"\n\t\"strconv\"\n\t\"syscall\"\n\t\"time\"\n)\n\n// UIDConnection is a connection wrapper that allows us to recover\n// the user ID of the calling process.\ntype UIDConnection struct {\n\tnc *net.UnixConn\n}\n\n// Read implements the read interface of the connection.\nfunc (c UIDConnection) Read(b []byte) (n int, err error) {\n\treturn c.nc.Read(b)\n}\n\n// Write implements the write interface of the connection.\nfunc (c UIDConnection) Write(b []byte) (n int, err error) {\n\treturn c.nc.Write(b)\n}\n\n// Close implements the close interface of the connection.\nfunc (c UIDConnection) Close() error {\n\treturn c.nc.Close()\n}\n\n// LocalAddr implements the LocalAddr interface of the connection.\nfunc (c UIDConnection) LocalAddr() net.Addr {\n\treturn c.nc.LocalAddr()\n}\n\n// RemoteAddr implements the RemoteAddr interface of the connection.\n// This is the main change where we actually use the FD of the unix\n// socket to find the remote UID.\nfunc (c UIDConnection) RemoteAddr() net.Addr {\n\n\tuidAddr := &UIDAddr{\n\t\tNetworkAddress: c.nc.RemoteAddr().Network(),\n\t}\n\n\tf, err := c.nc.File()\n\tif err != nil {\n\t\t// Return early: f is nil here and must not be dereferenced.\n\t\tuidAddr.Address = \"NotAvailable\"\n\t\treturn uidAddr\n\t}\n\tdefer f.Close() // nolint: errcheck\n\n\tcred, err := syscall.GetsockoptUcred(int(f.Fd()), syscall.SOL_SOCKET, syscall.SO_PEERCRED)\n\tif err != nil {\n\t\t// Return early: cred is nil here and must not be dereferenced.\n\t\tuidAddr.Address = \"NotAvailable\"\n\t\treturn uidAddr\n\t}\n\n\tuidAddr.Address = strconv.Itoa(int(cred.Uid)) + \":\" + strconv.Itoa(int(cred.Gid)) + \":\" + strconv.Itoa(int(cred.Pid))\n\n\treturn uidAddr\n\n}\n\n// SetDeadline implements the SetDeadline interface.\nfunc (c UIDConnection) SetDeadline(t time.Time) error {\n\treturn c.nc.SetDeadline(t)\n}\n\n// SetReadDeadline implements the SetReadDeadline interface\nfunc (c UIDConnection) SetReadDeadline(t time.Time) error {\n\treturn c.nc.SetReadDeadline(t)\n}\n\n// SetWriteDeadline implements the SetWriteDeadline method of the interface.\nfunc (c 
UIDConnection) SetWriteDeadline(t time.Time) error {\n\treturn c.nc.SetWriteDeadline(t)\n}\n\n// UIDAddr implements the Addr interface and allows us to customize the address.\ntype UIDAddr struct {\n\tNetworkAddress string\n\tAddress        string\n}\n\n// Network returns the network of the connection\nfunc (a *UIDAddr) Network() string {\n\treturn a.NetworkAddress\n}\n\n// String returns a string representation.\nfunc (a *UIDAddr) String() string {\n\treturn a.Address\n}\n\n// UIDListener is a custom net listener that uses the UID connection\ntype UIDListener struct {\n\tnl *net.UnixListener\n}\n\n// NewUIDListener creates a new UID listener.\nfunc NewUIDListener(nl *net.UnixListener) *UIDListener {\n\treturn &UIDListener{nl: nl}\n}\n\n// Accept implements the accept method of the interface.\nfunc (l UIDListener) Accept() (c net.Conn, err error) {\n\tnc, err := l.nl.AcceptUnix()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn UIDConnection{nc}, nil\n}\n\n// Close implements the close method of the interface.\nfunc (l UIDListener) Close() error {\n\treturn l.nl.Close()\n}\n\n// Addr returns the address of the listener.\nfunc (l UIDListener) Addr() net.Addr {\n\treturn l.nl.Addr()\n}\n"
  },
  {
    "path": "monitor/remoteapi/server/uidlistener_nonlinux.go",
    "content": "// +build darwin windows\n\npackage server\n\nimport (\n\t\"net\"\n\t\"time\"\n)\n\n// UIDConnection is a connection wrapper that allows us to recover\n// the user ID of the calling process.\ntype UIDConnection struct {\n\tnc *net.UnixConn\n}\n\n// Read implements the read interface of the connection.\nfunc (c UIDConnection) Read(b []byte) (n int, err error) {\n\treturn c.nc.Read(b)\n}\n\n// Write implements the write interface of the connection.\nfunc (c UIDConnection) Write(b []byte) (n int, err error) {\n\treturn c.nc.Write(b)\n}\n\n// Close implements the close interface of the connection.\nfunc (c UIDConnection) Close() error {\n\treturn c.nc.Close()\n}\n\n// LocalAddr implements the LocalAddr interface of the connection.\nfunc (c UIDConnection) LocalAddr() net.Addr {\n\treturn c.nc.LocalAddr()\n}\n\n// RemoteAddr implements the RemoteAddr interface of the connection.\n// On non-Linux platforms we simply delegate to the underlying\n// connection and do not recover the peer credentials.\nfunc (c UIDConnection) RemoteAddr() net.Addr {\n\treturn c.nc.RemoteAddr()\n}\n\n// SetDeadline implements the SetDeadline interface.\nfunc (c UIDConnection) SetDeadline(t time.Time) error {\n\treturn c.nc.SetDeadline(t)\n}\n\n// SetReadDeadline implements the SetReadDeadline interface\nfunc (c UIDConnection) SetReadDeadline(t time.Time) error {\n\treturn c.nc.SetReadDeadline(t)\n}\n\n// SetWriteDeadline implements the SetWriteDeadline method of the interface.\nfunc (c UIDConnection) SetWriteDeadline(t time.Time) error {\n\treturn c.nc.SetWriteDeadline(t)\n}\n\n// UIDAddr implements the Addr interface and allows us to customize the address.\ntype UIDAddr struct {\n\tNetworkAddress string\n\tAddress        string\n}\n\n// Network returns the network of the connection\nfunc (a *UIDAddr) Network() string {\n\treturn a.NetworkAddress\n}\n\n// String returns a string representation.\nfunc (a *UIDAddr) String() string {\n\treturn a.Address\n}\n\n// UIDListener is a custom net listener 
that uses the UID connection\ntype UIDListener struct {\n\tnl *net.UnixListener\n}\n\n// NewUIDListener creates a new listener with UID information.\nfunc NewUIDListener(nl *net.UnixListener) *UIDListener {\n\treturn &UIDListener{\n\t\tnl: nl,\n\t}\n}\n\n// Accept implements the accept method of the interface.\nfunc (l UIDListener) Accept() (c net.Conn, err error) {\n\tnc, err := l.nl.AcceptUnix()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn UIDConnection{nc}, nil\n}\n\n// Close implements the close method of the interface.\nfunc (l UIDListener) Close() error {\n\treturn l.nl.Close()\n}\n\n// Addr returns the address of the listener.\nfunc (l UIDListener) Addr() net.Addr {\n\treturn l.nl.Addr()\n}\n"
  },
  {
    "path": "monitor/server/pipe.go",
    "content": "// +build !windows\n\npackage server\n\nimport (\n\t\"net\"\n\t\"os\"\n\n\t\"go.uber.org/zap\"\n)\n\nfunc cleanupPipe(address string) error {\n\t// Cleanup the leftover socket first.\n\tif _, err := os.Stat(address); err == nil {\n\t\tif err := os.Remove(address); err != nil {\n\t\t\tzap.L().Error(\"Cannot remove existing pipe\", zap.String(\"address\", address), zap.Error(err))\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc makePipe(address string) (net.Listener, error) {\n\t// Start a custom listener\n\taddr, _ := net.ResolveUnixAddr(\"unix\", address)\n\tnl, err := net.ListenUnix(\"unix\", addr)\n\tif err != nil {\n\t\tzap.L().Error(\"Unable to start the listener\", zap.String(\"address\", address), zap.Error(err))\n\t\treturn nil, err\n\t}\n\n\t// make it owner,group rw only.\n\t// TODO: which group ID? or should it be owner root rw only ?\n\tif err := os.Chmod(addr.String(), 0600); err != nil {\n\t\tzap.L().Error(\"Cannot set permissions on the pipe\", zap.String(\"address\", address), zap.Error(err))\n\t\treturn nil, err\n\t}\n\n\treturn nl, nil\n}\n"
  },
  {
    "path": "monitor/server/pipe_windows.go",
    "content": "// +build windows\n\npackage server\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"os/user\"\n\t\"strings\"\n\n\twinio \"github.com/Microsoft/go-winio\"\n\tzap \"go.uber.org/zap\"\n)\n\nconst pipePrefix = `\\\\.\\pipe\\`\n\nfunc cleanupPipe(address string) error {\n\treturn nil\n}\n\nfunc makePipe(address string) (net.Listener, error) {\n\tvar pipeListener net.Listener\n\tvar err error\n\n\tpipeName := address\n\tif !strings.HasPrefix(pipeName, pipePrefix) {\n\t\tpipeName = pipePrefix + pipeName\n\t}\n\n\tpipeCfg := &winio.PipeConfig{}\n\n\tcurrent, err := user.Current()\n\tif err != nil {\n\t\tzap.L().Error(\"Unable to get the current user\", zap.String(\"address\", address), zap.Error(err))\n\t\treturn nil, err\n\t}\n\n\t// A discretionary access control list (DACL) identifies the trustees that are allowed or denied access to a securable object.\n\t// D:P(A;;GA;;;SY)(A;;GA;;;BA) = DACL allowing (A) General all access (GA) for SYSTEM (SY), Admin (BA) and current user.\n\t// This library is creating the pipe using undocumented kernel functions instead of using the win32 functions.\n\t// So if the code is running as the administrator, then the security descriptor works just fine, but\n\t// if you are running as a non admin even if you are in the administrator's group, then you don't get access to the pipe,\n\t// and that is why the code is also granting access to the current user.  Normally you would not need to do this.\n\n\tpipeCfg.SecurityDescriptor = fmt.Sprintf(\"D:P(A;;GA;;;SY)(A;;GA;;;BA)(A;;GA;;;%s)\", current.Uid)\n\n\tpipeListener, err = winio.ListenPipe(pipeName, pipeCfg)\n\n\tif err != nil {\n\t\tzap.L().Error(\"Unable to start the listener\", zap.String(\"address\", address), zap.Error(err))\n\t\treturn nil, err\n\t}\n\treturn pipeListener, nil\n}\n"
  },
  {
    "path": "monitor/server/server.go",
    "content": "package server\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"sync\"\n\n\t\"time\"\n\n\t\"github.com/golang/protobuf/ptypes/empty\"\n\t\"go.aporeto.io/enforcerd/internal/extractors/containermetadata\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/counters\"\n\tmonitorpb \"go.aporeto.io/enforcerd/trireme-lib/monitor/api/spec/protos\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/constants\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/monitor/external\"\n\t\"go.uber.org/zap\"\n\t\"google.golang.org/grpc\"\n)\n\nvar _ Controls = &Server{}\n\nvar _ external.ReceiverRegistration = &Server{}\n\nvar _ monitorpb.CNIServer = &Server{}\nvar _ monitorpb.RunCServer = &Server{}\n\n// Controls is the controlling interface for starting/stopping the server\ntype Controls interface {\n\tStart(context.Context) error\n\tStop() error\n}\n\n// Server is the gRPC monitor server\ntype Server struct {\n\tctx                             context.Context\n\tenforcerID                      string\n\tstop                            chan struct{}\n\tenforcerStop                    chan struct{}\n\tsocketAddress                   string\n\tsocketType                      int\n\trunning                         bool\n\tmonitors                        map[string]external.ReceiveEvents\n\tmonitorsLock                    sync.RWMutex\n\truncProxyStarted                bool\n\tcniInstalled                    bool\n\tnotifyProcessRuncProxyStartedCh chan struct{}\n\tnotifyProcessCniInstalledCh     chan struct{}\n\textMonitorStartedLock           sync.RWMutex\n\twaitStopGrp                     sync.WaitGroup\n\tapoRuncWaitGrp                  *sync.WaitGroup\n}\n\nconst (\n\tsocketTypeUnix = iota\n\tsocketTypeTCP  // nolint: varcheck\n\tsocketTypeWindowsNamedPipe\n)\n\n// NewMonitorServer creates a gRPC server for the Twistlock Defender integration\nfunc NewMonitorServer(\n\tsocketAddress string,\n\tstopchan chan 
struct{},\n\tenforcerID string,\n\truncWaitGrp *sync.WaitGroup,\n) *Server {\n\treturn &Server{\n\t\tenforcerID:                      enforcerID,\n\t\tstop:                            make(chan struct{}),\n\t\tenforcerStop:                    stopchan,\n\t\tsocketAddress:                   socketAddress,\n\t\tsocketType:                      socketTypeUnix,\n\t\trunning:                         false,\n\t\tmonitors:                        make(map[string]external.ReceiveEvents),\n\t\tnotifyProcessRuncProxyStartedCh: make(chan struct{}),\n\t\tnotifyProcessCniInstalledCh:     make(chan struct{}),\n\t\twaitStopGrp:                     sync.WaitGroup{},\n\t\tapoRuncWaitGrp:                  runcWaitGrp,\n\t}\n}\n\n// Start starts the gRPC monitor server\nfunc (s *Server) Start(ctx context.Context) (err error) {\n\n\ts.ctx = ctx\n\n\terrChan := make(chan error)\n\tzap.L().Info(\"Starting the gRPC Monitor server, listening on\", zap.String(\"address\", s.socketAddress))\n\n\tif err := cleanupPipe(s.socketAddress); err != nil {\n\t\tzap.L().Fatal(\"unable to cleanup the old gRPC Monitor server socket address\", zap.String(\"address\", s.socketAddress), zap.Error(err))\n\t}\n\n\t// create the listener\n\tlis, err := makePipe(s.socketAddress)\n\tif err != nil {\n\t\tzap.L().Fatal(\"Failed to create the listener socket\", zap.String(\"address\", s.socketAddress), zap.Error(err))\n\t}\n\n\tvar opts []grpc.ServerOption\n\n\t// TODO - TLS certs for the gRPC connection ??\n\t// if tls {\n\t// \tcreds, err := credentials.NewServerTLSFromFile(tls.certFile, tls.keyFile)\n\t// \tif err != nil {\n\t// \t\tzap.L().Fatal(\"Failed to load TLS credentials %v\", zap.Error(err))\n\t// \t}\n\t//\n\t// \topts = []grpc.ServerOption{grpc.Creds(creds)}\n\t// }\n\n\tgrpcServer := grpc.NewServer(opts...)\n\n\t// now register the runc and CNI servers.\n\tmonitorpb.RegisterCNIServer(grpcServer, s)\n\tmonitorpb.RegisterRunCServer(grpcServer, s)\n\tzap.L().Debug(\"Starting the gRPC Monitor server 
loop\")\n\n\tgo s.processExtMonitorStarted(ctx)\n\n\t// run blocking call in a separate goroutine, report errors via channel\n\tgo func() {\n\t\tif err := grpcServer.Serve(lis); err != nil {\n\t\t\tzap.L().Error(\"failed to start the gRPC Monitor server\", zap.Error(err))\n\t\t\terrChan <- err\n\t\t}\n\t\tzap.L().Debug(\"Exiting gRPC Monitor server go func\")\n\n\t\t// the listener should be closed by this time, remove it\n\t\tif s.socketType == socketTypeUnix || s.socketType == socketTypeWindowsNamedPipe {\n\t\t\tif err := cleanupPipe(s.socketAddress); err != nil {\n\t\t\t\tzap.L().Error(\"unable to cleanup the gRPC Monitor server socket address\", zap.String(\"address\", s.socketAddress), zap.Error(err))\n\t\t\t\terrChan <- err\n\t\t\t}\n\t\t}\n\t}()\n\t// add the waitGrp to make sure that the gRPC server shuts down gracefully.\n\ts.waitStopGrp.Add(1)\n\n\t// Start() is non-blocking, but we block in the go routine\n\t// until either an OS signal or a fatal server error\n\tgo func() {\n\n\t\ts.running = true\n\t\tzap.L().Debug(\"the gRPC Monitor server loop is running\")\n\n\t\t// terminate gracefully\n\t\tdefer func() {\n\t\t\tzap.L().Debug(\"Stopping the gRPC Monitor server loop and listener socket\")\n\t\t\tgrpcServer.GracefulStop()\n\t\t\t// now we are sure that the connections have been drained completely.\n\t\t\ts.waitStopGrp.Done()\n\t\t\ts.running = false\n\t\t}()\n\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-s.stop:\n\t\t\t\tzap.L().Debug(\"gRPC Monitor server channel loop: got a stop notification on the stop channel\")\n\t\t\t\treturn\n\t\t\tcase err := <-errChan:\n\t\t\t\tzap.L().Fatal(\"gRPC Monitor server channel loop: got an error notification on the error channel\", zap.Error(err))\n\t\t\tcase <-ctx.Done():\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}()\n\n\treturn nil\n}\n\n// Stop stops the Monitor gRPC server (does not stop the enforcer)\nfunc (s *Server) Stop() error {\n\tif s.running {\n\t\tzap.L().Debug(\"gRPC Server: notified the graceful 
stop\")\n\t\tclose(s.stop)\n\t}\n\t// wait to make sure that the gRPC GracefulStop drains all the connections.\n\ts.waitStopGrp.Wait()\n\treturn nil\n}\n\n// RuncProxyStarted gets sent by the defender once when the defender has started the runc-proxy.\nfunc (s *Server) RuncProxyStarted(context.Context, *empty.Empty) (*empty.Empty, error) {\n\tzap.L().Info(\"grpc: runc-proxy has started\")\n\ts.extMonitorStartedLock.Lock()\n\ts.runcProxyStarted = true\n\ts.extMonitorStartedLock.Unlock()\n\ts.notifyProcessRuncProxyStartedCh <- struct{}{}\n\treturn &empty.Empty{}, nil\n}\n\n// isRuncProxyStarted returns the internal state of runcProxyStarted as a copy\nfunc (s *Server) isRuncProxyStarted() bool {\n\ts.extMonitorStartedLock.RLock()\n\tdefer s.extMonitorStartedLock.RUnlock()\n\treturn s.runcProxyStarted\n}\n\n// CniPluginInstalled gets sent by the defender once when the defender has installed the CNI plugin.\nfunc (s *Server) CniPluginInstalled(context.Context, *empty.Empty) (*empty.Empty, error) {\n\tzap.L().Info(\"grpc: CNI plugin is installed\")\n\ts.extMonitorStartedLock.Lock()\n\ts.cniInstalled = true\n\ts.extMonitorStartedLock.Unlock()\n\ts.notifyProcessCniInstalledCh <- struct{}{}\n\treturn &empty.Empty{}, nil\n}\n\n// isCniInstalled returns the internal state of cniInstalled as a copy\nfunc (s *Server) isCniInstalled() bool {\n\ts.extMonitorStartedLock.RLock()\n\tdefer s.extMonitorStartedLock.RUnlock()\n\treturn s.cniInstalled\n}\n\nfunc (s *Server) processExtMonitorStarted(ctx context.Context) {\n\tm := make(map[string]struct{})\n\tfor {\n\t\t// signal only when runc/cni has not yet started\n\t\tif !s.isRuncProxyStarted() && !s.isCniInstalled() {\n\t\t\ts.apoRuncWaitGrp.Done()\n\t\t}\n\t\t// wait for a notification: this will be sent in two cases:\n\t\t// - RuncProxyStarted was called\n\t\t// - a new monitor registers with the grpc server\n\t\tselect {\n\t\tcase <-ctx.Done():\n\t\t\treturn\n\t\tcase <-s.notifyProcessRuncProxyStartedCh:\n\t\t\t// 
continue here\n\t\tcase <-s.notifyProcessCniInstalledCh:\n\t\t}\n\t\tif s.isRuncProxyStarted() || s.isCniInstalled() {\n\t\t\ts.monitorsLock.RLock()\n\t\t\t// iterate over all currently registered monitors\n\t\t\t// and if they haven't gotten the SenderReady() yet\n\t\t\t// we will send it to them\n\t\t\tfor name, monitor := range s.monitors {\n\t\t\t\tif _, ok := m[name]; ok {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tmonitor.SenderReady()\n\t\t\t\tm[name] = struct{}{}\n\t\t\t}\n\t\t\ts.monitorsLock.RUnlock()\n\t\t}\n\t}\n}\n\nconst maxProcessingTime = time.Second * 5\n\nfunc calProcessingTime(onStart time.Time, containerID string) {\n\tprocessingTime := time.Since(onStart)\n\tif processingTime > (maxProcessingTime) {\n\t\tcounters.IncrementCounter(counters.ErrSegmentServerContainerEventExceedsProcessingTime)\n\t\tzap.L().Warn(\n\t\t\t\"grpc: ContainerEvent: processing of container event took longer than allowed processing time\",\n\t\t\tzap.String(\"id\", containerID),\n\t\t\tzap.Duration(\"processingTime\", processingTime),\n\t\t\tzap.Duration(\"maxProcessingTime\", maxProcessingTime),\n\t\t)\n\t} else {\n\t\tzap.L().Debug(\n\t\t\t\"grpc: ContainerEvent: processing of container event was within allowed time frame\",\n\t\t\tzap.String(\"id\", containerID),\n\t\t\tzap.Duration(\"processingTime\", processingTime),\n\t\t\tzap.Duration(\"maxProcessingTime\", maxProcessingTime),\n\t\t)\n\t}\n}\n\n// CNIContainerEvent handles container event requests\nfunc (s *Server) CNIContainerEvent(ctx context.Context, req *monitorpb.CNIContainerEventRequest) (*monitorpb.ContainerEventResponse, error) {\n\tzap.L().Debug(\"grpc: CNI ContainerEvent received\", zap.Any(\"request\", req), zap.Any(\"type\", req.Type))\n\n\t// calculate the time that this function takes and log accordingly\n\tonStart := time.Now()\n\tdefer func() {\n\t\tcalProcessingTime(onStart, req.ContainerID)\n\t}()\n\tcontainerArgs := containermetadata.NewCniArguments(req)\n\t// now send the container event to the 
monitor\n\treturn s.sendContainerEvent(ctx, containerArgs)\n}\n\n// RunCContainerEvent handles container event requests\nfunc (s *Server) RunCContainerEvent(ctx context.Context, req *monitorpb.RunCContainerEventRequest) (*monitorpb.ContainerEventResponse, error) {\n\tzap.L().Debug(\"grpc: runc ContainerEvent received\", zap.Strings(\"commandLine\", req.GetCommandLine()))\n\n\tif !s.isRuncProxyStarted() {\n\t\tzap.L().Warn(\"grpc: receiving ContainerEvent, but have not received RuncProxyStarted event yet. Compensating...\")\n\t\ts.RuncProxyStarted(ctx, &empty.Empty{}) // nolint\n\t\treturn &monitorpb.ContainerEventResponse{\n\t\t\tErrorMessage: \"received ContainerEvent before RuncProxyStarted event\",\n\t\t}, nil\n\t}\n\n\t// parse the runc command-line first\n\tcontainerArgs, err := containermetadata.ParseRuncArguments(req.GetCommandLine())\n\tif err != nil {\n\t\tzap.L().Error(\"grpc: ContainerEvent: failed to parse runc commandline\")\n\t\treturn &monitorpb.ContainerEventResponse{\n\t\t\tErrorMessage: fmt.Sprintf(\"failed to parse runc commandline: %s\", err),\n\t\t}, nil\n\t}\n\t// calculate the time that this function takes and log accordingly\n\tonStart := time.Now()\n\tdefer func() {\n\t\tcalProcessingTime(onStart, containerArgs.ID())\n\t}()\n\t// now send the container event to the monitor\n\treturn s.sendContainerEvent(ctx, containerArgs)\n}\n\nfunc (s *Server) sendContainerEvent(ctx context.Context, containerArgs containermetadata.ContainerArgs) (*monitorpb.ContainerEventResponse, error) {\n\tvar kmd containermetadata.CommonKubernetesContainerMetadata\n\tvar md containermetadata.CommonContainerMetadata\n\tvar err error\n\t// first check if the netnsPath is given; if so, it's a CNI event and we process it first.\n\t// if the netnsPath is not given, we fall back to the default mechanism for extraction,\n\t// if we can identify that we have this container\n\tif len(containerArgs.NetNsPath()) > 0 && len(containerArgs.PodName()) > 0 && 
len(containerArgs.PodNamespace()) > 0 {\n\t\t// create the cni containerMetadata\n\t\tkmd = containermetadata.NewCniContainerMetadata(containerArgs)\n\t} else if containermetadata.AutoDetect().Has(containerArgs) {\n\n\t\t// then extract the common container metadata\n\t\tmd, kmd, err = containermetadata.AutoDetect().Extract(containerArgs)\n\t\tif err != nil {\n\t\t\treturn &monitorpb.ContainerEventResponse{\n\t\t\t\tErrorMessage: fmt.Sprintf(\"failed to extract container metadata: %s\", err),\n\t\t\t}, nil\n\t\t}\n\n\t\t// as we are only interested in Kubernetes containers at the moment\n\t\t// simply log if this is a non-Kubernetes event\n\t\tif md != nil && kmd == nil {\n\t\t\tzap.L().Debug(\n\t\t\t\t\"grpc: ContainerEvent: container event does not belong to a Kubernetes container\",\n\t\t\t\tzap.String(\"md.ID()\", md.ID()),\n\t\t\t\tzap.String(\"md.Root()\", md.Root()),\n\t\t\t\tzap.String(\"md.Kind()\", md.Kind().String()),\n\t\t\t\tzap.String(\"md.Runtime()\", md.Runtime().String()),\n\t\t\t\tzap.Int(\"md.PID()\", md.PID()),\n\t\t\t\tzap.Bool(\"md.SystemdCgroups()\", md.SystemdCgroups()),\n\t\t\t)\n\t\t\treturn &monitorpb.ContainerEventResponse{}, nil\n\t\t}\n\t}\n\n\t// and now send an event to the K8s monitor\n\tif kmd != nil {\n\t\tzap.L().Debug(\n\t\t\t\"grpc: ContainerEvent: container event belongs to a Kubernetes container\",\n\t\t\tzap.String(\"kmd.ID()\", kmd.ID()),\n\t\t\tzap.String(\"kmd.Root()\", kmd.Root()),\n\t\t\tzap.String(\"kmd.Kind()\", kmd.Kind().String()),\n\t\t\tzap.String(\"kmd.Runtime()\", kmd.Runtime().String()),\n\t\t\tzap.Int(\"kmd.PID()\", kmd.PID()),\n\t\t\tzap.Bool(\"kmd.SystemdCgroups()\", kmd.SystemdCgroups()),\n\t\t\tzap.String(\"kmd.PodName()\", kmd.PodName()),\n\t\t\tzap.String(\"kmd.NetNsPath()\", kmd.NetNSPath()),\n\t\t\tzap.String(\"kmd.PodNamespace()\", kmd.PodNamespace()),\n\t\t\tzap.String(\"kmd.PodUID()\", kmd.PodUID()),\n\t\t\tzap.String(\"kmd.PodSandboxID()\", 
kmd.PodSandboxID()),\n\t\t)\n\n\t\ts.monitorsLock.RLock()\n\t\tdefer s.monitorsLock.RUnlock()\n\t\tmonitor, ok := s.monitors[constants.K8sMonitorRegistrationName]\n\t\tif !ok {\n\t\t\tzap.L().Debug(\"grpc: K8s monitor is not registered yet. Skipping processing of event.\")\n\t\t\treturn &monitorpb.ContainerEventResponse{\n\t\t\t\tErrorMessage: \"K8s monitor is not initialized yet\",\n\t\t\t}, nil\n\t\t}\n\n\t\tswitch containerArgs.Action() {\n\t\tcase containermetadata.StartAction:\n\t\t\t// the start action MUST be synchronous at all costs\n\t\t\tmonitor.Event(ctx, common.EventStart, kmd) // nolint: errcheck\n\t\tcase containermetadata.DeleteAction:\n\t\t\t// the delete event SHOULD be synchronous\n\t\t\t// however, we can unblock the caller and respect the context if it is not\n\t\t\tch := make(chan struct{})\n\t\t\tgo func() {\n\t\t\t\tmonitor.Event(context.Background(), common.EventDestroy, kmd) // nolint: errcheck\n\t\t\t\tclose(ch)\n\t\t\t}()\n\t\t\tselect {\n\t\t\tcase <-ctx.Done():\n\t\t\t\tzap.L().Warn(\"grpc: ContainerEvent: failed to process delete event within the context constraints\",\n\t\t\t\t\tzap.String(\"kmd.ID()\", kmd.ID()),\n\t\t\t\t\tzap.String(\"kmd.PodName()\", kmd.PodName()),\n\t\t\t\t\tzap.String(\"kmd.PodNamespace()\", kmd.PodNamespace()),\n\t\t\t\t\tzap.String(\"kmd.PodUID()\", kmd.PodUID()),\n\t\t\t\t\tzap.String(\"kmd.NetNsPath()\", kmd.NetNSPath()),\n\t\t\t\t\tzap.Error(ctx.Err()),\n\t\t\t\t)\n\t\t\tcase <-ch:\n\t\t\t\t// success, nothing more needs to be done\n\t\t\t}\n\t\tdefault:\n\t\t\tzap.L().Debug(\"grpc: unsupported action by the K8s monitor\", zap.String(\"action\", containerArgs.Action().String()))\n\t\t\treturn &monitorpb.ContainerEventResponse{\n\t\t\t\tErrorMessage: \"unexpected action received: \" + containerArgs.Action().String(),\n\t\t\t}, nil\n\t\t}\n\n\t\treturn &monitorpb.ContainerEventResponse{}, nil\n\t}\n\n\t// log an error if we can't find it because we should always be able to find it, and this is an error in 
the extractor\n\tzap.L().Error(\"grpc: ContainerEvent: container not found\", zap.String(\"containerID\", containerArgs.ID()), zap.String(\"action\", containerArgs.Action().String()))\n\treturn &monitorpb.ContainerEventResponse{\n\t\tErrorMessage: \"container not found\",\n\t}, nil\n}\n\n// SenderName must return a globally unique name of the implementor.\nfunc (s *Server) SenderName() string {\n\treturn constants.MonitorExtSenderName\n}\n\n// Register will register the given `monitor` for receiving events under `name`.\n// Multiple calls to this function for the same `name` must update the internal\n// state of the implementor to now send events to the newly registered monitor of this\n// name. Only one registration of a monitor of the same name is allowed.\nfunc (s *Server) Register(name string, monitor external.ReceiveEvents) error {\n\ts.monitorsLock.Lock()\n\tdefer s.monitorsLock.Unlock()\n\ts.monitors[name] = monitor\n\ts.notifyProcessRuncProxyStartedCh <- struct{}{}\n\treturn nil\n}\n"
  },
  {
    "path": "plugins/pam/README.md",
    "content": "# PAM Authorization Module for Trireme \n\nThe PAM Authorization module allows the integration of Trireme with the Linux PAM module. On every authorization\nrequest to the PAM module, the plugin can intercept the login or sudo attempt and activate the user \nin a specific network context where access to network resources is managed through the Trireme \nend-to-end authorization process. A simple use case is to give specific network access to specific \nusers, such as in the case of a jump-box in a cloud environment. \n\nTo build the module, simply run:\n\n```bash \ngo build -buildmode=c-shared -o pam-module.so\n```\n\nThis file needs to be copied to the directory of PAM modules (usually in /lib/x86_64-linux-gnu/security/). Once \ninstalled there, you can configure the PAM module to invoke the plugin by adding the corresponding\ndirective. For example, you can add this line to /etc/pam.d/sudo: \n\n```\nsession required pam_aporeto_uidm.so in \n```\n\nOnce this is installed, running `sudo -u <anyuser> /bin/bash` will cause the PAM module to send an event\nto Trireme, and a unique network context will be activated for this user. Based on the user\ninformation, one can select the right network policy to apply to the user.\n\nYou can achieve the same thing for the login shell by adding the directive to the \n/etc/pam.d/login file. \n"
  },
  {
    "path": "plugins/pam/uidmonitorpam.go",
    "content": "package main\n\n/*\n#cgo LDFLAGS: -lpam -fPIC\n#include <security/pam_appl.h>\n#include <stdlib.h>\nchar *get_user(pam_handle_t *pamh);\nchar *get_ruser(pam_handle_t *pamh);\nchar *get_rhost(pam_handle_t *pamh);\nchar *get_service(pam_handle_t *pam_h);\nvoid initLog() ;\nint is_system_user(char *user);\nint is_root(char *user);\n*/\nimport \"C\"\nimport (\n\t\"fmt\"\n\t\"log/syslog\"\n\t\"os\"\n\t\"os/user\"\n\n\t\"go.aporeto.io/trireme-lib/common\"\n\t\"go.aporeto.io/trireme-lib/monitor/remoteapi/client\"\n)\n\nfunc getGroupList(username string) ([]string, error) {\n\tslog, _ := syslog.New(syslog.LOG_ALERT|syslog.LOG_AUTH, \"mypam\")\n\tdefer func() {\n\t\t_ = slog.Close()\n\t}()\n\tuserhdl, err := user.Lookup(username)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tgids, err := userhdl.GroupIds()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tgroups := make([]string, len(gids))\n\tindex := 0\n\tfor _, gid := range gids {\n\t\tgrphdl, err := user.LookupGroupId(gid)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\t\tgroups[index] = \"groupname=\" + grphdl.Name\n\t\tindex++\n\n\t}\n\treturn groups[:index], nil\n}\n\n// nolint\n//export pam_sm_open_session\nfunc pam_sm_open_session(pamh *C.pam_handle_t, flags, argc int, argv **C.char) C.int {\n\tC.initLog()\n\tuser := C.get_user(pamh)\n\tservice := C.get_service(pamh)\n\tmetadatamap := []string{}\n\tuserstring := \"user=\" + C.GoString(user)\n\tmetadatamap = append(metadatamap, userstring)\n\tif groups, err := getGroupList(C.GoString(user)); err == nil {\n\t\tmetadatamap = append(metadatamap, groups...)\n\t}\n\n\tif service != nil {\n\t\tmetadatamap = append(metadatamap, \"SessionType=\"+C.GoString(service))\n\t} else {\n\t\tmetadatamap = append(metadatamap, \"SessionType=login\")\n\t}\n\n\trequest := &common.EventInfo{\n\t\tPUType:    common.UIDLoginPU,\n\t\tPUID:      C.GoString(user),\n\t\tName:      \"login-\" + C.GoString(user),\n\t\tPID:       int32(os.Getpid()),\n\t\tTags:      
metadatamap,\n\t\tEventType: \"start\",\n\t}\n\n\tif C.is_root(user) == 1 {\n\t\t// Do nothing: this is the root account\n\t} else {\n\t\t// not root: send the login event to Trireme\n\t\tslog, _ := syslog.New(syslog.LOG_ALERT|syslog.LOG_AUTH, \"mypam\")\n\t\tdefer func() {\n\t\t\t_ = slog.Close()\n\t\t}()\n\n\t\tclient, err := client.NewClient(common.TriremeSocket)\n\t\tif err != nil {\n\t\t\treturn C.PAM_SUCCESS\n\t\t}\n\n\t\tslog.Alert(\"Calling Trireme\") // nolint\n\t\tif err := client.SendRequest(request); err != nil {\n\t\t\terr = fmt.Errorf(\"Policy Server call failed: %s\", err)\n\t\t\t_ = slog.Alert(err.Error())\n\t\t\treturn C.PAM_SESSION_ERR\n\t\t}\n\t}\n\treturn C.PAM_SUCCESS\n}\n\n// nolint\n//export pam_sm_close_session\nfunc pam_sm_close_session(pamh *C.pam_handle_t, flags, argc int, argv **C.char) C.int {\n\tslog, _ := syslog.New(syslog.LOG_ALERT|syslog.LOG_AUTH, \"mypam\")\n\tslog.Alert(\"pam_sm_close_session\") // nolint\n\tslog.Close()                       // nolint\n\treturn C.PAM_SUCCESS\n}\n\nfunc main() {\n}\n"
  },
  {
    "path": "plugins/pam/uidmonitorpam_c.go",
    "content": "package main\n\n/*\n#cgo LDFLAGS: -lpam -fPIC\n#include <errno.h>\n#include <pwd.h>\n#include <security/pam_appl.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n#include <sys/stat.h>\n#include <sys/types.h>\n#include <unistd.h>\n#include <syslog.h>\nint get_uid(char *user);\n\n// get_user pulls the username out of the pam handle.\nchar *get_user(pam_handle_t *pamh) {\n  if (!pamh)\n    return NULL;\n  int pam_err = 0;\n  const char *user;\n\n  if ((pam_err = pam_get_item(pamh, PAM_USER, (const void**)&user)) != PAM_SUCCESS)\n    return NULL;\n\n  return strdup(user);\n}\n\n// get_ruser pulls the remote username out of the pam handle.\nchar *get_ruser(pam_handle_t *pamh) {\n  if (!pamh)\n    return NULL;\n  int pam_err = 0;\n  const char *user;\n  if ((pam_err = pam_get_item(pamh, PAM_RUSER, (const void**)&user)) != PAM_SUCCESS)\n    return NULL;\n  return strdup(user);\n}\n\nchar *get_service(pam_handle_t *pamh){\n  int pam_err = 0;\n  if (!pamh)\n    return NULL;\n  const char *service;\n  if ((pam_err = pam_get_item(pamh, PAM_SERVICE, (const void**)&service)) != PAM_SUCCESS)\n    return NULL;\n  return strdup(service);\n}\n\nvoid initLog() {\n   openlog(NULL,LOG_PID,LOG_AUTH);\n}\n\nint is_system_user(char *user){\n  struct passwd entry;\n  struct passwd *result;\n  char *buf;\n  size_t bufsize;\n  int s;\n  bufsize = sysconf(_SC_GETPW_R_SIZE_MAX);\n  if (bufsize == -1)\n        bufsize = 16384;\n  buf = malloc(bufsize);\n  s = getpwnam_r(user,&entry,buf,bufsize,&result);\n  if(result == NULL){\n    // user not found (s == 0) or lookup error: treat as a non-system user\n    free(buf);\n    return 0;\n  }\n  // We are late enough in the stack to get no errors about missing users ideally\n  if(strcmp(\"/bin/nologin\",entry.pw_shell) == 0 || strcmp(\"/bin/false\",entry.pw_shell) == 0 || strlen(entry.pw_shell) < 1){\n    syslog(LOG_ALERT,\"Called with ruser %s\",entry.pw_shell);\n    syslog(LOG_ALERT,\"Called with ruser %s\",entry.pw_passwd);\n    free(buf);\n    return 1;\n  }\n  if(entry.pw_passwd[0] == '!' || entry.pw_passwd[0] == '*' || strcmp(entry.pw_passwd,\"x\") == 0){\n    free(buf);\n    return 1;\n  }\n  free(buf);\n  return 0;\n}\n\nint is_root(char *user){\n  struct passwd entry;\n  struct passwd *result;\n  char *buf;\n  size_t bufsize;\n  int s;\n\n  bufsize = sysconf(_SC_GETPW_R_SIZE_MAX);\n  if (bufsize == -1)\n        bufsize = 16384;\n  buf = malloc(bufsize);\n  s = getpwnam_r(user,&entry,buf,bufsize,&result);\n  if(result == NULL){\n    // user not found (s == 0) or lookup error: not root\n    free(buf);\n    return 0;\n  }\n  if (entry.pw_uid == 0){\n    free(buf);\n    return 1;\n  }\n\n  free(buf);\n  return 0;\n}\n*/\nimport \"C\"\n"
  },
  {
    "path": "policy/apiservices.go",
    "content": "package policy\n\nimport (\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/usertokens\"\n)\n\n// ServiceType are the types of services that are supported.\ntype ServiceType int\n\n// Values of ServiceType\nconst (\n\tServiceL3 ServiceType = iota\n\tServiceHTTP\n\tServiceTCP\n\tServiceSecretsProxy\n)\n\n// ServiceTLSType is the type of TLS used on the public port\ntype ServiceTLSType int\n\n// Values of ServiceTLSType\nconst (\n\tServiceTLSTypeNone ServiceTLSType = iota\n\tServiceTLSTypeAporeto\n\tServiceTLSTypeCustom\n)\n\n// UserAuthorizationTypeValues are the types of user authorization methods that\n// are supported.\ntype UserAuthorizationTypeValues int\n\n// Values of UserAuthorizationTypeValues\nconst (\n\tUserAuthorizationNone UserAuthorizationTypeValues = iota\n\tUserAuthorizationMutualTLS\n\tUserAuthorizationJWT\n\tUserAuthorizationOIDC\n)\n\n// ApplicationServicesList is a list of ApplicationServices.\ntype ApplicationServicesList []*ApplicationService\n\n// ApplicationService is the type of service that this PU exposes.\ntype ApplicationService struct {\n\t// ID is the ID of the service\n\tID string\n\n\t// NetworkInfo provides the network information (addresses/ports) of the service.\n\t// This is the public facing network information, or how the service can be\n\t// accessed. In the case of Load Balancers for example, this would be the\n\t// IP/port of the load balancer.\n\tNetworkInfo *common.Service\n\n\t// PrivateNetworkInfo captures the network service definition of an application\n\t// as seen by the application. For example the port that the application is\n\t// listening to. This is needed in the case of port mappings.\n\tPrivateNetworkInfo *common.Service\n\n\t// PrivateTLSListener indicates that the service uses a TLS listener. 
As a\n\t// result we must use TLS for traffic sent locally in the service.\n\tPrivateTLSListener bool\n\n\t// NoTLSExternalService indicates that TLS should not be used for an external\n\t// service. This option is used for API calls to local metadata APIs and\n\t// should not be used for access to the Internet.\n\tNoTLSExternalService bool\n\n\t// PublicNetworkInfo provides the network information where the enforcer\n\t// should listen for incoming connections of the service. This can be\n\t// different from the PrivateNetworkInfo where the application is listening\n\t// and it essentially allows users to create Virtual IPs and Virtual Ports\n\t// for the new exposed TLS services. So, if an application is listening\n\t// on port 80, users do not need to access the application from the external\n\t// network through TLS on port 80, which would look odd. They can instead create\n\t// a PublicNetworkInfo and have Trireme listen on port 443, while the\n\t// application is still listening on port 80.\n\tPublicNetworkInfo *common.Service\n\n\t// Type is the type of the service.\n\tType ServiceType\n\n\t// HTTPRules are only valid for HTTP Services and capture the list of APIs\n\t// exposed by the service.\n\tHTTPRules []*HTTPRule\n\n\t// Tags are the tags of the service.\n\tTags []string\n\n\t// FallbackJWTAuthorizationCert is the certificate that has been used to sign\n\t// JWTs if they are not signed by the datapath\n\tFallbackJWTAuthorizationCert string\n\n\t// UserAuthorizationType is the type of user authorization that must be used.\n\tUserAuthorizationType UserAuthorizationTypeValues\n\n\t// UserAuthorizationHandler is the token handler for validating user tokens.\n\tUserAuthorizationHandler usertokens.Verifier\n\n\t// UserTokenToHTTPMappings is a map of mappings between JWT claims arriving in\n\t// a user request and outgoing HTTP headers towards an application. 
It\n\t// is used to allow operators to map claims to HTTP headers that downstream\n\t// applications can understand.\n\tUserTokenToHTTPMappings map[string]string\n\n\t// UserRedirectOnAuthorizationFail is the URL that the user can be redirected\n\t// to if there is an authorization failure. This allows the display of a custom\n\t// message.\n\tUserRedirectOnAuthorizationFail string\n\n\t// External indicates if this is an external service. For external services\n\t// access control is implemented at the ingress.\n\tExternal bool\n\n\t// CACert is the certificate of the CA of external services. This allows TLS to\n\t// work with external services that use private CAs.\n\tCACert []byte\n\n\t// AuthToken is the authentication token for any external API service calls. It is\n\t// used for example by the secrets proxy.\n\tAuthToken string\n\n\t// MutualTLSTrustedRoots is the CA that must be used for mutual TLS authentication.\n\tMutualTLSTrustedRoots []byte\n\n\t// PublicServiceCertificate is a publicly signed certificate that can be used\n\t// by the service to expose TLS to users without a Trireme client\n\tPublicServiceCertificate []byte\n\n\t// PublicServiceCertificateKey is the corresponding private key.\n\tPublicServiceCertificateKey []byte\n\n\t// PublicServiceTLSType specifies the TLS Type to support on the PublicService port.\n\t// This is useful for health checks. It should not be used for API access.\n\tPublicServiceTLSType ServiceTLSType\n}\n\n// HTTPRule holds a rule for a particular HTTPService. The rule\n// relates a set of URIs defined as regular expressions with associated\n// verbs. The * VERB indicates all actions.\ntype HTTPRule struct {\n\t// URIs is a list of regular expressions that describe the URIs that\n\t// a service is exposing.\n\tURIs []string\n\n\t// Methods is a list of the allowed verbs for the given list of URIs.\n\tMethods []string\n\n\t// ClaimMatchingRules is a list of matching rules associated with this rule. 
Clients\n\t// must present a set of claims that will satisfy these rules. Each rule\n\t// is an AND clause. The list of expressions is an OR of the AND clauses.\n\tClaimMatchingRules [][]string\n\n\t// Public indicates that this is a public API and anyone can access it.\n\t// No authorization will be performed on public APIs.\n\tPublic bool\n\n\t// HookMethod indicates that this rule is not for generic proxying but\n\t// must first be processed by the hook with the corresponding name.\n\tHookMethod string\n}\n\n// PublicPort returns the min port in the spec for the publicly exposed port.\nfunc (a *ApplicationService) PublicPort() int {\n\tif a.PublicNetworkInfo == nil {\n\t\treturn 0\n\t}\n\treturn int(a.PublicNetworkInfo.Ports.Min)\n}\n\n// PrivatePort returns the min port in the spec for the private listening port.\nfunc (a *ApplicationService) PrivatePort() int {\n\tif a.PrivateNetworkInfo == nil {\n\t\treturn 0\n\t}\n\treturn int(a.PrivateNetworkInfo.Ports.Min)\n}\n"
  },
  {
    "path": "policy/interfaces.go",
    "content": "// Package policy describes a generic interface for retrieving policies.\n// Different implementations are possible for environments such as Kubernetes,\n// Mesos or other custom environments. An implementation has to provide\n// a method for retrieving policy based on the metadata associated with the container\n// and deleting the policy when the container dies. It is up to the implementation\n// to decide how to generate the policy. The package also defines the basic data\n// structure for communicating policy information. The implementations are responsible\n// for providing all the necessary data.\npackage policy\n\nimport (\n\t\"context\"\n\n\t\"github.com/docker/go-connections/nat\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\n// A RuntimeReader provides access to the specific parameters stored in the Runtime\ntype RuntimeReader interface {\n\n\t// Pid returns the Pid of the Runtime.\n\tPid() int\n\n\t// Name returns the process name of the Runtime.\n\tName() string\n\n\t// NSPath returns the path to the namespace of the PU, if applicable\n\tNSPath() string\n\n\t// Tag returns the value of the given tag.\n\tTag(string) (string, bool)\n\n\t// Tags returns a copy of the list of the tags.\n\tTags() *TagStore\n\n\t// Options returns a copy of the list of options.\n\tOptions() OptionsType\n\n\t// IPAddresses returns a copy of all the IP addresses.\n\tIPAddresses() ExtendedMap\n\n\t// PUType returns the PUType for the PU\n\tPUType() common.PUType\n\n\t// SetServices sets the services of the runtime.\n\tSetServices(services []common.Service)\n\n\t// PortMap returns the portmap (container port -> host port)\n\tPortMap() map[nat.Port][]string\n}\n\n// A Resolver must be implemented by a policy engine that receives monitor events.\ntype Resolver interface {\n\n\t// HandlePUEvent is called by all monitors when a PU event is generated. 
The implementer\n\t// is responsible for updating all components by explicitly adding a new PU.\n\tHandlePUEvent(ctx context.Context, puID string, event common.Event, runtime RuntimeReader) error\n}\n"
  },
  {
    "path": "policy/mockpolicy/mockpolicy.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: policy/interfaces.go\n\n// Package mockpolicy is a generated GoMock package.\npackage mockpolicy\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\n\tnat \"github.com/docker/go-connections/nat\"\n\tgomock \"github.com/golang/mock/gomock\"\n\tcommon \"go.aporeto.io/enforcerd/trireme-lib/common\"\n\tpolicy \"go.aporeto.io/enforcerd/trireme-lib/policy\"\n)\n\n// MockRuntimeReader is a mock of RuntimeReader interface\n// nolint\ntype MockRuntimeReader struct {\n\tctrl     *gomock.Controller\n\trecorder *MockRuntimeReaderMockRecorder\n}\n\n// MockRuntimeReaderMockRecorder is the mock recorder for MockRuntimeReader\n// nolint\ntype MockRuntimeReaderMockRecorder struct {\n\tmock *MockRuntimeReader\n}\n\n// NewMockRuntimeReader creates a new mock instance\n// nolint\nfunc NewMockRuntimeReader(ctrl *gomock.Controller) *MockRuntimeReader {\n\tmock := &MockRuntimeReader{ctrl: ctrl}\n\tmock.recorder = &MockRuntimeReaderMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockRuntimeReader) EXPECT() *MockRuntimeReaderMockRecorder {\n\treturn m.recorder\n}\n\n// Pid mocks base method\n// nolint\nfunc (m *MockRuntimeReader) Pid() int {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Pid\")\n\tret0, _ := ret[0].(int)\n\treturn ret0\n}\n\n// Pid indicates an expected call of Pid\n// nolint\nfunc (mr *MockRuntimeReaderMockRecorder) Pid() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Pid\", reflect.TypeOf((*MockRuntimeReader)(nil).Pid))\n}\n\n// Name mocks base method\n// nolint\nfunc (m *MockRuntimeReader) Name() string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Name\")\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// Name indicates an expected call of Name\n// nolint\nfunc (mr *MockRuntimeReaderMockRecorder) Name() *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Name\", reflect.TypeOf((*MockRuntimeReader)(nil).Name))\n}\n\n// NSPath mocks base method\n// nolint\nfunc (m *MockRuntimeReader) NSPath() string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"NSPath\")\n\tret0, _ := ret[0].(string)\n\treturn ret0\n}\n\n// NSPath indicates an expected call of NSPath\n// nolint\nfunc (mr *MockRuntimeReaderMockRecorder) NSPath() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"NSPath\", reflect.TypeOf((*MockRuntimeReader)(nil).NSPath))\n}\n\n// Tag mocks base method\n// nolint\nfunc (m *MockRuntimeReader) Tag(arg0 string) (string, bool) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Tag\", arg0)\n\tret0, _ := ret[0].(string)\n\tret1, _ := ret[1].(bool)\n\treturn ret0, ret1\n}\n\n// Tag indicates an expected call of Tag\n// nolint\nfunc (mr *MockRuntimeReaderMockRecorder) Tag(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Tag\", reflect.TypeOf((*MockRuntimeReader)(nil).Tag), arg0)\n}\n\n// Tags mocks base method\n// nolint\nfunc (m *MockRuntimeReader) Tags() *policy.TagStore {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Tags\")\n\tret0, _ := ret[0].(*policy.TagStore)\n\treturn ret0\n}\n\n// Tags indicates an expected call of Tags\n// nolint\nfunc (mr *MockRuntimeReaderMockRecorder) Tags() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Tags\", reflect.TypeOf((*MockRuntimeReader)(nil).Tags))\n}\n\n// Options mocks base method\n// nolint\nfunc (m *MockRuntimeReader) Options() policy.OptionsType {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Options\")\n\tret0, _ := ret[0].(policy.OptionsType)\n\treturn ret0\n}\n\n// Options indicates an expected call of Options\n// nolint\nfunc (mr *MockRuntimeReaderMockRecorder) Options() *gomock.Call 
{\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Options\", reflect.TypeOf((*MockRuntimeReader)(nil).Options))\n}\n\n// IPAddresses mocks base method\n// nolint\nfunc (m *MockRuntimeReader) IPAddresses() policy.ExtendedMap {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"IPAddresses\")\n\tret0, _ := ret[0].(policy.ExtendedMap)\n\treturn ret0\n}\n\n// IPAddresses indicates an expected call of IPAddresses\n// nolint\nfunc (mr *MockRuntimeReaderMockRecorder) IPAddresses() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"IPAddresses\", reflect.TypeOf((*MockRuntimeReader)(nil).IPAddresses))\n}\n\n// PUType mocks base method\n// nolint\nfunc (m *MockRuntimeReader) PUType() common.PUType {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PUType\")\n\tret0, _ := ret[0].(common.PUType)\n\treturn ret0\n}\n\n// PUType indicates an expected call of PUType\n// nolint\nfunc (mr *MockRuntimeReaderMockRecorder) PUType() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PUType\", reflect.TypeOf((*MockRuntimeReader)(nil).PUType))\n}\n\n// SetServices mocks base method\n// nolint\nfunc (m *MockRuntimeReader) SetServices(services []common.Service) {\n\tm.ctrl.T.Helper()\n\tm.ctrl.Call(m, \"SetServices\", services)\n}\n\n// SetServices indicates an expected call of SetServices\n// nolint\nfunc (mr *MockRuntimeReaderMockRecorder) SetServices(services interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"SetServices\", reflect.TypeOf((*MockRuntimeReader)(nil).SetServices), services)\n}\n\n// PortMap mocks base method\n// nolint\nfunc (m *MockRuntimeReader) PortMap() map[nat.Port][]string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PortMap\")\n\tret0, _ := ret[0].(map[nat.Port][]string)\n\treturn ret0\n}\n\n// PortMap indicates an expected call of PortMap\n// nolint\nfunc (mr 
*MockRuntimeReaderMockRecorder) PortMap() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PortMap\", reflect.TypeOf((*MockRuntimeReader)(nil).PortMap))\n}\n\n// MockResolver is a mock of Resolver interface\n// nolint\ntype MockResolver struct {\n\tctrl     *gomock.Controller\n\trecorder *MockResolverMockRecorder\n}\n\n// MockResolverMockRecorder is the mock recorder for MockResolver\n// nolint\ntype MockResolverMockRecorder struct {\n\tmock *MockResolver\n}\n\n// NewMockResolver creates a new mock instance\n// nolint\nfunc NewMockResolver(ctrl *gomock.Controller) *MockResolver {\n\tmock := &MockResolver{ctrl: ctrl}\n\tmock.recorder = &MockResolverMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockResolver) EXPECT() *MockResolverMockRecorder {\n\treturn m.recorder\n}\n\n// HandlePUEvent mocks base method\n// nolint\nfunc (m *MockResolver) HandlePUEvent(ctx context.Context, puID string, event common.Event, runtime policy.RuntimeReader) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"HandlePUEvent\", ctx, puID, event, runtime)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// HandlePUEvent indicates an expected call of HandlePUEvent\n// nolint\nfunc (mr *MockResolverMockRecorder) HandlePUEvent(ctx, puID, event, runtime interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"HandlePUEvent\", reflect.TypeOf((*MockResolver)(nil).HandlePUEvent), ctx, puID, event, runtime)\n}\n"
  },
  {
    "path": "policy/policy.go",
    "content": "package policy\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"sync\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/controller/pkg/usertokens\"\n)\n\n// EnforcerType defines which enforcer type should be selected\ntype EnforcerType int\n\nconst (\n\t// EnforcerMapping lets the default enforcer configuration deal with it\n\tEnforcerMapping EnforcerType = iota\n\t// EnvoyAuthorizerEnforcer specifically asks for running an envoy enforcer/authorizer\n\tEnvoyAuthorizerEnforcer\n)\n\n// String implements the string interface\nfunc (t EnforcerType) String() string {\n\tswitch t {\n\tcase EnforcerMapping:\n\t\treturn \"EnforcerMapping\"\n\tcase EnvoyAuthorizerEnforcer:\n\t\treturn \"EnvoyAuthorizerEnforcer\"\n\tdefault:\n\t\treturn strconv.Itoa(int(t))\n\t}\n}\n\n// EnforcerTypeFromString parses `str` and tries to convert it to\nfunc EnforcerTypeFromString(str string) (EnforcerType, error) {\n\tswitch str {\n\tcase \"EnforcerMapping\":\n\t\treturn EnforcerMapping, nil\n\tcase \"EnvoyAuthorizerEnforcer\":\n\t\treturn EnvoyAuthorizerEnforcer, nil\n\tdefault:\n\t\ti, err := strconv.Atoi(str)\n\t\tif err != nil {\n\t\t\treturn EnforcerMapping, fmt.Errorf(\"failed to parse enforcer type from string number (input '%s'): %s\", str, err.Error())\n\t\t}\n\t\tif i < int(EnforcerMapping) {\n\t\t\treturn EnforcerMapping, fmt.Errorf(\"failed to parse enforcer type from string number (input '%s'): below possible valid value\", str)\n\t\t}\n\t\tif i > int(EnvoyAuthorizerEnforcer) {\n\t\t\treturn EnforcerMapping, fmt.Errorf(\"failed to parse enforcer type from string number (input '%s'): above possible valid value\", str)\n\t\t}\n\n\t\treturn EnforcerType(i), nil\n\t}\n}\n\n// PUPolicy captures all policy information related ot the container\ntype PUPolicy struct {\n\n\t// ManagementID is provided for the policy implementations as a means of\n\t// holding a policy identifier related to the implementation.\n\tmanagementID string\n\t// managementNamespace is provided 
for the policy implementations as a means of\n\t// holding a policy sub identifier related to the implementation.\n\tmanagementNamespace string\n\t// triremeAction defines what level of policy should be applied to that container.\n\ttriremeAction PUAction\n\t// dnsACLs is the list of DNS names and the associated ports that the container is\n\t// allowed to talk to outside the data center\n\tDNSACLs DNSRuleList\n\t// applicationACLs is the list of ACLs to be applied when the container talks\n\t// to IP Addresses outside the data center\n\tapplicationACLs IPRuleList\n\t// networkACLs is the list of ACLs to be applied from IP Addresses outside\n\t// the data center\n\tnetworkACLs IPRuleList\n\t// identity is the set of key value pairs that must be sent over the wire.\n\tidentity *TagStore\n\t// compressedTags is the set of compressed key/value pairs as binary values.\n\tcompressedTags *TagStore\n\t// annotations are key/value pairs that should be used for accounting reasons\n\tannotations *TagStore\n\t// transmitterRules is the set of rules that implement the label matching at the Transmitter\n\ttransmitterRules TagSelectorList\n\t// receiverRules is the set of rules that implement matching at the Receiver\n\treceiverRules TagSelectorList\n\t// ips is the set of IP addresses and namespaces that the policy must be applied to\n\tips ExtendedMap\n\t// servicesListeningPort is the port that we will use for the proxy.\n\tservicesListeningPort int\n\t// dnsProxyPort is the proxy port that listens for DNS traffic\n\tdnsProxyPort int\n\t// exposedServices is the list of services that this PU is exposing.\n\texposedServices ApplicationServicesList\n\t// dependentServices is the list of services that this PU depends on.\n\tdependentServices ApplicationServicesList\n\t// servicesCertificate is the services certificate\n\tservicesCertificate string\n\t// servicesPrivateKey is the service private key\n\tservicesPrivateKey string\n\t// servicesCA is the CA to be used for the 
outgoing services\n\tservicesCA string\n\t// scopes are the processing unit granted scopes\n\tscopes []string\n\t// enforcerType is the enforcer type that is supposed to be used for this PU\n\tenforcerType EnforcerType\n\t// appDefaultPolicyAction is the application default action of the namespace\n\tappDefaultPolicyAction ActionType\n\t// netDefaultPolicyAction is the network default action of the namespace\n\tnetDefaultPolicyAction ActionType\n\t// logPrefixMapping maps a short nlog prefix to its long prefix\n\tlogPrefixMapping map[string]string\n\t// logPrefixMappingCalculated is used to know when to calculate the log mapping\n\tlogPrefixMappingCalculated bool\n\n\tsync.Mutex\n}\n\n// PUAction defines the action types that apply for a specific PU as a whole.\ntype PUAction int\n\nconst (\n\t// AllowAll allows everything for the specific PU.\n\tAllowAll = 0x1\n\t// Police filters on the PU based on the PolicyRules.\n\tPolice = 0x2\n)\n\n// NewPUPolicy generates a new PUPolicy.\n// appACLs are the ACLs for packets coming from the Application/PU to the Network.\n// netACLs are the ACLs for packets coming from the Network to the Application/PU.\nfunc NewPUPolicy(\n\tid string,\n\tnamespace string,\n\taction PUAction,\n\tappACLs IPRuleList,\n\tnetACLs IPRuleList,\n\tdnsACLs DNSRuleList,\n\ttxtags TagSelectorList,\n\trxtags TagSelectorList,\n\tidentity *TagStore,\n\tannotations *TagStore,\n\tcompressedTags *TagStore,\n\tips ExtendedMap,\n\tservicesListeningPort int,\n\tdnsProxyPort int,\n\texposedServices ApplicationServicesList,\n\tdependentServices ApplicationServicesList,\n\tscopes []string,\n\tenforcerType EnforcerType,\n\tappDefaultPolicyAction ActionType,\n\tnetDefaultPolicyAction ActionType,\n) *PUPolicy {\n\n\tif appACLs == nil {\n\t\tappACLs = IPRuleList{}\n\t}\n\tif netACLs == nil {\n\t\tnetACLs = IPRuleList{}\n\t}\n\tif dnsACLs == nil {\n\t\tdnsACLs = DNSRuleList{}\n\t}\n\tif txtags == nil {\n\t\ttxtags = TagSelectorList{}\n\t}\n\tif rxtags == nil 
{\n\t\trxtags = TagSelectorList{}\n\t}\n\n\tif identity == nil {\n\t\tidentity = NewTagStore()\n\t}\n\n\tif annotations == nil {\n\t\tannotations = NewTagStore()\n\t}\n\n\tif compressedTags == nil {\n\t\tcompressedTags = NewTagStore()\n\t}\n\n\tif ips == nil {\n\t\tips = ExtendedMap{}\n\t}\n\n\tif exposedServices == nil {\n\t\texposedServices = ApplicationServicesList{}\n\t}\n\n\tif dependentServices == nil {\n\t\tdependentServices = ApplicationServicesList{}\n\t}\n\n\treturn &PUPolicy{\n\t\tmanagementID:           id,\n\t\tmanagementNamespace:    namespace,\n\t\ttriremeAction:          action,\n\t\tapplicationACLs:        appACLs,\n\t\tnetworkACLs:            netACLs,\n\t\tDNSACLs:                dnsACLs,\n\t\ttransmitterRules:       txtags,\n\t\treceiverRules:          rxtags,\n\t\tidentity:               identity,\n\t\tcompressedTags:         compressedTags,\n\t\tannotations:            annotations,\n\t\tips:                    ips,\n\t\tservicesListeningPort:  servicesListeningPort,\n\t\tdnsProxyPort:           dnsProxyPort,\n\t\texposedServices:        exposedServices,\n\t\tdependentServices:      dependentServices,\n\t\tscopes:                 scopes,\n\t\tenforcerType:           enforcerType,\n\t\tappDefaultPolicyAction: appDefaultPolicyAction,\n\t\tnetDefaultPolicyAction: netDefaultPolicyAction,\n\t}\n}\n\n// NewPUPolicyWithDefaults sets up a PU policy with defaults\nfunc NewPUPolicyWithDefaults() *PUPolicy {\n\treturn NewPUPolicy(\"\", \"\", AllowAll, nil, nil, nil, nil, nil, nil, nil, nil, nil, 0, 0, nil, nil, []string{}, EnforcerMapping, Reject|Log, Reject|Log)\n}\n\n// Clone returns a copy of the policy\nfunc (p *PUPolicy) Clone() *PUPolicy {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tnp := 
NewPUPolicy(\n\t\tp.managementID,\n\t\tp.managementNamespace,\n\t\tp.triremeAction,\n\t\tp.applicationACLs.Copy(),\n\t\tp.networkACLs.Copy(),\n\t\tp.DNSACLs.Copy(),\n\t\tp.transmitterRules.Copy(),\n\t\tp.receiverRules.Copy(),\n\t\tp.identity.Copy(),\n\t\tp.annotations.Copy(),\n\t\tp.compressedTags.Copy(),\n\t\tp.ips.Copy(),\n\t\tp.servicesListeningPort,\n\t\tp.dnsProxyPort,\n\t\tp.exposedServices,\n\t\tp.dependentServices,\n\t\tp.scopes,\n\t\tp.enforcerType,\n\t\tp.appDefaultPolicyAction,\n\t\tp.netDefaultPolicyAction,\n\t)\n\n\treturn np\n}\n\n// ManagementID returns the management ID\nfunc (p *PUPolicy) ManagementID() string {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.managementID\n}\n\n// ManagementNamespace returns the management Namespace\nfunc (p *PUPolicy) ManagementNamespace() string {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.managementNamespace\n}\n\n// TriremeAction returns the TriremeAction\nfunc (p *PUPolicy) TriremeAction() PUAction {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.triremeAction\n}\n\n// SetTriremeAction sets the TriremeAction\nfunc (p *PUPolicy) SetTriremeAction(action PUAction) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.triremeAction = action\n}\n\n// ApplicationACLs returns a copy of IPRuleList\nfunc (p *PUPolicy) ApplicationACLs() IPRuleList {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.applicationACLs.Copy()\n}\n\n// NetworkACLs returns a copy of IPRuleList\nfunc (p *PUPolicy) NetworkACLs() IPRuleList {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.networkACLs.Copy()\n}\n\n// DNSNameACLs returns a copy of DNSRuleList\nfunc (p *PUPolicy) DNSNameACLs() DNSRuleList {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.DNSACLs.Copy()\n}\n\n// ReceiverRules returns a copy of TagSelectorList\nfunc (p *PUPolicy) ReceiverRules() TagSelectorList {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.receiverRules.Copy()\n}\n\n// AddReceiverRules adds a receiver rule\nfunc (p *PUPolicy) AddReceiverRules(t TagSelector) {\n\tp.Lock()\n\tdefer 
p.Unlock()\n\n\tp.receiverRules = append(p.receiverRules, t)\n}\n\n// TransmitterRules returns a copy of TagSelectorList\nfunc (p *PUPolicy) TransmitterRules() TagSelectorList {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.transmitterRules.Copy()\n}\n\n// AddTransmitterRules adds a transmitter rule\nfunc (p *PUPolicy) AddTransmitterRules(t TagSelector) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.transmitterRules = append(p.transmitterRules, t)\n}\n\n// Identity returns a copy of the Identity\nfunc (p *PUPolicy) Identity() *TagStore {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.identity.Copy()\n}\n\n// CompressedTags returns the compressed tags of the policy.\nfunc (p *PUPolicy) CompressedTags() *TagStore {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.compressedTags.Copy()\n}\n\n// Annotations returns a copy of the annotations\nfunc (p *PUPolicy) Annotations() *TagStore {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.annotations.Copy()\n}\n\n// AddIdentityTag adds a policy tag\nfunc (p *PUPolicy) AddIdentityTag(k, v string) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.identity.AppendKeyValue(k, v)\n}\n\n// IPAddresses returns all the IP addresses for the processing unit\nfunc (p *PUPolicy) IPAddresses() ExtendedMap {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.ips.Copy()\n}\n\n// SetIPAddresses sets the IP addresses for the processing unit\nfunc (p *PUPolicy) SetIPAddresses(l ExtendedMap) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.ips = l\n}\n\n// ExposedServices returns the exposed services\nfunc (p *PUPolicy) ExposedServices() ApplicationServicesList {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.exposedServices\n}\n\n// DNSProxyPort gets the dns proxy port\nfunc (p *PUPolicy) DNSProxyPort() string {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn strconv.Itoa(p.dnsProxyPort)\n}\n\n// DependentServices returns the external services.\nfunc (p *PUPolicy) DependentServices() ApplicationServicesList {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.dependentServices\n}\n\n// 
ServicesListeningPort returns the port that should be used by the proxies.\nfunc (p *PUPolicy) ServicesListeningPort() string {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn strconv.Itoa(p.servicesListeningPort)\n}\n\n// UpdateDNSNetworks updates the set of FQDN names allowed by the policy\nfunc (p *PUPolicy) UpdateDNSNetworks(networks DNSRuleList) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tfor k, v := range networks {\n\t\tp.DNSACLs[k] = v\n\t}\n}\n\n// UpdateServiceCertificates updates the certificate and private key of the policy\nfunc (p *PUPolicy) UpdateServiceCertificates(cert, key string) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tp.servicesCertificate = cert\n\tp.servicesPrivateKey = key\n}\n\n// ServiceCertificates returns the service certificate.\nfunc (p *PUPolicy) ServiceCertificates() (string, string, string) {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.servicesCertificate, p.servicesPrivateKey, p.servicesCA\n}\n\n// Scopes returns the scopes of the policy.\nfunc (p *PUPolicy) Scopes() []string {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.scopes\n}\n\n// EnforcerType returns the enforcer type of the policy.\nfunc (p *PUPolicy) EnforcerType() EnforcerType {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.enforcerType\n}\n\n// AppDefaultPolicyAction returns default application action.\nfunc (p *PUPolicy) AppDefaultPolicyAction() ActionType {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.appDefaultPolicyAction\n}\n\n// NetDefaultPolicyAction returns default network action.\nfunc (p *PUPolicy) NetDefaultPolicyAction() ActionType {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn p.netDefaultPolicyAction\n}\n\n// ToPublicPolicy converts the object to a marshallable object.\nfunc (p *PUPolicy) ToPublicPolicy() *PUPolicyPublic {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\treturn &PUPolicyPublic{\n\t\tManagementID:           p.managementID,\n\t\tManagementNamespace:    p.managementNamespace,\n\t\tTriremeAction:          p.triremeAction,\n\t\tApplicationACLs:        
p.applicationACLs.Copy(),\n\t\tNetworkACLs:            p.networkACLs.Copy(),\n\t\tDNSACLs:                p.DNSACLs.Copy(),\n\t\tTransmitterRules:       p.transmitterRules.Copy(),\n\t\tReceiverRules:          p.receiverRules.Copy(),\n\t\tAnnotations:            p.annotations.GetSlice(),\n\t\tCompressedTags:         p.compressedTags.GetSlice(),\n\t\tIdentity:               p.identity.GetSlice(),\n\t\tIPs:                    p.ips.Copy(),\n\t\tServicesListeningPort:  p.servicesListeningPort,\n\t\tDNSProxyPort:           p.dnsProxyPort,\n\t\tExposedServices:        p.exposedServices,\n\t\tDependentServices:      p.dependentServices,\n\t\tScopes:                 p.scopes,\n\t\tServicesCA:             p.servicesCA,\n\t\tServicesCertificate:    p.servicesCertificate,\n\t\tServicesPrivateKey:     p.servicesPrivateKey,\n\t\tEnforcerType:           p.enforcerType,\n\t\tAppDefaultPolicyAction: p.appDefaultPolicyAction,\n\t\tNetDefaultPolicyAction: p.netDefaultPolicyAction,\n\t}\n}\n\n// LookupLogPrefix returns the long version of the nlog prefix\nfunc (p *PUPolicy) LookupLogPrefix(key string) (string, bool) {\n\n\tp.Lock()\n\tdefer p.Unlock()\n\n\t// On demand calculate the mapping\n\tp.calculateLogPrefixes()\n\n\tlogPrefix, ok := p.logPrefixMapping[key]\n\treturn logPrefix, ok\n}\n\n// GetLogPrefixes returns the current map of logging prefixes\nfunc (p *PUPolicy) GetLogPrefixes() map[string]string {\n\n\tp.Lock()\n\tdefer p.Unlock()\n\n\t// On demand calculate the mapping\n\tp.calculateLogPrefixes()\n\n\tclone := map[string]string{}\n\tfor key, value := range p.logPrefixMapping {\n\t\tclone[key] = value\n\t}\n\treturn clone\n}\n\n// MergeLogPrefixes merges existing prefixes with the current logging prefixes\nfunc (p *PUPolicy) MergeLogPrefixes(prefixes map[string]string) {\n\n\tp.Lock()\n\tdefer p.Unlock()\n\n\t// On demand calculate the mapping\n\tp.calculateLogPrefixes()\n\n\tfor key, value := range prefixes {\n\t\tp.logPrefixMapping[key] = value\n\t}\n}\n\n// 
calculateLogPrefixes calculates the short/long logging prefixes\nfunc (p *PUPolicy) calculateLogPrefixes() {\n\n\t// On demand calculate the mapping\n\tif !p.logPrefixMappingCalculated {\n\t\tp.logPrefixMapping = map[string]string{}\n\t\tcompute := func(ruleList IPRuleList) {\n\t\t\tfor _, ipRule := range ruleList {\n\t\t\t\tif ipRule.Policy != nil {\n\t\t\t\t\tkey, value := ipRule.Policy.GetShortAndLongLogPrefix()\n\t\t\t\t\tp.logPrefixMapping[key] = value\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tcompute(p.applicationACLs)\n\t\tcompute(p.networkACLs)\n\n\t\t// mapping has been calculated\n\t\tp.logPrefixMappingCalculated = true\n\t}\n}\n\n// PUPolicyPublic captures all policy information related to the processing\n// unit in an object that can be marshalled and transmitted over the RPC interface.\ntype PUPolicyPublic struct {\n\tManagementID           string                  `json:\"managementID,omitempty\"`\n\tManagementNamespace    string                  `json:\"managementNamespace,omitempty\"`\n\tTriremeAction          PUAction                `json:\"triremeAction,omitempty\"`\n\tApplicationACLs        IPRuleList              `json:\"applicationACLs,omitempty\"`\n\tNetworkACLs            IPRuleList              `json:\"networkACLs,omitempty\"`\n\tDNSACLs                DNSRuleList             `json:\"dnsACLs,omitempty\"`\n\tIdentity               []string                `json:\"identity,omitempty\"`\n\tAnnotations            []string                `json:\"annotations,omitempty\"`\n\tCompressedTags         []string                `json:\"compressedtags,omitempty\"`\n\tTransmitterRules       TagSelectorList         `json:\"transmitterRules,omitempty\"`\n\tReceiverRules          TagSelectorList         `json:\"receiverRules,omitempty\"`\n\tIPs                    ExtendedMap             `json:\"IPs,omitempty\"`\n\tServicesListeningPort  int                     `json:\"servicesListeningPort,omitempty\"`\n\tDNSProxyPort           int                     
`json:\"dnsProxyPort,omitempty\"`\n\tExposedServices        ApplicationServicesList `json:\"exposedServices,omitempty\"`\n\tDependentServices      ApplicationServicesList `json:\"dependentServices,omitempty\"`\n\tServicesCertificate    string                  `json:\"servicesCertificate,omitempty\"`\n\tServicesPrivateKey     string                  `json:\"servicesPrivateKey,omitempty\"`\n\tServicesCA             string                  `json:\"servicesCA,omitempty\"`\n\tScopes                 []string                `json:\"scopes,omitempty\"`\n\tEnforcerType           EnforcerType            `json:\"enforcerTypes,omitempty\"`\n\tAppDefaultPolicyAction ActionType              `json:\"appDefaultPolicyAction,omitempty\"`\n\tNetDefaultPolicyAction ActionType              `json:\"netDefaultPolicyAction,omitempty\"`\n}\n\n// ToPrivatePolicy converts the object to a private object.\nfunc (p *PUPolicyPublic) ToPrivatePolicy(ctx context.Context, convert bool) (*PUPolicy, error) {\n\tvar err error\n\n\texposedServices := ApplicationServicesList{}\n\tfor _, e := range p.ExposedServices {\n\t\tif convert {\n\t\t\te.UserAuthorizationHandler, err = usertokens.NewVerifier(ctx, e.UserAuthorizationHandler)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"unable to initialize user authorization handler for service: %s - error %s\", e.ID, err)\n\t\t\t}\n\t\t}\n\t\texposedServices = append(exposedServices, e)\n\t}\n\n\treturn &PUPolicy{\n\t\tmanagementID:           p.ManagementID,\n\t\tmanagementNamespace:    p.ManagementNamespace,\n\t\ttriremeAction:          p.TriremeAction,\n\t\tapplicationACLs:        p.ApplicationACLs,\n\t\tnetworkACLs:            p.NetworkACLs.Copy(),\n\t\tDNSACLs:                p.DNSACLs.Copy(),\n\t\ttransmitterRules:       p.TransmitterRules.Copy(),\n\t\treceiverRules:          p.ReceiverRules.Copy(),\n\t\tannotations:            NewTagStoreFromSlice(p.Annotations),\n\t\tcompressedTags:         NewTagStoreFromSlice(p.CompressedTags),\n\t\tidentity:  
             NewTagStoreFromSlice(p.Identity),\n\t\tips:                    p.IPs.Copy(),\n\t\tservicesListeningPort:  p.ServicesListeningPort,\n\t\tdnsProxyPort:           p.DNSProxyPort,\n\t\texposedServices:        exposedServices,\n\t\tdependentServices:      p.DependentServices,\n\t\tscopes:                 p.Scopes,\n\t\tenforcerType:           p.EnforcerType,\n\t\tservicesCA:             p.ServicesCA,\n\t\tservicesCertificate:    p.ServicesCertificate,\n\t\tservicesPrivateKey:     p.ServicesPrivateKey,\n\t\tappDefaultPolicyAction: p.AppDefaultPolicyAction,\n\t\tnetDefaultPolicyAction: p.NetDefaultPolicyAction,\n\t}, nil\n}\n"
  },
  {
    "path": "policy/policy_test.go",
    "content": "// +build !windows\n\npackage policy\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\nfunc TestNewPolicy(t *testing.T) {\n\tConvey(\"Given that I instantiate a new policy\", t, func() {\n\n\t\tConvey(\"When I provide only the mandatory fields\", func() {\n\t\t\tp := NewPUPolicy(\"id1\", \"/abc\", AllowAll, nil, nil, nil, nil, nil, nil, nil, nil, nil, 0, 0, nil, nil, []string{}, EnforcerMapping, Reject|Log, Reject|Log)\n\t\t\tConvey(\"I shpuld get an empty policy\", func() {\n\t\t\t\tSo(p, ShouldNotBeNil)\n\t\t\t\tSo(p.applicationACLs, ShouldNotBeNil)\n\t\t\t\tSo(p.networkACLs, ShouldNotBeNil)\n\t\t\t\tSo(p.triremeAction, ShouldEqual, AllowAll)\n\t\t\t\tSo(p.transmitterRules, ShouldNotBeNil)\n\t\t\t\tSo(p.receiverRules, ShouldNotBeNil)\n\t\t\t\tSo(p.identity, ShouldNotBeNil)\n\t\t\t\tSo(p.ips, ShouldNotBeNil)\n\t\t\t\tShouldEqual(p.appDefaultPolicyAction, Reject|Log)\n\t\t\t\tShouldEqual(p.netDefaultPolicyAction, Reject|Log)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I provide all the feilds\", func() {\n\t\t\tappACL := IPRule{\n\t\t\t\tPolicy: &FlowPolicy{\n\t\t\t\t\tAction:   Accept,\n\t\t\t\t\tPolicyID: \"1\",\n\t\t\t\t},\n\t\t\t\tAddresses: []string{\"10.0.0.0/8\"},\n\t\t\t\tProtocols: []string{\"tcp\"},\n\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t}\n\n\t\t\tnetACL := IPRule{\n\t\t\t\tPolicy: &FlowPolicy{\n\t\t\t\t\tAction:   Accept,\n\t\t\t\t\tPolicyID: \"2\",\n\t\t\t\t},\n\t\t\t\tAddresses: []string{\"20.0.0.0/8\"},\n\t\t\t\tProtocols: []string{\"tcp\"},\n\t\t\t\tPorts:     []string{\"80\"},\n\t\t\t}\n\n\t\t\tclause := KeyValueOperator{\n\t\t\t\tKey:      \"app\",\n\t\t\t\tValue:    []string{\"web\"},\n\t\t\t\tOperator: Equal,\n\t\t\t}\n\n\t\t\ttxtags := TagSelectorList{\n\t\t\t\tTagSelector{\n\t\t\t\t\tClause: []KeyValueOperator{clause},\n\t\t\t\t\tPolicy: &FlowPolicy{Action: Accept, PolicyID: \"3\"},\n\t\t\t\t},\n\t\t\t}\n\t\t\trxtags := 
TagSelectorList{\n\t\t\t\tTagSelector{\n\t\t\t\t\tClause: []KeyValueOperator{clause},\n\t\t\t\t\tPolicy: &FlowPolicy{Action: Reject, PolicyID: \"4\"},\n\t\t\t\t},\n\t\t\t}\n\n\t\t\tidentity := NewTagStore()\n\t\t\tidentity.AppendKeyValue(\"image\", \"nginx\")\n\n\t\t\tannotations := NewTagStore()\n\t\t\tannotations.AppendKeyValue(\"image\", \"nginx\")\n\t\t\tannotations.AppendKeyValue(\"server\", \"local\")\n\n\t\t\tips := ExtendedMap{DefaultNamespace: \"172.0.0.1\"}\n\n\t\t\tp := NewPUPolicy(\n\t\t\t\t\"id\",\n\t\t\t\t\"/abc\",\n\t\t\t\tAllowAll,\n\t\t\t\tIPRuleList{appACL},\n\t\t\t\tIPRuleList{netACL},\n\t\t\t\tnil,\n\t\t\t\ttxtags,\n\t\t\t\trxtags,\n\t\t\t\tidentity,\n\t\t\t\tannotations,\n\t\t\t\tnil,\n\t\t\t\tips,\n\t\t\t\t0,\n\t\t\t\t0,\n\t\t\t\tnil,\n\t\t\t\tnil,\n\t\t\t\t[]string{},\n\t\t\t\tEnforcerMapping,\n\t\t\t\tReject|Log,\n\t\t\t\tReject|Log,\n\t\t\t)\n\n\t\t\tConvey(\"Then I should get the right policy\", func() {\n\t\t\t\tSo(p, ShouldNotBeNil)\n\t\t\t\tSo(p.triremeAction, ShouldEqual, AllowAll)\n\t\t\t\tSo(p.applicationACLs, ShouldResemble, IPRuleList{appACL})\n\t\t\t\tSo(p.networkACLs, ShouldResemble, IPRuleList{netACL})\n\t\t\t\tSo(p.transmitterRules, ShouldResemble, txtags)\n\t\t\t\tSo(p.receiverRules, ShouldResemble, rxtags)\n\t\t\t\tSo(p.identity, ShouldResemble, identity)\n\t\t\t\tSo(p.annotations, ShouldResemble, annotations)\n\t\t\t\tSo(p.ips, ShouldResemble, ips)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestNewPolicyWithDefaults(t *testing.T) {\n\tConvey(\"When I create a default policy\", t, func() {\n\t\tp := NewPUPolicyWithDefaults()\n\t\tConvey(\"I should get an empty policy\", func() {\n\t\t\tSo(p, ShouldNotBeNil)\n\t\t\tSo(p.applicationACLs, ShouldNotBeNil)\n\t\t\tSo(p.networkACLs, ShouldNotBeNil)\n\t\t\tSo(p.triremeAction, ShouldEqual, AllowAll)\n\t\t\tSo(p.transmitterRules, ShouldNotBeNil)\n\t\t\tSo(p.receiverRules, ShouldNotBeNil)\n\t\t\tSo(p.identity, ShouldNotBeNil)\n\t\t\tSo(p.ips, ShouldNotBeNil)\n\t\t})\n\t})\n}\n\nfunc 
TestFuncClone(t *testing.T) {\n\tConvey(\"When I have a default policy\", t, func() {\n\t\tappACL := IPRule{\n\t\t\tPolicy: &FlowPolicy{\n\t\t\t\tAction:   Accept,\n\t\t\t\tPolicyID: \"1\",\n\t\t\t},\n\t\t\tAddresses: []string{\"10.0.0.0/8\"},\n\t\t\tProtocols: []string{\"tcp\"},\n\t\t\tPorts:     []string{\"80\"},\n\t\t}\n\n\t\tnetACL := IPRule{\n\t\t\tPolicy: &FlowPolicy{\n\t\t\t\tAction:   Accept,\n\t\t\t\tPolicyID: \"2\",\n\t\t\t},\n\t\t\tAddresses: []string{\"20.0.0.0/8\"},\n\t\t\tProtocols: []string{\"tcp\"},\n\t\t\tPorts:     []string{\"80\"},\n\t\t}\n\n\t\tclause := KeyValueOperator{\n\t\t\tKey:      \"app\",\n\t\t\tValue:    []string{\"web\"},\n\t\t\tOperator: Equal,\n\t\t}\n\n\t\ttxtags := TagSelectorList{\n\t\t\tTagSelector{\n\t\t\t\tClause: []KeyValueOperator{clause},\n\t\t\t\tPolicy: &FlowPolicy{Action: Accept, PolicyID: \"3\"},\n\t\t\t},\n\t\t}\n\t\trxtags := TagSelectorList{\n\t\t\tTagSelector{\n\t\t\t\tClause: []KeyValueOperator{clause},\n\t\t\t\tPolicy: &FlowPolicy{Action: Reject, PolicyID: \"4\"},\n\t\t\t},\n\t\t}\n\n\t\tidentity := NewTagStore()\n\t\tidentity.AppendKeyValue(\"image\", \"nginx\")\n\n\t\tannotations := NewTagStore()\n\t\tannotations.AppendKeyValue(\"image\", \"nginx\")\n\t\tannotations.AppendKeyValue(\"server\", \"local\")\n\n\t\tips := ExtendedMap{DefaultNamespace: \"172.0.0.1\"}\n\n\t\td := NewPUPolicy(\n\t\t\t\"id\",\n\t\t\t\"/abc\",\n\t\t\tAllowAll,\n\t\t\tIPRuleList{appACL},\n\t\t\tIPRuleList{netACL},\n\t\t\tnil,\n\t\t\ttxtags,\n\t\t\trxtags,\n\t\t\tidentity,\n\t\t\tannotations,\n\t\t\tnil,\n\t\t\tips,\n\t\t\t0,\n\t\t\t0,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\t[]string{},\n\t\t\tEnforcerMapping,\n\t\t\tReject|Log,\n\t\t\tReject|Log,\n\t\t)\n\t\tConvey(\"If I clone the policy\", func() {\n\t\t\tp := d.Clone()\n\t\t\tConvey(\"I should get the same policy\", func() {\n\t\t\t\tSo(p, ShouldNotBeNil)\n\t\t\t\tSo(p.triremeAction, ShouldEqual, AllowAll)\n\t\t\t\tSo(p.applicationACLs, ShouldResemble, 
IPRuleList{appACL})\n\t\t\t\tSo(p.networkACLs, ShouldResemble, IPRuleList{netACL})\n\t\t\t\tSo(p.transmitterRules, ShouldResemble, txtags)\n\t\t\t\tSo(p.receiverRules, ShouldResemble, rxtags)\n\t\t\t\tSo(p.identity, ShouldResemble, identity)\n\t\t\t\tSo(p.annotations, ShouldResemble, annotations)\n\t\t\t\tSo(p.ips, ShouldResemble, ips)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestAllLockedSetGet(t *testing.T) {\n\tConvey(\"Given a good policy\", t, func() {\n\t\tappACL := IPRule{\n\t\t\tPolicy: &FlowPolicy{\n\t\t\t\tAction:   Accept,\n\t\t\t\tPolicyID: \"1\",\n\t\t\t},\n\t\t\tAddresses: []string{\"10.0.0.0/8\"},\n\t\t\tProtocols: []string{\"tcp\"},\n\t\t\tPorts:     []string{\"80\"},\n\t\t}\n\n\t\tnetACL := IPRule{\n\t\t\tPolicy: &FlowPolicy{\n\t\t\t\tAction:   Accept,\n\t\t\t\tPolicyID: \"2\",\n\t\t\t},\n\t\t\tAddresses: []string{\"20.0.0.0/8\"},\n\t\t\tProtocols: []string{\"tcp\"},\n\t\t\tPorts:     []string{\"80\"},\n\t\t}\n\n\t\tclause := KeyValueOperator{\n\t\t\tKey:      \"app\",\n\t\t\tValue:    []string{\"web\"},\n\t\t\tOperator: Equal,\n\t\t}\n\n\t\ttxtags := TagSelectorList{\n\t\t\tTagSelector{\n\t\t\t\tClause: []KeyValueOperator{clause},\n\t\t\t\tPolicy: &FlowPolicy{Action: Accept, PolicyID: \"3\"},\n\t\t\t},\n\t\t}\n\t\trxtags := TagSelectorList{\n\t\t\tTagSelector{\n\t\t\t\tClause: []KeyValueOperator{clause},\n\t\t\t\tPolicy: &FlowPolicy{Action: Reject, PolicyID: \"4\"},\n\t\t\t},\n\t\t}\n\n\t\tidentity := NewTagStore()\n\t\tidentity.AppendKeyValue(\"image\", \"nginx\")\n\n\t\tannotations := NewTagStore()\n\t\tannotations.AppendKeyValue(\"image\", \"nginx\")\n\t\tannotations.AppendKeyValue(\"server\", \"local\")\n\n\t\tips := ExtendedMap{DefaultNamespace: \"172.0.0.1\"}\n\n\t\tp := 
NewPUPolicy(\n\t\t\t\"id1\",\n\t\t\t\"/abc\",\n\t\t\tAllowAll,\n\t\t\tIPRuleList{appACL},\n\t\t\tIPRuleList{netACL},\n\t\t\tnil,\n\t\t\ttxtags,\n\t\t\trxtags,\n\t\t\tidentity,\n\t\t\tannotations,\n\t\t\tnil,\n\t\t\tips,\n\t\t\t0,\n\t\t\t0,\n\t\t\tnil,\n\t\t\tnil,\n\t\t\t[]string{},\n\t\t\tEnvoyAuthorizerEnforcer,\n\t\t\tReject|Log,\n\t\t\tReject|Log,\n\t\t)\n\n\t\tConvey(\"I should be able to retrieve the management ID \", func() {\n\t\t\tid := p.ManagementID()\n\t\t\tSo(id, ShouldResemble, \"id1\")\n\t\t})\n\n\t\tConvey(\"I should be able to retrieve the management namespace \", func() {\n\t\t\tns := p.ManagementNamespace()\n\t\t\tSo(ns, ShouldResemble, \"/abc\")\n\t\t})\n\n\t\tConvey(\"I should be able to retrieve the Action\", func() {\n\t\t\tSo(p.TriremeAction(), ShouldEqual, AllowAll)\n\t\t})\n\n\t\tConvey(\"I should be able to set the trireme action\", func() {\n\t\t\tp.SetTriremeAction(Police)\n\t\t\tSo(p.triremeAction, ShouldEqual, Police)\n\t\t})\n\n\t\tConvey(\"I should be able to retrieve the APP acls \", func() {\n\t\t\tSo(p.ApplicationACLs(), ShouldResemble, IPRuleList{appACL})\n\t\t})\n\n\t\tConvey(\"I should be able to retrieve the NET acls\", func() {\n\t\t\tSo(p.NetworkACLs(), ShouldResemble, IPRuleList{netACL})\n\t\t})\n\n\t\tConvey(\"I should be able to retrieve the receiver rules\", func() {\n\t\t\tSo(p.ReceiverRules(), ShouldResemble, rxtags)\n\t\t})\n\n\t\tConvey(\"I should be able to retrieve the transmitter rules\", func() {\n\t\t\tSo(p.TransmitterRules(), ShouldResemble, txtags)\n\t\t})\n\n\t\tConvey(\"I should be able to retrieve the identity\", func() {\n\t\t\tSo(p.Identity(), ShouldResemble, identity)\n\t\t})\n\n\t\tConvey(\"I should be able to retrieve the annotations\", func() {\n\t\t\tSo(p.Annotations(), ShouldResemble, annotations)\n\t\t})\n\n\t\tConvey(\"I should be able to retrieve the IPAddresses\", func() {\n\t\t\tSo(p.IPAddresses(), ShouldResemble, ips)\n\t\t})\n\n\t\tConvey(\"I should be able to retrieve the EnforcerType\", 
func() {\n\t\t\tSo(p.EnforcerType(), ShouldEqual, EnvoyAuthorizerEnforcer)\n\t\t})\n\n\t\tConvey(\"If I add an identity key/value pair, it should succeed\", func() {\n\t\t\tp.AddIdentityTag(\"key\", \"value\")\n\t\t\tt, ok := p.Identity().Get(\"key\")\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(t, ShouldResemble, \"value\")\n\t\t})\n\n\t\tConvey(\"If I update the IPS, it should succeed\", func() {\n\t\t\tp.SetIPAddresses(ExtendedMap{DefaultNamespace: \"40.0.0.0/8\"})\n\t\t\tSo(p.IPAddresses(), ShouldResemble, ExtendedMap{DefaultNamespace: \"40.0.0.0/8\"})\n\t\t})\n\n\t\tnewclause := KeyValueOperator{\n\t\t\tKey:      \"app\",\n\t\t\tValue:    []string{\"added\"},\n\t\t\tOperator: Equal,\n\t\t}\n\n\t\tConvey(\"If I add a transmitter rule, it should succeed\", func() {\n\n\t\t\trule := TagSelector{\n\t\t\t\tClause: []KeyValueOperator{newclause},\n\t\t\t\tPolicy: &FlowPolicy{Action: Accept, PolicyID: \"3\"},\n\t\t\t}\n\n\t\t\tp.AddTransmitterRules(rule)\n\t\t\tSo(len(p.TransmitterRules()), ShouldEqual, 2)\n\t\t\tSo(p.TransmitterRules()[1], ShouldResemble, rule)\n\t\t})\n\n\t\tConvey(\"If I add a receiver rule, it should succeed\", func() {\n\t\t\trule := TagSelector{\n\t\t\t\tClause: []KeyValueOperator{newclause},\n\t\t\t\tPolicy: &FlowPolicy{Action: Reject, PolicyID: \"4\"},\n\t\t\t}\n\t\t\tp.AddReceiverRules(rule)\n\t\t\tSo(len(p.ReceiverRules()), ShouldEqual, 2)\n\t\t\tSo(p.ReceiverRules()[1], ShouldResemble, rule)\n\t\t})\n\t})\n\n}\n\nfunc TestPUInfo(t *testing.T) {\n\tConvey(\"Given I try to initiate a new container policy\", t, func() {\n\t\tpuInfor := NewPUInfo(\"123\", \"/abc\", common.ContainerPU)\n\t\tpolicy := NewPUPolicy(\"123\", \"/abc\", AllowAll, nil, nil, nil, nil, nil, nil, nil, nil, nil, 0, 0, nil, nil, []string{}, EnforcerMapping, Reject|Log, Reject|Log)\n\t\truntime := NewPURuntime(\"\", 0, \"\", nil, nil, common.ContainerPU, None, nil)\n\t\tConvey(\"Then I expect the struct to be populated\", func() {\n\t\t\tSo(puInfor.ContextID, ShouldEqual, 
\"123\")\n\t\t\tSo(puInfor.Policy, ShouldResemble, policy)\n\t\t\tSo(puInfor.Runtime, ShouldResemble, runtime)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "policy/policyerror.go",
"content": "package policy\n\nimport \"fmt\"\n\n// ErrorReason is the reason for an error\ntype ErrorReason string\n\nconst (\n\t// PUNotFound error reason\n\tPUNotFound ErrorReason = \"PUNotFound\"\n\n\t// PUNotUnique error reason\n\tPUNotUnique ErrorReason = \"PUNotUnique\"\n\n\t// PUCreateFailed error reason\n\tPUCreateFailed ErrorReason = \"PUCreateFailed\"\n\n\t// PUAlreadyActivated error reason\n\tPUAlreadyActivated ErrorReason = \"PUAlreadyActivated\"\n\n\t// PUPolicyPending error reason indicates that policy activation is pending.\n\tPUPolicyPending ErrorReason = \"PUPolicyPending\"\n\n\t// PUPolicyEnforcementFailed error reason indicates that enforcement failed.\n\tPUPolicyEnforcementFailed ErrorReason = \"PUPolicyEnforcementFailed\"\n)\n\nvar policyErrorDescription = map[ErrorReason]string{\n\tPUNotFound:         \"unable to find PU with ID\",\n\tPUNotUnique:        \"more than one PU with ID exists\",\n\tPUCreateFailed:     \"failed to create PU\",\n\tPUAlreadyActivated: \"PU has already been activated\",\n}\n\n// Error is a specific error type carrying the PU ID and the failure reason\ntype Error struct {\n\tpuID   string\n\treason ErrorReason\n\terr    error\n}\n\nfunc (e *Error) Error() string {\n\tdesc, ok := policyErrorDescription[e.reason]\n\tvar err string\n\tif e.err != nil {\n\t\terr = \": \" + e.err.Error()\n\t}\n\tif !ok {\n\t\treturn fmt.Sprintf(\"%s %s%s\", e.reason, e.puID, err)\n\t}\n\treturn fmt.Sprintf(\"%s %s: %s%s\", e.reason, e.puID, desc, err)\n}\n\n// ErrPUNotFound creates a new PU not found error\nfunc ErrPUNotFound(puID string, err error) error {\n\treturn &Error{\n\t\tpuID:   puID,\n\t\treason: PUNotFound,\n\t\terr:    err,\n\t}\n}\n\n// ErrPUNotUnique creates a new not unique error\nfunc ErrPUNotUnique(puID string, err error) error {\n\treturn &Error{\n\t\tpuID:   puID,\n\t\treason: PUNotUnique,\n\t\terr:    err,\n\t}\n}\n\n// ErrPUCreateFailed creates a new PU create failed error\nfunc ErrPUCreateFailed(puID string, err error) error {\n\treturn &Error{\n\t\tpuID:   
puID,\n\t\treason: PUCreateFailed,\n\t\terr:    err,\n\t}\n}\n\n// ErrPUAlreadyActivated creates a new PU already activated error\nfunc ErrPUAlreadyActivated(puID string, err error) error {\n\treturn &Error{\n\t\tpuID:   puID,\n\t\treason: PUAlreadyActivated,\n\t\terr:    err,\n\t}\n}\n\n// ErrPUPolicyPending creates a new PU policy pending error.\nfunc ErrPUPolicyPending(puID string, err error) error {\n\treturn &Error{\n\t\tpuID:   puID,\n\t\treason: PUPolicyPending,\n\t\terr:    err,\n\t}\n}\n\n// ErrPUPolicyEnforcementFailed creates a new PU policy enforcement failed error.\nfunc ErrPUPolicyEnforcementFailed(puID string, err error) error {\n\treturn &Error{\n\t\tpuID:   puID,\n\t\treason: PUPolicyEnforcementFailed,\n\t\terr:    err,\n\t}\n}\n\n// IsErrPUNotFound checks if this error is a PU not found error\nfunc IsErrPUNotFound(err error) bool {\n\tswitch t := err.(type) {\n\tcase *Error:\n\t\treturn t.reason == PUNotFound\n\tdefault:\n\t\treturn false\n\t}\n}\n\n// IsErrPUNotUnique checks if this error is a PU not unique error\nfunc IsErrPUNotUnique(err error) bool {\n\tswitch t := err.(type) {\n\tcase *Error:\n\t\treturn t.reason == PUNotUnique\n\tdefault:\n\t\treturn false\n\t}\n}\n\n// IsErrPUCreateFailed checks if this error is a PU create failed error\nfunc IsErrPUCreateFailed(err error) bool {\n\tswitch t := err.(type) {\n\tcase *Error:\n\t\treturn t.reason == PUCreateFailed\n\tdefault:\n\t\treturn false\n\t}\n}\n\n// IsErrPUAlreadyActivated checks if this error is a PU already activated error\nfunc IsErrPUAlreadyActivated(err error) bool {\n\tswitch t := err.(type) {\n\tcase *Error:\n\t\treturn t.reason == PUAlreadyActivated\n\tdefault:\n\t\treturn false\n\t}\n}\n\n// IsErrPUPolicyPending checks if this error is a PU policy pending error.\nfunc IsErrPUPolicyPending(err error) bool {\n\tswitch t := err.(type) {\n\tcase *Error:\n\t\treturn t.reason == PUPolicyPending\n\tdefault:\n\t\treturn false\n\t}\n}\n\n// IsErrPUEnforcementFailed checks if this error is a PU policy enforcement failed error.\nfunc IsErrPUEnforcementFailed(err error) bool {\n\tswitch t := err.(type) {\n\tcase *Error:\n\t\treturn t.reason == PUPolicyEnforcementFailed\n\tdefault:\n\t\treturn false\n\t}\n}\n"
  },
  {
    "path": "policy/policyerror_test.go",
    "content": "// +build !windows\n\npackage policy\n\nimport (\n\t\"fmt\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestPolicyErrors(t *testing.T) {\n\tConvey(\"A manually constructed error object\", t, func() {\n\t\terr := Error{\n\t\t\tpuID:   \"whatever\",\n\t\t\treason: ErrorReason(\"its-typed-for-a-reason\"),\n\t\t\terr:    nil,\n\t\t}\n\t\tConvey(\"should still print a proper error message\", func() {\n\t\t\texpected := \"its-typed-for-a-reason whatever\"\n\t\t\tSo(err.Error(), ShouldEqual, expected)\n\t\t})\n\t})\n\tConvey(\"Creating error objects using their initializers for\", t, func() {\n\t\tfailure := fmt.Errorf(\"failure\")\n\t\tConvey(\"ErrPUNotFound\", func() {\n\t\t\terr := ErrPUNotFound(\"default/pod\", failure)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tConvey(\"should print an expected error message\", func() {\n\t\t\t\texpected := fmt.Sprintf(\"%s %s: %s%s\", PUNotFound, \"default/pod\", policyErrorDescription[PUNotFound], \": \"+failure.Error())\n\t\t\t\tSo(err.Error(), ShouldEqual, expected)\n\t\t\t})\n\t\t\tConvey(\"should successfully test against its probing function\", func() {\n\t\t\t\tSo(IsErrPUNotFound(err), ShouldBeTrue)\n\t\t\t\tSo(IsErrPUNotFound(failure), ShouldBeFalse)\n\t\t\t})\n\t\t})\n\t\tConvey(\"ErrPUNotUnique\", func() {\n\t\t\terr := ErrPUNotUnique(\"default/pod\", failure)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tConvey(\"should print an expected error message\", func() {\n\t\t\t\texpected := fmt.Sprintf(\"%s %s: %s%s\", PUNotUnique, \"default/pod\", policyErrorDescription[PUNotUnique], \": \"+failure.Error())\n\t\t\t\tSo(err.Error(), ShouldEqual, expected)\n\t\t\t})\n\t\t\tConvey(\"should successfully test against its probing function\", func() {\n\t\t\t\tSo(IsErrPUNotUnique(err), ShouldBeTrue)\n\t\t\t\tSo(IsErrPUNotUnique(failure), ShouldBeFalse)\n\t\t\t})\n\t\t})\n\t\tConvey(\"ErrPUCreateFailed\", func() {\n\t\t\terr := ErrPUCreateFailed(\"default/pod\", failure)\n\t\t\tSo(err, 
ShouldNotBeNil)\n\t\t\tConvey(\"should print an expected error message\", func() {\n\t\t\t\texpected := fmt.Sprintf(\"%s %s: %s%s\", PUCreateFailed, \"default/pod\", policyErrorDescription[PUCreateFailed], \": \"+failure.Error())\n\t\t\t\tSo(err.Error(), ShouldEqual, expected)\n\t\t\t})\n\t\t\tConvey(\"should successfully test against its probing function\", func() {\n\t\t\t\tSo(IsErrPUCreateFailed(err), ShouldBeTrue)\n\t\t\t\tSo(IsErrPUCreateFailed(failure), ShouldBeFalse)\n\t\t\t})\n\t\t})\n\t\tConvey(\"ErrPUAlreadyActivated\", func() {\n\t\t\terr := ErrPUAlreadyActivated(\"default/pod\", failure)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t\tConvey(\"should print an expected error message\", func() {\n\t\t\t\texpected := fmt.Sprintf(\"%s %s: %s%s\", PUAlreadyActivated, \"default/pod\", policyErrorDescription[PUAlreadyActivated], \": \"+failure.Error())\n\t\t\t\tSo(err.Error(), ShouldEqual, expected)\n\t\t\t})\n\t\t\tConvey(\"should successfully test against its probing function\", func() {\n\t\t\t\tSo(IsErrPUAlreadyActivated(err), ShouldBeTrue)\n\t\t\t\tSo(IsErrPUAlreadyActivated(failure), ShouldBeFalse)\n\t\t\t})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "policy/puinfo.go",
"content": "package policy\n\nimport \"go.aporeto.io/enforcerd/trireme-lib/common\"\n\n// PUInfo captures all policy information related to a connection as well as runtime.\n// It makes passing data around simpler.\ntype PUInfo struct {\n\t// ContextID is the ID of the container that the policy applies to\n\tContextID string\n\t// Policy is an instantiation of the container policy\n\tPolicy *PUPolicy\n\t// Runtime captures all runtime data collected from the container\n\tRuntime *PURuntime\n}\n\n// NewPUInfo instantiates a new PUInfo with a default policy and runtime\nfunc NewPUInfo(contextID, namespace string, puType common.PUType) *PUInfo {\n\tpolicy := NewPUPolicy(contextID, namespace, AllowAll, nil, nil, nil, nil, nil, nil, nil, nil, nil, 0, 0, nil, nil, []string{}, EnforcerMapping, Reject|Log, Reject|Log)\n\truntime := NewPURuntime(\"\", 0, \"\", nil, nil, puType, None, nil)\n\treturn PUInfoFromPolicyAndRuntime(contextID, policy, runtime)\n}\n\n// PUInfoFromPolicyAndRuntime generates a PUInfo struct from an existing PURuntime and PUPolicy\nfunc PUInfoFromPolicyAndRuntime(contextID string, policyInfo *PUPolicy, runtimeInfo *PURuntime) *PUInfo {\n\treturn &PUInfo{\n\t\tContextID: contextID,\n\t\tPolicy:    policyInfo,\n\t\tRuntime:   runtimeInfo,\n\t}\n}\n"
  },
  {
    "path": "policy/runtime.go",
"content": "package policy\n\nimport (\n\t\"encoding/json\"\n\t\"sync\"\n\n\t\"github.com/docker/go-connections/nat\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\n// PURuntime holds all data related to the status of the container run time\ntype PURuntime struct {\n\t// puType is the type of the PU (container or process)\n\tpuType common.PUType\n\t// Pid holds the value of the first process of the container\n\tpid int\n\t// NsPath is the path to the networking namespace for this PURuntime if applicable.\n\tnsPath string\n\t// Name is the name of the container\n\tname string\n\t// IPAddress is the IP Address of the container\n\tips ExtendedMap\n\t// Tags is a map of the metadata of the container\n\ttags *TagStore\n\t// options\n\toptions *OptionsType\n\t// ServiceMeshType determines which serviceMesh is enabled on the pod\n\tServiceMeshType ServiceMesh\n\n\tsync.Mutex\n}\n\n// PURuntimeJSON is a JSON representation of PURuntime\ntype PURuntimeJSON struct {\n\t// PUType is the type of the PU\n\tPUType common.PUType\n\t// Pid holds the value of the first process of the container\n\tPid int\n\t// NSPath is the path to the networking namespace for this PURuntime if applicable.\n\tNSPath string\n\t// Name is the name of the container\n\tName string\n\t// IPAddress is the IP Address of the container\n\tIPAddresses ExtendedMap\n\t// Tags is a map of the metadata of the container\n\tTags []string\n\t// Options is a map of the options of the container\n\tOptions *OptionsType\n}\n\n// NewPURuntime generates a new PURuntime\nfunc NewPURuntime(\n\tname string, pid int, nsPath string, tags *TagStore,\n\tips ExtendedMap, puType common.PUType, serviceMeshType ServiceMesh, options *OptionsType) *PURuntime {\n\n\tif tags == nil {\n\t\ttags = NewTagStore()\n\t}\n\n\tif ips == nil {\n\t\tips = ExtendedMap{}\n\t}\n\n\tif options == nil {\n\t\toptions = &OptionsType{}\n\t}\n\n\treturn &PURuntime{\n\t\tpuType:          puType,\n\t\ttags:            tags,\n\t\tips:         
    ips,\n\t\toptions:         options,\n\t\tpid:             pid,\n\t\tnsPath:          nsPath,\n\t\tname:            name,\n\t\tServiceMeshType: serviceMeshType,\n\t}\n}\n\n// NewPURuntimeWithDefaults sets up PURuntime with defaults\nfunc NewPURuntimeWithDefaults() *PURuntime {\n\n\treturn NewPURuntime(\"\", 0, \"\", nil, nil, common.ContainerPU, None, nil)\n}\n\n// Clone returns a copy of the runtime\nfunc (r *PURuntime) Clone() *PURuntime {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\treturn NewPURuntime(r.name, r.pid, r.nsPath, r.tags.Copy(), r.ips.Copy(), r.puType, r.ServiceMeshType, r.options)\n}\n\n// MarshalJSON Marshals this struct.\nfunc (r *PURuntime) MarshalJSON() ([]byte, error) {\n\treturn json.Marshal(&PURuntimeJSON{\n\t\tPUType:      r.puType,\n\t\tPid:         r.pid,\n\t\tNSPath:      r.nsPath,\n\t\tName:        r.name,\n\t\tIPAddresses: r.ips,\n\t\tTags:        r.tags.GetSlice(),\n\t\tOptions:     r.options,\n\t})\n}\n\n// UnmarshalJSON Unmarshals this struct.\nfunc (r *PURuntime) UnmarshalJSON(param []byte) error {\n\ta := &PURuntimeJSON{}\n\tif err := json.Unmarshal(param, &a); err != nil {\n\t\treturn err\n\t}\n\tr.pid = a.Pid\n\tr.nsPath = a.NSPath\n\tr.name = a.Name\n\tr.ips = a.IPAddresses\n\tr.tags = NewTagStoreFromSlice(a.Tags)\n\tr.options = a.Options\n\tr.puType = a.PUType\n\treturn nil\n}\n\n// Pid returns the PID\nfunc (r *PURuntime) Pid() int {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\treturn r.pid\n}\n\n// SetPid sets the PID\nfunc (r *PURuntime) SetPid(pid int) {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tr.pid = pid\n}\n\n// NSPath returns the NSPath\nfunc (r *PURuntime) NSPath() string {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\treturn r.nsPath\n}\n\n// SetNSPath sets the NSPath\nfunc (r *PURuntime) SetNSPath(nsPath string) {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tr.nsPath = nsPath\n}\n\n// SetPUType sets the PU Type\nfunc (r *PURuntime) SetPUType(puType common.PUType) {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tr.puType = puType\n}\n\n// SetOptions sets the Options\nfunc 
(r *PURuntime) SetOptions(options OptionsType) {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tr.options = &options\n}\n\n// Name returns the name of the processing unit\nfunc (r *PURuntime) Name() string {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\treturn r.name\n}\n\n// PUType returns the PU type\nfunc (r *PURuntime) PUType() common.PUType {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\treturn r.puType\n}\n\n// IPAddresses returns all the IP addresses for the processing unit\nfunc (r *PURuntime) IPAddresses() ExtendedMap {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\treturn r.ips.Copy()\n}\n\n// SetIPAddresses sets up all the IP addresses for the processing unit\nfunc (r *PURuntime) SetIPAddresses(ipa ExtendedMap) {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tr.ips = ipa.Copy()\n}\n\n// Tag returns a specific tag for the processing unit\nfunc (r *PURuntime) Tag(key string) (string, bool) {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\ttag, ok := r.tags.Get(key)\n\treturn tag, ok\n}\n\n// Tags returns a copy of the tags for the processing unit\nfunc (r *PURuntime) Tags() *TagStore {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\treturn r.tags.Copy()\n}\n\n// SetTags sets the tags for the processing unit\nfunc (r *PURuntime) SetTags(t *TagStore) {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tr.tags = t.Copy()\n}\n\n// Options returns the options for the processing unit\nfunc (r *PURuntime) Options() OptionsType {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tr.ensureOptions()\n\n\treturn *r.options\n}\n\n// SetServices updates the services of the runtime.\nfunc (r *PURuntime) SetServices(services []common.Service) {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tr.ensureOptions()\n\n\tr.options.Services = services\n}\n\n// PortMap returns the mapping from host port->container port\nfunc (r *PURuntime) PortMap() map[nat.Port][]string {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tif r.options != nil {\n\t\treturn r.options.PortMap\n\t}\n\n\treturn nil\n}\n\nfunc (r *PURuntime) ensureOptions() {\n\tif r.options == nil {\n\t\tr.options = &OptionsType{}\n\t}\n}\n"
  },
  {
    "path": "policy/runtime_test.go",
"content": "// +build !windows\n\npackage policy\n\nimport (\n\t\"testing\"\n\n\t\"github.com/docker/go-connections/nat\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n)\n\nfunc TestNewPURunTime(t *testing.T) {\n\tConvey(\"When I create a new run time, it should be valid\", t, func() {\n\n\t\ttags := NewTagStore()\n\t\ttags.AppendKeyValue(\"image\", \"nginx\")\n\t\ttags.AppendKeyValue(\"server\", \"local\")\n\n\t\tips := ExtendedMap{DefaultNamespace: \"172.0.0.1\"}\n\n\t\truntime := NewPURuntime(\n\t\t\t\"container1\",\n\t\t\t123,\n\t\t\t\"\",\n\t\t\ttags,\n\t\t\tips,\n\t\t\tcommon.ContainerPU,\n\t\t\tNone,\n\t\t\tnil,\n\t\t)\n\n\t\tSo(runtime, ShouldNotBeNil)\n\t\tSo(runtime.puType, ShouldEqual, common.ContainerPU)\n\t\tSo(runtime.tags, ShouldResemble, tags)\n\t\tSo(runtime.ips, ShouldResemble, ips)\n\t\tSo(runtime.options, ShouldNotBeNil)\n\t\tSo(runtime.pid, ShouldEqual, 123)\n\t\tSo(runtime.name, ShouldResemble, \"container1\")\n\t})\n}\n\nfunc TestNewDefaultPURunTime(t *testing.T) {\n\tConvey(\"When I create a new run time with defaults, it should be valid\", t, func() {\n\t\truntime := NewPURuntimeWithDefaults()\n\n\t\tSo(runtime, ShouldNotBeNil)\n\t\tSo(runtime.puType, ShouldEqual, common.ContainerPU)\n\t\tSo(runtime.tags, ShouldResemble, NewTagStore())\n\t\tSo(runtime.ips, ShouldResemble, ExtendedMap{})\n\t\tSo(runtime.options, ShouldNotBeNil)\n\t\tSo(runtime.pid, ShouldEqual, 0)\n\t\tSo(runtime.name, ShouldResemble, \"\")\n\t})\n}\n\nfunc TestBasicFunctions(t *testing.T) {\n\tConvey(\"Given a valid runtime\", t, func() {\n\t\ttags := NewTagStore()\n\t\ttags.AppendKeyValue(\"image\", \"nginx\")\n\t\ttags.AppendKeyValue(\"server\", \"local\")\n\n\t\tips := ExtendedMap{DefaultNamespace: \"172.0.0.1\"}\n\n\t\tportMap := map[nat.Port][]string{nat.Port(\"80\"): {\"8001\", \"8002\"}}\n\n\t\truntime := 
NewPURuntime(\n\t\t\t\"container1\",\n\t\t\t123,\n\t\t\t\"\",\n\t\t\ttags,\n\t\t\tips,\n\t\t\tcommon.ContainerPU,\n\t\t\tNone,\n\t\t\tnil,\n\t\t)\n\n\t\tConvey(\"When I clone it, I should get the right runtime\", func() {\n\t\t\tcloned := runtime.Clone()\n\t\t\tSo(cloned, ShouldResemble, runtime)\n\t\t})\n\n\t\tConvey(\"I should retrieve the right Pid\", func() {\n\t\t\tSo(runtime.Pid(), ShouldEqual, 123)\n\t\t})\n\n\t\tConvey(\"I should be able to set the Pid\", func() {\n\t\t\truntime.SetPid(567)\n\t\t\tSo(runtime.Pid(), ShouldEqual, 567)\n\t\t})\n\n\t\tConvey(\"I should be able to update and get the PUType\", func() {\n\t\t\truntime.SetPUType(common.LinuxProcessPU)\n\t\t\tSo(runtime.PUType(), ShouldEqual, common.LinuxProcessPU)\n\t\t})\n\n\t\tConvey(\"I should be able to set and get the right options\", func() {\n\t\t\truntime.SetOptions(OptionsType{CgroupName: \"test\"})\n\t\t\tSo(runtime.Options(), ShouldResemble, OptionsType{CgroupName: \"test\"})\n\t\t})\n\n\t\tConvey(\"I should be able to set portmap in options and get the right portmap\", func() {\n\t\t\truntime.SetOptions(OptionsType{PortMap: portMap})\n\t\t\tSo(runtime.PortMap(), ShouldResemble, portMap)\n\t\t})\n\n\t\tConvey(\"I should get the right name\", func() {\n\t\t\tSo(runtime.Name(), ShouldEqual, \"container1\")\n\t\t})\n\n\t\tConvey(\"If I update the IP addresses, they should be updated\", func() {\n\t\t\truntime.SetIPAddresses(ExtendedMap{DefaultNamespace: \"10.1.1.1\"})\n\t\t\tSo(runtime.IPAddresses(), ShouldResemble, ExtendedMap{DefaultNamespace: \"10.1.1.1\"})\n\t\t})\n\n\t\tConvey(\"I should be able to get the tags\", func() {\n\t\t\tSo(runtime.Tags(), ShouldResemble, tags)\n\t\t\tvalue, ok := runtime.Tag(\"image\")\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(value, ShouldEqual, \"nginx\")\n\t\t})\n\n\t\tConvey(\"I should be able to set the tags\", func() {\n\t\t\tmodify := NewTagStoreFromSlice([]string{\"$set=new\"})\n\t\t\truntime.SetTags(modify)\n\t\t\tSo(runtime.Tags(), ShouldResemble, 
modify)\n\t\t\tvalue, ok := runtime.Tag(\"$set\")\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(value, ShouldEqual, \"new\")\n\t\t\t_, ok = runtime.Tag(\"image\")\n\t\t\tSo(ok, ShouldBeFalse)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "policy/tagstore.go",
"content": "package policy\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"strings\"\n)\n\n// TagStore stores the tags - it allows duplicate key values\ntype TagStore struct {\n\t// Could have used a map of maps, but I want to preserve the insert order of the key=value.\n\ttags map[string][]string\n}\n\n// UnmarshalJSON custom unmarshal bytes to tagstore\nfunc (t *TagStore) UnmarshalJSON(b []byte) error {\n\tvar s []string\n\tif err := json.Unmarshal(b, &s); err != nil {\n\t\treturn err\n\t}\n\t// create a new map because only the unmarshalled data should be in the store\n\tt.tags = map[string][]string{}\n\tt.MergeSlice(s)\n\treturn nil\n}\n\n// MarshalJSON custom marshal tagstore to bytes\nfunc (t *TagStore) MarshalJSON() ([]byte, error) {\n\treturn json.Marshal(t.GetSlice())\n}\n\n// NewTagStore creates a new TagStore\nfunc NewTagStore() *TagStore {\n\treturn &TagStore{map[string][]string{}}\n}\n\n// NewTagStoreFromSlice creates a new tag store from a slice.\nfunc NewTagStoreFromSlice(tags []string) *TagStore {\n\ttagStore := NewTagStore()\n\ttagStore.MergeSlice(tags)\n\treturn tagStore\n}\n\n// NewTagStoreFromMap creates a tag store from an input map\nfunc NewTagStoreFromMap(tags map[string]string) *TagStore {\n\ttagStore := NewTagStore()\n\ttagStore.MergeMap(tags)\n\treturn tagStore\n}\n\n// GetSlice returns the tagstore as a slice\nfunc (t *TagStore) GetSlice() []string {\n\tslice := []string{}\n\tfor key, values := range t.tags {\n\t\tif len(values) == 0 {\n\t\t\tslice = append(slice, key)\n\t\t} else {\n\t\t\tfor _, value := range values {\n\t\t\t\tslice = append(slice, fmt.Sprintf(\"%s=%s\", key, value))\n\t\t\t}\n\t\t}\n\t}\n\treturn slice\n}\n\n// Copy copies a TagStore\nfunc (t *TagStore) Copy() *TagStore {\n\ttagStore := NewTagStore()\n\ttagStore.MergeSlice(t.GetSlice())\n\treturn tagStore\n}\n\n// Get does a lookup in the list of tags\nfunc (t *TagStore) Get(key string) (string, bool) {\n\t// This function doesn't handle duplicate keys, so we 
grab the first one that was inserted.\n\t// This is how the previous code would have worked when it used a slice\n\tif _, ok := t.tags[key]; ok {\n\t\tvalues := t.tags[key]\n\t\tif len(values) != 0 {\n\t\t\treturn values[0], true\n\t\t}\n\t}\n\treturn \"\", false\n}\n\n// Merge merges tags from m into native tag store.\nfunc (t *TagStore) Merge(m *TagStore) {\n\tfor key, values := range m.tags {\n\t\tif len(values) == 0 {\n\t\t\tt.AppendKeyValue(key, \"\")\n\t\t} else {\n\t\t\tfor _, value := range values {\n\t\t\t\tt.AppendKeyValue(key, value)\n\t\t\t}\n\t\t}\n\t}\n}\n\n// MergeSlice merges slice of tags into the tag store.\nfunc (t *TagStore) MergeSlice(tags []string) {\n\tfor _, tag := range tags {\n\t\tt.Add(tag)\n\t}\n}\n\n// MergeMap merges map of tags into the tag store.\nfunc (t *TagStore) MergeMap(tags map[string]string) {\n\tfor key, value := range tags {\n\t\tt.AppendKeyValue(key, value)\n\t}\n}\n\n// Add appends a tag to the tag store\nfunc (t *TagStore) Add(tag string) {\n\tparts := strings.Split(tag, \"=\")\n\tswitch len(parts) {\n\tcase 1:\n\t\tt.AppendKeyValue(tag, \"\")\n\tcase 2:\n\t\tt.AppendKeyValue(parts[0], parts[1])\n\t}\n}\n\n// AppendKeyValue appends a key and value to the tag store\nfunc (t *TagStore) AppendKeyValue(key, value string) {\n\tif _, ok := t.tags[key]; !ok {\n\t\tt.tags[key] = []string{}\n\t}\n\t// Don't add an empty string to the slice\n\tif len(value) == 0 {\n\t\treturn\n\t}\n\t// Only add if it doesn't exist\n\tfor _, v := range t.tags[key] {\n\t\tif v == value {\n\t\t\treturn\n\t\t}\n\t}\n\tt.tags[key] = append(t.tags[key], value)\n}\n\n// String provides a string representation of tag store.\nfunc (t *TagStore) String() string {\n\tbuilder := strings.Builder{}\n\tfor key, values := range t.tags {\n\t\tif builder.Len() != 0 {\n\t\t\tbuilder.WriteString(\" \")\n\t\t}\n\t\tif len(values) == 0 {\n\t\t\tbuilder.WriteString(key)\n\t\t} else {\n\t\t\tfor index, value := range values {\n\t\t\t\tif index != 0 
{\n\t\t\t\t\tbuilder.WriteString(\" \")\n\t\t\t\t}\n\t\t\t\tbuilder.WriteString(key)\n\t\t\t\tbuilder.WriteString(\"=\")\n\t\t\t\tbuilder.WriteString(value)\n\t\t\t}\n\t\t}\n\t}\n\treturn builder.String()\n}\n\n// IsEmpty if no key value pairs exist.\nfunc (t *TagStore) IsEmpty() bool {\n\treturn len(t.tags) == 0\n}\n\n// GetKeys returns the unique keys for this tag store\nfunc (t *TagStore) GetKeys() []string {\n\tkeys := []string{}\n\tfor k := range t.tags {\n\t\tkeys = append(keys, k)\n\t}\n\treturn keys\n}\n\n// RemoveTagsByKeys removes all tags by key\nfunc (t *TagStore) RemoveTagsByKeys(keys []string) {\n\tfor _, k := range keys {\n\t\tdelete(t.tags, k)\n\t}\n}\n"
  },
  {
    "path": "policy/tagstore_test.go",
    "content": "// +build !windows\n\npackage policy\n\nimport (\n\t\"encoding/json\"\n\t\"strings\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestNewTagStore(t *testing.T) {\n\tConvey(\"When I create a new Tagstore\", t, func() {\n\t\tt := NewTagStore()\n\t\tConvey(\"It should not be nil\", func() {\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tSo(t.IsEmpty(), ShouldEqual, true)\n\t\t})\n\t})\n}\n\nfunc TestNewTagStoreFromMap(t *testing.T) {\n\tConvey(\"When I create a new tagstore from a map\", t, func() {\n\t\tt := NewTagStoreFromMap(map[string]string{\n\t\t\t\"app\":   \"web\",\n\t\t\t\"image\": \"nginx\",\n\t\t})\n\n\t\tConvey(\"I should have the right store\", func() {\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\ttags := t.GetSlice()\n\n\t\t\tSo(len(tags), ShouldEqual, 2)\n\t\t\tSo(tags, ShouldContain, \"app=web\")\n\t\t\tSo(tags, ShouldContain, \"image=nginx\")\n\t\t})\n\t})\n}\n\nfunc TestMerge(t *testing.T) {\n\tConvey(\"When I create a new tagstore from a map\", t, func() {\n\t\tt := NewTagStoreFromMap(map[string]string{\n\t\t\t\"app\":   \"web\",\n\t\t\t\"image\": \"nginx\",\n\t\t})\n\n\t\tConvey(\"When I merge another store\", func() {\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tm := NewTagStoreFromMap(map[string]string{\n\t\t\t\t\"location\": \"somewhere\",\n\t\t\t})\n\t\t\tSo(m, ShouldNotBeNil)\n\t\t\tt.Merge(m)\n\n\t\t\ttags := t.GetSlice()\n\t\t\tSo(len(tags), ShouldEqual, 3)\n\t\t\tSo(tags, ShouldContain, \"app=web\")\n\t\t\tSo(tags, ShouldContain, \"image=nginx\")\n\t\t\tSo(tags, ShouldContain, \"location=somewhere\")\n\t\t})\n\t})\n}\n\nfunc TestString(t *testing.T) {\n\tConvey(\"When I create a new tagstore the String() should match\", t, func() {\n\t\ttags := []string{\"app=web\", \"app=web1\", \"id\", \"image=nginx\"}\n\t\tt := NewTagStoreFromSlice(tags)\n\t\tSo(t.IsEmpty(), ShouldEqual, false)\n\t\tnewTags := strings.Split(t.String(), \" \")\n\t\tSo(len(tags), ShouldEqual, len(newTags))\n\t\tfor _, tag := range newTags 
{\n\t\t\tSo(tags, ShouldContain, tag)\n\t\t}\n\t})\n}\n\nfunc TestMergeCollision(t *testing.T) {\n\tConvey(\"When I create a new tagstore from a map\", t, func() {\n\t\tt := NewTagStoreFromMap(map[string]string{\n\t\t\t\"app\":   \"web\",\n\t\t\t\"image\": \"nginx\",\n\t\t})\n\n\t\tConvey(\"When I merge another store with collisions\", func() {\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tm := NewTagStoreFromMap(map[string]string{\n\t\t\t\t\"app\": \"web\",\n\t\t\t})\n\t\t\tSo(m, ShouldNotBeNil)\n\t\t\tt.Merge(m)\n\n\t\t\ttags := t.GetSlice()\n\t\t\tSo(len(tags), ShouldEqual, 2)\n\t\t\tSo(tags, ShouldContain, \"app=web\")\n\t\t\tSo(tags, ShouldContain, \"image=nginx\")\n\t\t})\n\n\t\tConvey(\"When I merge another store with duplicate keys\", func() {\n\t\t\tSo(t, ShouldNotBeNil)\n\t\t\tm := NewTagStoreFromSlice([]string{\n\t\t\t\t\"fred\",\n\t\t\t\t\"app\",\n\t\t\t\t\"app=web2\",\n\t\t\t})\n\t\t\tSo(m, ShouldNotBeNil)\n\t\t\tt.Merge(m)\n\n\t\t\ttags := t.GetSlice()\n\t\t\tSo(len(tags), ShouldEqual, 4)\n\t\t\tSo(tags, ShouldContain, \"fred\")\n\t\t\tSo(tags, ShouldContain, \"app=web\")\n\t\t\tSo(tags, ShouldContain, \"app=web2\")\n\t\t\tSo(tags, ShouldContain, \"image=nginx\")\n\t\t})\n\t})\n}\n\nfunc TestAllSettersGetters(t *testing.T) {\n\tConvey(\"When I create a new tagstore from a map\", t, func() {\n\t\tts := NewTagStoreFromMap(map[string]string{\n\t\t\t\"app\":   \"web\",\n\t\t\t\"image\": \"nginx\",\n\t\t})\n\n\t\tConvey(\"If I copy the tag store, it should be equal\", func() {\n\t\t\tnewstore := ts.Copy()\n\t\t\tSo(newstore, ShouldNotBeNil)\n\t\t\tSo(newstore, ShouldResemble, ts)\n\t\t})\n\n\t\tConvey(\"When I get a valid key, it should return the value\", func() {\n\t\t\tvalue, ok := ts.Get(\"app\")\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(value, ShouldResemble, \"web\")\n\t\t})\n\n\t\tConvey(\"When I get a non valid key, it should return false\", func() {\n\t\t\tvalue, ok := ts.Get(\"randomkey\")\n\t\t\tSo(ok, ShouldBeFalse)\n\t\t\tSo(value, ShouldEqual, 
\"\")\n\t\t})\n\n\t\tConvey(\"If I append a key/value pair, it should be in the store\", func() {\n\t\t\tts.AppendKeyValue(\"NewKey\", \"NewValue\")\n\t\t\tvalue, ok := ts.Get(\"NewKey\")\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t\tSo(value, ShouldEqual, \"NewValue\")\n\t\t})\n\n\t\tConvey(\"If the store is corrupted\", func() {\n\t\t\ttags := ts.GetSlice()\n\t\t\ttags = append(tags, \"badtag\")\n\t\t\tts = NewTagStoreFromSlice(tags)\n\t\t\tvalue, ok := ts.Get(\"randomkey\")\n\t\t\tSo(ok, ShouldBeFalse)\n\t\t\tSo(value, ShouldEqual, \"\")\n\t\t})\n\t})\n}\n\nfunc TestJsonMarshal(t *testing.T) {\n\n\ttags := []string{\n\t\t\"app=web\",\n\t\t\"image=nginx\",\n\t\t\"fred\",\n\t}\n\n\tConvey(\"When I create a new tagstore from a slice\", t, func() {\n\t\tts := NewTagStoreFromSlice(tags)\n\n\t\tbytes, err := json.Marshal(ts)\n\t\tSo(err, ShouldBeNil)\n\t\tSo(bytes, ShouldNotBeNil)\n\n\t\tnewTS := &TagStore{}\n\t\terr = json.Unmarshal(bytes, newTS)\n\t\tSo(err, ShouldBeNil)\n\n\t\ttags := newTS.GetSlice()\n\t\tSo(len(tags), ShouldEqual, 3)\n\t\tSo(tags, ShouldContain, \"fred\")\n\t\tSo(tags, ShouldContain, \"app=web\")\n\t\tSo(tags, ShouldContain, \"image=nginx\")\n\n\t\t// this test will make sure the tagstore is re-initialized\n\t\tnewTS2 := NewTagStoreFromSlice([]string{\"dummy\"})\n\t\terr = json.Unmarshal(bytes, newTS2)\n\t\tSo(err, ShouldBeNil)\n\t\ttags = newTS2.GetSlice()\n\t\tSo(len(tags), ShouldEqual, 3)\n\t\tSo(tags, ShouldContain, \"fred\")\n\t\tSo(tags, ShouldContain, \"app=web\")\n\t\tSo(tags, ShouldContain, \"image=nginx\")\n\t})\n}\n\nfunc TestGetKeys(t *testing.T) {\n\tConvey(\"When I create a new tagstore from a map\", t, func() {\n\t\tt := NewTagStoreFromMap(map[string]string{\n\t\t\t\"app\":   \"web\",\n\t\t\t\"image\": \"nginx\",\n\t\t})\n\n\t\tkeys := t.GetKeys()\n\t\tSo(len(keys), ShouldEqual, 2)\n\t\tSo(keys, ShouldContain, \"app\")\n\t\tSo(keys, ShouldContain, \"image\")\n\t})\n}\n\nfunc TestRemoveKeys(t *testing.T) {\n\tConvey(\"When I create a new 
tagstore from a map\", t, func() {\n\t\tt := NewTagStoreFromSlice([]string{\n\t\t\t\"app=web\",\n\t\t\t\"app1=web\",\n\t\t\t\"image=nginx\",\n\t\t\t\"image=nginx1\",\n\t\t})\n\n\t\tt.RemoveTagsByKeys([]string{\"app\", \"image\", \"fred\"})\n\n\t\tkeys := t.GetKeys()\n\t\tSo(len(keys), ShouldEqual, 1)\n\t\tSo(keys, ShouldContain, \"app1\")\n\t})\n}\n"
  },
  {
    "path": "policy/types.go",
"content": "package policy\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"hash/fnv\"\n\t\"net\"\n\t\"strings\"\n\n\t\"github.com/docker/go-connections/nat\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n\t\"go.aporeto.io/gaia\"\n\t\"go.uber.org/zap\"\n)\n\n// Aporeto tag key and value constants\nconst (\n\tTagKeyController = \"$controller\"\n\tTagKeyID         = \"$id\"\n\tTagKeyIdentity   = \"$identity\"\n\n\tTagValueProcessingUnit = \"processingunit\"\n)\n\nconst (\n\t// DefaultNamespace is the default namespace for applying policy\n\tDefaultNamespace = \"bridge\"\n)\n\n// constants for various actions\nconst (\n\tactionReject      = \"reject\"\n\tactionAccept      = \"accept\"\n\tactionPassthrough = \"passthrough\"\n\tactionEncrypt     = \"encrypt\"\n\tactionLog         = \"log\"\n\n\toactionContinue = \"continue\"\n\toactionApply    = \"apply\"\n\n\tactionNone    = \"none\"\n\tactionUnknown = \"unknown\"\n)\n\n// Operator defines the operation between your key and value.\ntype Operator string\n\nconst (\n\t// Equal is the equal operator\n\tEqual = \"=\"\n\t// NotEqual is the not equal operator\n\tNotEqual = \"=!\"\n\t// KeyExists is the key=* operator\n\tKeyExists = \"*\"\n\t// KeyNotExists means that the key doesn't exist in the incoming tags\n\tKeyNotExists = \"!*\"\n)\n\n// ActionType is the action that can be applied to a flow.\ntype ActionType byte\n\n// Accepted returns if the action mask contains the Accepted mask.\nfunc (f ActionType) Accepted() bool {\n\treturn f&Accept > 0\n}\n\n// Rejected returns if the action mask contains the Rejected mask.\nfunc (f ActionType) Rejected() bool {\n\treturn f&Reject > 0\n}\n\n// Encrypted returns if the action mask contains the Encrypted mask.\nfunc (f ActionType) Encrypted() bool {\n\treturn f&Encrypt > 0\n}\n\n// Logged returns if the action mask contains the Logged mask.\nfunc (f ActionType) Logged() bool {\n\treturn f&Log > 0\n}\n\n// Observed 
returns if the action mask contains the Observed mask.\nfunc (f ActionType) Observed() bool {\n\treturn f&Observe > 0\n}\n\n// ActionString returns whether the action is accepted or rejected, as a long string.\nfunc (f ActionType) ActionString() string {\n\tif f.Accepted() && !f.Rejected() {\n\t\treturn actionAccept\n\t}\n\n\tif !f.Accepted() && f.Rejected() {\n\t\treturn actionReject\n\t}\n\n\treturn actionPassthrough\n}\n\nfunc (f ActionType) String() string {\n\tswitch f {\n\tcase Accept:\n\t\treturn actionAccept\n\tcase Reject:\n\t\treturn actionReject\n\tcase Encrypt:\n\t\treturn actionEncrypt\n\tcase Log:\n\t\treturn actionLog\n\t}\n\n\treturn actionUnknown\n}\n\nconst (\n\t// Accept is the accept action\n\tAccept ActionType = 0x1\n\t// Reject is the reject action\n\tReject ActionType = 0x2\n\t// Encrypt instructs data to be encrypted\n\tEncrypt ActionType = 0x4\n\t// Log instructs the datapath to log the IP addresses\n\tLog ActionType = 0x8\n\t// Observe instructs the datapath to observe policy results\n\tObserve ActionType = 0x10\n)\n\n// ObserveActionType is the action that can be applied to a flow for an observation rule.\ntype ObserveActionType byte\n\n// Observed returns true if any observed action was found.\nfunc (f ObserveActionType) Observed() bool {\n\treturn f != ObserveNone\n}\n\n// ObserveContinue returns if the action of observation rule is continue.\nfunc (f ObserveActionType) ObserveContinue() bool {\n\treturn f&ObserveContinue > 0\n}\n\n// ObserveApply returns if the action of observation rule is apply.\nfunc (f ObserveActionType) ObserveApply() bool {\n\treturn f&ObserveApply > 0\n}\n\nfunc (f ObserveActionType) String() string {\n\tswitch f {\n\tcase ObserveNone:\n\t\treturn actionNone\n\tcase ObserveContinue:\n\t\treturn oactionContinue\n\tcase ObserveApply:\n\t\treturn oactionApply\n\t}\n\n\treturn actionUnknown\n}\n\n// Observe actions are used in conjunction with action.\nconst (\n\t// ObserveNone specifies if any observation was made or 
not.\n\tObserveNone ObserveActionType = 0x0\n\t// ObserveContinue is used to not take any action on the packet; the decision is deferred to\n\t// an actual rule with accept or deny action.\n\tObserveContinue ObserveActionType = 0x1\n\t// ObserveApply is used to apply action to packets hitting this rule.\n\tObserveApply ObserveActionType = 0x2\n)\n\n// FlowPolicy captures the policy for a particular flow\ntype FlowPolicy struct {\n\tObserveAction   ObserveActionType\n\tAction          ActionType\n\tServiceID       string\n\tPolicyID        string\n\tRuleName        string\n\tLabels          []string\n\tServicePriority uint32 // A hash of the ServiceID\n\tPriority        uint32 // Priority based on the ExternalNetwork entries\n}\n\n// Clone creates a copy of the FlowPolicy\nfunc (f *FlowPolicy) Clone() *FlowPolicy {\n\tclone := &FlowPolicy{\n\t\tObserveAction:   f.ObserveAction,\n\t\tAction:          f.Action,\n\t\tServiceID:       f.ServiceID,\n\t\tPolicyID:        f.PolicyID,\n\t\tRuleName:        f.RuleName,\n\t\tLabels:          f.Labels,\n\t\tServicePriority: f.ServicePriority,\n\t\tPriority:        f.Priority,\n\t}\n\treturn clone\n}\n\n// LogPrefix is the prefix used in nf-log action. 
It must be less than 64 characters.\nfunc (f *FlowPolicy) LogPrefix(contextID string) string {\n\n\thash, err := Fnv32Hash(contextID)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to generate log prefix hash\", zap.Error(err))\n\t}\n\n\truleExtNetName, _ := f.GetShortAndLongLogPrefix()\n\treturn hash + \":\" + ruleExtNetName + \":\" + f.EncodedActionString()\n}\n\n// LogPrefixAction is the prefix used in nf-log action with the given action.\n// NOTE: If 0 or empty action is passed, the default is reject (6).\nfunc (f *FlowPolicy) LogPrefixAction(contextID string, action string) string {\n\n\thash, err := Fnv32Hash(contextID)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to generate log prefix hash\", zap.Error(err))\n\t}\n\n\tif len(action) == 0 || action == \"0\" {\n\t\taction = \"6\"\n\t}\n\n\truleExtNetName, _ := f.GetShortAndLongLogPrefix()\n\treturn hash + \":\" + ruleExtNetName + \":\" + action\n}\n\n// DefaultLogPrefix returns the prefix used in nf-log action for default rule.\nfunc DefaultLogPrefix(contextID string, action ActionType) string {\n\n\thash, err := Fnv32Hash(contextID)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to generate log prefix hash\", zap.Error(err))\n\t}\n\n\tif action.Accepted() {\n\t\treturn hash + \":default:default:3\"\n\t}\n\n\treturn hash + \":default:default:6\"\n}\n\n// DefaultDropPacketLogPrefix generates the nflog prefix for packets logged by the catch all default rule\nfunc DefaultDropPacketLogPrefix(contextID string) string {\n\n\thash, err := Fnv32Hash(contextID)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to generate log prefix hash\", zap.Error(err))\n\t}\n\n\treturn hash + \":default:default:10\"\n}\n\n// DefaultAction generates the default action of the rule\nfunc DefaultAction(action ActionType) string {\n\tif action.Accepted() {\n\t\treturn \"ACCEPT\"\n\t}\n\treturn \"DROP\"\n}\n\n// EncodedActionString is used to encode observed action as well as action\nfunc (f *FlowPolicy) EncodedActionString() string {\n\n\tvar e 
string\n\n\tif f.Action.Accepted() && !f.Action.Rejected() {\n\t\tif f.ObserveAction.ObserveContinue() {\n\t\t\te = \"1\"\n\t\t} else if f.ObserveAction.ObserveApply() {\n\t\t\te = \"2\"\n\t\t} else {\n\t\t\te = \"3\"\n\t\t}\n\t} else if !f.Action.Accepted() && f.Action.Rejected() {\n\t\tif f.ObserveAction.ObserveContinue() {\n\t\t\te = \"4\"\n\t\t} else if f.ObserveAction.ObserveApply() {\n\t\t\te = \"5\"\n\t\t} else {\n\t\t\te = \"6\"\n\t\t}\n\t} else {\n\t\tif f.ObserveAction.ObserveContinue() {\n\t\t\te = \"7\"\n\t\t} else if f.ObserveAction.ObserveApply() {\n\t\t\te = \"8\"\n\t\t} else {\n\t\t\te = \"9\"\n\t\t}\n\t}\n\treturn e\n}\n\n// GetShortAndLongLogPrefix returns the short and long log prefix\nfunc (f *FlowPolicy) GetShortAndLongLogPrefix() (string, string) {\n\n\tgetFirstXChars := func(str string, numChars int) string {\n\t\tif len(str) <= numChars {\n\t\t\treturn str\n\t\t}\n\t\treturn str[0:numChars]\n\t}\n\n\tgetLastXChars := func(str string, numChars int) string {\n\t\tstrLen := len(str)\n\t\tif strLen <= numChars {\n\t\t\treturn str\n\t\t}\n\t\tindex := strLen - numChars\n\t\treturn str[index:]\n\t}\n\n\t// If we don't have a rulename, then do it the old way\n\tif len(f.RuleName) == 0 {\n\t\tprefix := f.PolicyID + \":\" + f.ServiceID\n\t\treturn prefix, prefix\n\t}\n\n\t// The shortPrefix will become a key in a map we use to look up the long prefix.\n\t// I want to put as much of the info in the short logging prefix as possible to help\n\t// with debugging, but it needs to be unique\n\thash, err := Fnv32Hash(f.PolicyID, f.ServiceID, f.RuleName)\n\tif err != nil {\n\t\tzap.L().Warn(\"unable to generate log prefix hash\", zap.Error(err))\n\t}\n\n\tlongPrefix := f.PolicyID + \":\" + f.ServiceID + \":\" + f.RuleName\n\n\t// We have 64 characters max for the logging prefix.\n\t// The ContextHash and Action are appended elsewhere\n\t// ContextHash(10):PolicyID(10):ServiceID(10):RuleName(10)_hash(10):Action(2) = a max of 57 chars\n\n\tvar builder 
strings.Builder\n\tbuilder.WriteString(getLastXChars(f.PolicyID, 10))\n\tbuilder.WriteString(\":\")\n\tbuilder.WriteString(getLastXChars(f.ServiceID, 10))\n\tbuilder.WriteString(\":\")\n\tbuilder.WriteString(getFirstXChars(f.RuleName, 10))\n\tbuilder.WriteString(\"_\")\n\tbuilder.WriteString(hash)\n\treturn builder.String(), longPrefix\n}\n\n// EncodedStringToAction returns action and observed action from encoded string.\nfunc EncodedStringToAction(e string) (ActionType, ObserveActionType, error) {\n\n\tswitch e {\n\tcase \"1\":\n\t\treturn Observe | Accept, ObserveContinue, nil\n\tcase \"2\":\n\t\treturn Observe | Accept, ObserveApply, nil\n\tcase \"3\":\n\t\treturn Accept, ObserveNone, nil\n\tcase \"4\":\n\t\treturn Observe | Reject, ObserveContinue, nil\n\tcase \"5\":\n\t\treturn Observe | Reject, ObserveApply, nil\n\tcase \"6\":\n\t\treturn Reject, ObserveNone, nil\n\tcase \"7\":\n\t\treturn Observe, ObserveContinue, nil\n\tcase \"8\":\n\t\treturn Observe, ObserveApply, nil\n\tcase \"9\":\n\t\treturn 0, ObserveNone, nil\n\t}\n\n\treturn 0, 0, errors.New(\"Invalid encoding\")\n}\n\n// IPRule holds IP rules to external services\ntype IPRule struct {\n\tAddresses  []string\n\tPorts      []string\n\tProtocols  []string\n\tExtensions []string\n\tPolicy     *FlowPolicy\n}\n\n// IPRuleList is a list of IP rules\ntype IPRuleList []IPRule\n\n// PortProtocolPolicy holds the associated ports, protocols and policy\ntype PortProtocolPolicy struct {\n\tPorts     []string\n\tProtocols []string\n\tPolicy    *FlowPolicy\n}\n\n// DNSRuleList is a map from fqdns to a list of policies.\ntype DNSRuleList map[string][]PortProtocolPolicy\n\n// Copy creates a clone of DNS rule list\nfunc (l DNSRuleList) Copy() DNSRuleList {\n\tdnsRuleList := DNSRuleList{}\n\n\t// nolint:gosimple //S1001: should use copy() instead of a loop (gosimple) // false positive\n\tfor k, v := range l {\n\t\tdnsRuleList[k] = v\n\t}\n\n\treturn dnsRuleList\n}\n\n// Copy creates a clone of the IP rule list\nfunc (l 
IPRuleList) Copy() IPRuleList {\n\tlist := make(IPRuleList, len(l))\n\n\t// nolint:gosimple //S1001: should use copy() instead of a loop (gosimple) // false positive\n\tfor i, v := range l {\n\t\tlist[i] = v\n\t}\n\n\treturn list\n}\n\n// KeyValueOperator describes an individual matching rule\ntype KeyValueOperator struct {\n\tKey       string\n\tValue     []string\n\tOperator  Operator\n\tID        string\n\tPortRange *portspec.PortSpec\n}\n\n// TagSelector describes a tag selector key Operator value\ntype TagSelector struct {\n\tClause []KeyValueOperator\n\tPolicy *FlowPolicy\n}\n\n// TagSelectorList defines a list of TagSelectors\ntype TagSelectorList []TagSelector\n\n// Copy returns a copy of the TagSelectorList\nfunc (t TagSelectorList) Copy() TagSelectorList {\n\tlist := make(TagSelectorList, len(t))\n\n\t// nolint:gosimple //S1001: should use copy() instead of a loop (gosimple) // false positive\n\tfor i, v := range t {\n\t\tlist[i] = v\n\t}\n\n\treturn list\n}\n\n// ExtendedMap is a common map with additional functions\ntype ExtendedMap map[string]string\n\n// Copy copies an ExtendedMap\nfunc (s ExtendedMap) Copy() ExtendedMap {\n\t// nolint:gosimple //S1001: should use copy() instead of a loop (gosimple) // false positive\n\tc := ExtendedMap{}\n\tfor k, v := range s {\n\t\tc[k] = v\n\t}\n\treturn c\n}\n\n// Get does a lookup in the map\nfunc (s ExtendedMap) Get(key string) (string, bool) {\n\tvalue, ok := s[key]\n\treturn value, ok\n}\n\n// OptionsType is a set of options that can be passed with a policy request\ntype OptionsType struct {\n\t// CgroupName is the name of the cgroup\n\tCgroupName string\n\n\t// CgroupMark is the tag of the cgroup\n\tCgroupMark string\n\n\t// UserID is the user ID if it exists\n\tUserID string\n\n\t// AutoPort option is set if auto port is enabled\n\tAutoPort bool\n\n\t// Services is the list of services of interest\n\tServices []common.Service\n\n\t// PolicyExtensions is policy resolution 
extensions\n\tPolicyExtensions interface{}\n\n\t// PortMap maps container port -> host ports.\n\tPortMap map[nat.Port][]string\n\n\t// ConvertedDockerPU is set when a docker PU is converted to LinuxProcess\n\t// in order to implement host network containers.\n\tConvertedDockerPU bool\n}\n\n// RuntimeError is an error detected by the TriremeController that has to be\n// returned at a later time to the policy engine to take action.\ntype RuntimeError struct {\n\tContextID string\n\tError     error\n}\n\n// DebugConfigInput holds information needed to start a debug collect.\ntype DebugConfigInput struct {\n\tDebugType   gaia.EnforcerRefreshDebugValue\n\tNativeID    string\n\tFilePath    string\n\tPcapFilter  string\n\tCommandExec string\n}\n\n// DebugConfigResult holds results from a debug collect.\ntype DebugConfigResult struct {\n\tPID           int\n\tCommandOutput string\n}\n\n// DebugConfig holds information needed for a single debug collect operation.\ntype DebugConfig struct {\n\tDebugConfigInput\n\tDebugConfigResult\n}\n\n// DebugConfigMulti holds information needed for a debug collect operation on all remote enforcers.\ntype DebugConfigMulti struct {\n\tDebugConfigInput\n\tResults map[string]*DebugConfigResult\n}\n\n// PingConfig holds the configuration to run ping.\ntype PingConfig struct {\n\tMode               gaia.ProcessingUnitRefreshPingModeValue\n\tID                 string\n\tIP                 net.IP\n\tPort               uint16\n\tIterations         int\n\tTargetTCPNetworks  bool\n\tExcludedNetworks   bool\n\tServiceCertificate string\n\tServiceKey         string\n\tServiceAddresses   map[string][]string\n}\n\n// Ping Errors.\nconst (\n\tErrExcludedNetworks  = \"excludednetworks\"\n\tErrTargetTCPNetworks = \"targettcpnetworks\"\n)\n\n// Error returns error as string from ping config.\nfunc (p *PingConfig) Error() string {\n\n\tswitch {\n\tcase p.ExcludedNetworks:\n\t\treturn ErrExcludedNetworks\n\tcase !p.TargetTCPNetworks:\n\t\treturn 
ErrTargetTCPNetworks\n\tdefault:\n\t\treturn \"\"\n\t}\n}\n\n// PingPayload holds the payload carried on the wire.\ntype PingPayload struct {\n\tPingID               string      `codec:\",omitempty\"`\n\tIterationID          int         `codec:\",omitempty\"`\n\tApplicationListening bool        `codec:\",omitempty\"`\n\tNamespaceHash        string      `codec:\",omitempty\"`\n\tServiceType          ServiceType `codec:\",omitempty\"`\n}\n\n// Fnv32Hash hashes the given data using the 32-bit FNV algorithm.\nfunc Fnv32Hash(data ...string) (string, error) {\n\n\tif len(data) == 0 {\n\t\treturn \"\", fmt.Errorf(\"no data to hash\")\n\t}\n\n\taggregatedData := \"\"\n\tfor _, ed := range data {\n\t\taggregatedData += ed\n\t}\n\n\thash := fnv.New32()\n\tif _, err := hash.Write([]byte(aggregatedData)); err != nil {\n\t\treturn \"\", fmt.Errorf(\"unable to hash data: %v\", err)\n\t}\n\n\treturn fmt.Sprintf(\"%d\", hash.Sum32()), nil\n}\n\n// ServiceMesh indicates which servicemesh type is enabled on a pod\ntype ServiceMesh int\n\nconst (\n\t// None means the pod has no servicemesh enabled on it\n\tNone ServiceMesh = iota\n\t// Istio servicemesh enabled on the pod\n\tIstio\n)\n\nfunc (s ServiceMesh) String() string {\n\treturn [...]string{\"None\", \"Istio\"}[s]\n}\n"
  },
  {
    "path": "policy/types_test.go",
    "content": "// +build !windows\n\npackage policy\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestDefaultLogPrefix(t *testing.T) {\n\tConvey(\"When I request a new default log prefix\", t, func() {\n\t\tt := DefaultLogPrefix(\"abc\", Reject)\n\t\tf := &FlowPolicy{\n\t\t\tAction: Reject,\n\t\t}\n\t\tConvey(\"I should have the correct default prefix\", func() {\n\t\t\tSo(t, ShouldEqual, \"1134309195:default:default:\"+f.EncodedActionString())\n\t\t})\n\t})\n\n\tConvey(\"When I request a new default log prefix\", t, func() {\n\t\tt := DefaultLogPrefix(\"abc\", Accept)\n\t\tf := &FlowPolicy{\n\t\t\tAction: Accept,\n\t\t}\n\t\tConvey(\"I should have the correct default prefix\", func() {\n\t\t\tSo(t, ShouldEqual, \"1134309195:default:default:\"+f.EncodedActionString())\n\t\t})\n\t})\n}\n\nfunc Test_DefaultDropPacketLogPrefix(t *testing.T) {\n\tConvey(\"When I request a new default reject log prefix\", t, func() {\n\t\tt := DefaultDropPacketLogPrefix(\"abcasd\")\n\n\t\tConvey(\"I should have the correct default prefix\", func() {\n\t\t\tSo(t, ShouldEqual, \"2569040509:default:default:10\")\n\t\t})\n\t})\n}\n\nfunc TestDefaultAction(t *testing.T) {\n\tConvey(\"When I request the default action for Reject\", t, func() {\n\t\tt := DefaultAction(Reject)\n\n\t\tConvey(\"I should have the correct default\", func() {\n\t\t\tSo(t, ShouldEqual, \"DROP\")\n\t\t})\n\t})\n\n\tConvey(\"When I request the default action for Accept\", t, func() {\n\t\tt := DefaultAction(Accept)\n\n\t\tConvey(\"I should have the correct default\", func() {\n\t\t\tSo(t, ShouldEqual, \"ACCEPT\")\n\t\t})\n\t})\n}\n\nfunc TestLogPrefix(t *testing.T) {\n\tConvey(\"When I request log prefix reject\", t, func() {\n\t\tf := &FlowPolicy{\n\t\t\tAction:        Reject,\n\t\t\tObserveAction: ObserveNone,\n\t\t\tPolicyID:      \"deadbeef\",\n\t\t\tServiceID:     \"beaddead\",\n\t\t}\n\t\tConvey(\"I should have the correct log prefix\", func() 
{\n\t\t\tSo(f.LogPrefix(\"somecontextID\"), ShouldEqual, \"3985287229:deadbeef:beaddead:6\")\n\t\t})\n\t})\n\n\tConvey(\"When I request log prefix\", t, func() {\n\t\tf := &FlowPolicy{\n\t\t\tAction:        Accept,\n\t\t\tObserveAction: ObserveNone,\n\t\t\tPolicyID:      \"deadbeef\",\n\t\t\tServiceID:     \"beaddead\",\n\t\t}\n\t\tConvey(\"I should have the correct log prefix\", func() {\n\t\t\tSo(f.LogPrefix(\"somecontextID\"), ShouldEqual, \"3985287229:deadbeef:beaddead:3\")\n\t\t})\n\t})\n}\n\nfunc TestRuleNameLogPrefix(t *testing.T) {\n\tConvey(\"When I request log prefix reject\", t, func() {\n\t\tf := &FlowPolicy{\n\t\t\tAction:        Reject,\n\t\t\tObserveAction: ObserveNone,\n\t\t\tPolicyID:      \"deadbeef\",\n\t\t\tServiceID:     \"beaddead\",\n\t\t\tRuleName:      \"rule\",\n\t\t}\n\t\tConvey(\"I should have the correct log prefix\", func() {\n\t\t\tSo(f.LogPrefix(\"somecontextID\"), ShouldEqual, \"3985287229:deadbeef:beaddead:rule_2929530537:6\")\n\t\t})\n\t})\n\n\tConvey(\"When I request log with long PolicyID, ServiceID and RuleName prefixes\", t, func() {\n\t\tf := &FlowPolicy{\n\t\t\tAction:        Accept,\n\t\t\tObserveAction: ObserveNone,\n\t\t\tPolicyID:      \"0123456789ABCDEFG\",\n\t\t\tServiceID:     \"0123456789ABCDEFG\",\n\t\t\tRuleName:      \"LongRuleName\",\n\t\t}\n\t\tConvey(\"I should have the correct log prefix\", func() {\n\t\t\tSo(f.LogPrefix(\"somecontextID\"), ShouldEqual, \"3985287229:789ABCDEFG:789ABCDEFG:LongRuleNa_3922170690:3\")\n\t\t\tSo(len(f.LogPrefix(\"somecontextID\")), ShouldBeLessThanOrEqualTo, 64)\n\t\t})\n\t})\n}\n\nfunc TestLogPrefixAction(t *testing.T) {\n\tConvey(\"When I request log prefix action 6\", t, func() {\n\t\tf := &FlowPolicy{\n\t\t\tAction:        Accept,\n\t\t\tObserveAction: ObserveNone,\n\t\t\tPolicyID:      \"deadbeef\",\n\t\t\tServiceID:     \"beaddead\",\n\t\t}\n\t\tConvey(\"I should have the correct log prefix\", func() {\n\t\t\tSo(f.LogPrefixAction(\"somecontextID\", \"6\"), ShouldEqual, 
\"3985287229:deadbeef:beaddead:6\")\n\t\t})\n\t})\n\n\tConvey(\"When I request log prefix action 0\", t, func() {\n\t\tf := &FlowPolicy{\n\t\t\tAction:        Accept,\n\t\t\tObserveAction: ObserveNone,\n\t\t\tPolicyID:      \"deadbeef\",\n\t\t\tServiceID:     \"beaddead\",\n\t\t}\n\t\tConvey(\"I should have the correct log prefix\", func() {\n\t\t\tSo(f.LogPrefixAction(\"somecontextID\", \"0\"), ShouldEqual, \"3985287229:deadbeef:beaddead:6\")\n\t\t})\n\t})\n\n\tConvey(\"When I request log prefix action empty\", t, func() {\n\t\tf := &FlowPolicy{\n\t\t\tAction:        Accept,\n\t\t\tObserveAction: ObserveNone,\n\t\t\tPolicyID:      \"deadbeef\",\n\t\t\tServiceID:     \"beaddead\",\n\t\t}\n\t\tConvey(\"I should have the correct log prefix\", func() {\n\t\t\tSo(f.LogPrefixAction(\"somecontextID\", \"\"), ShouldEqual, \"3985287229:deadbeef:beaddead:6\")\n\t\t})\n\t})\n}\n\nfunc TestFnv32(t *testing.T) {\n\n\tConvey(\"When I request log prefix with no data\", t, func() {\n\t\thash, err := Fnv32Hash()\n\n\t\tConvey(\"I should have the hash\", func() {\n\t\t\tSo(hash, ShouldBeEmpty)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I request log prefix with small data\", t, func() {\n\t\thash, err := Fnv32Hash(\"xyz\")\n\n\t\tConvey(\"I should have the hash\", func() {\n\t\t\tSo(hash, ShouldEqual, \"845396910\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"When I request log prefix with large data\", t, func() {\n\t\thash, err := Fnv32Hash(\"xyzsadsadasfkjhjkasdjhsajkdhsad\", \"asdasdasda\", \"asdhjkashdjkashdjashdkasjdhasjkdhjashdkasjdhkaslfjsalkjdklasjdklasjdk\")\n\n\t\tConvey(\"I should have the hash\", func() {\n\t\t\tSo(hash, ShouldEqual, \"2149035768\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n}\n\nfunc TestEncodedStringToActionInvalidValue(t *testing.T) {\n\tConvey(\"When I run decode and encode, the results should match\", t, func() {\n\t\tea := \"badvalue\"\n\t\t_, _, err := EncodedStringToAction(ea)\n\t\tif err == nil 
{\n\t\t\tConvey(\"I should get an error for value \"+ea, func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t}\n\t})\n}\n\nfunc TestEncodeDecodePrefix(t *testing.T) {\n\tConvey(\"When I run decode and encode, the results should match\", t, func() {\n\t\tencodedAction := []string{\"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"}\n\t\tfor _, ea := range encodedAction {\n\t\t\tf := &FlowPolicy{}\n\t\t\tvar err error\n\t\t\tf.Action, f.ObserveAction, err = EncodedStringToAction(ea)\n\t\t\tConvey(\"I should have the same actions after decoding and encoding for action \"+ea, func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(f.EncodedActionString(), ShouldEqual, ea)\n\t\t\t})\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "protogen.sh",
    "content": "#!/bin/bash\n\nCUR_DIR=\"$(pwd)\"\nDIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" >/dev/null 2>&1 && pwd )\"\n\nENVOY_REPO=\"github.com/envoyproxy/data-plane-api\"\nENVOY_REPO_PKG=\"go.aporeto.io/enforcerd/trireme-lib/third_party/generated/envoyproxy/data-plane-api\"\nPB_OUT=\"${DIR}/third_party/generated/envoyproxy/data-plane-api\"\n# gogofaster and gogoslick don't work unfortunately\n# also, they currently don't work with validate\nPB_GENERATOR=\"gogofast\"\n\nPROTOC=\"$(which protoc)\"\nif [ $? -ne 0 ] ; then\n  echo \"ERROR: protoc needs to be installed and in the PATH\" 1>&2\n  exit 2\nfi\n\nPB_GENERATOR_BIN=\"$(which protoc-gen-${PB_GENERATOR})\"\nif [ $? -ne 0 ] ; then\n  echo \"ERROR: protoc-gen-${PB_GENERATOR} needs to be installed and in the PATH\" 1>&2\n  exit 2\nfi\n\necho \"Protobuf compilers are at:\"\necho \"PROTOC=${PROTOC}\"\necho \"PB_GENERATOR_BIN=${PB_GENERATOR_BIN}\"\necho\n\necho \"Ensuring output folder exists: ${PB_OUT}\"\nmkdir -v -p ${PB_OUT}\necho\n\necho \"Updating / Downloading necessary packages and repo...\"\ngo get -u -v google.golang.org/grpc\necho\ngo get -u -v github.com/gogo/protobuf/types\necho\ngo get -u -v github.com/gogo/googleapis/google/rpc\necho\ngo get -u -v github.com/gogo/googleapis/google/api\necho\ngo get -u -v github.com/envoyproxy/protoc-gen-validate\necho \"NOTE: it is okay for this to fail, there is no go code in here\"\ngo get -v -u -d ${ENVOY_REPO}\necho\n\n# useful when you need to find out dependencies and things for mapping\n#  --descriptor_set_out=${PB_OUT}/${input}.descriptor_set \\\n#  --include_imports \\\n#  --dependency_out=${PB_OUT}/${input}.dependencies \\\n# validate currently doesn't work\n#  --validate_out=lang=go:${PB_OUT} \\\nPROTOC_CMD=\"\n${PROTOC} \\\n  -I${GOPATH:-${HOME}/go}/src/${ENVOY_REPO} \\\n  -I${GOPATH:-${HOME}/go}/src/github.com/gogo/protobuf/protobuf \\\n  -I${GOPATH:-${HOME}/go}/src/github.com/gogo/protobuf \\\n  
-I${GOPATH:-${HOME}/go}/src/github.com/gogo/googleapis \\\n  -I${GOPATH:-${HOME}/go}/src/github.com/envoyproxy/protoc-gen-validate \\\n  --${PB_GENERATOR}_out=plugins=\\\ngrpc,\\\nMenvoy/type/percent.proto=${ENVOY_REPO_PKG}/envoy/type,\\\nMenvoy/type/http_status.proto=${ENVOY_REPO_PKG}/envoy/type,\\\nMenvoy/api/v2/discovery.proto=${ENVOY_REPO_PKG}/envoy/api/v2,\\\nMenvoy/api/v2/core/address.proto=${ENVOY_REPO_PKG}/envoy/api/v2/core,\\\nMenvoy/api/v2/core/base.proto=${ENVOY_REPO_PKG}/envoy/api/v2/core,\\\nMenvoy/api/v2/core/http_uri.proto=${ENVOY_REPO_PKG}/envoy/api/v2/core,\\\nMgoogle/rpc/status.proto=github.com/gogo/googleapis/google/rpc,\\\nMgoogle/api/annotations.proto=github.com/gogo/googleapis/google/api,\\\nMgoogle/protobuf/any.proto=github.com/gogo/protobuf/types,\\\nMgoogle/protobuf/duration.proto=github.com/gogo/protobuf/types,\\\nMgoogle/protobuf/struct.proto=github.com/gogo/protobuf/types,\\\nMgoogle/protobuf/timestamp.proto=github.com/gogo/protobuf/types,\\\nMgoogle/protobuf/wrappers.proto=github.com/gogo/protobuf/types\\\n:${PB_OUT} \\\n\"\n\necho \"Changing working directory to the envoy repo...\"\ncd ${GOPATH:-${HOME}/go}/src/${ENVOY_REPO}\n\necho \"running protoc for dependencies from envoy/type...\"\n$PROTOC_CMD \\\n  envoy/type/http_status.proto \\\n  envoy/type/percent.proto\necho\n\necho \"running protoc for dependencies from envoy/api/v2/core...\"\n$PROTOC_CMD \\\n  envoy/api/v2/core/address.proto \\\n  envoy/api/v2/core/base.proto \\\n  envoy/api/v2/core/http_uri.proto\necho\n\necho \"running protoc for ext_authz_v2...\"\n$PROTOC_CMD \\\n  envoy/service/auth/v2/attribute_context.proto \\\n  envoy/service/auth/v2/external_auth.proto\necho\n\necho \"running protoc for discovery...\"\n$PROTOC_CMD \\\n  envoy/api/v2/discovery.proto\n\necho \"running protoc for discovery services...\"\n$PROTOC_CMD \\\n  envoy/service/discovery/v2/sds.proto\n\ncd ${CUR_DIR}\n"
  },
  {
    "path": "scripts/fix_bpf",
    "content": "#!/bin/bash\n\nprintf \"fix_bpf not needed for go modules\\n\"\nexit 0\n"
  },
  {
    "path": "scripts/lint.sh",
    "content": "#!/usr/bin/env bash\n\nexport GO111MODULE=auto\n\n# goimports and gofmt complain about cr-lf line endings, so don't run them on\n#\ta Windows machine where git is configured to auto-convert line endings\n\nOS=\"$(uname -s)\"\nif [[ \"$OS\" == *\"NT-\"* ]]; then\n\tGOIMPORTS_OPTION=\n\tGOFMT_OPTION=\nelse\n\tGOIMPORTS_OPTION=\"--enable=goimports\"\n\tGOFMT_OPTION=\"--enable=gofmt\"\nfi\n\ngolangci-lint run \\\n    --verbose \\\n    --skip-dirs='[third_party|test|vendor]' \\\n    --skip-files='[.*\\.pb\\.go|.*\\.gen\\.go]' \\\n    --deadline=20m \\\n    --disable-all \\\n    --exclude-use-default=false \\\n    --enable=errcheck \\\n    --enable=ineffassign \\\n    --enable=govet \\\n    --enable=golint \\\n    --enable=unused \\\n    --enable=structcheck \\\n    --enable=varcheck \\\n    --enable=deadcode \\\n    --enable=unconvert \\\n    --enable=goconst \\\n    --enable=gosimple \\\n    --enable=misspell \\\n    --enable=staticcheck \\\n    --enable=unparam \\\n    --enable=prealloc \\\n    --enable=nakedret \\\n    --enable=typecheck \\\n    $GOIMPORTS_OPTION \\\n    $GOFMT_OPTION \\\n    ./...\n\n"
  },
  {
    "path": "scripts/lint.windows.sh",
    "content": "#!/usr/bin/env bash\n\nexport GO111MODULE=auto\nexport CGO_ENABLED=0\nexport GOOS=windows\nexport GOARCH=amd64\n\n# goimports and gofmt complain about cr-lf line endings, so don't run them on\n#\ta Windows machine where git is configured to auto-convert line endings\n\nOS=\"$(uname -s)\"\nif [[ \"$OS\" == *\"NT-\"* ]]; then\n\tGOIMPORTS_OPTION=\n\tGOFMT_OPTION=\nelse\n\tGOIMPORTS_OPTION=\"--enable=goimports\"\n\tGOFMT_OPTION=\"--enable=gofmt\"\nfi\n\ngolangci-lint run \\\n    --verbose \\\n    --skip-dirs='[third_party|test|vendor]' \\\n    --skip-files='[.*\\.pb\\.go|.*\\.gen\\.go]' \\\n    --deadline=20m \\\n    --disable-all \\\n    --exclude-use-default=false \\\n    --enable=errcheck \\\n    --enable=ineffassign \\\n    --enable=govet \\\n    --enable=golint \\\n    --enable=unused \\\n    --enable=structcheck \\\n    --enable=varcheck \\\n    --enable=deadcode \\\n    --enable=unconvert \\\n    --enable=goconst \\\n    --enable=gosimple \\\n    --enable=misspell \\\n    --enable=staticcheck \\\n    --enable=unparam \\\n    --enable=prealloc \\\n    --enable=nakedret \\\n    --enable=typecheck \\\n    $GOIMPORTS_OPTION \\\n    $GOFMT_OPTION \\\n    ./...\n"
  },
  {
    "path": "scripts/test.sh",
    "content": "#!/usr/bin/env bash\n\nexport GO111MODULE=auto\n\n## FIX ME. go1.14 automatically enables unsafe ptr checks when doing race checks,\n## and it is not clear if this is compatible (it is disabled on Windows)\n##\n## This needs to be revisited and maybe remove \"-gcflags=all=-d=checkptr=0\" below\n## for go1.14 once we determine if there is a real pointer issue in the tests.\n##\n## this is the file that fails when ptr checking is enabled:\n## go.aporeto.io/trireme-lib/controller/internal/enforcer/applicationproxy/markedconn_test.go\n##\n## to see the failure, test that package individually setting \"checkptr=1\"\n\ncase \"$(go version)\" in\n    *1.13*) CHECKPTR=\"\"  ;;\n    *)      CHECKPTR=\"-gcflags=all=-d=checkptr=0\" ;;\nesac\n\n# set -e\nrm -f coverage.txt\ntouch coverage.txt\n\nif [[ $# -gt 0 ]]; then\n    pkglist=\"$*\"\nelse\n    pkglist=\"$(go list ./... | grep -E -v '(mock|bpf)')\"\nfi\n\necho\necho  \"========= BEGIN LINUX TESTS ===========\"\necho\n\nfor pkg in $pkglist ; do\n    go test -tags test $CHECKPTR -race -coverprofile=profile.out -covermode=atomic \"$pkg\"\n    if [ -f profile.out ]; then\n        cat profile.out >> coverage.txt\n        rm profile.out\n    fi\ndone\n\necho\necho  \"========= END LINUX TESTS ===========\"\necho\n"
  },
  {
    "path": "scripts/test.windows.sh",
    "content": "#!/usr/bin/env bash\n\nexport GO111MODULE=auto\nexport CGO_ENABLED=0\nexport GOOS=windows\nexport GOARCH=amd64\n\n# set -e\nrm -f coverage.windows.txt\ntouch coverage.windows.txt\n\n# use wine to execute if not running tests on a Windows machine\nOS=\"$(uname -s)\"\nif [[ \"$OS\" == *\"NT-\"* ]]; then\n\tWINE_EXEC=\nelse\n\tWINE_EXEC=\"-exec wine\"\nfi\n\n## FIX ME. go1.14 automatically enables unsafe ptr checks when doing race checks,\n## and it is not clear if this is compatible (it is disabled on Windows)\n##\n## This needs to be revisited and maybe remove \"-gcflags=all=-d=checkptr=0\" below\n## for go1.14 once we determine if there is a real pointer issue in the tests.\n##\n## this is the file that fails when ptr checking is enabled:\n## go.aporeto.io/trireme-lib/controller/internal/enforcer/applicationproxy/markedconn_test.go\n##\n## to see the failure, test that package individually setting \"checkptr=1\"\n\ncase \"$(go version)\" in\n    *1.13*) CHECKPTR=\"\"  ;;\n    *)      CHECKPTR=\"-gcflags=all=-d=checkptr=0\" ;;\nesac\n\nif [[ $# -gt 0 ]]; then\n    pkglist=\"$*\"\nelse\n    pkglist=\"$(go list ./... | grep -v remoteenforcer | grep -v remoteapi | grep -v \"plugins/pam\")\"\nfi\n\necho\necho  \"========= BEGIN WINDOWS TESTS ===========\"\necho\n\nfor pkg in $pkglist; do\n    go test -tags test $WINE_EXEC $CHECKPTR -coverprofile=profile.windows.out -covermode=atomic \"$pkg\"\n    if [ -f profile.windows.out ]; then\n        cat profile.windows.out >> coverage.windows.txt\n        rm profile.windows.out\n    fi\ndone\n\necho\necho  \"========= END WINDOWS TESTS ===========\"\necho\n"
  },
  {
    "path": "third_party/generated/envoyproxy/data-plane-api/envoy/api/v2/core/address.pb.go",
    "content": "// Code generated by protoc-gen-gogo. DO NOT EDIT.\n// source: envoy/api/v2/core/address.proto\n\npackage core\n\nimport (\n\tbytes \"bytes\"\n\tfmt \"fmt\"\n\t_ \"github.com/envoyproxy/protoc-gen-validate/validate\"\n\t_ \"github.com/gogo/protobuf/gogoproto\"\n\tproto \"github.com/gogo/protobuf/proto\"\n\ttypes \"github.com/gogo/protobuf/types\"\n\tio \"io\"\n\tmath \"math\"\n\tmath_bits \"math/bits\"\n)\n\n// Reference imports to suppress errors if they are not otherwise used.\nvar _ = proto.Marshal\nvar _ = fmt.Errorf\nvar _ = math.Inf\n\n// This is a compile-time assertion to ensure that this generated file\n// is compatible with the proto package it is being compiled against.\n// A compilation error at this line likely means your copy of the\n// proto package needs to be updated.\nconst _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package\n\ntype SocketAddress_Protocol int32\n\nconst (\n\tTCP SocketAddress_Protocol = 0\n\t// [#not-implemented-hide:]\n\tUDP SocketAddress_Protocol = 1\n)\n\nvar SocketAddress_Protocol_name = map[int32]string{\n\t0: \"TCP\",\n\t1: \"UDP\",\n}\n\nvar SocketAddress_Protocol_value = map[string]int32{\n\t\"TCP\": 0,\n\t\"UDP\": 1,\n}\n\nfunc (x SocketAddress_Protocol) String() string {\n\treturn proto.EnumName(SocketAddress_Protocol_name, int32(x))\n}\n\nfunc (SocketAddress_Protocol) EnumDescriptor() ([]byte, []int) {\n\treturn fileDescriptor_6906417f87bcce55, []int{1, 0}\n}\n\ntype Pipe struct {\n\t// Unix Domain Socket path. On Linux, paths starting with '@' will use the\n\t// abstract namespace. 
The starting '@' is replaced by a null byte by Envoy.\n\t// Paths starting with '@' will result in an error in environments other than\n\t// Linux.\n\tPath                 string   `protobuf:\"bytes,1,opt,name=path,proto3\" json:\"path,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *Pipe) Reset()         { *m = Pipe{} }\nfunc (m *Pipe) String() string { return proto.CompactTextString(m) }\nfunc (*Pipe) ProtoMessage()    {}\nfunc (*Pipe) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_6906417f87bcce55, []int{0}\n}\nfunc (m *Pipe) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *Pipe) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_Pipe.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *Pipe) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_Pipe.Merge(m, src)\n}\nfunc (m *Pipe) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *Pipe) XXX_DiscardUnknown() {\n\txxx_messageInfo_Pipe.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_Pipe proto.InternalMessageInfo\n\nfunc (m *Pipe) GetPath() string {\n\tif m != nil {\n\t\treturn m.Path\n\t}\n\treturn \"\"\n}\n\ntype SocketAddress struct {\n\tProtocol SocketAddress_Protocol `protobuf:\"varint,1,opt,name=protocol,proto3,enum=envoy.api.v2.core.SocketAddress_Protocol\" json:\"protocol,omitempty\"`\n\t// The address for this socket. :ref:`Listeners <config_listeners>` will bind\n\t// to the address. An empty address is not allowed. Specify ``0.0.0.0`` or ``::``\n\t// to bind to any address. 
[#comment:TODO(zuercher) reinstate when implemented:\n\t// It is possible to distinguish a Listener address via the prefix/suffix matching\n\t// in :ref:`FilterChainMatch <envoy_api_msg_listener.FilterChainMatch>`.] When used\n\t// within an upstream :ref:`BindConfig <envoy_api_msg_core.BindConfig>`, the address\n\t// controls the source address of outbound connections. For :ref:`clusters\n\t// <envoy_api_msg_Cluster>`, the cluster type determines whether the\n\t// address must be an IP (*STATIC* or *EDS* clusters) or a hostname resolved by DNS\n\t// (*STRICT_DNS* or *LOGICAL_DNS* clusters). Address resolution can be customized\n\t// via :ref:`resolver_name <envoy_api_field_core.SocketAddress.resolver_name>`.\n\tAddress string `protobuf:\"bytes,2,opt,name=address,proto3\" json:\"address,omitempty\"`\n\t// Types that are valid to be assigned to PortSpecifier:\n\t//\t*SocketAddress_PortValue\n\t//\t*SocketAddress_NamedPort\n\tPortSpecifier isSocketAddress_PortSpecifier `protobuf_oneof:\"port_specifier\"`\n\t// The name of the custom resolver. This must have been registered with Envoy. If\n\t// this is empty, a context dependent default applies. If the address is a concrete\n\t// IP address, no resolution will occur. If address is a hostname this\n\t// should be set for resolution other than DNS. Specifying a custom resolver with\n\t// *STRICT_DNS* or *LOGICAL_DNS* will generate an error at runtime.\n\tResolverName string `protobuf:\"bytes,5,opt,name=resolver_name,json=resolverName,proto3\" json:\"resolver_name,omitempty\"`\n\t// When binding to an IPv6 address above, this enables `IPv4 compatibility\n\t// <https://tools.ietf.org/html/rfc3493#page-11>`_. 
Binding to ``::`` will\n\t// allow both IPv4 and IPv6 connections, with peer IPv4 addresses mapped into\n\t// IPv6 space as ``::FFFF:<IPv4-address>``.\n\tIpv4Compat           bool     `protobuf:\"varint,6,opt,name=ipv4_compat,json=ipv4Compat,proto3\" json:\"ipv4_compat,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *SocketAddress) Reset()         { *m = SocketAddress{} }\nfunc (m *SocketAddress) String() string { return proto.CompactTextString(m) }\nfunc (*SocketAddress) ProtoMessage()    {}\nfunc (*SocketAddress) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_6906417f87bcce55, []int{1}\n}\nfunc (m *SocketAddress) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *SocketAddress) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_SocketAddress.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *SocketAddress) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_SocketAddress.Merge(m, src)\n}\nfunc (m *SocketAddress) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *SocketAddress) XXX_DiscardUnknown() {\n\txxx_messageInfo_SocketAddress.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_SocketAddress proto.InternalMessageInfo\n\ntype isSocketAddress_PortSpecifier interface {\n\tisSocketAddress_PortSpecifier()\n\tEqual(interface{}) bool\n\tMarshalTo([]byte) (int, error)\n\tSize() int\n}\n\ntype SocketAddress_PortValue struct {\n\tPortValue uint32 `protobuf:\"varint,3,opt,name=port_value,json=portValue,proto3,oneof\"`\n}\ntype SocketAddress_NamedPort struct {\n\tNamedPort string `protobuf:\"bytes,4,opt,name=named_port,json=namedPort,proto3,oneof\"`\n}\n\nfunc (*SocketAddress_PortValue) isSocketAddress_PortSpecifier() {}\nfunc 
(*SocketAddress_NamedPort) isSocketAddress_PortSpecifier() {}\n\nfunc (m *SocketAddress) GetPortSpecifier() isSocketAddress_PortSpecifier {\n\tif m != nil {\n\t\treturn m.PortSpecifier\n\t}\n\treturn nil\n}\n\nfunc (m *SocketAddress) GetProtocol() SocketAddress_Protocol {\n\tif m != nil {\n\t\treturn m.Protocol\n\t}\n\treturn TCP\n}\n\nfunc (m *SocketAddress) GetAddress() string {\n\tif m != nil {\n\t\treturn m.Address\n\t}\n\treturn \"\"\n}\n\nfunc (m *SocketAddress) GetPortValue() uint32 {\n\tif x, ok := m.GetPortSpecifier().(*SocketAddress_PortValue); ok {\n\t\treturn x.PortValue\n\t}\n\treturn 0\n}\n\nfunc (m *SocketAddress) GetNamedPort() string {\n\tif x, ok := m.GetPortSpecifier().(*SocketAddress_NamedPort); ok {\n\t\treturn x.NamedPort\n\t}\n\treturn \"\"\n}\n\nfunc (m *SocketAddress) GetResolverName() string {\n\tif m != nil {\n\t\treturn m.ResolverName\n\t}\n\treturn \"\"\n}\n\nfunc (m *SocketAddress) GetIpv4Compat() bool {\n\tif m != nil {\n\t\treturn m.Ipv4Compat\n\t}\n\treturn false\n}\n\n// XXX_OneofFuncs is for the internal use of the proto package.\nfunc (*SocketAddress) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {\n\treturn _SocketAddress_OneofMarshaler, _SocketAddress_OneofUnmarshaler, _SocketAddress_OneofSizer, []interface{}{\n\t\t(*SocketAddress_PortValue)(nil),\n\t\t(*SocketAddress_NamedPort)(nil),\n\t}\n}\n\nfunc _SocketAddress_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {\n\tm := msg.(*SocketAddress)\n\t// port_specifier\n\tswitch x := m.PortSpecifier.(type) {\n\tcase *SocketAddress_PortValue:\n\t\t_ = b.EncodeVarint(3<<3 | proto.WireVarint)\n\t\t_ = b.EncodeVarint(uint64(x.PortValue))\n\tcase *SocketAddress_NamedPort:\n\t\t_ = b.EncodeVarint(4<<3 | proto.WireBytes)\n\t\t_ = b.EncodeStringBytes(x.NamedPort)\n\tcase nil:\n\tdefault:\n\t\treturn fmt.Errorf(\"SocketAddress.PortSpecifier has 
unexpected type %T\", x)\n\t}\n\treturn nil\n}\n\nfunc _SocketAddress_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {\n\tm := msg.(*SocketAddress)\n\tswitch tag {\n\tcase 3: // port_specifier.port_value\n\t\tif wire != proto.WireVarint {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tx, err := b.DecodeVarint()\n\t\tm.PortSpecifier = &SocketAddress_PortValue{uint32(x)}\n\t\treturn true, err\n\tcase 4: // port_specifier.named_port\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tx, err := b.DecodeStringBytes()\n\t\tm.PortSpecifier = &SocketAddress_NamedPort{x}\n\t\treturn true, err\n\tdefault:\n\t\treturn false, nil\n\t}\n}\n\nfunc _SocketAddress_OneofSizer(msg proto.Message) (n int) {\n\tm := msg.(*SocketAddress)\n\t// port_specifier\n\tswitch x := m.PortSpecifier.(type) {\n\tcase *SocketAddress_PortValue:\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(x.PortValue))\n\tcase *SocketAddress_NamedPort:\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(len(x.NamedPort)))\n\t\tn += len(x.NamedPort)\n\tcase nil:\n\tdefault:\n\t\tpanic(fmt.Sprintf(\"proto: unexpected type %T in oneof\", x))\n\t}\n\treturn n\n}\n\ntype TcpKeepalive struct {\n\t// Maximum number of keepalive probes to send without response before deciding\n\t// the connection is dead. Default is to use the OS level configuration (unless\n\t// overridden, Linux defaults to 9.)\n\tKeepaliveProbes *types.UInt32Value `protobuf:\"bytes,1,opt,name=keepalive_probes,json=keepaliveProbes,proto3\" json:\"keepalive_probes,omitempty\"`\n\t// The number of seconds a connection needs to be idle before keep-alive probes\n\t// start being sent. 
Default is to use the OS level configuration (unless\n\t// overridden, Linux defaults to 7200s (ie 2 hours.)\n\tKeepaliveTime *types.UInt32Value `protobuf:\"bytes,2,opt,name=keepalive_time,json=keepaliveTime,proto3\" json:\"keepalive_time,omitempty\"`\n\t// The number of seconds between keep-alive probes. Default is to use the OS\n\t// level configuration (unless overridden, Linux defaults to 75s.)\n\tKeepaliveInterval    *types.UInt32Value `protobuf:\"bytes,3,opt,name=keepalive_interval,json=keepaliveInterval,proto3\" json:\"keepalive_interval,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}           `json:\"-\"`\n\tXXX_unrecognized     []byte             `json:\"-\"`\n\tXXX_sizecache        int32              `json:\"-\"`\n}\n\nfunc (m *TcpKeepalive) Reset()         { *m = TcpKeepalive{} }\nfunc (m *TcpKeepalive) String() string { return proto.CompactTextString(m) }\nfunc (*TcpKeepalive) ProtoMessage()    {}\nfunc (*TcpKeepalive) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_6906417f87bcce55, []int{2}\n}\nfunc (m *TcpKeepalive) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *TcpKeepalive) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_TcpKeepalive.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *TcpKeepalive) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_TcpKeepalive.Merge(m, src)\n}\nfunc (m *TcpKeepalive) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *TcpKeepalive) XXX_DiscardUnknown() {\n\txxx_messageInfo_TcpKeepalive.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_TcpKeepalive proto.InternalMessageInfo\n\nfunc (m *TcpKeepalive) GetKeepaliveProbes() *types.UInt32Value {\n\tif m != nil {\n\t\treturn m.KeepaliveProbes\n\t}\n\treturn nil\n}\n\nfunc (m *TcpKeepalive) GetKeepaliveTime() *types.UInt32Value {\n\tif m != nil {\n\t\treturn 
m.KeepaliveTime\n\t}\n\treturn nil\n}\n\nfunc (m *TcpKeepalive) GetKeepaliveInterval() *types.UInt32Value {\n\tif m != nil {\n\t\treturn m.KeepaliveInterval\n\t}\n\treturn nil\n}\n\ntype BindConfig struct {\n\t// The address to bind to when creating a socket.\n\tSourceAddress *SocketAddress `protobuf:\"bytes,1,opt,name=source_address,json=sourceAddress,proto3\" json:\"source_address,omitempty\"`\n\t// Whether to set the *IP_FREEBIND* option when creating the socket. When this\n\t// flag is set to true, allows the :ref:`source_address\n\t// <envoy_api_field_UpstreamBindConfig.source_address>` to be an IP address\n\t// that is not configured on the system running Envoy. When this flag is set\n\t// to false, the option *IP_FREEBIND* is disabled on the socket. When this\n\t// flag is not set (default), the socket is not modified, i.e. the option is\n\t// neither enabled nor disabled.\n\tFreebind *types.BoolValue `protobuf:\"bytes,2,opt,name=freebind,proto3\" json:\"freebind,omitempty\"`\n\t// Additional socket options that may not be present in Envoy source code or\n\t// precompiled binaries.\n\tSocketOptions        []*SocketOption `protobuf:\"bytes,3,rep,name=socket_options,json=socketOptions,proto3\" json:\"socket_options,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}        `json:\"-\"`\n\tXXX_unrecognized     []byte          `json:\"-\"`\n\tXXX_sizecache        int32           `json:\"-\"`\n}\n\nfunc (m *BindConfig) Reset()         { *m = BindConfig{} }\nfunc (m *BindConfig) String() string { return proto.CompactTextString(m) }\nfunc (*BindConfig) ProtoMessage()    {}\nfunc (*BindConfig) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_6906417f87bcce55, []int{3}\n}\nfunc (m *BindConfig) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *BindConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_BindConfig.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err 
:= m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *BindConfig) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_BindConfig.Merge(m, src)\n}\nfunc (m *BindConfig) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *BindConfig) XXX_DiscardUnknown() {\n\txxx_messageInfo_BindConfig.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_BindConfig proto.InternalMessageInfo\n\nfunc (m *BindConfig) GetSourceAddress() *SocketAddress {\n\tif m != nil {\n\t\treturn m.SourceAddress\n\t}\n\treturn nil\n}\n\nfunc (m *BindConfig) GetFreebind() *types.BoolValue {\n\tif m != nil {\n\t\treturn m.Freebind\n\t}\n\treturn nil\n}\n\nfunc (m *BindConfig) GetSocketOptions() []*SocketOption {\n\tif m != nil {\n\t\treturn m.SocketOptions\n\t}\n\treturn nil\n}\n\n// Addresses specify either a logical or physical address and port, which are\n// used to tell Envoy where to bind/listen, connect to upstream and find\n// management servers.\ntype Address struct {\n\t// Types that are valid to be assigned to Address:\n\t//\t*Address_SocketAddress\n\t//\t*Address_Pipe\n\tAddress              isAddress_Address `protobuf_oneof:\"address\"`\n\tXXX_NoUnkeyedLiteral struct{}          `json:\"-\"`\n\tXXX_unrecognized     []byte            `json:\"-\"`\n\tXXX_sizecache        int32             `json:\"-\"`\n}\n\nfunc (m *Address) Reset()         { *m = Address{} }\nfunc (m *Address) String() string { return proto.CompactTextString(m) }\nfunc (*Address) ProtoMessage()    {}\nfunc (*Address) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_6906417f87bcce55, []int{4}\n}\nfunc (m *Address) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *Address) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_Address.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], 
nil\n\t}\n}\nfunc (m *Address) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_Address.Merge(m, src)\n}\nfunc (m *Address) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *Address) XXX_DiscardUnknown() {\n\txxx_messageInfo_Address.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_Address proto.InternalMessageInfo\n\ntype isAddress_Address interface {\n\tisAddress_Address()\n\tEqual(interface{}) bool\n\tMarshalTo([]byte) (int, error)\n\tSize() int\n}\n\ntype Address_SocketAddress struct {\n\tSocketAddress *SocketAddress `protobuf:\"bytes,1,opt,name=socket_address,json=socketAddress,proto3,oneof\"`\n}\ntype Address_Pipe struct {\n\tPipe *Pipe `protobuf:\"bytes,2,opt,name=pipe,proto3,oneof\"`\n}\n\nfunc (*Address_SocketAddress) isAddress_Address() {}\nfunc (*Address_Pipe) isAddress_Address()          {}\n\nfunc (m *Address) GetAddress() isAddress_Address {\n\tif m != nil {\n\t\treturn m.Address\n\t}\n\treturn nil\n}\n\nfunc (m *Address) GetSocketAddress() *SocketAddress {\n\tif x, ok := m.GetAddress().(*Address_SocketAddress); ok {\n\t\treturn x.SocketAddress\n\t}\n\treturn nil\n}\n\nfunc (m *Address) GetPipe() *Pipe {\n\tif x, ok := m.GetAddress().(*Address_Pipe); ok {\n\t\treturn x.Pipe\n\t}\n\treturn nil\n}\n\n// XXX_OneofFuncs is for the internal use of the proto package.\nfunc (*Address) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {\n\treturn _Address_OneofMarshaler, _Address_OneofUnmarshaler, _Address_OneofSizer, []interface{}{\n\t\t(*Address_SocketAddress)(nil),\n\t\t(*Address_Pipe)(nil),\n\t}\n}\n\nfunc _Address_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {\n\tm := msg.(*Address)\n\t// address\n\tswitch x := m.Address.(type) {\n\tcase *Address_SocketAddress:\n\t\t_ = b.EncodeVarint(1<<3 | proto.WireBytes)\n\t\tif err := b.EncodeMessage(x.SocketAddress); err != nil {\n\t\t\treturn err\n\t\t}\n\tcase 
*Address_Pipe:\n\t\t_ = b.EncodeVarint(2<<3 | proto.WireBytes)\n\t\tif err := b.EncodeMessage(x.Pipe); err != nil {\n\t\t\treturn err\n\t\t}\n\tcase nil:\n\tdefault:\n\t\treturn fmt.Errorf(\"Address.Address has unexpected type %T\", x)\n\t}\n\treturn nil\n}\n\nfunc _Address_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {\n\tm := msg.(*Address)\n\tswitch tag {\n\tcase 1: // address.socket_address\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tmsg := new(SocketAddress)\n\t\terr := b.DecodeMessage(msg)\n\t\tm.Address = &Address_SocketAddress{msg}\n\t\treturn true, err\n\tcase 2: // address.pipe\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tmsg := new(Pipe)\n\t\terr := b.DecodeMessage(msg)\n\t\tm.Address = &Address_Pipe{msg}\n\t\treturn true, err\n\tdefault:\n\t\treturn false, nil\n\t}\n}\n\nfunc _Address_OneofSizer(msg proto.Message) (n int) {\n\tm := msg.(*Address)\n\t// address\n\tswitch x := m.Address.(type) {\n\tcase *Address_SocketAddress:\n\t\ts := proto.Size(x.SocketAddress)\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(s))\n\t\tn += s\n\tcase *Address_Pipe:\n\t\ts := proto.Size(x.Pipe)\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(s))\n\t\tn += s\n\tcase nil:\n\tdefault:\n\t\tpanic(fmt.Sprintf(\"proto: unexpected type %T in oneof\", x))\n\t}\n\treturn n\n}\n\n// CidrRange specifies an IP Address and a prefix length to construct\n// the subnet mask for a `CIDR <https://tools.ietf.org/html/rfc4632>`_ range.\ntype CidrRange struct {\n\t// IPv4 or IPv6 address, e.g. ``192.0.0.0`` or ``2001:db8::``.\n\tAddressPrefix string `protobuf:\"bytes,1,opt,name=address_prefix,json=addressPrefix,proto3\" json:\"address_prefix,omitempty\"`\n\t// Length of prefix, e.g. 
0, 32.\n\tPrefixLen            *types.UInt32Value `protobuf:\"bytes,2,opt,name=prefix_len,json=prefixLen,proto3\" json:\"prefix_len,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}           `json:\"-\"`\n\tXXX_unrecognized     []byte             `json:\"-\"`\n\tXXX_sizecache        int32              `json:\"-\"`\n}\n\nfunc (m *CidrRange) Reset()         { *m = CidrRange{} }\nfunc (m *CidrRange) String() string { return proto.CompactTextString(m) }\nfunc (*CidrRange) ProtoMessage()    {}\nfunc (*CidrRange) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_6906417f87bcce55, []int{5}\n}\nfunc (m *CidrRange) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *CidrRange) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_CidrRange.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *CidrRange) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_CidrRange.Merge(m, src)\n}\nfunc (m *CidrRange) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *CidrRange) XXX_DiscardUnknown() {\n\txxx_messageInfo_CidrRange.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_CidrRange proto.InternalMessageInfo\n\nfunc (m *CidrRange) GetAddressPrefix() string {\n\tif m != nil {\n\t\treturn m.AddressPrefix\n\t}\n\treturn \"\"\n}\n\nfunc (m *CidrRange) GetPrefixLen() *types.UInt32Value {\n\tif m != nil {\n\t\treturn m.PrefixLen\n\t}\n\treturn nil\n}\n\nfunc init() {\n\tproto.RegisterEnum(\"envoy.api.v2.core.SocketAddress_Protocol\", SocketAddress_Protocol_name, SocketAddress_Protocol_value)\n\tproto.RegisterType((*Pipe)(nil), \"envoy.api.v2.core.Pipe\")\n\tproto.RegisterType((*SocketAddress)(nil), \"envoy.api.v2.core.SocketAddress\")\n\tproto.RegisterType((*TcpKeepalive)(nil), \"envoy.api.v2.core.TcpKeepalive\")\n\tproto.RegisterType((*BindConfig)(nil), 
\"envoy.api.v2.core.BindConfig\")\n\tproto.RegisterType((*Address)(nil), \"envoy.api.v2.core.Address\")\n\tproto.RegisterType((*CidrRange)(nil), \"envoy.api.v2.core.CidrRange\")\n}\n\nfunc init() { proto.RegisterFile(\"envoy/api/v2/core/address.proto\", fileDescriptor_6906417f87bcce55) }\n\nvar fileDescriptor_6906417f87bcce55 = []byte{\n\t// 695 bytes of a gzipped FileDescriptorProto\n\t0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x53, 0xcd, 0x6e, 0xd3, 0x4c,\n\t0x14, 0xcd, 0x24, 0x69, 0x93, 0xdc, 0x36, 0xf9, 0xd2, 0xd1, 0x87, 0x6a, 0x85, 0x2a, 0x09, 0xe9,\n\t0x26, 0x54, 0xc2, 0x96, 0x52, 0xc4, 0xbe, 0x0e, 0x3f, 0xad, 0x8a, 0xa8, 0x31, 0x2d, 0x5b, 0x6b,\n\t0x92, 0x4c, 0xc2, 0xa8, 0x8e, 0x67, 0x34, 0x76, 0x4d, 0xbb, 0x43, 0x5d, 0x21, 0xb6, 0x3c, 0x02,\n\t0x1b, 0x1e, 0x85, 0x25, 0x3c, 0x01, 0x28, 0x3b, 0xde, 0x00, 0x75, 0x53, 0x34, 0x63, 0x3b, 0x55,\n\t0x09, 0xa8, 0xb0, 0x9b, 0x39, 0xf7, 0x9e, 0x33, 0xe7, 0xfe, 0x0c, 0xb4, 0x68, 0x10, 0xf3, 0x33,\n\t0x8b, 0x08, 0x66, 0xc5, 0x3d, 0x6b, 0xc8, 0x25, 0xb5, 0xc8, 0x68, 0x24, 0x69, 0x18, 0x9a, 0x42,\n\t0xf2, 0x88, 0xe3, 0x35, 0x9d, 0x60, 0x12, 0xc1, 0xcc, 0xb8, 0x67, 0xaa, 0x84, 0xc6, 0xc6, 0x22,\n\t0x67, 0x40, 0x42, 0x9a, 0x10, 0x1a, 0xcd, 0x09, 0xe7, 0x13, 0x9f, 0x5a, 0xfa, 0x36, 0x38, 0x19,\n\t0x5b, 0xaf, 0x25, 0x11, 0x82, 0xca, 0x54, 0xb0, 0xb1, 0x1e, 0x13, 0x9f, 0x8d, 0x48, 0x44, 0xad,\n\t0xec, 0x90, 0x06, 0xfe, 0x9f, 0xf0, 0x09, 0xd7, 0x47, 0x4b, 0x9d, 0x12, 0xb4, 0xb3, 0x09, 0x45,\n\t0x87, 0x09, 0x8a, 0x6f, 0x43, 0x51, 0x90, 0xe8, 0x95, 0x81, 0xda, 0xa8, 0x5b, 0xb1, 0x4b, 0x17,\n\t0x76, 0x51, 0xe6, 0xdb, 0xc8, 0xd5, 0x60, 0xe7, 0x4b, 0x1e, 0xaa, 0x2f, 0xf8, 0xf0, 0x98, 0x46,\n\t0x3b, 0x89, 0x79, 0x7c, 0x00, 0x65, 0xcd, 0x1f, 0x72, 0x5f, 0x53, 0x6a, 0xbd, 0xbb, 0xe6, 0x42,\n\t0x25, 0xe6, 0x35, 0x8e, 0xe9, 0xa4, 0x04, 0xbb, 0x7c, 0x61, 0x2f, 0x9d, 0xa3, 0x7c, 0x1d, 0xb9,\n\t0x73, 0x11, 0x7c, 0x07, 0x4a, 0x69, 0x63, 0x8c, 0xfc, 0x75, 0x0b, 0x19, 0x8e, 0xb7, 0x00, 0x04,\n\t0x97, 0x91, 0x17, 0x13, 
0xff, 0x84, 0x1a, 0x85, 0x36, 0xea, 0x56, 0xed, 0xca, 0x85, 0xbd, 0xbc,\n\t0x55, 0x34, 0x2e, 0x2f, 0x0b, 0xbb, 0x39, 0xb7, 0xa2, 0xc2, 0x2f, 0x55, 0x14, 0xb7, 0x00, 0x02,\n\t0x32, 0xa5, 0x23, 0x4f, 0x41, 0x46, 0x51, 0x29, 0xaa, 0x04, 0x8d, 0x39, 0x5c, 0x46, 0x78, 0x13,\n\t0xaa, 0x92, 0x86, 0xdc, 0x8f, 0xa9, 0xf4, 0x14, 0x6a, 0x2c, 0xa9, 0x1c, 0x77, 0x35, 0x03, 0x9f,\n\t0x91, 0xa9, 0x52, 0x59, 0x61, 0x22, 0xbe, 0xef, 0x0d, 0xf9, 0x54, 0x90, 0xc8, 0x58, 0x6e, 0xa3,\n\t0x6e, 0xd9, 0x05, 0x05, 0xf5, 0x35, 0xd2, 0xe9, 0x40, 0x39, 0xab, 0x0a, 0x97, 0xa0, 0x70, 0xd8,\n\t0x77, 0xea, 0x39, 0x75, 0x38, 0x7a, 0xe8, 0xd4, 0x51, 0xa3, 0xf8, 0xf6, 0x43, 0x33, 0x67, 0xdf,\n\t0x82, 0x9a, 0xb6, 0x1d, 0x0a, 0x3a, 0x64, 0x63, 0x46, 0x25, 0x2e, 0xfc, 0xb0, 0x51, 0xe7, 0x3b,\n\t0x82, 0xd5, 0xc3, 0xa1, 0xd8, 0xa7, 0x54, 0x10, 0x9f, 0xc5, 0x14, 0x3f, 0x81, 0xfa, 0x71, 0x76,\n\t0xf1, 0x84, 0xe4, 0x03, 0x1a, 0xea, 0xd6, 0xae, 0xf4, 0x36, 0xcc, 0x64, 0xe6, 0x66, 0x36, 0x73,\n\t0xf3, 0x68, 0x2f, 0x88, 0xb6, 0x7b, 0xba, 0x54, 0xf7, 0xbf, 0x39, 0xcb, 0xd1, 0x24, 0xdc, 0x87,\n\t0xda, 0x95, 0x50, 0xc4, 0xa6, 0x54, 0x77, 0xf4, 0x26, 0x99, 0xea, 0x9c, 0x73, 0xc8, 0xa6, 0x14,\n\t0xef, 0x03, 0xbe, 0x12, 0x61, 0x41, 0x44, 0x65, 0x4c, 0x7c, 0xdd, 0xf4, 0x9b, 0x84, 0xd6, 0xe6,\n\t0xbc, 0xbd, 0x94, 0xd6, 0xf9, 0x8a, 0x00, 0x6c, 0x16, 0x8c, 0xfa, 0x3c, 0x18, 0xb3, 0x09, 0x7e,\n\t0x0e, 0xb5, 0x90, 0x9f, 0xc8, 0x21, 0xf5, 0xb2, 0x91, 0x27, 0x75, 0xb6, 0x6f, 0x5a, 0x21, 0xbd,\n\t0x39, 0xef, 0xf4, 0xe6, 0x54, 0x13, 0x85, 0x6c, 0x1f, 0x1f, 0x40, 0x79, 0x2c, 0x29, 0x1d, 0xb0,\n\t0x60, 0x94, 0x56, 0xdb, 0x58, 0x30, 0x69, 0x73, 0xee, 0x27, 0x16, 0xe7, 0xb9, 0xf8, 0xb1, 0xb2,\n\t0xa2, 0x5e, 0xf0, 0xb8, 0x88, 0x18, 0x0f, 0x42, 0xa3, 0xd0, 0x2e, 0x74, 0x57, 0x7a, 0xad, 0x3f,\n\t0x5a, 0x39, 0xd0, 0x79, 0xea, 0xfd, 0xab, 0x5b, 0xd8, 0x79, 0x8f, 0xa0, 0x94, 0x79, 0xd9, 0x9b,\n\t0x6b, 0xfe, 0x63, 0x79, 0xbb, 0xb9, 0x4c, 0x36, 0x93, 0xba, 0x07, 0x45, 0xc1, 0x44, 0x36, 0xc0,\n\t0xf5, 0xdf, 0x08, 0xa8, 0xcf, 0xbb, 0x9b, 
0x73, 0x75, 0x9a, 0x5d, 0x9b, 0x7f, 0xa2, 0x64, 0xc7,\n\t0xce, 0x11, 0x54, 0xfa, 0x6c, 0x24, 0x5d, 0x12, 0x4c, 0x28, 0x36, 0xa1, 0x96, 0x46, 0x3d, 0x21,\n\t0xe9, 0x98, 0x9d, 0xfe, 0xfa, 0xd9, 0xab, 0x69, 0xd8, 0xd1, 0x51, 0xfc, 0x08, 0x20, 0xc9, 0xf3,\n\t0x7c, 0x1a, 0xfc, 0xcd, 0x0e, 0xe9, 0xf1, 0x6c, 0x15, 0x8c, 0x37, 0xc8, 0xad, 0x24, 0xcc, 0xa7,\n\t0x34, 0xb0, 0x77, 0x3e, 0xce, 0x9a, 0xe8, 0xd3, 0xac, 0x89, 0x3e, 0xcf, 0x9a, 0xe8, 0xdb, 0xac,\n\t0x89, 0xa0, 0xc5, 0x78, 0x52, 0x89, 0x90, 0xfc, 0xf4, 0x6c, 0xb1, 0x28, 0x7b, 0x75, 0x27, 0x33,\n\t0xc2, 0x23, 0xee, 0xa0, 0xc1, 0xb2, 0x7e, 0x6d, 0xfb, 0x67, 0x00, 0x00, 0x00, 0xff, 0xff, 0xed,\n\t0x28, 0x0a, 0x59, 0x4e, 0x05, 0x00, 0x00,\n}\n\nfunc (this *Pipe) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*Pipe)\n\tif !ok {\n\t\tthat2, ok := that.(Pipe)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Path != that1.Path {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *SocketAddress) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*SocketAddress)\n\tif !ok {\n\t\tthat2, ok := that.(SocketAddress)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Protocol != that1.Protocol {\n\t\treturn false\n\t}\n\tif this.Address != that1.Address {\n\t\treturn false\n\t}\n\tif that1.PortSpecifier == nil {\n\t\tif this.PortSpecifier != nil {\n\t\t\treturn false\n\t\t}\n\t} else if this.PortSpecifier == nil {\n\t\treturn false\n\t} else if !this.PortSpecifier.Equal(that1.PortSpecifier) {\n\t\treturn false\n\t}\n\tif this.ResolverName != that1.ResolverName 
{\n\t\treturn false\n\t}\n\tif this.Ipv4Compat != that1.Ipv4Compat {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *SocketAddress_PortValue) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*SocketAddress_PortValue)\n\tif !ok {\n\t\tthat2, ok := that.(SocketAddress_PortValue)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.PortValue != that1.PortValue {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *SocketAddress_NamedPort) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*SocketAddress_NamedPort)\n\tif !ok {\n\t\tthat2, ok := that.(SocketAddress_NamedPort)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.NamedPort != that1.NamedPort {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *TcpKeepalive) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*TcpKeepalive)\n\tif !ok {\n\t\tthat2, ok := that.(TcpKeepalive)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !this.KeepaliveProbes.Equal(that1.KeepaliveProbes) {\n\t\treturn false\n\t}\n\tif !this.KeepaliveTime.Equal(that1.KeepaliveTime) {\n\t\treturn false\n\t}\n\tif !this.KeepaliveInterval.Equal(that1.KeepaliveInterval) {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *BindConfig) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == 
nil\n\t}\n\n\tthat1, ok := that.(*BindConfig)\n\tif !ok {\n\t\tthat2, ok := that.(BindConfig)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !this.SourceAddress.Equal(that1.SourceAddress) {\n\t\treturn false\n\t}\n\tif !this.Freebind.Equal(that1.Freebind) {\n\t\treturn false\n\t}\n\tif len(this.SocketOptions) != len(that1.SocketOptions) {\n\t\treturn false\n\t}\n\tfor i := range this.SocketOptions {\n\t\tif !this.SocketOptions[i].Equal(that1.SocketOptions[i]) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *Address) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*Address)\n\tif !ok {\n\t\tthat2, ok := that.(Address)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif that1.Address == nil {\n\t\tif this.Address != nil {\n\t\t\treturn false\n\t\t}\n\t} else if this.Address == nil {\n\t\treturn false\n\t} else if !this.Address.Equal(that1.Address) {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *Address_SocketAddress) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*Address_SocketAddress)\n\tif !ok {\n\t\tthat2, ok := that.(Address_SocketAddress)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !this.SocketAddress.Equal(that1.SocketAddress) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *Address_Pipe) Equal(that interface{}) bool {\n\tif that == nil 
{\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*Address_Pipe)\n\tif !ok {\n\t\tthat2, ok := that.(Address_Pipe)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !this.Pipe.Equal(that1.Pipe) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *CidrRange) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*CidrRange)\n\tif !ok {\n\t\tthat2, ok := that.(CidrRange)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.AddressPrefix != that1.AddressPrefix {\n\t\treturn false\n\t}\n\tif !this.PrefixLen.Equal(that1.PrefixLen) {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (m *Pipe) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *Pipe) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.Path) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(len(m.Path)))\n\t\ti += copy(dAtA[i:], m.Path)\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *SocketAddress) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *SocketAddress) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Protocol != 0 {\n\t\tdAtA[i] = 0x8\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(m.Protocol))\n\t}\n\tif 
len(m.Address) > 0 {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(len(m.Address)))\n\t\ti += copy(dAtA[i:], m.Address)\n\t}\n\tif m.PortSpecifier != nil {\n\t\tnn1, err1 := m.PortSpecifier.MarshalTo(dAtA[i:])\n\t\tif err1 != nil {\n\t\t\treturn 0, err1\n\t\t}\n\t\ti += nn1\n\t}\n\tif len(m.ResolverName) > 0 {\n\t\tdAtA[i] = 0x2a\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(len(m.ResolverName)))\n\t\ti += copy(dAtA[i:], m.ResolverName)\n\t}\n\tif m.Ipv4Compat {\n\t\tdAtA[i] = 0x30\n\t\ti++\n\t\tif m.Ipv4Compat {\n\t\t\tdAtA[i] = 1\n\t\t} else {\n\t\t\tdAtA[i] = 0\n\t\t}\n\t\ti++\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *SocketAddress_PortValue) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tdAtA[i] = 0x18\n\ti++\n\ti = encodeVarintAddress(dAtA, i, uint64(m.PortValue))\n\treturn i, nil\n}\nfunc (m *SocketAddress_NamedPort) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tdAtA[i] = 0x22\n\ti++\n\ti = encodeVarintAddress(dAtA, i, uint64(len(m.NamedPort)))\n\ti += copy(dAtA[i:], m.NamedPort)\n\treturn i, nil\n}\nfunc (m *TcpKeepalive) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *TcpKeepalive) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.KeepaliveProbes != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(m.KeepaliveProbes.Size()))\n\t\tn2, err2 := m.KeepaliveProbes.MarshalTo(dAtA[i:])\n\t\tif err2 != nil {\n\t\t\treturn 0, err2\n\t\t}\n\t\ti += n2\n\t}\n\tif m.KeepaliveTime != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(m.KeepaliveTime.Size()))\n\t\tn3, err3 := m.KeepaliveTime.MarshalTo(dAtA[i:])\n\t\tif err3 != nil {\n\t\t\treturn 0, err3\n\t\t}\n\t\ti += n3\n\t}\n\tif 
m.KeepaliveInterval != nil {\n\t\tdAtA[i] = 0x1a\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(m.KeepaliveInterval.Size()))\n\t\tn4, err4 := m.KeepaliveInterval.MarshalTo(dAtA[i:])\n\t\tif err4 != nil {\n\t\t\treturn 0, err4\n\t\t}\n\t\ti += n4\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *BindConfig) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *BindConfig) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.SourceAddress != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(m.SourceAddress.Size()))\n\t\tn5, err5 := m.SourceAddress.MarshalTo(dAtA[i:])\n\t\tif err5 != nil {\n\t\t\treturn 0, err5\n\t\t}\n\t\ti += n5\n\t}\n\tif m.Freebind != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(m.Freebind.Size()))\n\t\tn6, err6 := m.Freebind.MarshalTo(dAtA[i:])\n\t\tif err6 != nil {\n\t\t\treturn 0, err6\n\t\t}\n\t\ti += n6\n\t}\n\tif len(m.SocketOptions) > 0 {\n\t\tfor _, msg := range m.SocketOptions {\n\t\t\tdAtA[i] = 0x1a\n\t\t\ti++\n\t\t\ti = encodeVarintAddress(dAtA, i, uint64(msg.Size()))\n\t\t\tn, err := msg.MarshalTo(dAtA[i:])\n\t\t\tif err != nil {\n\t\t\t\treturn 0, err\n\t\t\t}\n\t\t\ti += n\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *Address) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *Address) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Address != nil {\n\t\tnn7, err7 := m.Address.MarshalTo(dAtA[i:])\n\t\tif err7 != nil {\n\t\t\treturn 0, 
err7\n\t\t}\n\t\ti += nn7\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *Address_SocketAddress) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tif m.SocketAddress != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(m.SocketAddress.Size()))\n\t\tn8, err8 := m.SocketAddress.MarshalTo(dAtA[i:])\n\t\tif err8 != nil {\n\t\t\treturn 0, err8\n\t\t}\n\t\ti += n8\n\t}\n\treturn i, nil\n}\nfunc (m *Address_Pipe) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tif m.Pipe != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(m.Pipe.Size()))\n\t\tn9, err9 := m.Pipe.MarshalTo(dAtA[i:])\n\t\tif err9 != nil {\n\t\t\treturn 0, err9\n\t\t}\n\t\ti += n9\n\t}\n\treturn i, nil\n}\nfunc (m *CidrRange) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *CidrRange) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.AddressPrefix) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(len(m.AddressPrefix)))\n\t\ti += copy(dAtA[i:], m.AddressPrefix)\n\t}\n\tif m.PrefixLen != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintAddress(dAtA, i, uint64(m.PrefixLen.Size()))\n\t\tn10, err10 := m.PrefixLen.MarshalTo(dAtA[i:])\n\t\tif err10 != nil {\n\t\t\treturn 0, err10\n\t\t}\n\t\ti += n10\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc encodeVarintAddress(dAtA []byte, offset int, v uint64) int {\n\tfor v >= 1<<7 {\n\t\tdAtA[offset] = uint8(v&0x7f | 0x80)\n\t\tv >>= 7\n\t\toffset++\n\t}\n\tdAtA[offset] = uint8(v)\n\treturn offset + 1\n}\nfunc (m *Pipe) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.Path)\n\tif l > 0 {\n\t\tn += 1 + l + 
sovAddress(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *SocketAddress) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Protocol != 0 {\n\t\tn += 1 + sovAddress(uint64(m.Protocol))\n\t}\n\tl = len(m.Address)\n\tif l > 0 {\n\t\tn += 1 + l + sovAddress(uint64(l))\n\t}\n\tif m.PortSpecifier != nil {\n\t\tn += m.PortSpecifier.Size()\n\t}\n\tl = len(m.ResolverName)\n\tif l > 0 {\n\t\tn += 1 + l + sovAddress(uint64(l))\n\t}\n\tif m.Ipv4Compat {\n\t\tn += 2\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *SocketAddress_PortValue) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tn += 1 + sovAddress(uint64(m.PortValue))\n\treturn n\n}\nfunc (m *SocketAddress_NamedPort) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.NamedPort)\n\tn += 1 + l + sovAddress(uint64(l))\n\treturn n\n}\nfunc (m *TcpKeepalive) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.KeepaliveProbes != nil {\n\t\tl = m.KeepaliveProbes.Size()\n\t\tn += 1 + l + sovAddress(uint64(l))\n\t}\n\tif m.KeepaliveTime != nil {\n\t\tl = m.KeepaliveTime.Size()\n\t\tn += 1 + l + sovAddress(uint64(l))\n\t}\n\tif m.KeepaliveInterval != nil {\n\t\tl = m.KeepaliveInterval.Size()\n\t\tn += 1 + l + sovAddress(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *BindConfig) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.SourceAddress != nil {\n\t\tl = m.SourceAddress.Size()\n\t\tn += 1 + l + sovAddress(uint64(l))\n\t}\n\tif m.Freebind != nil {\n\t\tl = m.Freebind.Size()\n\t\tn += 1 + l + sovAddress(uint64(l))\n\t}\n\tif len(m.SocketOptions) > 0 {\n\t\tfor _, e := range m.SocketOptions {\n\t\t\tl = e.Size()\n\t\t\tn += 1 + l + sovAddress(uint64(l))\n\t\t}\n\t}\n\tif 
m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *Address) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Address != nil {\n\t\tn += m.Address.Size()\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *Address_SocketAddress) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.SocketAddress != nil {\n\t\tl = m.SocketAddress.Size()\n\t\tn += 1 + l + sovAddress(uint64(l))\n\t}\n\treturn n\n}\nfunc (m *Address_Pipe) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Pipe != nil {\n\t\tl = m.Pipe.Size()\n\t\tn += 1 + l + sovAddress(uint64(l))\n\t}\n\treturn n\n}\nfunc (m *CidrRange) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.AddressPrefix)\n\tif l > 0 {\n\t\tn += 1 + l + sovAddress(uint64(l))\n\t}\n\tif m.PrefixLen != nil {\n\t\tl = m.PrefixLen.Size()\n\t\tn += 1 + l + sovAddress(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc sovAddress(x uint64) (n int) {\n\treturn (math_bits.Len64(x|1) + 6) / 7\n}\nfunc sozAddress(x uint64) (n int) {\n\treturn sovAddress(uint64((x << 1) ^ uint64((int64(x) >> 63))))\n}\nfunc (m *Pipe) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: Pipe: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn 
fmt.Errorf(\"proto: Pipe: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Path\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Path = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipAddress(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *SocketAddress) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 
{\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: SocketAddress: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: SocketAddress: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Protocol\", wireType)\n\t\t\t}\n\t\t\tm.Protocol = 0\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tm.Protocol |= SocketAddress_Protocol(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Address\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Address = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field PortValue\", wireType)\n\t\t\t}\n\t\t\tvar v uint32\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 
{\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tv |= uint32(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tm.PortSpecifier = &SocketAddress_PortValue{v}\n\t\tcase 4:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field NamedPort\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.PortSpecifier = &SocketAddress_NamedPort{string(dAtA[iNdEx:postIndex])}\n\t\t\tiNdEx = postIndex\n\t\tcase 5:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field ResolverName\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif 
postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.ResolverName = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 6:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Ipv4Compat\", wireType)\n\t\t\t}\n\t\t\tvar v int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tv |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tm.Ipv4Compat = bool(v != 0)\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipAddress(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *TcpKeepalive) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: TcpKeepalive: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: TcpKeepalive: illegal tag %d (wire type %d)\", fieldNum, 
wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field KeepaliveProbes\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.KeepaliveProbes == nil {\n\t\t\t\tm.KeepaliveProbes = &types.UInt32Value{}\n\t\t\t}\n\t\t\tif err := m.KeepaliveProbes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field KeepaliveTime\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.KeepaliveTime == nil {\n\t\t\t\tm.KeepaliveTime = &types.UInt32Value{}\n\t\t\t}\n\t\t\tif err := m.KeepaliveTime.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = 
postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field KeepaliveInterval\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.KeepaliveInterval == nil {\n\t\t\t\tm.KeepaliveInterval = &types.UInt32Value{}\n\t\t\t}\n\t\t\tif err := m.KeepaliveInterval.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipAddress(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *BindConfig) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 
{\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: BindConfig: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: BindConfig: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field SourceAddress\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.SourceAddress == nil {\n\t\t\t\tm.SourceAddress = &SocketAddress{}\n\t\t\t}\n\t\t\tif err := m.SourceAddress.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Freebind\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn 
ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Freebind == nil {\n\t\t\t\tm.Freebind = &types.BoolValue{}\n\t\t\t}\n\t\t\tif err := m.Freebind.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field SocketOptions\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.SocketOptions = append(m.SocketOptions, &SocketOption{})\n\t\t\tif err := m.SocketOptions[len(m.SocketOptions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipAddress(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *Address) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire 
uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: Address: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: Address: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field SocketAddress\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tv := &SocketAddress{}\n\t\t\tif err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tm.Address = &Address_SocketAddress{v}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Pipe\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= 
int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tv := &Pipe{}\n\t\t\tif err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tm.Address = &Address_Pipe{v}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipAddress(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *CidrRange) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: CidrRange: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: CidrRange: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field AddressPrefix\", 
wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.AddressPrefix = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field PrefixLen\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.PrefixLen == nil {\n\t\t\t\tm.PrefixLen = &types.UInt32Value{}\n\t\t\t}\n\t\t\tif err := m.PrefixLen.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipAddress(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn 
ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc skipAddress(dAtA []byte) (n int, err error) {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn 0, ErrIntOverflowAddress\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= (uint64(b) & 0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\twireType := int(wire & 0x7)\n\t\tswitch wireType {\n\t\tcase 0:\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tiNdEx++\n\t\t\t\tif dAtA[iNdEx-1] < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 1:\n\t\t\tiNdEx += 8\n\t\t\treturn iNdEx, nil\n\t\tcase 2:\n\t\t\tvar length int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowAddress\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tlength |= (int(b) & 0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif length < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthAddress\n\t\t\t}\n\t\t\tiNdEx += length\n\t\t\tif iNdEx < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthAddress\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 3:\n\t\t\tfor {\n\t\t\t\tvar innerWire uint64\n\t\t\t\tvar start int = iNdEx\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn 0, 
ErrIntOverflowAddress\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\tinnerWire |= (uint64(b) & 0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tinnerWireType := int(innerWire & 0x7)\n\t\t\t\tif innerWireType == 4 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tnext, err := skipAddress(dAtA[start:])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 0, err\n\t\t\t\t}\n\t\t\t\tiNdEx = start + next\n\t\t\t\tif iNdEx < 0 {\n\t\t\t\t\treturn 0, ErrInvalidLengthAddress\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 4:\n\t\t\treturn iNdEx, nil\n\t\tcase 5:\n\t\t\tiNdEx += 4\n\t\t\treturn iNdEx, nil\n\t\tdefault:\n\t\t\treturn 0, fmt.Errorf(\"proto: illegal wireType %d\", wireType)\n\t\t}\n\t}\n\tpanic(\"unreachable\")\n}\n\nvar (\n\tErrInvalidLengthAddress = fmt.Errorf(\"proto: negative length found during unmarshaling\")\n\tErrIntOverflowAddress   = fmt.Errorf(\"proto: integer overflow\")\n)\n"
  },
  {
    "path": "third_party/generated/envoyproxy/data-plane-api/envoy/api/v2/core/base.pb.go",
    "content": "// Code generated by protoc-gen-gogo. DO NOT EDIT.\n// source: envoy/api/v2/core/base.proto\n\npackage core\n\nimport (\n\tbytes \"bytes\"\n\tfmt \"fmt\"\n\t_ \"github.com/envoyproxy/protoc-gen-validate/validate\"\n\t_ \"github.com/gogo/protobuf/gogoproto\"\n\tproto \"github.com/gogo/protobuf/proto\"\n\tgithub_com_gogo_protobuf_sortkeys \"github.com/gogo/protobuf/sortkeys\"\n\ttypes \"github.com/gogo/protobuf/types\"\n\t_type \"go.aporeto.io/trireme-lib/third_party/generated/envoyproxy/data-plane-api/envoy/type\"\n\tio \"io\"\n\tmath \"math\"\n\tmath_bits \"math/bits\"\n)\n\n// Reference imports to suppress errors if they are not otherwise used.\nvar _ = proto.Marshal\nvar _ = fmt.Errorf\nvar _ = math.Inf\n\n// This is a compile-time assertion to ensure that this generated file\n// is compatible with the proto package it is being compiled against.\n// A compilation error at this line likely means your copy of the\n// proto package needs to be updated.\nconst _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package\n\n// Envoy supports :ref:`upstream priority routing\n// <arch_overview_http_routing_priority>` both at the route and the virtual\n// cluster level. The current priority implementation uses different connection\n// pool and circuit breaking settings for each priority level. This means that\n// even for HTTP/2 requests, two physical connections will be used to an\n// upstream host. 
In the future Envoy will likely support true HTTP/2 priority\n// over a single upstream connection.\ntype RoutingPriority int32\n\nconst (\n\tRoutingPriority_DEFAULT RoutingPriority = 0\n\tRoutingPriority_HIGH    RoutingPriority = 1\n)\n\nvar RoutingPriority_name = map[int32]string{\n\t0: \"DEFAULT\",\n\t1: \"HIGH\",\n}\n\nvar RoutingPriority_value = map[string]int32{\n\t\"DEFAULT\": 0,\n\t\"HIGH\":    1,\n}\n\nfunc (x RoutingPriority) String() string {\n\treturn proto.EnumName(RoutingPriority_name, int32(x))\n}\n\nfunc (RoutingPriority) EnumDescriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{0}\n}\n\n// HTTP request method.\ntype RequestMethod int32\n\nconst (\n\tMETHOD_UNSPECIFIED RequestMethod = 0\n\tGET                RequestMethod = 1\n\tHEAD               RequestMethod = 2\n\tPOST               RequestMethod = 3\n\tPUT                RequestMethod = 4\n\tDELETE             RequestMethod = 5\n\tCONNECT            RequestMethod = 6\n\tOPTIONS            RequestMethod = 7\n\tTRACE              RequestMethod = 8\n\tPATCH              RequestMethod = 9\n)\n\nvar RequestMethod_name = map[int32]string{\n\t0: \"METHOD_UNSPECIFIED\",\n\t1: \"GET\",\n\t2: \"HEAD\",\n\t3: \"POST\",\n\t4: \"PUT\",\n\t5: \"DELETE\",\n\t6: \"CONNECT\",\n\t7: \"OPTIONS\",\n\t8: \"TRACE\",\n\t9: \"PATCH\",\n}\n\nvar RequestMethod_value = map[string]int32{\n\t\"METHOD_UNSPECIFIED\": 0,\n\t\"GET\":                1,\n\t\"HEAD\":               2,\n\t\"POST\":               3,\n\t\"PUT\":                4,\n\t\"DELETE\":             5,\n\t\"CONNECT\":            6,\n\t\"OPTIONS\":            7,\n\t\"TRACE\":              8,\n\t\"PATCH\":              9,\n}\n\nfunc (x RequestMethod) String() string {\n\treturn proto.EnumName(RequestMethod_name, int32(x))\n}\n\nfunc (RequestMethod) EnumDescriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{1}\n}\n\n// Identifies the direction of the traffic relative to the local Envoy.\ntype TrafficDirection 
int32\n\nconst (\n\t// Default option is unspecified.\n\tTrafficDirection_UNSPECIFIED TrafficDirection = 0\n\t// The transport is used for incoming traffic.\n\tTrafficDirection_INBOUND TrafficDirection = 1\n\t// The transport is used for outgoing traffic.\n\tTrafficDirection_OUTBOUND TrafficDirection = 2\n)\n\nvar TrafficDirection_name = map[int32]string{\n\t0: \"UNSPECIFIED\",\n\t1: \"INBOUND\",\n\t2: \"OUTBOUND\",\n}\n\nvar TrafficDirection_value = map[string]int32{\n\t\"UNSPECIFIED\": 0,\n\t\"INBOUND\":     1,\n\t\"OUTBOUND\":    2,\n}\n\nfunc (x TrafficDirection) String() string {\n\treturn proto.EnumName(TrafficDirection_name, int32(x))\n}\n\nfunc (TrafficDirection) EnumDescriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{2}\n}\n\ntype SocketOption_SocketState int32\n\nconst (\n\t// Socket options are applied after socket creation but before binding the socket to a port\n\tSTATE_PREBIND SocketOption_SocketState = 0\n\t// Socket options are applied after binding the socket to a port but before calling listen()\n\tSTATE_BOUND SocketOption_SocketState = 1\n\t// Socket options are applied after calling listen()\n\tSTATE_LISTENING SocketOption_SocketState = 2\n)\n\nvar SocketOption_SocketState_name = map[int32]string{\n\t0: \"STATE_PREBIND\",\n\t1: \"STATE_BOUND\",\n\t2: \"STATE_LISTENING\",\n}\n\nvar SocketOption_SocketState_value = map[string]int32{\n\t\"STATE_PREBIND\":   0,\n\t\"STATE_BOUND\":     1,\n\t\"STATE_LISTENING\": 2,\n}\n\nfunc (x SocketOption_SocketState) String() string {\n\treturn proto.EnumName(SocketOption_SocketState_name, int32(x))\n}\n\nfunc (SocketOption_SocketState) EnumDescriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{11, 0}\n}\n\n// Identifies location of where either Envoy runs or where upstream hosts run.\ntype Locality struct {\n\t// Region this :ref:`zone <envoy_api_field_core.Locality.zone>` belongs to.\n\tRegion string `protobuf:\"bytes,1,opt,name=region,proto3\" 
json:\"region,omitempty\"`\n\t// Defines the local service zone where Envoy is running. Though optional, it\n\t// should be set if discovery service routing is used and the discovery\n\t// service exposes :ref:`zone data <envoy_api_field_endpoint.LocalityLbEndpoints.locality>`,\n\t// either in this message or via :option:`--service-zone`. The meaning of zone\n\t// is context dependent, e.g. `Availability Zone (AZ)\n\t// <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html>`_\n\t// on AWS, `Zone <https://cloud.google.com/compute/docs/regions-zones/>`_ on\n\t// GCP, etc.\n\tZone string `protobuf:\"bytes,2,opt,name=zone,proto3\" json:\"zone,omitempty\"`\n\t// When used for locality of upstream hosts, this field further splits zone\n\t// into smaller chunks of sub-zones so they can be load balanced\n\t// independently.\n\tSubZone              string   `protobuf:\"bytes,3,opt,name=sub_zone,json=subZone,proto3\" json:\"sub_zone,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *Locality) Reset()         { *m = Locality{} }\nfunc (m *Locality) String() string { return proto.CompactTextString(m) }\nfunc (*Locality) ProtoMessage()    {}\nfunc (*Locality) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{0}\n}\nfunc (m *Locality) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *Locality) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *Locality) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_Locality.Merge(m, src)\n}\nfunc (m *Locality) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *Locality) XXX_DiscardUnknown() {\n\txxx_messageInfo_Locality.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_Locality proto.InternalMessageInfo\n\nfunc (m 
*Locality) GetRegion() string {\n\tif m != nil {\n\t\treturn m.Region\n\t}\n\treturn \"\"\n}\n\nfunc (m *Locality) GetZone() string {\n\tif m != nil {\n\t\treturn m.Zone\n\t}\n\treturn \"\"\n}\n\nfunc (m *Locality) GetSubZone() string {\n\tif m != nil {\n\t\treturn m.SubZone\n\t}\n\treturn \"\"\n}\n\n// Identifies a specific Envoy instance. The node identifier is presented to the\n// management server, which may use this identifier to distinguish per Envoy\n// configuration for serving.\ntype Node struct {\n\t// An opaque node identifier for the Envoy node. This also provides the local\n\t// service node name. It should be set if any of the following features are\n\t// used: :ref:`statsd <arch_overview_statistics>`, :ref:`CDS\n\t// <config_cluster_manager_cds>`, and :ref:`HTTP tracing\n\t// <arch_overview_tracing>`, either in this message or via\n\t// :option:`--service-node`.\n\tId string `protobuf:\"bytes,1,opt,name=id,proto3\" json:\"id,omitempty\"`\n\t// Defines the local service cluster name where Envoy is running. Though\n\t// optional, it should be set if any of the following features are used:\n\t// :ref:`statsd <arch_overview_statistics>`, :ref:`health check cluster\n\t// verification <envoy_api_field_core.HealthCheck.HttpHealthCheck.service_name>`,\n\t// :ref:`runtime override directory <envoy_api_msg_config.bootstrap.v2.Runtime>`,\n\t// :ref:`user agent addition\n\t// <envoy_api_field_config.filter.network.http_connection_manager.v2.HttpConnectionManager.add_user_agent>`,\n\t// :ref:`HTTP global rate limiting <config_http_filters_rate_limit>`,\n\t// :ref:`CDS <config_cluster_manager_cds>`, and :ref:`HTTP tracing\n\t// <arch_overview_tracing>`, either in this message or via\n\t// :option:`--service-cluster`.\n\tCluster string `protobuf:\"bytes,2,opt,name=cluster,proto3\" json:\"cluster,omitempty\"`\n\t// Opaque metadata extending the node identifier. 
Envoy will pass this\n\t// directly to the management server.\n\tMetadata *types.Struct `protobuf:\"bytes,3,opt,name=metadata,proto3\" json:\"metadata,omitempty\"`\n\t// Locality specifying where the Envoy instance is running.\n\tLocality *Locality `protobuf:\"bytes,4,opt,name=locality,proto3\" json:\"locality,omitempty\"`\n\t// This is motivated by informing a management server during canary which\n\t// version of Envoy is being tested in a heterogeneous fleet. This will be set\n\t// by Envoy in management server RPCs.\n\tBuildVersion         string   `protobuf:\"bytes,5,opt,name=build_version,json=buildVersion,proto3\" json:\"build_version,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *Node) Reset()         { *m = Node{} }\nfunc (m *Node) String() string { return proto.CompactTextString(m) }\nfunc (*Node) ProtoMessage()    {}\nfunc (*Node) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{1}\n}\nfunc (m *Node) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *Node) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *Node) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_Node.Merge(m, src)\n}\nfunc (m *Node) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *Node) XXX_DiscardUnknown() {\n\txxx_messageInfo_Node.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_Node proto.InternalMessageInfo\n\nfunc (m *Node) GetId() string {\n\tif m != nil {\n\t\treturn m.Id\n\t}\n\treturn \"\"\n}\n\nfunc (m *Node) GetCluster() string {\n\tif m != nil {\n\t\treturn m.Cluster\n\t}\n\treturn \"\"\n}\n\nfunc (m *Node) GetMetadata() *types.Struct {\n\tif m != nil {\n\t\treturn m.Metadata\n\t}\n\treturn nil\n}\n\nfunc (m *Node) GetLocality() *Locality {\n\tif m != nil {\n\t\treturn 
m.Locality\n\t}\n\treturn nil\n}\n\nfunc (m *Node) GetBuildVersion() string {\n\tif m != nil {\n\t\treturn m.BuildVersion\n\t}\n\treturn \"\"\n}\n\n// Metadata provides additional inputs to filters based on matched listeners,\n// filter chains, routes and endpoints. It is structured as a map, usually from\n// filter name (in reverse DNS format) to metadata specific to the filter. Metadata\n// key-values for a filter are merged as connection and request handling occurs,\n// with later values for the same key overriding earlier values.\n//\n// An example use of metadata is providing additional values to\n// http_connection_manager in the envoy.http_connection_manager.access_log\n// namespace.\n//\n// Another example use of metadata is to per service config info in cluster metadata, which may get\n// consumed by multiple filters.\n//\n// For load balancing, Metadata provides a means to subset cluster endpoints.\n// Endpoints have a Metadata object associated and routes contain a Metadata\n// object to match against. There are some well defined metadata used today for\n// this purpose:\n//\n// * ``{\"envoy.lb\": {\"canary\": <bool> }}`` This indicates the canary status of an\n//   endpoint and is also used during header processing\n//   (x-envoy-upstream-canary) and for stats purposes.\ntype Metadata struct {\n\t// Key is the reverse DNS filter name, e.g. com.acme.widget. 
The envoy.*\n\t// namespace is reserved for Envoy's built-in filters.\n\tFilterMetadata       map[string]*types.Struct `protobuf:\"bytes,1,rep,name=filter_metadata,json=filterMetadata,proto3\" json:\"filter_metadata,omitempty\" protobuf_key:\"bytes,1,opt,name=key,proto3\" protobuf_val:\"bytes,2,opt,name=value,proto3\"`\n\tXXX_NoUnkeyedLiteral struct{}                 `json:\"-\"`\n\tXXX_unrecognized     []byte                   `json:\"-\"`\n\tXXX_sizecache        int32                    `json:\"-\"`\n}\n\nfunc (m *Metadata) Reset()         { *m = Metadata{} }\nfunc (m *Metadata) String() string { return proto.CompactTextString(m) }\nfunc (*Metadata) ProtoMessage()    {}\nfunc (*Metadata) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{2}\n}\nfunc (m *Metadata) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *Metadata) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *Metadata) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_Metadata.Merge(m, src)\n}\nfunc (m *Metadata) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *Metadata) XXX_DiscardUnknown() {\n\txxx_messageInfo_Metadata.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_Metadata proto.InternalMessageInfo\n\nfunc (m *Metadata) GetFilterMetadata() map[string]*types.Struct {\n\tif m != nil {\n\t\treturn m.FilterMetadata\n\t}\n\treturn nil\n}\n\n// Runtime derived uint32 with a default when not specified.\ntype RuntimeUInt32 struct {\n\t// Default value if runtime value is not available.\n\tDefaultValue uint32 `protobuf:\"varint,2,opt,name=default_value,json=defaultValue,proto3\" json:\"default_value,omitempty\"`\n\t// Runtime key to get value for comparison. 
This value is used if defined.\n\tRuntimeKey           string   `protobuf:\"bytes,3,opt,name=runtime_key,json=runtimeKey,proto3\" json:\"runtime_key,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *RuntimeUInt32) Reset()         { *m = RuntimeUInt32{} }\nfunc (m *RuntimeUInt32) String() string { return proto.CompactTextString(m) }\nfunc (*RuntimeUInt32) ProtoMessage()    {}\nfunc (*RuntimeUInt32) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{3}\n}\nfunc (m *RuntimeUInt32) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *RuntimeUInt32) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *RuntimeUInt32) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_RuntimeUInt32.Merge(m, src)\n}\nfunc (m *RuntimeUInt32) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *RuntimeUInt32) XXX_DiscardUnknown() {\n\txxx_messageInfo_RuntimeUInt32.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_RuntimeUInt32 proto.InternalMessageInfo\n\nfunc (m *RuntimeUInt32) GetDefaultValue() uint32 {\n\tif m != nil {\n\t\treturn m.DefaultValue\n\t}\n\treturn 0\n}\n\nfunc (m *RuntimeUInt32) GetRuntimeKey() string {\n\tif m != nil {\n\t\treturn m.RuntimeKey\n\t}\n\treturn \"\"\n}\n\n// Header name/value pair.\ntype HeaderValue struct {\n\t// Header name.\n\tKey string `protobuf:\"bytes,1,opt,name=key,proto3\" json:\"key,omitempty\"`\n\t// Header value.\n\t//\n\t// The same :ref:`format specifier <config_access_log_format>` as used for\n\t// :ref:`HTTP access logging <config_access_log>` applies here, however\n\t// unknown header values are replaced with the empty string instead of `-`.\n\tValue                string   `protobuf:\"bytes,2,opt,name=value,proto3\" json:\"value,omitempty\"`\n\tXXX_NoUnkeyedLiteral 
struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *HeaderValue) Reset()         { *m = HeaderValue{} }\nfunc (m *HeaderValue) String() string { return proto.CompactTextString(m) }\nfunc (*HeaderValue) ProtoMessage()    {}\nfunc (*HeaderValue) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{4}\n}\nfunc (m *HeaderValue) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *HeaderValue) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *HeaderValue) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_HeaderValue.Merge(m, src)\n}\nfunc (m *HeaderValue) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *HeaderValue) XXX_DiscardUnknown() {\n\txxx_messageInfo_HeaderValue.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_HeaderValue proto.InternalMessageInfo\n\nfunc (m *HeaderValue) GetKey() string {\n\tif m != nil {\n\t\treturn m.Key\n\t}\n\treturn \"\"\n}\n\nfunc (m *HeaderValue) GetValue() string {\n\tif m != nil {\n\t\treturn m.Value\n\t}\n\treturn \"\"\n}\n\n// Header name/value pair plus option to control append behavior.\ntype HeaderValueOption struct {\n\t// Header name/value pair that this option applies to.\n\tHeader *HeaderValue `protobuf:\"bytes,1,opt,name=header,proto3\" json:\"header,omitempty\"`\n\t// Should the value be appended? 
If true (default), the value is appended to\n\t// existing values.\n\tAppend               *types.BoolValue `protobuf:\"bytes,2,opt,name=append,proto3\" json:\"append,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}         `json:\"-\"`\n\tXXX_unrecognized     []byte           `json:\"-\"`\n\tXXX_sizecache        int32            `json:\"-\"`\n}\n\nfunc (m *HeaderValueOption) Reset()         { *m = HeaderValueOption{} }\nfunc (m *HeaderValueOption) String() string { return proto.CompactTextString(m) }\nfunc (*HeaderValueOption) ProtoMessage()    {}\nfunc (*HeaderValueOption) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{5}\n}\nfunc (m *HeaderValueOption) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *HeaderValueOption) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *HeaderValueOption) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_HeaderValueOption.Merge(m, src)\n}\nfunc (m *HeaderValueOption) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *HeaderValueOption) XXX_DiscardUnknown() {\n\txxx_messageInfo_HeaderValueOption.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_HeaderValueOption proto.InternalMessageInfo\n\nfunc (m *HeaderValueOption) GetHeader() *HeaderValue {\n\tif m != nil {\n\t\treturn m.Header\n\t}\n\treturn nil\n}\n\nfunc (m *HeaderValueOption) GetAppend() *types.BoolValue {\n\tif m != nil {\n\t\treturn m.Append\n\t}\n\treturn nil\n}\n\n// Wrapper for a set of headers.\ntype HeaderMap struct {\n\tHeaders              []*HeaderValue `protobuf:\"bytes,1,rep,name=headers,proto3\" json:\"headers,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}       `json:\"-\"`\n\tXXX_unrecognized     []byte         `json:\"-\"`\n\tXXX_sizecache        int32          `json:\"-\"`\n}\n\nfunc (m *HeaderMap) Reset()         { *m = HeaderMap{} }\nfunc (m *HeaderMap) String() string { return 
proto.CompactTextString(m) }\nfunc (*HeaderMap) ProtoMessage()    {}\nfunc (*HeaderMap) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{6}\n}\nfunc (m *HeaderMap) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *HeaderMap) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *HeaderMap) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_HeaderMap.Merge(m, src)\n}\nfunc (m *HeaderMap) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *HeaderMap) XXX_DiscardUnknown() {\n\txxx_messageInfo_HeaderMap.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_HeaderMap proto.InternalMessageInfo\n\nfunc (m *HeaderMap) GetHeaders() []*HeaderValue {\n\tif m != nil {\n\t\treturn m.Headers\n\t}\n\treturn nil\n}\n\n// Data source consisting of either a file or an inline value.\ntype DataSource struct {\n\t// Types that are valid to be assigned to Specifier:\n\t//\t*DataSource_Filename\n\t//\t*DataSource_InlineBytes\n\t//\t*DataSource_InlineString\n\tSpecifier            isDataSource_Specifier `protobuf_oneof:\"specifier\"`\n\tXXX_NoUnkeyedLiteral struct{}               `json:\"-\"`\n\tXXX_unrecognized     []byte                 `json:\"-\"`\n\tXXX_sizecache        int32                  `json:\"-\"`\n}\n\nfunc (m *DataSource) Reset()         { *m = DataSource{} }\nfunc (m *DataSource) String() string { return proto.CompactTextString(m) }\nfunc (*DataSource) ProtoMessage()    {}\nfunc (*DataSource) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{7}\n}\nfunc (m *DataSource) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *DataSource) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *DataSource) XXX_Merge(src proto.Message) 
{\n\txxx_messageInfo_DataSource.Merge(m, src)\n}\nfunc (m *DataSource) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *DataSource) XXX_DiscardUnknown() {\n\txxx_messageInfo_DataSource.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_DataSource proto.InternalMessageInfo\n\ntype isDataSource_Specifier interface {\n\tisDataSource_Specifier()\n\tEqual(interface{}) bool\n\tMarshalTo([]byte) (int, error)\n\tSize() int\n}\n\ntype DataSource_Filename struct {\n\tFilename string `protobuf:\"bytes,1,opt,name=filename,proto3,oneof\"`\n}\ntype DataSource_InlineBytes struct {\n\tInlineBytes []byte `protobuf:\"bytes,2,opt,name=inline_bytes,json=inlineBytes,proto3,oneof\"`\n}\ntype DataSource_InlineString struct {\n\tInlineString string `protobuf:\"bytes,3,opt,name=inline_string,json=inlineString,proto3,oneof\"`\n}\n\nfunc (*DataSource_Filename) isDataSource_Specifier()     {}\nfunc (*DataSource_InlineBytes) isDataSource_Specifier()  {}\nfunc (*DataSource_InlineString) isDataSource_Specifier() {}\n\nfunc (m *DataSource) GetSpecifier() isDataSource_Specifier {\n\tif m != nil {\n\t\treturn m.Specifier\n\t}\n\treturn nil\n}\n\nfunc (m *DataSource) GetFilename() string {\n\tif x, ok := m.GetSpecifier().(*DataSource_Filename); ok {\n\t\treturn x.Filename\n\t}\n\treturn \"\"\n}\n\nfunc (m *DataSource) GetInlineBytes() []byte {\n\tif x, ok := m.GetSpecifier().(*DataSource_InlineBytes); ok {\n\t\treturn x.InlineBytes\n\t}\n\treturn nil\n}\n\nfunc (m *DataSource) GetInlineString() string {\n\tif x, ok := m.GetSpecifier().(*DataSource_InlineString); ok {\n\t\treturn x.InlineString\n\t}\n\treturn \"\"\n}\n\n// XXX_OneofFuncs is for the internal use of the proto package.\nfunc (*DataSource) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {\n\treturn _DataSource_OneofMarshaler, _DataSource_OneofUnmarshaler, _DataSource_OneofSizer, 
[]interface{}{\n\t\t(*DataSource_Filename)(nil),\n\t\t(*DataSource_InlineBytes)(nil),\n\t\t(*DataSource_InlineString)(nil),\n\t}\n}\n\nfunc _DataSource_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {\n\tm := msg.(*DataSource)\n\t// specifier\n\tswitch x := m.Specifier.(type) {\n\tcase *DataSource_Filename:\n\t\t_ = b.EncodeVarint(1<<3 | proto.WireBytes)\n\t\t_ = b.EncodeStringBytes(x.Filename)\n\tcase *DataSource_InlineBytes:\n\t\t_ = b.EncodeVarint(2<<3 | proto.WireBytes)\n\t\t_ = b.EncodeRawBytes(x.InlineBytes)\n\tcase *DataSource_InlineString:\n\t\t_ = b.EncodeVarint(3<<3 | proto.WireBytes)\n\t\t_ = b.EncodeStringBytes(x.InlineString)\n\tcase nil:\n\tdefault:\n\t\treturn fmt.Errorf(\"DataSource.Specifier has unexpected type %T\", x)\n\t}\n\treturn nil\n}\n\nfunc _DataSource_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {\n\tm := msg.(*DataSource)\n\tswitch tag {\n\tcase 1: // specifier.filename\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tx, err := b.DecodeStringBytes()\n\t\tm.Specifier = &DataSource_Filename{x}\n\t\treturn true, err\n\tcase 2: // specifier.inline_bytes\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tx, err := b.DecodeRawBytes(true)\n\t\tm.Specifier = &DataSource_InlineBytes{x}\n\t\treturn true, err\n\tcase 3: // specifier.inline_string\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tx, err := b.DecodeStringBytes()\n\t\tm.Specifier = &DataSource_InlineString{x}\n\t\treturn true, err\n\tdefault:\n\t\treturn false, nil\n\t}\n}\n\nfunc _DataSource_OneofSizer(msg proto.Message) (n int) {\n\tm := msg.(*DataSource)\n\t// specifier\n\tswitch x := m.Specifier.(type) {\n\tcase *DataSource_Filename:\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(len(x.Filename)))\n\t\tn += len(x.Filename)\n\tcase *DataSource_InlineBytes:\n\t\tn += 1 // tag and 
wire\n\t\tn += proto.SizeVarint(uint64(len(x.InlineBytes)))\n\t\tn += len(x.InlineBytes)\n\tcase *DataSource_InlineString:\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(len(x.InlineString)))\n\t\tn += len(x.InlineString)\n\tcase nil:\n\tdefault:\n\t\tpanic(fmt.Sprintf(\"proto: unexpected type %T in oneof\", x))\n\t}\n\treturn n\n}\n\n// The message specifies how to fetch data from remote and how to verify it.\ntype RemoteDataSource struct {\n\t// The HTTP URI to fetch the remote data.\n\tHttpUri *HttpUri `protobuf:\"bytes,1,opt,name=http_uri,json=httpUri,proto3\" json:\"http_uri,omitempty\"`\n\t// SHA256 string for verifying data.\n\tSha256               string   `protobuf:\"bytes,2,opt,name=sha256,proto3\" json:\"sha256,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *RemoteDataSource) Reset()         { *m = RemoteDataSource{} }\nfunc (m *RemoteDataSource) String() string { return proto.CompactTextString(m) }\nfunc (*RemoteDataSource) ProtoMessage()    {}\nfunc (*RemoteDataSource) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{8}\n}\nfunc (m *RemoteDataSource) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *RemoteDataSource) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *RemoteDataSource) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_RemoteDataSource.Merge(m, src)\n}\nfunc (m *RemoteDataSource) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *RemoteDataSource) XXX_DiscardUnknown() {\n\txxx_messageInfo_RemoteDataSource.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_RemoteDataSource proto.InternalMessageInfo\n\nfunc (m *RemoteDataSource) GetHttpUri() *HttpUri {\n\tif m != nil {\n\t\treturn m.HttpUri\n\t}\n\treturn nil\n}\n\nfunc (m 
*RemoteDataSource) GetSha256() string {\n\tif m != nil {\n\t\treturn m.Sha256\n\t}\n\treturn \"\"\n}\n\n// Async data source which support async data fetch.\ntype AsyncDataSource struct {\n\t// Types that are valid to be assigned to Specifier:\n\t//\t*AsyncDataSource_Local\n\t//\t*AsyncDataSource_Remote\n\tSpecifier            isAsyncDataSource_Specifier `protobuf_oneof:\"specifier\"`\n\tXXX_NoUnkeyedLiteral struct{}                    `json:\"-\"`\n\tXXX_unrecognized     []byte                      `json:\"-\"`\n\tXXX_sizecache        int32                       `json:\"-\"`\n}\n\nfunc (m *AsyncDataSource) Reset()         { *m = AsyncDataSource{} }\nfunc (m *AsyncDataSource) String() string { return proto.CompactTextString(m) }\nfunc (*AsyncDataSource) ProtoMessage()    {}\nfunc (*AsyncDataSource) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{9}\n}\nfunc (m *AsyncDataSource) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *AsyncDataSource) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *AsyncDataSource) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_AsyncDataSource.Merge(m, src)\n}\nfunc (m *AsyncDataSource) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *AsyncDataSource) XXX_DiscardUnknown() {\n\txxx_messageInfo_AsyncDataSource.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_AsyncDataSource proto.InternalMessageInfo\n\ntype isAsyncDataSource_Specifier interface {\n\tisAsyncDataSource_Specifier()\n\tEqual(interface{}) bool\n\tMarshalTo([]byte) (int, error)\n\tSize() int\n}\n\ntype AsyncDataSource_Local struct {\n\tLocal *DataSource `protobuf:\"bytes,1,opt,name=local,proto3,oneof\"`\n}\ntype AsyncDataSource_Remote struct {\n\tRemote *RemoteDataSource `protobuf:\"bytes,2,opt,name=remote,proto3,oneof\"`\n}\n\nfunc (*AsyncDataSource_Local) isAsyncDataSource_Specifier()  {}\nfunc 
(*AsyncDataSource_Remote) isAsyncDataSource_Specifier() {}\n\nfunc (m *AsyncDataSource) GetSpecifier() isAsyncDataSource_Specifier {\n\tif m != nil {\n\t\treturn m.Specifier\n\t}\n\treturn nil\n}\n\nfunc (m *AsyncDataSource) GetLocal() *DataSource {\n\tif x, ok := m.GetSpecifier().(*AsyncDataSource_Local); ok {\n\t\treturn x.Local\n\t}\n\treturn nil\n}\n\nfunc (m *AsyncDataSource) GetRemote() *RemoteDataSource {\n\tif x, ok := m.GetSpecifier().(*AsyncDataSource_Remote); ok {\n\t\treturn x.Remote\n\t}\n\treturn nil\n}\n\n// XXX_OneofFuncs is for the internal use of the proto package.\nfunc (*AsyncDataSource) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {\n\treturn _AsyncDataSource_OneofMarshaler, _AsyncDataSource_OneofUnmarshaler, _AsyncDataSource_OneofSizer, []interface{}{\n\t\t(*AsyncDataSource_Local)(nil),\n\t\t(*AsyncDataSource_Remote)(nil),\n\t}\n}\n\nfunc _AsyncDataSource_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {\n\tm := msg.(*AsyncDataSource)\n\t// specifier\n\tswitch x := m.Specifier.(type) {\n\tcase *AsyncDataSource_Local:\n\t\t_ = b.EncodeVarint(1<<3 | proto.WireBytes)\n\t\tif err := b.EncodeMessage(x.Local); err != nil {\n\t\t\treturn err\n\t\t}\n\tcase *AsyncDataSource_Remote:\n\t\t_ = b.EncodeVarint(2<<3 | proto.WireBytes)\n\t\tif err := b.EncodeMessage(x.Remote); err != nil {\n\t\t\treturn err\n\t\t}\n\tcase nil:\n\tdefault:\n\t\treturn fmt.Errorf(\"AsyncDataSource.Specifier has unexpected type %T\", x)\n\t}\n\treturn nil\n}\n\nfunc _AsyncDataSource_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {\n\tm := msg.(*AsyncDataSource)\n\tswitch tag {\n\tcase 1: // specifier.local\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tmsg := new(DataSource)\n\t\terr := b.DecodeMessage(msg)\n\t\tm.Specifier = 
&AsyncDataSource_Local{msg}\n\t\treturn true, err\n\tcase 2: // specifier.remote\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tmsg := new(RemoteDataSource)\n\t\terr := b.DecodeMessage(msg)\n\t\tm.Specifier = &AsyncDataSource_Remote{msg}\n\t\treturn true, err\n\tdefault:\n\t\treturn false, nil\n\t}\n}\n\nfunc _AsyncDataSource_OneofSizer(msg proto.Message) (n int) {\n\tm := msg.(*AsyncDataSource)\n\t// specifier\n\tswitch x := m.Specifier.(type) {\n\tcase *AsyncDataSource_Local:\n\t\ts := proto.Size(x.Local)\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(s))\n\t\tn += s\n\tcase *AsyncDataSource_Remote:\n\t\ts := proto.Size(x.Remote)\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(s))\n\t\tn += s\n\tcase nil:\n\tdefault:\n\t\tpanic(fmt.Sprintf(\"proto: unexpected type %T in oneof\", x))\n\t}\n\treturn n\n}\n\n// Configuration for transport socket in :ref:`listeners <config_listeners>` and\n// :ref:`clusters <envoy_api_msg_Cluster>`. If the configuration is\n// empty, a default transport socket implementation and configuration will be\n// chosen based on the platform and existence of tls_context.\ntype TransportSocket struct {\n\t// The name of the transport socket to instantiate. 
The name must match a supported transport\n\t// socket implementation.\n\tName string `protobuf:\"bytes,1,opt,name=name,proto3\" json:\"name,omitempty\"`\n\t// Implementation specific configuration which depends on the implementation being instantiated.\n\t// See the supported transport socket implementations for further documentation.\n\t//\n\t// Types that are valid to be assigned to ConfigType:\n\t//\t*TransportSocket_Config\n\t//\t*TransportSocket_TypedConfig\n\tConfigType           isTransportSocket_ConfigType `protobuf_oneof:\"config_type\"`\n\tXXX_NoUnkeyedLiteral struct{}                     `json:\"-\"`\n\tXXX_unrecognized     []byte                       `json:\"-\"`\n\tXXX_sizecache        int32                        `json:\"-\"`\n}\n\nfunc (m *TransportSocket) Reset()         { *m = TransportSocket{} }\nfunc (m *TransportSocket) String() string { return proto.CompactTextString(m) }\nfunc (*TransportSocket) ProtoMessage()    {}\nfunc (*TransportSocket) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{10}\n}\nfunc (m *TransportSocket) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *TransportSocket) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *TransportSocket) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_TransportSocket.Merge(m, src)\n}\nfunc (m *TransportSocket) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *TransportSocket) XXX_DiscardUnknown() {\n\txxx_messageInfo_TransportSocket.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_TransportSocket proto.InternalMessageInfo\n\ntype isTransportSocket_ConfigType interface {\n\tisTransportSocket_ConfigType()\n\tEqual(interface{}) bool\n\tMarshalTo([]byte) (int, error)\n\tSize() int\n}\n\ntype TransportSocket_Config struct {\n\tConfig *types.Struct `protobuf:\"bytes,2,opt,name=config,proto3,oneof\"`\n}\ntype 
TransportSocket_TypedConfig struct {\n\tTypedConfig *types.Any `protobuf:\"bytes,3,opt,name=typed_config,json=typedConfig,proto3,oneof\"`\n}\n\nfunc (*TransportSocket_Config) isTransportSocket_ConfigType()      {}\nfunc (*TransportSocket_TypedConfig) isTransportSocket_ConfigType() {}\n\nfunc (m *TransportSocket) GetConfigType() isTransportSocket_ConfigType {\n\tif m != nil {\n\t\treturn m.ConfigType\n\t}\n\treturn nil\n}\n\nfunc (m *TransportSocket) GetName() string {\n\tif m != nil {\n\t\treturn m.Name\n\t}\n\treturn \"\"\n}\n\nfunc (m *TransportSocket) GetConfig() *types.Struct {\n\tif x, ok := m.GetConfigType().(*TransportSocket_Config); ok {\n\t\treturn x.Config\n\t}\n\treturn nil\n}\n\nfunc (m *TransportSocket) GetTypedConfig() *types.Any {\n\tif x, ok := m.GetConfigType().(*TransportSocket_TypedConfig); ok {\n\t\treturn x.TypedConfig\n\t}\n\treturn nil\n}\n\n// XXX_OneofFuncs is for the internal use of the proto package.\nfunc (*TransportSocket) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {\n\treturn _TransportSocket_OneofMarshaler, _TransportSocket_OneofUnmarshaler, _TransportSocket_OneofSizer, []interface{}{\n\t\t(*TransportSocket_Config)(nil),\n\t\t(*TransportSocket_TypedConfig)(nil),\n\t}\n}\n\nfunc _TransportSocket_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {\n\tm := msg.(*TransportSocket)\n\t// config_type\n\tswitch x := m.ConfigType.(type) {\n\tcase *TransportSocket_Config:\n\t\t_ = b.EncodeVarint(2<<3 | proto.WireBytes)\n\t\tif err := b.EncodeMessage(x.Config); err != nil {\n\t\t\treturn err\n\t\t}\n\tcase *TransportSocket_TypedConfig:\n\t\t_ = b.EncodeVarint(3<<3 | proto.WireBytes)\n\t\tif err := b.EncodeMessage(x.TypedConfig); err != nil {\n\t\t\treturn err\n\t\t}\n\tcase nil:\n\tdefault:\n\t\treturn fmt.Errorf(\"TransportSocket.ConfigType has unexpected type %T\", x)\n\t}\n\treturn nil\n}\n\nfunc 
_TransportSocket_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {\n\tm := msg.(*TransportSocket)\n\tswitch tag {\n\tcase 2: // config_type.config\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tmsg := new(types.Struct)\n\t\terr := b.DecodeMessage(msg)\n\t\tm.ConfigType = &TransportSocket_Config{msg}\n\t\treturn true, err\n\tcase 3: // config_type.typed_config\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tmsg := new(types.Any)\n\t\terr := b.DecodeMessage(msg)\n\t\tm.ConfigType = &TransportSocket_TypedConfig{msg}\n\t\treturn true, err\n\tdefault:\n\t\treturn false, nil\n\t}\n}\n\nfunc _TransportSocket_OneofSizer(msg proto.Message) (n int) {\n\tm := msg.(*TransportSocket)\n\t// config_type\n\tswitch x := m.ConfigType.(type) {\n\tcase *TransportSocket_Config:\n\t\ts := proto.Size(x.Config)\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(s))\n\t\tn += s\n\tcase *TransportSocket_TypedConfig:\n\t\ts := proto.Size(x.TypedConfig)\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(s))\n\t\tn += s\n\tcase nil:\n\tdefault:\n\t\tpanic(fmt.Sprintf(\"proto: unexpected type %T in oneof\", x))\n\t}\n\treturn n\n}\n\n// Generic socket option message. 
This would be used to set socket options that\n// might not exist in upstream kernels or precompiled Envoy binaries.\ntype SocketOption struct {\n\t// An optional name to give this socket option for debugging, etc.\n\t// Uniqueness is not required and no special meaning is assumed.\n\tDescription string `protobuf:\"bytes,1,opt,name=description,proto3\" json:\"description,omitempty\"`\n\t// Corresponding to the level value passed to setsockopt, such as IPPROTO_TCP\n\tLevel int64 `protobuf:\"varint,2,opt,name=level,proto3\" json:\"level,omitempty\"`\n\t// The numeric name as passed to setsockopt\n\tName int64 `protobuf:\"varint,3,opt,name=name,proto3\" json:\"name,omitempty\"`\n\t// Types that are valid to be assigned to Value:\n\t//\t*SocketOption_IntValue\n\t//\t*SocketOption_BufValue\n\tValue isSocketOption_Value `protobuf_oneof:\"value\"`\n\t// The state in which the option will be applied. When used in BindConfig\n\t// STATE_PREBIND is currently the only valid value.\n\tState                SocketOption_SocketState `protobuf:\"varint,6,opt,name=state,proto3,enum=envoy.api.v2.core.SocketOption_SocketState\" json:\"state,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}                 `json:\"-\"`\n\tXXX_unrecognized     []byte                   `json:\"-\"`\n\tXXX_sizecache        int32                    `json:\"-\"`\n}\n\nfunc (m *SocketOption) Reset()         { *m = SocketOption{} }\nfunc (m *SocketOption) String() string { return proto.CompactTextString(m) }\nfunc (*SocketOption) ProtoMessage()    {}\nfunc (*SocketOption) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{11}\n}\nfunc (m *SocketOption) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *SocketOption) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *SocketOption) XXX_Merge(src proto.Message) 
{\n\txxx_messageInfo_SocketOption.Merge(m, src)\n}\nfunc (m *SocketOption) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *SocketOption) XXX_DiscardUnknown() {\n\txxx_messageInfo_SocketOption.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_SocketOption proto.InternalMessageInfo\n\ntype isSocketOption_Value interface {\n\tisSocketOption_Value()\n\tEqual(interface{}) bool\n\tMarshalTo([]byte) (int, error)\n\tSize() int\n}\n\ntype SocketOption_IntValue struct {\n\tIntValue int64 `protobuf:\"varint,4,opt,name=int_value,json=intValue,proto3,oneof\"`\n}\ntype SocketOption_BufValue struct {\n\tBufValue []byte `protobuf:\"bytes,5,opt,name=buf_value,json=bufValue,proto3,oneof\"`\n}\n\nfunc (*SocketOption_IntValue) isSocketOption_Value() {}\nfunc (*SocketOption_BufValue) isSocketOption_Value() {}\n\nfunc (m *SocketOption) GetValue() isSocketOption_Value {\n\tif m != nil {\n\t\treturn m.Value\n\t}\n\treturn nil\n}\n\nfunc (m *SocketOption) GetDescription() string {\n\tif m != nil {\n\t\treturn m.Description\n\t}\n\treturn \"\"\n}\n\nfunc (m *SocketOption) GetLevel() int64 {\n\tif m != nil {\n\t\treturn m.Level\n\t}\n\treturn 0\n}\n\nfunc (m *SocketOption) GetName() int64 {\n\tif m != nil {\n\t\treturn m.Name\n\t}\n\treturn 0\n}\n\nfunc (m *SocketOption) GetIntValue() int64 {\n\tif x, ok := m.GetValue().(*SocketOption_IntValue); ok {\n\t\treturn x.IntValue\n\t}\n\treturn 0\n}\n\nfunc (m *SocketOption) GetBufValue() []byte {\n\tif x, ok := m.GetValue().(*SocketOption_BufValue); ok {\n\t\treturn x.BufValue\n\t}\n\treturn nil\n}\n\nfunc (m *SocketOption) GetState() SocketOption_SocketState {\n\tif m != nil {\n\t\treturn m.State\n\t}\n\treturn STATE_PREBIND\n}\n\n// XXX_OneofFuncs is for the internal use of the proto package.\nfunc (*SocketOption) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {\n\treturn _SocketOption_OneofMarshaler, 
_SocketOption_OneofUnmarshaler, _SocketOption_OneofSizer, []interface{}{\n\t\t(*SocketOption_IntValue)(nil),\n\t\t(*SocketOption_BufValue)(nil),\n\t}\n}\n\nfunc _SocketOption_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {\n\tm := msg.(*SocketOption)\n\t// value\n\tswitch x := m.Value.(type) {\n\tcase *SocketOption_IntValue:\n\t\t_ = b.EncodeVarint(4<<3 | proto.WireVarint)\n\t\t_ = b.EncodeVarint(uint64(x.IntValue))\n\tcase *SocketOption_BufValue:\n\t\t_ = b.EncodeVarint(5<<3 | proto.WireBytes)\n\t\t_ = b.EncodeRawBytes(x.BufValue)\n\tcase nil:\n\tdefault:\n\t\treturn fmt.Errorf(\"SocketOption.Value has unexpected type %T\", x)\n\t}\n\treturn nil\n}\n\nfunc _SocketOption_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {\n\tm := msg.(*SocketOption)\n\tswitch tag {\n\tcase 4: // value.int_value\n\t\tif wire != proto.WireVarint {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tx, err := b.DecodeVarint()\n\t\tm.Value = &SocketOption_IntValue{int64(x)}\n\t\treturn true, err\n\tcase 5: // value.buf_value\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tx, err := b.DecodeRawBytes(true)\n\t\tm.Value = &SocketOption_BufValue{x}\n\t\treturn true, err\n\tdefault:\n\t\treturn false, nil\n\t}\n}\n\nfunc _SocketOption_OneofSizer(msg proto.Message) (n int) {\n\tm := msg.(*SocketOption)\n\t// value\n\tswitch x := m.Value.(type) {\n\tcase *SocketOption_IntValue:\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(x.IntValue))\n\tcase *SocketOption_BufValue:\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(len(x.BufValue)))\n\t\tn += len(x.BufValue)\n\tcase nil:\n\tdefault:\n\t\tpanic(fmt.Sprintf(\"proto: unexpected type %T in oneof\", x))\n\t}\n\treturn n\n}\n\n// Runtime derived FractionalPercent with defaults for when the numerator or denominator is not\n// specified via a runtime key.\ntype RuntimeFractionalPercent struct {\n\t// Default value if the 
runtime values for the numerator/denominator keys are not available.\n\tDefaultValue *_type.FractionalPercent `protobuf:\"bytes,1,opt,name=default_value,json=defaultValue,proto3\" json:\"default_value,omitempty\"`\n\t// Runtime key for a YAML representation of a FractionalPercent.\n\tRuntimeKey           string   `protobuf:\"bytes,2,opt,name=runtime_key,json=runtimeKey,proto3\" json:\"runtime_key,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *RuntimeFractionalPercent) Reset()         { *m = RuntimeFractionalPercent{} }\nfunc (m *RuntimeFractionalPercent) String() string { return proto.CompactTextString(m) }\nfunc (*RuntimeFractionalPercent) ProtoMessage()    {}\nfunc (*RuntimeFractionalPercent) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{12}\n}\nfunc (m *RuntimeFractionalPercent) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *RuntimeFractionalPercent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *RuntimeFractionalPercent) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_RuntimeFractionalPercent.Merge(m, src)\n}\nfunc (m *RuntimeFractionalPercent) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *RuntimeFractionalPercent) XXX_DiscardUnknown() {\n\txxx_messageInfo_RuntimeFractionalPercent.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_RuntimeFractionalPercent proto.InternalMessageInfo\n\nfunc (m *RuntimeFractionalPercent) GetDefaultValue() *_type.FractionalPercent {\n\tif m != nil {\n\t\treturn m.DefaultValue\n\t}\n\treturn nil\n}\n\nfunc (m *RuntimeFractionalPercent) GetRuntimeKey() string {\n\tif m != nil {\n\t\treturn m.RuntimeKey\n\t}\n\treturn \"\"\n}\n\n// Identifies a specific ControlPlane instance that Envoy is connected to.\ntype ControlPlane struct 
{\n\t// An opaque control plane identifier that uniquely identifies an instance\n\t// of a control plane. This can be used to identify which control plane instance\n\t// the Envoy is connected to.\n\tIdentifier           string   `protobuf:\"bytes,1,opt,name=identifier,proto3\" json:\"identifier,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *ControlPlane) Reset()         { *m = ControlPlane{} }\nfunc (m *ControlPlane) String() string { return proto.CompactTextString(m) }\nfunc (*ControlPlane) ProtoMessage()    {}\nfunc (*ControlPlane) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a7738c0f9e1bfff4, []int{13}\n}\nfunc (m *ControlPlane) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *ControlPlane) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *ControlPlane) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_ControlPlane.Merge(m, src)\n}\nfunc (m *ControlPlane) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *ControlPlane) XXX_DiscardUnknown() {\n\txxx_messageInfo_ControlPlane.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_ControlPlane proto.InternalMessageInfo\n\nfunc (m *ControlPlane) GetIdentifier() string {\n\tif m != nil {\n\t\treturn m.Identifier\n\t}\n\treturn \"\"\n}\n\nfunc init() {\n\tproto.RegisterEnum(\"envoy.api.v2.core.RoutingPriority\", RoutingPriority_name, RoutingPriority_value)\n\tproto.RegisterEnum(\"envoy.api.v2.core.RequestMethod\", RequestMethod_name, RequestMethod_value)\n\tproto.RegisterEnum(\"envoy.api.v2.core.TrafficDirection\", TrafficDirection_name, TrafficDirection_value)\n\tproto.RegisterEnum(\"envoy.api.v2.core.SocketOption_SocketState\", SocketOption_SocketState_name, SocketOption_SocketState_value)\n\tproto.RegisterType((*Locality)(nil), 
\"envoy.api.v2.core.Locality\")\n\tproto.RegisterType((*Node)(nil), \"envoy.api.v2.core.Node\")\n\tproto.RegisterType((*Metadata)(nil), \"envoy.api.v2.core.Metadata\")\n\tproto.RegisterMapType((map[string]*types.Struct)(nil), \"envoy.api.v2.core.Metadata.FilterMetadataEntry\")\n\tproto.RegisterType((*RuntimeUInt32)(nil), \"envoy.api.v2.core.RuntimeUInt32\")\n\tproto.RegisterType((*HeaderValue)(nil), \"envoy.api.v2.core.HeaderValue\")\n\tproto.RegisterType((*HeaderValueOption)(nil), \"envoy.api.v2.core.HeaderValueOption\")\n\tproto.RegisterType((*HeaderMap)(nil), \"envoy.api.v2.core.HeaderMap\")\n\tproto.RegisterType((*DataSource)(nil), \"envoy.api.v2.core.DataSource\")\n\tproto.RegisterType((*RemoteDataSource)(nil), \"envoy.api.v2.core.RemoteDataSource\")\n\tproto.RegisterType((*AsyncDataSource)(nil), \"envoy.api.v2.core.AsyncDataSource\")\n\tproto.RegisterType((*TransportSocket)(nil), \"envoy.api.v2.core.TransportSocket\")\n\tproto.RegisterType((*SocketOption)(nil), \"envoy.api.v2.core.SocketOption\")\n\tproto.RegisterType((*RuntimeFractionalPercent)(nil), \"envoy.api.v2.core.RuntimeFractionalPercent\")\n\tproto.RegisterType((*ControlPlane)(nil), \"envoy.api.v2.core.ControlPlane\")\n}\n\nfunc init() { proto.RegisterFile(\"envoy/api/v2/core/base.proto\", fileDescriptor_a7738c0f9e1bfff4) }\n\nvar fileDescriptor_a7738c0f9e1bfff4 = []byte{\n\t// 1293 bytes of a gzipped FileDescriptorProto\n\t0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x84, 0x55, 0xcd, 0x6f, 0x1b, 0x45,\n\t0x14, 0xf7, 0xfa, 0xdb, 0xcf, 0x76, 0xb3, 0x9d, 0x56, 0xad, 0x9b, 0xb6, 0x4e, 0xe4, 0x0a, 0x29,\n\t0x2a, 0x60, 0x0b, 0x57, 0x85, 0x82, 0xc4, 0x87, 0xd7, 0xde, 0xd4, 0x56, 0x13, 0xdb, 0xac, 0xd7,\n\t0x15, 0xea, 0x01, 0x6b, 0x6d, 0x8f, 0x9d, 0x51, 0x37, 0xbb, 0xcb, 0xec, 0xac, 0xc1, 0x3d, 0x55,\n\t0x9c, 0x28, 0x17, 0xce, 0x9c, 0x7b, 0x41, 0xe2, 0x9f, 0x00, 0x71, 0xe1, 0xc8, 0x91, 0x23, 0xca,\n\t0x5f, 0x81, 0x72, 0x42, 0xf3, 0xe1, 0xd4, 0x49, 0x2c, 0x7a, 0x9b, 0xf9, 0xfd, 0xde, 0xef, 
0xcd,\n\t0xfb, 0x98, 0x79, 0x03, 0x77, 0xb0, 0xb7, 0xf0, 0x97, 0x35, 0x27, 0x20, 0xb5, 0x45, 0xbd, 0x36,\n\t0xf1, 0x29, 0xae, 0x8d, 0x9d, 0x10, 0x57, 0x03, 0xea, 0x33, 0x1f, 0x5d, 0x15, 0x6c, 0xd5, 0x09,\n\t0x48, 0x75, 0x51, 0xaf, 0x72, 0x76, 0x7b, 0xf7, 0xb2, 0xe0, 0x88, 0xb1, 0x60, 0x14, 0x51, 0x22,\n\t0x45, 0xdb, 0xb7, 0xe6, 0xbe, 0x3f, 0x77, 0x71, 0x4d, 0xec, 0xc6, 0xd1, 0xac, 0xe6, 0x78, 0x4b,\n\t0x45, 0xdd, 0xb9, 0x48, 0x85, 0x8c, 0x46, 0x13, 0xa6, 0xd8, 0xf2, 0x45, 0xf6, 0x5b, 0xea, 0x04,\n\t0x01, 0xa6, 0xa1, 0xe2, 0x6f, 0x2e, 0x1c, 0x97, 0x4c, 0x1d, 0x86, 0x6b, 0xab, 0x85, 0x22, 0xae,\n\t0xcf, 0xfd, 0xb9, 0x2f, 0x96, 0x35, 0xbe, 0x52, 0x68, 0x49, 0x46, 0xca, 0x96, 0x01, 0xae, 0x05,\n\t0x98, 0x4e, 0xb0, 0xa7, 0x0e, 0xaa, 0x7c, 0x09, 0xd9, 0x03, 0x7f, 0xe2, 0xb8, 0x84, 0x2d, 0xd1,\n\t0x0d, 0x48, 0x53, 0x3c, 0x27, 0xbe, 0x57, 0xd2, 0x76, 0xb5, 0xbd, 0x9c, 0xa5, 0x76, 0x08, 0x41,\n\t0xf2, 0x85, 0xef, 0xe1, 0x52, 0x5c, 0xa0, 0x62, 0x8d, 0x6e, 0x41, 0x36, 0x8c, 0xc6, 0x23, 0x81,\n\t0x27, 0x04, 0x9e, 0x09, 0xa3, 0xf1, 0x33, 0xdf, 0xc3, 0x95, 0x3f, 0x34, 0x48, 0x76, 0xfd, 0x29,\n\t0x46, 0x57, 0x20, 0x4e, 0xa6, 0xca, 0x57, 0x9c, 0x4c, 0x51, 0x09, 0x32, 0x13, 0x37, 0x0a, 0x19,\n\t0xa6, 0xca, 0xd5, 0x6a, 0x8b, 0x1e, 0x40, 0xf6, 0x18, 0x33, 0x67, 0xea, 0x30, 0x47, 0x78, 0xcb,\n\t0xd7, 0x6f, 0x56, 0x65, 0x05, 0xaa, 0xab, 0x0a, 0x54, 0x07, 0xa2, 0x3e, 0xd6, 0x99, 0x21, 0xfa,\n\t0x08, 0xb2, 0xae, 0x0a, 0xbd, 0x94, 0x14, 0xa2, 0xdb, 0xd5, 0x4b, 0x4d, 0xaa, 0xae, 0xb2, 0xb3,\n\t0xce, 0x8c, 0xd1, 0x3d, 0x28, 0x8e, 0x23, 0xe2, 0x4e, 0x47, 0x0b, 0x4c, 0x43, 0x9e, 0x6e, 0x4a,\n\t0x44, 0x53, 0x10, 0xe0, 0x53, 0x89, 0x55, 0x7e, 0xd3, 0x20, 0x7b, 0xb8, 0x3a, 0xea, 0x2b, 0xd8,\n\t0x9a, 0x11, 0x97, 0x61, 0x3a, 0x3a, 0x0b, 0x53, 0xdb, 0x4d, 0xec, 0xe5, 0xeb, 0xb5, 0x0d, 0x27,\n\t0xae, 0x54, 0xd5, 0x7d, 0x21, 0x59, 0x6d, 0x4d, 0x8f, 0xd1, 0xa5, 0x75, 0x65, 0x76, 0x0e, 0xdc,\n\t0x7e, 0x06, 0xd7, 0x36, 0x98, 0x21, 0x1d, 0x12, 0xcf, 0xf1, 0x52, 0xd5, 0x8e, 0x2f, 0xd1, 0xfb,\n\t0x90, 
0x5a, 0x38, 0x6e, 0x24, 0xbb, 0xf0, 0x3f, 0xf5, 0x91, 0x56, 0x9f, 0xc4, 0x1f, 0x69, 0x95,\n\t0xaf, 0xa1, 0x68, 0x45, 0x1e, 0x23, 0xc7, 0x78, 0xd8, 0xf1, 0xd8, 0x83, 0x3a, 0x4f, 0x7c, 0x8a,\n\t0x67, 0x4e, 0xe4, 0xb2, 0xd1, 0x1b, 0x5f, 0x45, 0xab, 0xa0, 0xc0, 0xa7, 0x1c, 0x43, 0x7b, 0x90,\n\t0xa7, 0x52, 0x35, 0xe2, 0x21, 0x88, 0xe6, 0x1a, 0x99, 0x53, 0x23, 0x49, 0xe3, 0xbb, 0x9a, 0x05,\n\t0x8a, 0x7b, 0x82, 0x97, 0x95, 0x43, 0xc8, 0xb7, 0xb1, 0x33, 0xc5, 0x54, 0x0a, 0xef, 0xae, 0xc5,\n\t0x6c, 0xe4, 0x4f, 0x8d, 0x2c, 0x4d, 0xef, 0x6a, 0x7b, 0x2f, 0x5f, 0x6a, 0x32, 0x81, 0x9d, 0xf5,\n\t0x04, 0x72, 0x46, 0xee, 0xd4, 0x48, 0xd3, 0xa4, 0xa0, 0x25, 0x5e, 0x79, 0xa5, 0xc1, 0xd5, 0x35,\n\t0x7f, 0xbd, 0x80, 0xf1, 0xcb, 0xf7, 0x05, 0xa4, 0x8f, 0x04, 0x28, 0x1c, 0xe7, 0xeb, 0xe5, 0x0d,\n\t0x15, 0x5f, 0x53, 0x19, 0xd9, 0x53, 0x23, 0xf5, 0xa3, 0x16, 0xd7, 0x35, 0x4b, 0xe9, 0x50, 0x1d,\n\t0xd2, 0xfc, 0xed, 0x78, 0x53, 0x55, 0xba, 0xed, 0x4b, 0xa5, 0x33, 0x7c, 0xdf, 0x15, 0x6a, 0x4b,\n\t0x59, 0x56, 0x4c, 0xc8, 0x49, 0xa7, 0x87, 0x4e, 0x80, 0x1e, 0x41, 0x46, 0xba, 0x0a, 0x55, 0xd7,\n\t0xdf, 0x12, 0x83, 0xb5, 0x32, 0xaf, 0xbc, 0xd6, 0x00, 0x5a, 0x0e, 0x73, 0x06, 0x7e, 0x44, 0x27,\n\t0x18, 0xbd, 0x03, 0xd9, 0x19, 0x71, 0xb1, 0xe7, 0x1c, 0x63, 0x55, 0xa6, 0x55, 0x5d, 0xdb, 0x31,\n\t0xeb, 0x8c, 0x42, 0xef, 0x41, 0x81, 0x78, 0x2e, 0xf1, 0xf0, 0x68, 0xbc, 0x64, 0x38, 0x14, 0x61,\n\t0x17, 0x84, 0xe9, 0x8b, 0xb8, 0xce, 0x4d, 0xf3, 0x92, 0x36, 0x38, 0x8b, 0xaa, 0x50, 0x54, 0xd6,\n\t0x21, 0xa3, 0xc4, 0x9b, 0x5f, 0xe8, 0x58, 0x3b, 0x66, 0x29, 0x6f, 0x03, 0x41, 0x1b, 0x3a, 0xe4,\n\t0xc2, 0x00, 0x4f, 0xc8, 0x8c, 0x60, 0x8a, 0x12, 0xff, 0x1a, 0x5a, 0x85, 0x81, 0x6e, 0xe1, 0x63,\n\t0x9f, 0xe1, 0xb5, 0x50, 0x3f, 0x87, 0xec, 0x6a, 0x96, 0xa9, 0xc2, 0x6f, 0x6f, 0x4a, 0x9a, 0xb1,\n\t0x60, 0x48, 0xc9, 0x5a, 0xd1, 0x33, 0x47, 0x12, 0x42, 0x3b, 0x90, 0x0e, 0x8f, 0x9c, 0xfa, 0xc3,\n\t0x0f, 0x55, 0xbf, 0xcf, 0x6e, 0x90, 0x82, 0x2b, 0x3f, 0x6b, 0xb0, 0xd5, 0x08, 0x97, 0xde, 0x64,\n\t0xed, 0xd4, 0x87, 0x90, 
0x12, 0xaf, 0x54, 0x1d, 0x79, 0x77, 0xc3, 0x91, 0x6f, 0xac, 0xdb, 0x31,\n\t0x4b, 0x5a, 0xa3, 0x4f, 0xf9, 0xe0, 0xe2, 0x09, 0xa8, 0x0e, 0xdf, 0xdb, 0xa0, 0xbb, 0x98, 0x61,\n\t0x3b, 0x66, 0x29, 0xd1, 0x86, 0x8a, 0xfc, 0xaa, 0xc1, 0x96, 0x4d, 0x1d, 0x2f, 0x0c, 0x7c, 0xca,\n\t0x06, 0xfe, 0xe4, 0x39, 0x66, 0xe8, 0x36, 0x24, 0x37, 0x34, 0xce, 0x12, 0x20, 0xfa, 0x00, 0xd2,\n\t0x13, 0xdf, 0x9b, 0x91, 0xf9, 0x5b, 0x9e, 0x27, 0x3f, 0x55, 0x1a, 0xa2, 0x8f, 0xa1, 0xc0, 0xe7,\n\t0xf1, 0x74, 0xa4, 0x84, 0x72, 0xee, 0x5d, 0xbf, 0x24, 0x6c, 0x78, 0x4b, 0xde, 0x72, 0x61, 0xdb,\n\t0x14, 0xa6, 0x46, 0x11, 0xf2, 0x52, 0x34, 0xe2, 0x68, 0xe5, 0xf7, 0x38, 0x14, 0x64, 0x90, 0xea,\n\t0xcd, 0xec, 0x42, 0x7e, 0x8a, 0xc3, 0x09, 0x25, 0x62, 0xab, 0xa6, 0xc8, 0x3a, 0x84, 0xae, 0x43,\n\t0xca, 0xc5, 0x0b, 0xec, 0x8a, 0x70, 0x13, 0x96, 0xdc, 0xf0, 0x41, 0x2f, 0x52, 0x4c, 0x08, 0x50,\n\t0x66, 0x76, 0x17, 0x72, 0xc4, 0x5b, 0xcd, 0x0b, 0x3e, 0x66, 0x13, 0xfc, 0xae, 0x12, 0x8f, 0xad,\n\t0x1e, 0x7d, 0x6e, 0x1c, 0xcd, 0x14, 0xcd, 0xe7, 0x68, 0x81, 0xd3, 0xe3, 0x68, 0x26, 0xe9, 0x27,\n\t0x90, 0x0a, 0x99, 0xc3, 0x70, 0x89, 0x8f, 0x82, 0x2b, 0xf5, 0x77, 0x37, 0x34, 0x66, 0x3d, 0x72,\n\t0xb5, 0x19, 0x70, 0x89, 0xb8, 0x54, 0xdf, 0x8b, 0x4b, 0x25, 0x7d, 0x54, 0x0e, 0x20, 0xbf, 0xc6,\n\t0xa3, 0xab, 0x50, 0x1c, 0xd8, 0x0d, 0xdb, 0x1c, 0xf5, 0x2d, 0xd3, 0xe8, 0x74, 0x5b, 0x7a, 0x0c,\n\t0x6d, 0x41, 0x5e, 0x42, 0x46, 0x6f, 0xd8, 0x6d, 0xe9, 0x1a, 0xba, 0x06, 0x5b, 0x12, 0x38, 0xe8,\n\t0x0c, 0x6c, 0xb3, 0xdb, 0xe9, 0x3e, 0xd6, 0xe3, 0xdb, 0xc9, 0x1f, 0x5e, 0x97, 0x63, 0x46, 0x41,\n\t0xcd, 0x23, 0xd9, 0xf1, 0x57, 0x1a, 0x94, 0xd4, 0xb0, 0xdc, 0xa7, 0xce, 0x84, 0x07, 0xe3, 0xb8,\n\t0x7d, 0xf9, 0x55, 0xa2, 0x83, 0x8b, 0x73, 0xf3, 0xfc, 0xf5, 0xe4, 0x4d, 0xa8, 0x5e, 0x52, 0xad,\n\t0x3d, 0x8a, 0xf3, 0x03, 0x76, 0xe7, 0xfc, 0x80, 0x95, 0x5f, 0xe1, 0xfa, 0x5c, 0xad, 0x42, 0xa1,\n\t0xe9, 0x7b, 0x8c, 0xfa, 0x6e, 0xdf, 0x75, 0x3c, 0x8c, 0xca, 0x00, 0x64, 0x8a, 0x3d, 0x26, 0x2e,\n\t0xa8, 0xea, 0xe6, 0x1a, 0x72, 0x7f, 0x0f, 
0xb6, 0x2c, 0x3f, 0x62, 0xc4, 0x9b, 0xf7, 0x29, 0xf1,\n\t0x29, 0xff, 0xe2, 0xf2, 0x90, 0x69, 0x99, 0xfb, 0x8d, 0xe1, 0x81, 0xad, 0xc7, 0x50, 0x16, 0x92,\n\t0xed, 0xce, 0xe3, 0xb6, 0xae, 0xdd, 0xff, 0x49, 0x83, 0xa2, 0x85, 0xbf, 0x89, 0x70, 0xc8, 0x0e,\n\t0x31, 0x3b, 0xf2, 0xa7, 0xe8, 0x06, 0xa0, 0x43, 0xd3, 0x6e, 0xf7, 0x5a, 0xa3, 0x61, 0x77, 0xd0,\n\t0x37, 0x9b, 0x9d, 0xfd, 0x8e, 0xc9, 0x2b, 0x99, 0x81, 0xc4, 0x63, 0xd3, 0xd6, 0x35, 0x21, 0x36,\n\t0x1b, 0x2d, 0x3d, 0xce, 0x57, 0xfd, 0xde, 0xc0, 0xd6, 0x13, 0x9c, 0xec, 0x0f, 0x6d, 0x3d, 0x89,\n\t0x00, 0xd2, 0x2d, 0xf3, 0xc0, 0xb4, 0x4d, 0x3d, 0xc5, 0x8f, 0x6c, 0xf6, 0xba, 0x5d, 0xb3, 0x69,\n\t0xeb, 0x69, 0xbe, 0xe9, 0xf5, 0xed, 0x4e, 0xaf, 0x3b, 0xd0, 0x33, 0x28, 0x07, 0x29, 0xdb, 0x6a,\n\t0x34, 0x4d, 0x3d, 0xcb, 0x97, 0xfd, 0x86, 0xdd, 0x6c, 0xeb, 0x39, 0xd9, 0x85, 0xfb, 0x9f, 0x81,\n\t0x6e, 0x53, 0x67, 0x36, 0x23, 0x93, 0x16, 0xa1, 0x58, 0x54, 0x90, 0x77, 0xf1, 0x7c, 0x30, 0x79,\n\t0xc8, 0x74, 0xba, 0xab, 0x96, 0x16, 0x20, 0xdb, 0x1b, 0xda, 0x72, 0x17, 0x37, 0xda, 0xbf, 0x9c,\n\t0x94, 0xb5, 0x3f, 0x4f, 0xca, 0xda, 0x5f, 0x27, 0x65, 0xed, 0xef, 0x93, 0xb2, 0xf6, 0xcf, 0x49,\n\t0x59, 0x83, 0x1d, 0xe2, 0xcb, 0xde, 0x04, 0xd4, 0xff, 0x6e, 0x79, 0xf9, 0xd2, 0x19, 0x39, 0xc3,\n\t0x09, 0x71, 0x9f, 0x3f, 0xb1, 0xbe, 0xf6, 0x2c, 0xc9, 0xa1, 0x71, 0x5a, 0xbc, 0xb8, 0x07, 0xff,\n\t0x05, 0x00, 0x00, 0xff, 0xff, 0xf4, 0x76, 0xb1, 0x50, 0x07, 0x0a, 0x00, 0x00,\n}\n\nfunc (this *Locality) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*Locality)\n\tif !ok {\n\t\tthat2, ok := that.(Locality)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Region != that1.Region {\n\t\treturn false\n\t}\n\tif this.Zone != that1.Zone {\n\t\treturn false\n\t}\n\tif this.SubZone != that1.SubZone {\n\t\treturn false\n\t}\n\tif 
!bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *Node) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*Node)\n\tif !ok {\n\t\tthat2, ok := that.(Node)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Id != that1.Id {\n\t\treturn false\n\t}\n\tif this.Cluster != that1.Cluster {\n\t\treturn false\n\t}\n\tif !this.Metadata.Equal(that1.Metadata) {\n\t\treturn false\n\t}\n\tif !this.Locality.Equal(that1.Locality) {\n\t\treturn false\n\t}\n\tif this.BuildVersion != that1.BuildVersion {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *Metadata) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*Metadata)\n\tif !ok {\n\t\tthat2, ok := that.(Metadata)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif len(this.FilterMetadata) != len(that1.FilterMetadata) {\n\t\treturn false\n\t}\n\tfor i := range this.FilterMetadata {\n\t\tif !this.FilterMetadata[i].Equal(that1.FilterMetadata[i]) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *RuntimeUInt32) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*RuntimeUInt32)\n\tif !ok {\n\t\tthat2, ok := that.(RuntimeUInt32)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.DefaultValue != that1.DefaultValue 
{\n\t\treturn false\n\t}\n\tif this.RuntimeKey != that1.RuntimeKey {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *HeaderValue) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*HeaderValue)\n\tif !ok {\n\t\tthat2, ok := that.(HeaderValue)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Key != that1.Key {\n\t\treturn false\n\t}\n\tif this.Value != that1.Value {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *HeaderValueOption) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*HeaderValueOption)\n\tif !ok {\n\t\tthat2, ok := that.(HeaderValueOption)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !this.Header.Equal(that1.Header) {\n\t\treturn false\n\t}\n\tif !this.Append.Equal(that1.Append) {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *HeaderMap) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*HeaderMap)\n\tif !ok {\n\t\tthat2, ok := that.(HeaderMap)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif len(this.Headers) != len(that1.Headers) {\n\t\treturn false\n\t}\n\tfor i := range this.Headers {\n\t\tif !this.Headers[i].Equal(that1.Headers[i]) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif 
!bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *DataSource) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*DataSource)\n\tif !ok {\n\t\tthat2, ok := that.(DataSource)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif that1.Specifier == nil {\n\t\tif this.Specifier != nil {\n\t\t\treturn false\n\t\t}\n\t} else if this.Specifier == nil {\n\t\treturn false\n\t} else if !this.Specifier.Equal(that1.Specifier) {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *DataSource_Filename) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*DataSource_Filename)\n\tif !ok {\n\t\tthat2, ok := that.(DataSource_Filename)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Filename != that1.Filename {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *DataSource_InlineBytes) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*DataSource_InlineBytes)\n\tif !ok {\n\t\tthat2, ok := that.(DataSource_InlineBytes)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.InlineBytes, that1.InlineBytes) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *DataSource_InlineString) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*DataSource_InlineString)\n\tif !ok {\n\t\tthat2, ok := 
that.(DataSource_InlineString)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.InlineString != that1.InlineString {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *RemoteDataSource) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*RemoteDataSource)\n\tif !ok {\n\t\tthat2, ok := that.(RemoteDataSource)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !this.HttpUri.Equal(that1.HttpUri) {\n\t\treturn false\n\t}\n\tif this.Sha256 != that1.Sha256 {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *AsyncDataSource) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*AsyncDataSource)\n\tif !ok {\n\t\tthat2, ok := that.(AsyncDataSource)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif that1.Specifier == nil {\n\t\tif this.Specifier != nil {\n\t\t\treturn false\n\t\t}\n\t} else if this.Specifier == nil {\n\t\treturn false\n\t} else if !this.Specifier.Equal(that1.Specifier) {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *AsyncDataSource_Local) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*AsyncDataSource_Local)\n\tif !ok {\n\t\tthat2, ok := that.(AsyncDataSource_Local)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil 
{\n\t\treturn false\n\t}\n\tif !this.Local.Equal(that1.Local) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *AsyncDataSource_Remote) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*AsyncDataSource_Remote)\n\tif !ok {\n\t\tthat2, ok := that.(AsyncDataSource_Remote)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !this.Remote.Equal(that1.Remote) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *TransportSocket) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*TransportSocket)\n\tif !ok {\n\t\tthat2, ok := that.(TransportSocket)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Name != that1.Name {\n\t\treturn false\n\t}\n\tif that1.ConfigType == nil {\n\t\tif this.ConfigType != nil {\n\t\t\treturn false\n\t\t}\n\t} else if this.ConfigType == nil {\n\t\treturn false\n\t} else if !this.ConfigType.Equal(that1.ConfigType) {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *TransportSocket_Config) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*TransportSocket_Config)\n\tif !ok {\n\t\tthat2, ok := that.(TransportSocket_Config)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !this.Config.Equal(that1.Config) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *TransportSocket_TypedConfig) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := 
that.(*TransportSocket_TypedConfig)\n\tif !ok {\n\t\tthat2, ok := that.(TransportSocket_TypedConfig)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !this.TypedConfig.Equal(that1.TypedConfig) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *SocketOption) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*SocketOption)\n\tif !ok {\n\t\tthat2, ok := that.(SocketOption)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Description != that1.Description {\n\t\treturn false\n\t}\n\tif this.Level != that1.Level {\n\t\treturn false\n\t}\n\tif this.Name != that1.Name {\n\t\treturn false\n\t}\n\tif that1.Value == nil {\n\t\tif this.Value != nil {\n\t\t\treturn false\n\t\t}\n\t} else if this.Value == nil {\n\t\treturn false\n\t} else if !this.Value.Equal(that1.Value) {\n\t\treturn false\n\t}\n\tif this.State != that1.State {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *SocketOption_IntValue) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*SocketOption_IntValue)\n\tif !ok {\n\t\tthat2, ok := that.(SocketOption_IntValue)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.IntValue != that1.IntValue {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *SocketOption_BufValue) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*SocketOption_BufValue)\n\tif !ok {\n\t\tthat2, ok := 
that.(SocketOption_BufValue)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.BufValue, that1.BufValue) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *RuntimeFractionalPercent) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*RuntimeFractionalPercent)\n\tif !ok {\n\t\tthat2, ok := that.(RuntimeFractionalPercent)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !this.DefaultValue.Equal(that1.DefaultValue) {\n\t\treturn false\n\t}\n\tif this.RuntimeKey != that1.RuntimeKey {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *ControlPlane) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*ControlPlane)\n\tif !ok {\n\t\tthat2, ok := that.(ControlPlane)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Identifier != that1.Identifier {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (m *Locality) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *Locality) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.Region) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.Region)))\n\t\ti += copy(dAtA[i:], m.Region)\n\t}\n\tif len(m.Zone) > 0 
{\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.Zone)))\n\t\ti += copy(dAtA[i:], m.Zone)\n\t}\n\tif len(m.SubZone) > 0 {\n\t\tdAtA[i] = 0x1a\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.SubZone)))\n\t\ti += copy(dAtA[i:], m.SubZone)\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *Node) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *Node) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.Id) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.Id)))\n\t\ti += copy(dAtA[i:], m.Id)\n\t}\n\tif len(m.Cluster) > 0 {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.Cluster)))\n\t\ti += copy(dAtA[i:], m.Cluster)\n\t}\n\tif m.Metadata != nil {\n\t\tdAtA[i] = 0x1a\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.Metadata.Size()))\n\t\tn1, err1 := m.Metadata.MarshalTo(dAtA[i:])\n\t\tif err1 != nil {\n\t\t\treturn 0, err1\n\t\t}\n\t\ti += n1\n\t}\n\tif m.Locality != nil {\n\t\tdAtA[i] = 0x22\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.Locality.Size()))\n\t\tn2, err2 := m.Locality.MarshalTo(dAtA[i:])\n\t\tif err2 != nil {\n\t\t\treturn 0, err2\n\t\t}\n\t\ti += n2\n\t}\n\tif len(m.BuildVersion) > 0 {\n\t\tdAtA[i] = 0x2a\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.BuildVersion)))\n\t\ti += copy(dAtA[i:], m.BuildVersion)\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *Metadata) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *Metadata) MarshalTo(dAtA []byte) (int, error) 
{\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.FilterMetadata) > 0 {\n\t\tkeysForFilterMetadata := make([]string, 0, len(m.FilterMetadata))\n\t\tfor k, _ := range m.FilterMetadata {\n\t\t\tkeysForFilterMetadata = append(keysForFilterMetadata, string(k))\n\t\t}\n\t\tgithub_com_gogo_protobuf_sortkeys.Strings(keysForFilterMetadata)\n\t\tfor _, k := range keysForFilterMetadata {\n\t\t\tdAtA[i] = 0xa\n\t\t\ti++\n\t\t\tv := m.FilterMetadata[string(k)]\n\t\t\tmsgSize := 0\n\t\t\tif v != nil {\n\t\t\t\tmsgSize = v.Size()\n\t\t\t\tmsgSize += 1 + sovBase(uint64(msgSize))\n\t\t\t}\n\t\t\tmapSize := 1 + len(k) + sovBase(uint64(len(k))) + msgSize\n\t\t\ti = encodeVarintBase(dAtA, i, uint64(mapSize))\n\t\t\tdAtA[i] = 0xa\n\t\t\ti++\n\t\t\ti = encodeVarintBase(dAtA, i, uint64(len(k)))\n\t\t\ti += copy(dAtA[i:], k)\n\t\t\tif v != nil {\n\t\t\t\tdAtA[i] = 0x12\n\t\t\t\ti++\n\t\t\t\ti = encodeVarintBase(dAtA, i, uint64(v.Size()))\n\t\t\t\tn3, err3 := v.MarshalTo(dAtA[i:])\n\t\t\t\tif err3 != nil {\n\t\t\t\t\treturn 0, err3\n\t\t\t\t}\n\t\t\t\ti += n3\n\t\t\t}\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *RuntimeUInt32) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *RuntimeUInt32) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.DefaultValue != 0 {\n\t\tdAtA[i] = 0x10\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.DefaultValue))\n\t}\n\tif len(m.RuntimeKey) > 0 {\n\t\tdAtA[i] = 0x1a\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.RuntimeKey)))\n\t\ti += copy(dAtA[i:], m.RuntimeKey)\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *HeaderValue) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = 
make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *HeaderValue) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.Key) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.Key)))\n\t\ti += copy(dAtA[i:], m.Key)\n\t}\n\tif len(m.Value) > 0 {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.Value)))\n\t\ti += copy(dAtA[i:], m.Value)\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *HeaderValueOption) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *HeaderValueOption) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Header != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.Header.Size()))\n\t\tn4, err4 := m.Header.MarshalTo(dAtA[i:])\n\t\tif err4 != nil {\n\t\t\treturn 0, err4\n\t\t}\n\t\ti += n4\n\t}\n\tif m.Append != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.Append.Size()))\n\t\tn5, err5 := m.Append.MarshalTo(dAtA[i:])\n\t\tif err5 != nil {\n\t\t\treturn 0, err5\n\t\t}\n\t\ti += n5\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *HeaderMap) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *HeaderMap) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.Headers) > 0 {\n\t\tfor _, msg := range m.Headers {\n\t\t\tdAtA[i] = 0xa\n\t\t\ti++\n\t\t\ti = encodeVarintBase(dAtA, i, uint64(msg.Size()))\n\t\t\tn, 
err := msg.MarshalTo(dAtA[i:])\n\t\t\tif err != nil {\n\t\t\t\treturn 0, err\n\t\t\t}\n\t\t\ti += n\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *DataSource) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *DataSource) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Specifier != nil {\n\t\tnn6, err6 := m.Specifier.MarshalTo(dAtA[i:])\n\t\tif err6 != nil {\n\t\t\treturn 0, err6\n\t\t}\n\t\ti += nn6\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *DataSource_Filename) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tdAtA[i] = 0xa\n\ti++\n\ti = encodeVarintBase(dAtA, i, uint64(len(m.Filename)))\n\ti += copy(dAtA[i:], m.Filename)\n\treturn i, nil\n}\nfunc (m *DataSource_InlineBytes) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tif m.InlineBytes != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.InlineBytes)))\n\t\ti += copy(dAtA[i:], m.InlineBytes)\n\t}\n\treturn i, nil\n}\nfunc (m *DataSource_InlineString) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tdAtA[i] = 0x1a\n\ti++\n\ti = encodeVarintBase(dAtA, i, uint64(len(m.InlineString)))\n\ti += copy(dAtA[i:], m.InlineString)\n\treturn i, nil\n}\nfunc (m *RemoteDataSource) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *RemoteDataSource) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.HttpUri != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.HttpUri.Size()))\n\t\tn7, err7 := m.HttpUri.MarshalTo(dAtA[i:])\n\t\tif 
err7 != nil {\n\t\t\treturn 0, err7\n\t\t}\n\t\ti += n7\n\t}\n\tif len(m.Sha256) > 0 {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.Sha256)))\n\t\ti += copy(dAtA[i:], m.Sha256)\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *AsyncDataSource) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *AsyncDataSource) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Specifier != nil {\n\t\tnn8, err8 := m.Specifier.MarshalTo(dAtA[i:])\n\t\tif err8 != nil {\n\t\t\treturn 0, err8\n\t\t}\n\t\ti += nn8\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *AsyncDataSource_Local) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tif m.Local != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.Local.Size()))\n\t\tn9, err9 := m.Local.MarshalTo(dAtA[i:])\n\t\tif err9 != nil {\n\t\t\treturn 0, err9\n\t\t}\n\t\ti += n9\n\t}\n\treturn i, nil\n}\nfunc (m *AsyncDataSource_Remote) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tif m.Remote != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.Remote.Size()))\n\t\tn10, err10 := m.Remote.MarshalTo(dAtA[i:])\n\t\tif err10 != nil {\n\t\t\treturn 0, err10\n\t\t}\n\t\ti += n10\n\t}\n\treturn i, nil\n}\nfunc (m *TransportSocket) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *TransportSocket) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.Name) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, 
uint64(len(m.Name)))\n\t\ti += copy(dAtA[i:], m.Name)\n\t}\n\tif m.ConfigType != nil {\n\t\tnn11, err11 := m.ConfigType.MarshalTo(dAtA[i:])\n\t\tif err11 != nil {\n\t\t\treturn 0, err11\n\t\t}\n\t\ti += nn11\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *TransportSocket_Config) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tif m.Config != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.Config.Size()))\n\t\tn12, err12 := m.Config.MarshalTo(dAtA[i:])\n\t\tif err12 != nil {\n\t\t\treturn 0, err12\n\t\t}\n\t\ti += n12\n\t}\n\treturn i, nil\n}\nfunc (m *TransportSocket_TypedConfig) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tif m.TypedConfig != nil {\n\t\tdAtA[i] = 0x1a\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.TypedConfig.Size()))\n\t\tn13, err13 := m.TypedConfig.MarshalTo(dAtA[i:])\n\t\tif err13 != nil {\n\t\t\treturn 0, err13\n\t\t}\n\t\ti += n13\n\t}\n\treturn i, nil\n}\nfunc (m *SocketOption) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *SocketOption) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.Description) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.Description)))\n\t\ti += copy(dAtA[i:], m.Description)\n\t}\n\tif m.Level != 0 {\n\t\tdAtA[i] = 0x10\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.Level))\n\t}\n\tif m.Name != 0 {\n\t\tdAtA[i] = 0x18\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.Name))\n\t}\n\tif m.Value != nil {\n\t\tnn14, err14 := m.Value.MarshalTo(dAtA[i:])\n\t\tif err14 != nil {\n\t\t\treturn 0, err14\n\t\t}\n\t\ti += nn14\n\t}\n\tif m.State != 0 {\n\t\tdAtA[i] = 0x30\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.State))\n\t}\n\tif m.XXX_unrecognized != nil 
{\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *SocketOption_IntValue) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tdAtA[i] = 0x20\n\ti++\n\ti = encodeVarintBase(dAtA, i, uint64(m.IntValue))\n\treturn i, nil\n}\nfunc (m *SocketOption_BufValue) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tif m.BufValue != nil {\n\t\tdAtA[i] = 0x2a\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.BufValue)))\n\t\ti += copy(dAtA[i:], m.BufValue)\n\t}\n\treturn i, nil\n}\nfunc (m *RuntimeFractionalPercent) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *RuntimeFractionalPercent) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.DefaultValue != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(m.DefaultValue.Size()))\n\t\tn15, err15 := m.DefaultValue.MarshalTo(dAtA[i:])\n\t\tif err15 != nil {\n\t\t\treturn 0, err15\n\t\t}\n\t\ti += n15\n\t}\n\tif len(m.RuntimeKey) > 0 {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.RuntimeKey)))\n\t\ti += copy(dAtA[i:], m.RuntimeKey)\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *ControlPlane) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *ControlPlane) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.Identifier) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintBase(dAtA, i, uint64(len(m.Identifier)))\n\t\ti += copy(dAtA[i:], m.Identifier)\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc encodeVarintBase(dAtA 
[]byte, offset int, v uint64) int {\n\tfor v >= 1<<7 {\n\t\tdAtA[offset] = uint8(v&0x7f | 0x80)\n\t\tv >>= 7\n\t\toffset++\n\t}\n\tdAtA[offset] = uint8(v)\n\treturn offset + 1\n}\nfunc (m *Locality) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.Region)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tl = len(m.Zone)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tl = len(m.SubZone)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *Node) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.Id)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tl = len(m.Cluster)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif m.Metadata != nil {\n\t\tl = m.Metadata.Size()\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif m.Locality != nil {\n\t\tl = m.Locality.Size()\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tl = len(m.BuildVersion)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *Metadata) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif len(m.FilterMetadata) > 0 {\n\t\tfor k, v := range m.FilterMetadata {\n\t\t\t_ = k\n\t\t\t_ = v\n\t\t\tl = 0\n\t\t\tif v != nil {\n\t\t\t\tl = v.Size()\n\t\t\t\tl += 1 + sovBase(uint64(l))\n\t\t\t}\n\t\t\tmapEntrySize := 1 + len(k) + sovBase(uint64(len(k))) + l\n\t\t\tn += mapEntrySize + 1 + sovBase(uint64(mapEntrySize))\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *RuntimeUInt32) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.DefaultValue != 0 {\n\t\tn += 1 + sovBase(uint64(m.DefaultValue))\n\t}\n\tl = len(m.RuntimeKey)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif 
m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *HeaderValue) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.Key)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tl = len(m.Value)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *HeaderValueOption) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Header != nil {\n\t\tl = m.Header.Size()\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif m.Append != nil {\n\t\tl = m.Append.Size()\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *HeaderMap) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif len(m.Headers) > 0 {\n\t\tfor _, e := range m.Headers {\n\t\t\tl = e.Size()\n\t\t\tn += 1 + l + sovBase(uint64(l))\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *DataSource) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Specifier != nil {\n\t\tn += m.Specifier.Size()\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *DataSource_Filename) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.Filename)\n\tn += 1 + l + sovBase(uint64(l))\n\treturn n\n}\nfunc (m *DataSource_InlineBytes) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.InlineBytes != nil {\n\t\tl = len(m.InlineBytes)\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\treturn n\n}\nfunc (m *DataSource_InlineString) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.InlineString)\n\tn += 1 + l + sovBase(uint64(l))\n\treturn n\n}\nfunc (m *RemoteDataSource) Size() (n int) 
{\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.HttpUri != nil {\n\t\tl = m.HttpUri.Size()\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tl = len(m.Sha256)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *AsyncDataSource) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Specifier != nil {\n\t\tn += m.Specifier.Size()\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *AsyncDataSource_Local) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Local != nil {\n\t\tl = m.Local.Size()\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\treturn n\n}\nfunc (m *AsyncDataSource_Remote) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Remote != nil {\n\t\tl = m.Remote.Size()\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\treturn n\n}\nfunc (m *TransportSocket) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.Name)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif m.ConfigType != nil {\n\t\tn += m.ConfigType.Size()\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *TransportSocket_Config) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Config != nil {\n\t\tl = m.Config.Size()\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\treturn n\n}\nfunc (m *TransportSocket_TypedConfig) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.TypedConfig != nil {\n\t\tl = m.TypedConfig.Size()\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\treturn n\n}\nfunc (m *SocketOption) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.Description)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif m.Level != 0 {\n\t\tn += 1 + 
sovBase(uint64(m.Level))\n\t}\n\tif m.Name != 0 {\n\t\tn += 1 + sovBase(uint64(m.Name))\n\t}\n\tif m.Value != nil {\n\t\tn += m.Value.Size()\n\t}\n\tif m.State != 0 {\n\t\tn += 1 + sovBase(uint64(m.State))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *SocketOption_IntValue) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tn += 1 + sovBase(uint64(m.IntValue))\n\treturn n\n}\nfunc (m *SocketOption_BufValue) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.BufValue != nil {\n\t\tl = len(m.BufValue)\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\treturn n\n}\nfunc (m *RuntimeFractionalPercent) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.DefaultValue != nil {\n\t\tl = m.DefaultValue.Size()\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tl = len(m.RuntimeKey)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *ControlPlane) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.Identifier)\n\tif l > 0 {\n\t\tn += 1 + l + sovBase(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc sovBase(x uint64) (n int) {\n\treturn (math_bits.Len64(x|1) + 6) / 7\n}\nfunc sozBase(x uint64) (n int) {\n\treturn sovBase(uint64((x << 1) ^ uint64((int64(x) >> 63))))\n}\nfunc (m *Locality) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 
3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: Locality: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: Locality: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Region\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Region = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Zone\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Zone = 
string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field SubZone\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.SubZone = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *Node) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 
0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: Node: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: Node: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Id\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Id = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Cluster\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Cluster = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = 
postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Metadata\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Metadata == nil {\n\t\t\t\tm.Metadata = &types.Struct{}\n\t\t\t}\n\t\t\tif err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 4:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Locality\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Locality == nil {\n\t\t\t\tm.Locality = &Locality{}\n\t\t\t}\n\t\t\tif err := m.Locality.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 5:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for 
field BuildVersion\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.BuildVersion = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *Metadata) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: Metadata: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 
{\n\t\t\treturn fmt.Errorf(\"proto: Metadata: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field FilterMetadata\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.FilterMetadata == nil {\n\t\t\t\tm.FilterMetadata = make(map[string]*types.Struct)\n\t\t\t}\n\t\t\tvar mapkey string\n\t\t\tvar mapvalue *types.Struct\n\t\t\tfor iNdEx < postIndex {\n\t\t\t\tentryPreIndex := iNdEx\n\t\t\t\tvar wire uint64\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfieldNum := int32(wire >> 3)\n\t\t\t\tif fieldNum == 1 {\n\t\t\t\t\tvar stringLenmapkey uint64\n\t\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t\t}\n\t\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\t\tiNdEx++\n\t\t\t\t\t\tstringLenmapkey |= uint64(b&0x7F) << shift\n\t\t\t\t\t\tif b < 0x80 
{\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tintStringLenmapkey := int(stringLenmapkey)\n\t\t\t\t\tif intStringLenmapkey < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t\t\t}\n\t\t\t\t\tpostStringIndexmapkey := iNdEx + intStringLenmapkey\n\t\t\t\t\tif postStringIndexmapkey < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t\t\t}\n\t\t\t\t\tif postStringIndexmapkey > l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tmapkey = string(dAtA[iNdEx:postStringIndexmapkey])\n\t\t\t\t\tiNdEx = postStringIndexmapkey\n\t\t\t\t} else if fieldNum == 2 {\n\t\t\t\t\tvar mapmsglen int\n\t\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t\t}\n\t\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\t\tiNdEx++\n\t\t\t\t\t\tmapmsglen |= int(b&0x7F) << shift\n\t\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif mapmsglen < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t\t\t}\n\t\t\t\t\tpostmsgIndex := iNdEx + mapmsglen\n\t\t\t\t\tif postmsgIndex < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t\t\t}\n\t\t\t\t\tif postmsgIndex > l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tmapvalue = &types.Struct{}\n\t\t\t\t\tif err := mapvalue.Unmarshal(dAtA[iNdEx:postmsgIndex]); err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tiNdEx = postmsgIndex\n\t\t\t\t} else {\n\t\t\t\t\tiNdEx = entryPreIndex\n\t\t\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif skippy < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t\t\t}\n\t\t\t\t\tif (iNdEx + skippy) > postIndex {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tiNdEx += skippy\n\t\t\t\t}\n\t\t\t}\n\t\t\tm.FilterMetadata[mapkey] = mapvalue\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = 
preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *RuntimeUInt32) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: RuntimeUInt32: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: RuntimeUInt32: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 2:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field DefaultValue\", wireType)\n\t\t\t}\n\t\t\tm.DefaultValue = 0\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tm.DefaultValue |= uint32(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field RuntimeKey\", wireType)\n\t\t\t}\n\t\t\tvar stringLen 
uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.RuntimeKey = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *HeaderValue) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: HeaderValue: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: HeaderValue: illegal tag %d 
(wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Key\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Key = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Value\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Value = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn 
ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *HeaderValueOption) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: HeaderValueOption: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: HeaderValueOption: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Header\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Header == nil {\n\t\t\t\tm.Header = 
&HeaderValue{}\n\t\t\t}\n\t\t\tif err := m.Header.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Append\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Append == nil {\n\t\t\t\tm.Append = &types.BoolValue{}\n\t\t\t}\n\t\t\tif err := m.Append.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *HeaderMap) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := 
dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: HeaderMap: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: HeaderMap: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Headers\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Headers = append(m.Headers, &HeaderValue{})\n\t\t\tif err := m.Headers[len(m.Headers)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *DataSource) Unmarshal(dAtA []byte) error {\n\tl := 
len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: DataSource: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: DataSource: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Filename\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Specifier = &DataSource_Filename{string(dAtA[iNdEx:postIndex])}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field InlineBytes\", wireType)\n\t\t\t}\n\t\t\tvar byteLen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := 
dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tbyteLen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif byteLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + byteLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tv := make([]byte, postIndex-iNdEx)\n\t\t\tcopy(v, dAtA[iNdEx:postIndex])\n\t\t\tm.Specifier = &DataSource_InlineBytes{v}\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field InlineString\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Specifier = &DataSource_InlineString{string(dAtA[iNdEx:postIndex])}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc 
(m *RemoteDataSource) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: RemoteDataSource: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: RemoteDataSource: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field HttpUri\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.HttpUri == nil {\n\t\t\t\tm.HttpUri = &HttpUri{}\n\t\t\t}\n\t\t\tif err := m.HttpUri.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Sha256\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 
ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Sha256 = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *AsyncDataSource) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: AsyncDataSource: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: AsyncDataSource: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif 
wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Local\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tv := &DataSource{}\n\t\t\tif err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tm.Specifier = &AsyncDataSource_Local{v}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Remote\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tv := &RemoteDataSource{}\n\t\t\tif err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tm.Specifier = &AsyncDataSource_Remote{v}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 
0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *TransportSocket) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: TransportSocket: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: TransportSocket: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Name\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn 
io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Name = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Config\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tv := &types.Struct{}\n\t\t\tif err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tm.ConfigType = &TransportSocket_Config{v}\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field TypedConfig\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tv := &types.Any{}\n\t\t\tif err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tm.ConfigType = &TransportSocket_TypedConfig{v}\n\t\t\tiNdEx = 
postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *SocketOption) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: SocketOption: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: SocketOption: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Description\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := 
iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Description = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Level\", wireType)\n\t\t\t}\n\t\t\tm.Level = 0\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tm.Level |= int64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\tcase 3:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Name\", wireType)\n\t\t\t}\n\t\t\tm.Name = 0\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tm.Name |= int64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\tcase 4:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field IntValue\", wireType)\n\t\t\t}\n\t\t\tvar v int64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tv |= int64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tm.Value = &SocketOption_IntValue{v}\n\t\tcase 5:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field BufValue\", wireType)\n\t\t\t}\n\t\t\tvar byteLen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 
ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tbyteLen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif byteLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + byteLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tv := make([]byte, postIndex-iNdEx)\n\t\t\tcopy(v, dAtA[iNdEx:postIndex])\n\t\t\tm.Value = &SocketOption_BufValue{v}\n\t\t\tiNdEx = postIndex\n\t\tcase 6:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field State\", wireType)\n\t\t\t}\n\t\t\tm.State = 0\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tm.State |= SocketOption_SocketState(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *RuntimeFractionalPercent) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= 
l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: RuntimeFractionalPercent: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: RuntimeFractionalPercent: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field DefaultValue\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.DefaultValue == nil {\n\t\t\t\tm.DefaultValue = &_type.FractionalPercent{}\n\t\t\t}\n\t\t\tif err := m.DefaultValue.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field RuntimeKey\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 
{\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.RuntimeKey = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *ControlPlane) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: ControlPlane: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: ControlPlane: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Identifier\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 
{\n\t\t\t\t\treturn ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Identifier = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipBase(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthBase\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc skipBase(dAtA []byte) (n int, err error) {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn 0, ErrIntOverflowBase\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= (uint64(b) & 0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\twireType := int(wire & 0x7)\n\t\tswitch wireType {\n\t\tcase 0:\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tiNdEx++\n\t\t\t\tif dAtA[iNdEx-1] < 0x80 
{\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 1:\n\t\t\tiNdEx += 8\n\t\t\treturn iNdEx, nil\n\t\tcase 2:\n\t\t\tvar length int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowBase\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tlength |= (int(b) & 0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif length < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthBase\n\t\t\t}\n\t\t\tiNdEx += length\n\t\t\tif iNdEx < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthBase\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 3:\n\t\t\tfor {\n\t\t\t\tvar innerWire uint64\n\t\t\t\tvar start int = iNdEx\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn 0, ErrIntOverflowBase\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\tinnerWire |= (uint64(b) & 0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tinnerWireType := int(innerWire & 0x7)\n\t\t\t\tif innerWireType == 4 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tnext, err := skipBase(dAtA[start:])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 0, err\n\t\t\t\t}\n\t\t\t\tiNdEx = start + next\n\t\t\t\tif iNdEx < 0 {\n\t\t\t\t\treturn 0, ErrInvalidLengthBase\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 4:\n\t\t\treturn iNdEx, nil\n\t\tcase 5:\n\t\t\tiNdEx += 4\n\t\t\treturn iNdEx, nil\n\t\tdefault:\n\t\t\treturn 0, fmt.Errorf(\"proto: illegal wireType %d\", wireType)\n\t\t}\n\t}\n\tpanic(\"unreachable\")\n}\n\nvar (\n\tErrInvalidLengthBase = fmt.Errorf(\"proto: negative length found during unmarshaling\")\n\tErrIntOverflowBase   = fmt.Errorf(\"proto: integer overflow\")\n)\n"
  },
  {
    "path": "third_party/generated/envoyproxy/data-plane-api/envoy/api/v2/core/http_uri.pb.go",
    "content": "// Code generated by protoc-gen-gogo. DO NOT EDIT.\n// source: envoy/api/v2/core/http_uri.proto\n\npackage core\n\nimport (\n\tbytes \"bytes\"\n\tfmt \"fmt\"\n\t_ \"github.com/envoyproxy/protoc-gen-validate/validate\"\n\t_ \"github.com/gogo/protobuf/gogoproto\"\n\tproto \"github.com/gogo/protobuf/proto\"\n\t_ \"github.com/gogo/protobuf/types\"\n\tgithub_com_gogo_protobuf_types \"github.com/gogo/protobuf/types\"\n\tio \"io\"\n\tmath \"math\"\n\tmath_bits \"math/bits\"\n\ttime \"time\"\n)\n\n// Reference imports to suppress errors if they are not otherwise used.\nvar _ = proto.Marshal\nvar _ = fmt.Errorf\nvar _ = math.Inf\nvar _ = time.Kitchen\n\n// This is a compile-time assertion to ensure that this generated file\n// is compatible with the proto package it is being compiled against.\n// A compilation error at this line likely means your copy of the\n// proto package needs to be updated.\nconst _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package\n\n// Envoy external URI descriptor\ntype HttpUri struct {\n\t// The HTTP server URI. It should be a full FQDN with protocol, host and path.\n\t//\n\t// Example:\n\t//\n\t// .. code-block:: yaml\n\t//\n\t//    uri: https://www.googleapis.com/oauth2/v1/certs\n\t//\n\tUri string `protobuf:\"bytes,1,opt,name=uri,proto3\" json:\"uri,omitempty\"`\n\t// Specify how `uri` is to be fetched. Today, this requires an explicit\n\t// cluster, but in the future we may support dynamic cluster creation or\n\t// inline DNS resolution. 
See `issue\n\t// <https://github.com/envoyproxy/envoy/issues/1606>`_.\n\t//\n\t// Types that are valid to be assigned to HttpUpstreamType:\n\t//\t*HttpUri_Cluster\n\tHttpUpstreamType isHttpUri_HttpUpstreamType `protobuf_oneof:\"http_upstream_type\"`\n\t// Sets the maximum duration in milliseconds that a response can take to arrive upon request.\n\tTimeout              *time.Duration `protobuf:\"bytes,3,opt,name=timeout,proto3,stdduration\" json:\"timeout,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}       `json:\"-\"`\n\tXXX_unrecognized     []byte         `json:\"-\"`\n\tXXX_sizecache        int32          `json:\"-\"`\n}\n\nfunc (m *HttpUri) Reset()         { *m = HttpUri{} }\nfunc (m *HttpUri) String() string { return proto.CompactTextString(m) }\nfunc (*HttpUri) ProtoMessage()    {}\nfunc (*HttpUri) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_1660b946db74c078, []int{0}\n}\nfunc (m *HttpUri) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *HttpUri) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_HttpUri.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *HttpUri) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_HttpUri.Merge(m, src)\n}\nfunc (m *HttpUri) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *HttpUri) XXX_DiscardUnknown() {\n\txxx_messageInfo_HttpUri.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_HttpUri proto.InternalMessageInfo\n\ntype isHttpUri_HttpUpstreamType interface {\n\tisHttpUri_HttpUpstreamType()\n\tEqual(interface{}) bool\n\tMarshalTo([]byte) (int, error)\n\tSize() int\n}\n\ntype HttpUri_Cluster struct {\n\tCluster string `protobuf:\"bytes,2,opt,name=cluster,proto3,oneof\"`\n}\n\nfunc (*HttpUri_Cluster) isHttpUri_HttpUpstreamType() {}\n\nfunc (m *HttpUri) GetHttpUpstreamType() isHttpUri_HttpUpstreamType {\n\tif m != nil 
{\n\t\treturn m.HttpUpstreamType\n\t}\n\treturn nil\n}\n\nfunc (m *HttpUri) GetUri() string {\n\tif m != nil {\n\t\treturn m.Uri\n\t}\n\treturn \"\"\n}\n\nfunc (m *HttpUri) GetCluster() string {\n\tif x, ok := m.GetHttpUpstreamType().(*HttpUri_Cluster); ok {\n\t\treturn x.Cluster\n\t}\n\treturn \"\"\n}\n\nfunc (m *HttpUri) GetTimeout() *time.Duration {\n\tif m != nil {\n\t\treturn m.Timeout\n\t}\n\treturn nil\n}\n\n// XXX_OneofFuncs is for the internal use of the proto package.\nfunc (*HttpUri) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {\n\treturn _HttpUri_OneofMarshaler, _HttpUri_OneofUnmarshaler, _HttpUri_OneofSizer, []interface{}{\n\t\t(*HttpUri_Cluster)(nil),\n\t}\n}\n\nfunc _HttpUri_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {\n\tm := msg.(*HttpUri)\n\t// http_upstream_type\n\tswitch x := m.HttpUpstreamType.(type) {\n\tcase *HttpUri_Cluster:\n\t\t_ = b.EncodeVarint(2<<3 | proto.WireBytes)\n\t\t_ = b.EncodeStringBytes(x.Cluster)\n\tcase nil:\n\tdefault:\n\t\treturn fmt.Errorf(\"HttpUri.HttpUpstreamType has unexpected type %T\", x)\n\t}\n\treturn nil\n}\n\nfunc _HttpUri_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {\n\tm := msg.(*HttpUri)\n\tswitch tag {\n\tcase 2: // http_upstream_type.cluster\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tx, err := b.DecodeStringBytes()\n\t\tm.HttpUpstreamType = &HttpUri_Cluster{x}\n\t\treturn true, err\n\tdefault:\n\t\treturn false, nil\n\t}\n}\n\nfunc _HttpUri_OneofSizer(msg proto.Message) (n int) {\n\tm := msg.(*HttpUri)\n\t// http_upstream_type\n\tswitch x := m.HttpUpstreamType.(type) {\n\tcase *HttpUri_Cluster:\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(len(x.Cluster)))\n\t\tn += len(x.Cluster)\n\tcase nil:\n\tdefault:\n\t\tpanic(fmt.Sprintf(\"proto: unexpected type 
%T in oneof\", x))\n\t}\n\treturn n\n}\n\nfunc init() {\n\tproto.RegisterType((*HttpUri)(nil), \"envoy.api.v2.core.HttpUri\")\n}\n\nfunc init() { proto.RegisterFile(\"envoy/api/v2/core/http_uri.proto\", fileDescriptor_1660b946db74c078) }\n\nvar fileDescriptor_1660b946db74c078 = []byte{\n\t// 298 bytes of a gzipped FileDescriptorProto\n\t0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x64, 0x90, 0xc1, 0x4e, 0x32, 0x31,\n\t0x14, 0x85, 0xb9, 0xf0, 0xff, 0x8e, 0x56, 0x37, 0x36, 0x24, 0x02, 0x8b, 0x61, 0xa2, 0x1b, 0x56,\n\t0x6d, 0x32, 0x3e, 0x81, 0x8d, 0x0b, 0xdc, 0x11, 0x12, 0xd7, 0xa4, 0x40, 0x1d, 0x9b, 0x00, 0xb7,\n\t0x29, 0xb7, 0x13, 0x79, 0x13, 0x1f, 0xc1, 0xb8, 0xf1, 0x35, 0x5c, 0xfa, 0x06, 0x9a, 0x79, 0x0a,\n\t0xc3, 0xca, 0xcc, 0x0c, 0xb3, 0x30, 0xae, 0x7a, 0x72, 0xcf, 0x39, 0xc9, 0xd7, 0xc3, 0x12, 0xb3,\n\t0xc9, 0x71, 0x27, 0xb5, 0xb3, 0x32, 0x4f, 0xe5, 0x02, 0xbd, 0x91, 0x8f, 0x44, 0x6e, 0x16, 0xbc,\n\t0x15, 0xce, 0x23, 0x21, 0x3f, 0xaf, 0x12, 0x42, 0x3b, 0x2b, 0xf2, 0x54, 0x94, 0x89, 0x41, 0x9c,\n\t0x21, 0x66, 0x2b, 0x23, 0xab, 0xc0, 0x3c, 0x3c, 0xc8, 0x65, 0xf0, 0x9a, 0x2c, 0x6e, 0xea, 0xca,\n\t0xa0, 0x9b, 0x61, 0x86, 0x95, 0x94, 0xa5, 0x3a, 0x5c, 0x2f, 0x72, 0xbd, 0xb2, 0x4b, 0x4d, 0x46,\n\t0x36, 0xa2, 0x36, 0x2e, 0xdf, 0x80, 0x45, 0x63, 0x22, 0x77, 0xef, 0x2d, 0xef, 0xb3, 0x4e, 0xf0,\n\t0xb6, 0x07, 0x09, 0x8c, 0x4e, 0x54, 0xb4, 0x57, 0xff, 0x7c, 0x3b, 0x81, 0x69, 0x79, 0xe3, 0x57,\n\t0x2c, 0x5a, 0xac, 0xc2, 0x96, 0x8c, 0xef, 0xb5, 0x7f, 0xd9, 0xe3, 0xd6, 0xb4, 0x71, 0xf8, 0x1d,\n\t0x8b, 0xc8, 0xae, 0x0d, 0x06, 0xea, 0x75, 0x12, 0x18, 0x9d, 0xa6, 0x7d, 0x51, 0xc3, 0x8a, 0x06,\n\t0x56, 0xdc, 0x1e, 0x60, 0x55, 0x77, 0xaf, 0xfe, 0xbf, 0x42, 0x3b, 0x6d, 0xd5, 0xef, 0x31, 0x3c,\n\t0x7f, 0x0e, 0x61, 0xda, 0xf4, 0x55, 0x9f, 0xf1, 0x7a, 0x0a, 0xb7, 0x25, 0x6f, 0xf4, 0x7a, 0x46,\n\t0x3b, 0x67, 0x78, 0xe7, 0x5b, 0x81, 0xba, 0x79, 0x29, 0x62, 0x78, 0x2f, 0x62, 0xf8, 0x28, 0x62,\n\t0xf8, 0x2a, 0x62, 0x60, 0x43, 0x8b, 0xa2, 0x1a, 0xca, 0x79, 0x7c, 0xda, 
0x89, 0x3f, 0x9b, 0xa9,\n\t0xb3, 0xc3, 0x0f, 0x27, 0x25, 0xc6, 0x04, 0xe6, 0x47, 0x15, 0xcf, 0xf5, 0x4f, 0x00, 0x00, 0x00,\n\t0xff, 0xff, 0xa1, 0xbd, 0x86, 0xa3, 0x81, 0x01, 0x00, 0x00,\n}\n\nfunc (this *HttpUri) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*HttpUri)\n\tif !ok {\n\t\tthat2, ok := that.(HttpUri)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Uri != that1.Uri {\n\t\treturn false\n\t}\n\tif that1.HttpUpstreamType == nil {\n\t\tif this.HttpUpstreamType != nil {\n\t\t\treturn false\n\t\t}\n\t} else if this.HttpUpstreamType == nil {\n\t\treturn false\n\t} else if !this.HttpUpstreamType.Equal(that1.HttpUpstreamType) {\n\t\treturn false\n\t}\n\tif this.Timeout != nil && that1.Timeout != nil {\n\t\tif *this.Timeout != *that1.Timeout {\n\t\t\treturn false\n\t\t}\n\t} else if this.Timeout != nil {\n\t\treturn false\n\t} else if that1.Timeout != nil {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *HttpUri_Cluster) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*HttpUri_Cluster)\n\tif !ok {\n\t\tthat2, ok := that.(HttpUri_Cluster)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Cluster != that1.Cluster {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (m *HttpUri) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *HttpUri) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.Uri) > 0 
{\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintHttpUri(dAtA, i, uint64(len(m.Uri)))\n\t\ti += copy(dAtA[i:], m.Uri)\n\t}\n\tif m.HttpUpstreamType != nil {\n\t\tnn1, err1 := m.HttpUpstreamType.MarshalTo(dAtA[i:])\n\t\tif err1 != nil {\n\t\t\treturn 0, err1\n\t\t}\n\t\ti += nn1\n\t}\n\tif m.Timeout != nil {\n\t\tdAtA[i] = 0x1a\n\t\ti++\n\t\ti = encodeVarintHttpUri(dAtA, i, uint64(github_com_gogo_protobuf_types.SizeOfStdDuration(*m.Timeout)))\n\t\tn2, err2 := github_com_gogo_protobuf_types.StdDurationMarshalTo(*m.Timeout, dAtA[i:])\n\t\tif err2 != nil {\n\t\t\treturn 0, err2\n\t\t}\n\t\ti += n2\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *HttpUri_Cluster) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tdAtA[i] = 0x12\n\ti++\n\ti = encodeVarintHttpUri(dAtA, i, uint64(len(m.Cluster)))\n\ti += copy(dAtA[i:], m.Cluster)\n\treturn i, nil\n}\nfunc encodeVarintHttpUri(dAtA []byte, offset int, v uint64) int {\n\tfor v >= 1<<7 {\n\t\tdAtA[offset] = uint8(v&0x7f | 0x80)\n\t\tv >>= 7\n\t\toffset++\n\t}\n\tdAtA[offset] = uint8(v)\n\treturn offset + 1\n}\nfunc (m *HttpUri) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.Uri)\n\tif l > 0 {\n\t\tn += 1 + l + sovHttpUri(uint64(l))\n\t}\n\tif m.HttpUpstreamType != nil {\n\t\tn += m.HttpUpstreamType.Size()\n\t}\n\tif m.Timeout != nil {\n\t\tl = github_com_gogo_protobuf_types.SizeOfStdDuration(*m.Timeout)\n\t\tn += 1 + l + sovHttpUri(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *HttpUri_Cluster) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.Cluster)\n\tn += 1 + l + sovHttpUri(uint64(l))\n\treturn n\n}\n\nfunc sovHttpUri(x uint64) (n int) {\n\treturn (math_bits.Len64(x|1) + 6) / 7\n}\nfunc sozHttpUri(x uint64) (n int) {\n\treturn sovHttpUri(uint64((x << 1) ^ uint64((int64(x) >> 63))))\n}\nfunc (m *HttpUri) 
Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowHttpUri\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: HttpUri: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: HttpUri: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Uri\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowHttpUri\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthHttpUri\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthHttpUri\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Uri = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Cluster\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowHttpUri\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 
io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthHttpUri\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthHttpUri\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.HttpUpstreamType = &HttpUri_Cluster{string(dAtA[iNdEx:postIndex])}\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Timeout\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowHttpUri\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthHttpUri\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthHttpUri\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Timeout == nil {\n\t\t\t\tm.Timeout = new(time.Duration)\n\t\t\t}\n\t\t\tif err := github_com_gogo_protobuf_types.StdDurationUnmarshal(m.Timeout, dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipHttpUri(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthHttpUri\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthHttpUri\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = 
append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc skipHttpUri(dAtA []byte) (n int, err error) {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn 0, ErrIntOverflowHttpUri\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= (uint64(b) & 0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\twireType := int(wire & 0x7)\n\t\tswitch wireType {\n\t\tcase 0:\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowHttpUri\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tiNdEx++\n\t\t\t\tif dAtA[iNdEx-1] < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 1:\n\t\t\tiNdEx += 8\n\t\t\treturn iNdEx, nil\n\t\tcase 2:\n\t\t\tvar length int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowHttpUri\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tlength |= (int(b) & 0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif length < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthHttpUri\n\t\t\t}\n\t\t\tiNdEx += length\n\t\t\tif iNdEx < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthHttpUri\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 3:\n\t\t\tfor {\n\t\t\t\tvar innerWire uint64\n\t\t\t\tvar start int = iNdEx\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn 0, ErrIntOverflowHttpUri\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\tinnerWire 
|= (uint64(b) & 0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tinnerWireType := int(innerWire & 0x7)\n\t\t\t\tif innerWireType == 4 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tnext, err := skipHttpUri(dAtA[start:])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 0, err\n\t\t\t\t}\n\t\t\t\tiNdEx = start + next\n\t\t\t\tif iNdEx < 0 {\n\t\t\t\t\treturn 0, ErrInvalidLengthHttpUri\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 4:\n\t\t\treturn iNdEx, nil\n\t\tcase 5:\n\t\t\tiNdEx += 4\n\t\t\treturn iNdEx, nil\n\t\tdefault:\n\t\t\treturn 0, fmt.Errorf(\"proto: illegal wireType %d\", wireType)\n\t\t}\n\t}\n\tpanic(\"unreachable\")\n}\n\nvar (\n\tErrInvalidLengthHttpUri = fmt.Errorf(\"proto: negative length found during unmarshaling\")\n\tErrIntOverflowHttpUri   = fmt.Errorf(\"proto: integer overflow\")\n)\n"
  },
  {
    "path": "third_party/generated/envoyproxy/data-plane-api/envoy/api/v2/discovery.pb.go",
    "content": "// Code generated by protoc-gen-gogo. DO NOT EDIT.\n// source: envoy/api/v2/discovery.proto\n\npackage v2\n\nimport (\n\tbytes \"bytes\"\n\tfmt \"fmt\"\n\trpc \"github.com/gogo/googleapis/google/rpc\"\n\t_ \"github.com/gogo/protobuf/gogoproto\"\n\tproto \"github.com/gogo/protobuf/proto\"\n\tgithub_com_gogo_protobuf_sortkeys \"github.com/gogo/protobuf/sortkeys\"\n\ttypes \"github.com/gogo/protobuf/types\"\n\tcore \"go.aporeto.io/trireme-lib/third_party/generated/envoyproxy/data-plane-api/envoy/api/v2/core\"\n\tio \"io\"\n\tmath \"math\"\n\tmath_bits \"math/bits\"\n)\n\n// Reference imports to suppress errors if they are not otherwise used.\nvar _ = proto.Marshal\nvar _ = fmt.Errorf\nvar _ = math.Inf\n\n// This is a compile-time assertion to ensure that this generated file\n// is compatible with the proto package it is being compiled against.\n// A compilation error at this line likely means your copy of the\n// proto package needs to be updated.\nconst _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package\n\n// A DiscoveryRequest requests a set of versioned resources of the same type for\n// a given Envoy node on some API.\ntype DiscoveryRequest struct {\n\t// The version_info provided in the request messages will be the version_info\n\t// received with the most recent successfully processed response or empty on\n\t// the first request. It is expected that no new request is sent after a\n\t// response is received until the Envoy instance is ready to ACK/NACK the new\n\t// configuration. ACK/NACK takes place by returning the new API config version\n\t// as applied or the previous API config version respectively. 
Each type_url\n\t// (see below) has an independent version associated with it.\n\tVersionInfo string `protobuf:\"bytes,1,opt,name=version_info,json=versionInfo,proto3\" json:\"version_info,omitempty\"`\n\t// The node making the request.\n\tNode *core.Node `protobuf:\"bytes,2,opt,name=node,proto3\" json:\"node,omitempty\"`\n\t// List of resources to subscribe to, e.g. list of cluster names or a route\n\t// configuration name. If this is empty, all resources for the API are\n\t// returned. LDS/CDS may have empty resource_names, which will cause all\n\t// resources for the Envoy instance to be returned. The LDS and CDS responses\n\t// will then imply a number of resources that need to be fetched via EDS/RDS,\n\t// which will be explicitly enumerated in resource_names.\n\tResourceNames []string `protobuf:\"bytes,3,rep,name=resource_names,json=resourceNames,proto3\" json:\"resource_names,omitempty\"`\n\t// Type of the resource that is being requested, e.g.\n\t// \"type.googleapis.com/envoy.api.v2.ClusterLoadAssignment\". This is implicit\n\t// in requests made via singleton xDS APIs such as CDS, LDS, etc. but is\n\t// required for ADS.\n\tTypeUrl string `protobuf:\"bytes,4,opt,name=type_url,json=typeUrl,proto3\" json:\"type_url,omitempty\"`\n\t// nonce corresponding to DiscoveryResponse being ACK/NACKed. See above\n\t// discussion on version_info and the DiscoveryResponse nonce comment. This\n\t// may be empty if no nonce is available, e.g. at startup or for non-stream\n\t// xDS implementations.\n\tResponseNonce string `protobuf:\"bytes,5,opt,name=response_nonce,json=responseNonce,proto3\" json:\"response_nonce,omitempty\"`\n\t// This is populated when the previous :ref:`DiscoveryResponse <envoy_api_msg_DiscoveryResponse>`\n\t// failed to update configuration. The *message* field in *error_details* provides the Envoy\n\t// internal exception related to the failure. 
It is only intended for consumption during manual\n\t// debugging, the string provided is not guaranteed to be stable across Envoy versions.\n\tErrorDetail          *rpc.Status `protobuf:\"bytes,6,opt,name=error_detail,json=errorDetail,proto3\" json:\"error_detail,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}    `json:\"-\"`\n\tXXX_unrecognized     []byte      `json:\"-\"`\n\tXXX_sizecache        int32       `json:\"-\"`\n}\n\nfunc (m *DiscoveryRequest) Reset()         { *m = DiscoveryRequest{} }\nfunc (m *DiscoveryRequest) String() string { return proto.CompactTextString(m) }\nfunc (*DiscoveryRequest) ProtoMessage()    {}\nfunc (*DiscoveryRequest) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_2c7365e287e5c035, []int{0}\n}\nfunc (m *DiscoveryRequest) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *DiscoveryRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *DiscoveryRequest) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_DiscoveryRequest.Merge(m, src)\n}\nfunc (m *DiscoveryRequest) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *DiscoveryRequest) XXX_DiscardUnknown() {\n\txxx_messageInfo_DiscoveryRequest.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_DiscoveryRequest proto.InternalMessageInfo\n\nfunc (m *DiscoveryRequest) GetVersionInfo() string {\n\tif m != nil {\n\t\treturn m.VersionInfo\n\t}\n\treturn \"\"\n}\n\nfunc (m *DiscoveryRequest) GetNode() *core.Node {\n\tif m != nil {\n\t\treturn m.Node\n\t}\n\treturn nil\n}\n\nfunc (m *DiscoveryRequest) GetResourceNames() []string {\n\tif m != nil {\n\t\treturn m.ResourceNames\n\t}\n\treturn nil\n}\n\nfunc (m *DiscoveryRequest) GetTypeUrl() string {\n\tif m != nil {\n\t\treturn m.TypeUrl\n\t}\n\treturn \"\"\n}\n\nfunc (m *DiscoveryRequest) GetResponseNonce() string {\n\tif m != nil {\n\t\treturn m.ResponseNonce\n\t}\n\treturn \"\"\n}\n\nfunc (m 
*DiscoveryRequest) GetErrorDetail() *rpc.Status {\n\tif m != nil {\n\t\treturn m.ErrorDetail\n\t}\n\treturn nil\n}\n\ntype DiscoveryResponse struct {\n\t// The version of the response data.\n\tVersionInfo string `protobuf:\"bytes,1,opt,name=version_info,json=versionInfo,proto3\" json:\"version_info,omitempty\"`\n\t// The response resources. These resources are typed and depend on the API being called.\n\tResources []*types.Any `protobuf:\"bytes,2,rep,name=resources,proto3\" json:\"resources,omitempty\"`\n\t// [#not-implemented-hide:]\n\t// Canary is used to support two Envoy command line flags:\n\t//\n\t// * --terminate-on-canary-transition-failure. When set, Envoy is able to\n\t//   terminate if it detects that configuration is stuck at canary. Consider\n\t//   this example sequence of updates:\n\t//   - Management server applies a canary config successfully.\n\t//   - Management server rolls back to a production config.\n\t//   - Envoy rejects the new production config.\n\t//   Since there is no sensible way to continue receiving configuration\n\t//   updates, Envoy will then terminate and apply production config from a\n\t//   clean slate.\n\t// * --dry-run-canary. When set, a canary response will never be applied, only\n\t//   validated via a dry run.\n\tCanary bool `protobuf:\"varint,3,opt,name=canary,proto3\" json:\"canary,omitempty\"`\n\t// Type URL for resources. Identifies the xDS API when muxing over ADS.\n\t// Must be consistent with the type_url in the 'resources' repeated Any (if non-empty).\n\tTypeUrl string `protobuf:\"bytes,4,opt,name=type_url,json=typeUrl,proto3\" json:\"type_url,omitempty\"`\n\t// For gRPC based subscriptions, the nonce provides a way to explicitly ack a\n\t// specific DiscoveryResponse in a following DiscoveryRequest. Additional\n\t// messages may have been sent by Envoy to the management server for the\n\t// previous version on the stream prior to this DiscoveryResponse, that were\n\t// unprocessed at response send time. 
The nonce allows the management server\n\t// to ignore any further DiscoveryRequests for the previous version until a\n\t// DiscoveryRequest bearing the nonce. The nonce is optional and is not\n\t// required for non-stream based xDS implementations.\n\tNonce string `protobuf:\"bytes,5,opt,name=nonce,proto3\" json:\"nonce,omitempty\"`\n\t// [#not-implemented-hide:]\n\t// The control plane instance that sent the response.\n\tControlPlane         *core.ControlPlane `protobuf:\"bytes,6,opt,name=control_plane,json=controlPlane,proto3\" json:\"control_plane,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}           `json:\"-\"`\n\tXXX_unrecognized     []byte             `json:\"-\"`\n\tXXX_sizecache        int32              `json:\"-\"`\n}\n\nfunc (m *DiscoveryResponse) Reset()         { *m = DiscoveryResponse{} }\nfunc (m *DiscoveryResponse) String() string { return proto.CompactTextString(m) }\nfunc (*DiscoveryResponse) ProtoMessage()    {}\nfunc (*DiscoveryResponse) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_2c7365e287e5c035, []int{1}\n}\nfunc (m *DiscoveryResponse) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *DiscoveryResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *DiscoveryResponse) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_DiscoveryResponse.Merge(m, src)\n}\nfunc (m *DiscoveryResponse) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *DiscoveryResponse) XXX_DiscardUnknown() {\n\txxx_messageInfo_DiscoveryResponse.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_DiscoveryResponse proto.InternalMessageInfo\n\nfunc (m *DiscoveryResponse) GetVersionInfo() string {\n\tif m != nil {\n\t\treturn m.VersionInfo\n\t}\n\treturn \"\"\n}\n\nfunc (m *DiscoveryResponse) GetResources() []*types.Any {\n\tif m != nil {\n\t\treturn m.Resources\n\t}\n\treturn nil\n}\n\nfunc (m *DiscoveryResponse) 
GetCanary() bool {\n\tif m != nil {\n\t\treturn m.Canary\n\t}\n\treturn false\n}\n\nfunc (m *DiscoveryResponse) GetTypeUrl() string {\n\tif m != nil {\n\t\treturn m.TypeUrl\n\t}\n\treturn \"\"\n}\n\nfunc (m *DiscoveryResponse) GetNonce() string {\n\tif m != nil {\n\t\treturn m.Nonce\n\t}\n\treturn \"\"\n}\n\nfunc (m *DiscoveryResponse) GetControlPlane() *core.ControlPlane {\n\tif m != nil {\n\t\treturn m.ControlPlane\n\t}\n\treturn nil\n}\n\n// DeltaDiscoveryRequest and DeltaDiscoveryResponse are used in a new gRPC\n// endpoint for Delta xDS.\n//\n// With Delta xDS, the DeltaDiscoveryResponses do not need to include a full\n// snapshot of the tracked resources. Instead, DeltaDiscoveryResponses are a\n// diff to the state of a xDS client.\n// In Delta XDS there are per-resource versions, which allow tracking state at\n// the resource granularity.\n// An xDS Delta session is always in the context of a gRPC bidirectional\n// stream. This allows the xDS server to keep track of the state of xDS clients\n// connected to it.\n//\n// In Delta xDS the nonce field is required and used to pair\n// DeltaDiscoveryResponse to a DeltaDiscoveryRequest ACK or NACK.\n// Optionally, a response message level system_version_info is present for\n// debugging purposes only.\n//\n// DeltaDiscoveryRequest plays two independent roles. 
Any DeltaDiscoveryRequest\n// can be either or both of: [1] informing the server of what resources the\n// client has gained/lost interest in (using resource_names_subscribe and\n// resource_names_unsubscribe), or [2] (N)ACKing an earlier resource update from\n// the server (using response_nonce, with presence of error_detail making it a NACK).\n// Additionally, the first message (for a given type_url) of a reconnected gRPC stream\n// has a third role: informing the server of the resources (and their versions)\n// that the client already possesses, using the initial_resource_versions field.\n//\n// As with state-of-the-world, when multiple resource types are multiplexed (ADS),\n// all requests/acknowledgments/updates are logically walled off by type_url:\n// a Cluster ACK exists in a completely separate world from a prior Route NACK.\n// In particular, initial_resource_versions being sent at the \"start\" of every\n// gRPC stream actually entails a message for each type_url, each with its own\n// initial_resource_versions.\ntype DeltaDiscoveryRequest struct {\n\t// The node making the request.\n\tNode *core.Node `protobuf:\"bytes,1,opt,name=node,proto3\" json:\"node,omitempty\"`\n\t// Type of the resource that is being requested, e.g.\n\t// \"type.googleapis.com/envoy.api.v2.ClusterLoadAssignment\".\n\tTypeUrl string `protobuf:\"bytes,2,opt,name=type_url,json=typeUrl,proto3\" json:\"type_url,omitempty\"`\n\t// DeltaDiscoveryRequests allow the client to add or remove individual\n\t// resources to the set of tracked resources in the context of a stream.\n\t// All resource names in the resource_names_subscribe list are added to the\n\t// set of tracked resources and all resource names in the resource_names_unsubscribe\n\t// list are removed from the set of tracked resources.\n\t//\n\t// *Unlike* state-of-the-world xDS, an empty resource_names_subscribe or\n\t// resource_names_unsubscribe list simply means that no resources are to be\n\t// added or removed to the 
resource list.\n\t// *Like* state-of-the-world xDS, the server must send updates for all tracked\n\t// resources, but can also send updates for resources the client has not subscribed to.\n\t//\n\t// NOTE: the server must respond with all resources listed in resource_names_subscribe,\n\t// even if it believes the client has the most recent version of them. The reason:\n\t// the client may have dropped them, but then regained interest before it had a chance\n\t// to send the unsubscribe message. See DeltaSubscriptionStateTest.RemoveThenAdd.\n\t//\n\t// These two fields can be set in any DeltaDiscoveryRequest, including ACKs\n\t// and initial_resource_versions.\n\t//\n\t// A list of Resource names to add to the list of tracked resources.\n\tResourceNamesSubscribe []string `protobuf:\"bytes,3,rep,name=resource_names_subscribe,json=resourceNamesSubscribe,proto3\" json:\"resource_names_subscribe,omitempty\"`\n\t// A list of Resource names to remove from the list of tracked resources.\n\tResourceNamesUnsubscribe []string `protobuf:\"bytes,4,rep,name=resource_names_unsubscribe,json=resourceNamesUnsubscribe,proto3\" json:\"resource_names_unsubscribe,omitempty\"`\n\t// Informs the server of the versions of the resources the xDS client knows of, to enable the\n\t// client to continue the same logical xDS session even in the face of gRPC stream reconnection.\n\t// It will not be populated: [1] in the very first stream of a session, since the client will\n\t// not yet have any resources,  [2] in any message after the first in a stream (for a given\n\t// type_url), since the server will already be correctly tracking the client's state.\n\t// (In ADS, the first message *of each type_url* of a reconnected stream populates this map.)\n\t// The map's keys are names of xDS resources known to the xDS client.\n\t// The map's values are opaque resource versions.\n\tInitialResourceVersions map[string]string 
`protobuf:\"bytes,5,rep,name=initial_resource_versions,json=initialResourceVersions,proto3\" json:\"initial_resource_versions,omitempty\" protobuf_key:\"bytes,1,opt,name=key,proto3\" protobuf_val:\"bytes,2,opt,name=value,proto3\"`\n\t// When the DeltaDiscoveryRequest is an ACK or NACK message in response\n\t// to a previous DeltaDiscoveryResponse, the response_nonce must be the\n\t// nonce in the DeltaDiscoveryResponse.\n\t// Otherwise response_nonce must be omitted.\n\tResponseNonce string `protobuf:\"bytes,6,opt,name=response_nonce,json=responseNonce,proto3\" json:\"response_nonce,omitempty\"`\n\t// This is populated when the previous :ref:`DiscoveryResponse <envoy_api_msg_DiscoveryResponse>`\n\t// failed to update configuration. The *message* field in *error_details*\n\t// provides the Envoy internal exception related to the failure.\n\tErrorDetail          *rpc.Status `protobuf:\"bytes,7,opt,name=error_detail,json=errorDetail,proto3\" json:\"error_detail,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}    `json:\"-\"`\n\tXXX_unrecognized     []byte      `json:\"-\"`\n\tXXX_sizecache        int32       `json:\"-\"`\n}\n\nfunc (m *DeltaDiscoveryRequest) Reset()         { *m = DeltaDiscoveryRequest{} }\nfunc (m *DeltaDiscoveryRequest) String() string { return proto.CompactTextString(m) }\nfunc (*DeltaDiscoveryRequest) ProtoMessage()    {}\nfunc (*DeltaDiscoveryRequest) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_2c7365e287e5c035, []int{2}\n}\nfunc (m *DeltaDiscoveryRequest) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *DeltaDiscoveryRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *DeltaDiscoveryRequest) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_DeltaDiscoveryRequest.Merge(m, src)\n}\nfunc (m *DeltaDiscoveryRequest) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m 
*DeltaDiscoveryRequest) XXX_DiscardUnknown() {\n\txxx_messageInfo_DeltaDiscoveryRequest.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_DeltaDiscoveryRequest proto.InternalMessageInfo\n\nfunc (m *DeltaDiscoveryRequest) GetNode() *core.Node {\n\tif m != nil {\n\t\treturn m.Node\n\t}\n\treturn nil\n}\n\nfunc (m *DeltaDiscoveryRequest) GetTypeUrl() string {\n\tif m != nil {\n\t\treturn m.TypeUrl\n\t}\n\treturn \"\"\n}\n\nfunc (m *DeltaDiscoveryRequest) GetResourceNamesSubscribe() []string {\n\tif m != nil {\n\t\treturn m.ResourceNamesSubscribe\n\t}\n\treturn nil\n}\n\nfunc (m *DeltaDiscoveryRequest) GetResourceNamesUnsubscribe() []string {\n\tif m != nil {\n\t\treturn m.ResourceNamesUnsubscribe\n\t}\n\treturn nil\n}\n\nfunc (m *DeltaDiscoveryRequest) GetInitialResourceVersions() map[string]string {\n\tif m != nil {\n\t\treturn m.InitialResourceVersions\n\t}\n\treturn nil\n}\n\nfunc (m *DeltaDiscoveryRequest) GetResponseNonce() string {\n\tif m != nil {\n\t\treturn m.ResponseNonce\n\t}\n\treturn \"\"\n}\n\nfunc (m *DeltaDiscoveryRequest) GetErrorDetail() *rpc.Status {\n\tif m != nil {\n\t\treturn m.ErrorDetail\n\t}\n\treturn nil\n}\n\ntype DeltaDiscoveryResponse struct {\n\t// The version of the response data (used for debugging).\n\tSystemVersionInfo string `protobuf:\"bytes,1,opt,name=system_version_info,json=systemVersionInfo,proto3\" json:\"system_version_info,omitempty\"`\n\t// The response resources. These are typed resources, whose types must match\n\t// the type_url field.\n\tResources []*Resource `protobuf:\"bytes,2,rep,name=resources,proto3\" json:\"resources,omitempty\"`\n\t// Type URL for resources. 
Identifies the xDS API when muxing over ADS.\n\t// Must be consistent with the type_url in the Any within 'resources' if 'resources' is non-empty.\n\tTypeUrl string `protobuf:\"bytes,4,opt,name=type_url,json=typeUrl,proto3\" json:\"type_url,omitempty\"`\n\t// Names of resources that have been deleted and are to be removed from the xDS client.\n\t// Removed resources for missing resources can be ignored.\n\tRemovedResources []string `protobuf:\"bytes,6,rep,name=removed_resources,json=removedResources,proto3\" json:\"removed_resources,omitempty\"`\n\t// The nonce provides a way for DeltaDiscoveryRequests to uniquely\n\t// reference a DeltaDiscoveryResponse when (N)ACKing. The nonce is required.\n\tNonce                string   `protobuf:\"bytes,5,opt,name=nonce,proto3\" json:\"nonce,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *DeltaDiscoveryResponse) Reset()         { *m = DeltaDiscoveryResponse{} }\nfunc (m *DeltaDiscoveryResponse) String() string { return proto.CompactTextString(m) }\nfunc (*DeltaDiscoveryResponse) ProtoMessage()    {}\nfunc (*DeltaDiscoveryResponse) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_2c7365e287e5c035, []int{3}\n}\nfunc (m *DeltaDiscoveryResponse) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *DeltaDiscoveryResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *DeltaDiscoveryResponse) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_DeltaDiscoveryResponse.Merge(m, src)\n}\nfunc (m *DeltaDiscoveryResponse) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *DeltaDiscoveryResponse) XXX_DiscardUnknown() {\n\txxx_messageInfo_DeltaDiscoveryResponse.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_DeltaDiscoveryResponse proto.InternalMessageInfo\n\nfunc (m 
*DeltaDiscoveryResponse) GetSystemVersionInfo() string {\n\tif m != nil {\n\t\treturn m.SystemVersionInfo\n\t}\n\treturn \"\"\n}\n\nfunc (m *DeltaDiscoveryResponse) GetResources() []*Resource {\n\tif m != nil {\n\t\treturn m.Resources\n\t}\n\treturn nil\n}\n\nfunc (m *DeltaDiscoveryResponse) GetTypeUrl() string {\n\tif m != nil {\n\t\treturn m.TypeUrl\n\t}\n\treturn \"\"\n}\n\nfunc (m *DeltaDiscoveryResponse) GetRemovedResources() []string {\n\tif m != nil {\n\t\treturn m.RemovedResources\n\t}\n\treturn nil\n}\n\nfunc (m *DeltaDiscoveryResponse) GetNonce() string {\n\tif m != nil {\n\t\treturn m.Nonce\n\t}\n\treturn \"\"\n}\n\ntype Resource struct {\n\t// The resource's name, to distinguish it from others of the same type of resource.\n\tName string `protobuf:\"bytes,3,opt,name=name,proto3\" json:\"name,omitempty\"`\n\t// [#not-implemented-hide:]\n\t// The aliases are a list of other names that this resource can go by.\n\tAliases []string `protobuf:\"bytes,4,rep,name=aliases,proto3\" json:\"aliases,omitempty\"`\n\t// The resource level version. 
It allows xDS to track the state of individual\n\t// resources.\n\tVersion string `protobuf:\"bytes,1,opt,name=version,proto3\" json:\"version,omitempty\"`\n\t// The resource being tracked.\n\tResource             *types.Any `protobuf:\"bytes,2,opt,name=resource,proto3\" json:\"resource,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}   `json:\"-\"`\n\tXXX_unrecognized     []byte     `json:\"-\"`\n\tXXX_sizecache        int32      `json:\"-\"`\n}\n\nfunc (m *Resource) Reset()         { *m = Resource{} }\nfunc (m *Resource) String() string { return proto.CompactTextString(m) }\nfunc (*Resource) ProtoMessage()    {}\nfunc (*Resource) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_2c7365e287e5c035, []int{4}\n}\nfunc (m *Resource) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *Resource) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *Resource) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_Resource.Merge(m, src)\n}\nfunc (m *Resource) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *Resource) XXX_DiscardUnknown() {\n\txxx_messageInfo_Resource.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_Resource proto.InternalMessageInfo\n\nfunc (m *Resource) GetName() string {\n\tif m != nil {\n\t\treturn m.Name\n\t}\n\treturn \"\"\n}\n\nfunc (m *Resource) GetAliases() []string {\n\tif m != nil {\n\t\treturn m.Aliases\n\t}\n\treturn nil\n}\n\nfunc (m *Resource) GetVersion() string {\n\tif m != nil {\n\t\treturn m.Version\n\t}\n\treturn \"\"\n}\n\nfunc (m *Resource) GetResource() *types.Any {\n\tif m != nil {\n\t\treturn m.Resource\n\t}\n\treturn nil\n}\n\nfunc init() {\n\tproto.RegisterType((*DiscoveryRequest)(nil), \"envoy.api.v2.DiscoveryRequest\")\n\tproto.RegisterType((*DiscoveryResponse)(nil), \"envoy.api.v2.DiscoveryResponse\")\n\tproto.RegisterType((*DeltaDiscoveryRequest)(nil), 
\"envoy.api.v2.DeltaDiscoveryRequest\")\n\tproto.RegisterMapType((map[string]string)(nil), \"envoy.api.v2.DeltaDiscoveryRequest.InitialResourceVersionsEntry\")\n\tproto.RegisterType((*DeltaDiscoveryResponse)(nil), \"envoy.api.v2.DeltaDiscoveryResponse\")\n\tproto.RegisterType((*Resource)(nil), \"envoy.api.v2.Resource\")\n}\n\nfunc init() { proto.RegisterFile(\"envoy/api/v2/discovery.proto\", fileDescriptor_2c7365e287e5c035) }\n\nvar fileDescriptor_2c7365e287e5c035 = []byte{\n\t// 698 bytes of a gzipped FileDescriptorProto\n\t0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x54, 0x41, 0x6b, 0xdb, 0x4c,\n\t0x10, 0x65, 0x6d, 0xc7, 0xb1, 0xd7, 0x4e, 0x48, 0xf6, 0xcb, 0xe7, 0x28, 0x26, 0xb8, 0xae, 0xa1,\n\t0x60, 0x08, 0x48, 0xc5, 0x6d, 0x21, 0x94, 0x1e, 0xda, 0xd4, 0x2d, 0xa4, 0x87, 0x10, 0x14, 0x92,\n\t0x43, 0x2f, 0x62, 0x2d, 0x4f, 0x8c, 0xa8, 0xb2, 0xab, 0xee, 0x4a, 0xa2, 0x82, 0x9e, 0x4a, 0x7f,\n\t0x4c, 0x7f, 0x4a, 0x8f, 0x3d, 0xf6, 0xd0, 0x43, 0xf1, 0xbf, 0xe8, 0xa9, 0x45, 0xab, 0x95, 0x6d,\n\t0x25, 0x22, 0xf8, 0xb6, 0xb3, 0xf3, 0xf6, 0x69, 0x66, 0xde, 0x1b, 0xe1, 0x43, 0x60, 0x31, 0x4f,\n\t0x2c, 0x1a, 0x78, 0x56, 0x3c, 0xb2, 0xa6, 0x9e, 0x74, 0x79, 0x0c, 0x22, 0x31, 0x03, 0xc1, 0x43,\n\t0x4e, 0xda, 0x2a, 0x6b, 0xd2, 0xc0, 0x33, 0xe3, 0x51, 0xb7, 0x88, 0x75, 0xb9, 0x00, 0x6b, 0x42,\n\t0x25, 0x64, 0xd8, 0xee, 0xc1, 0x8c, 0xf3, 0x99, 0x0f, 0x96, 0x8a, 0x26, 0xd1, 0xb5, 0x45, 0x99,\n\t0xa6, 0xe9, 0xee, 0xeb, 0x94, 0x08, 0x5c, 0x4b, 0x86, 0x34, 0x8c, 0xa4, 0x4e, 0xec, 0xcd, 0xf8,\n\t0x8c, 0xab, 0xa3, 0x95, 0x9e, 0xb2, 0xdb, 0xc1, 0x97, 0x0a, 0xde, 0x19, 0xe7, 0x95, 0xd8, 0xf0,\n\t0x31, 0x02, 0x19, 0x92, 0x87, 0xb8, 0x1d, 0x83, 0x90, 0x1e, 0x67, 0x8e, 0xc7, 0xae, 0xb9, 0x81,\n\t0xfa, 0x68, 0xd8, 0xb4, 0x5b, 0xfa, 0xee, 0x94, 0x5d, 0x73, 0x72, 0x84, 0x6b, 0x8c, 0x4f, 0xc1,\n\t0xa8, 0xf4, 0xd1, 0xb0, 0x35, 0xda, 0x37, 0x57, 0x8b, 0x37, 0xd3, 0x72, 0xcd, 0x33, 0x3e, 0x05,\n\t0x5b, 0x81, 0xc8, 0x23, 0xbc, 0x2d, 0x40, 0xf2, 0x48, 0xb8, 0xe0, 0x30, 0x7a, 
0x03, 0xd2, 0xa8,\n\t0xf6, 0xab, 0xc3, 0xa6, 0xbd, 0x95, 0xdf, 0x9e, 0xa5, 0x97, 0xe4, 0x00, 0x37, 0xc2, 0x24, 0x00,\n\t0x27, 0x12, 0xbe, 0x51, 0x53, 0x9f, 0xdc, 0x4c, 0xe3, 0x4b, 0xe1, 0x6b, 0x86, 0x80, 0x33, 0x09,\n\t0x0e, 0xe3, 0xcc, 0x05, 0x63, 0x43, 0x01, 0xb6, 0xf2, 0xdb, 0xb3, 0xf4, 0x92, 0x3c, 0xc3, 0x6d,\n\t0x10, 0x82, 0x0b, 0x67, 0x0a, 0x21, 0xf5, 0x7c, 0xa3, 0xae, 0xaa, 0x23, 0x66, 0x36, 0x13, 0x53,\n\t0x04, 0xae, 0x79, 0xa1, 0x66, 0x62, 0xb7, 0x14, 0x6e, 0xac, 0x60, 0x83, 0x3f, 0x08, 0xef, 0xae,\n\t0x0c, 0x21, 0x63, 0x5c, 0x67, 0x0a, 0x23, 0xdc, 0xcc, 0x5b, 0x90, 0x46, 0xa5, 0x5f, 0x1d, 0xb6,\n\t0x46, 0x7b, 0xf9, 0xc7, 0x72, 0x6d, 0xcc, 0x57, 0x2c, 0xb1, 0x97, 0x30, 0xd2, 0xc1, 0x75, 0x97,\n\t0x32, 0x2a, 0x12, 0xa3, 0xda, 0x47, 0xc3, 0x86, 0xad, 0xa3, 0xfb, 0xba, 0xdf, 0xc3, 0x1b, 0xab,\n\t0x4d, 0x67, 0x01, 0x19, 0xe3, 0x2d, 0x97, 0xb3, 0x50, 0x70, 0xdf, 0x09, 0x7c, 0xca, 0x40, 0x77,\n\t0xfb, 0xa0, 0x44, 0x8b, 0xd7, 0x19, 0xee, 0x3c, 0x85, 0xd9, 0x6d, 0x77, 0x25, 0x1a, 0xfc, 0xad,\n\t0xe2, 0xff, 0xc7, 0xe0, 0x87, 0xf4, 0x8e, 0x0b, 0x72, 0x89, 0xd1, 0x3a, 0x12, 0xaf, 0x56, 0x5f,\n\t0x29, 0x56, 0x7f, 0x8c, 0x8d, 0xa2, 0xfa, 0x8e, 0x8c, 0x26, 0xd2, 0x15, 0xde, 0x04, 0xb4, 0x0f,\n\t0x3a, 0x05, 0x1f, 0x5c, 0xe4, 0x59, 0xf2, 0x02, 0x77, 0x6f, 0xbd, 0x8c, 0xd8, 0xf2, 0x6d, 0x4d,\n\t0xbd, 0x35, 0x0a, 0x6f, 0x2f, 0x97, 0x79, 0xf2, 0x19, 0x1f, 0x78, 0xcc, 0x0b, 0x3d, 0xea, 0x3b,\n\t0x0b, 0x16, 0x2d, 0x9e, 0x34, 0x36, 0x94, 0x58, 0x2f, 0x8b, 0x4d, 0x95, 0xce, 0xc1, 0x3c, 0xcd,\n\t0x48, 0x6c, 0xcd, 0x71, 0xa5, 0x29, 0xde, 0xb0, 0x50, 0x24, 0xf6, 0xbe, 0x57, 0x9e, 0x2d, 0x71,\n\t0x6c, 0x7d, 0x1d, 0xc7, 0x6e, 0xae, 0xe5, 0xd8, 0xee, 0x3b, 0x7c, 0x78, 0x5f, 0x59, 0x64, 0x07,\n\t0x57, 0x3f, 0x40, 0xa2, 0x2d, 0x9b, 0x1e, 0x53, 0x0f, 0xc5, 0xd4, 0x8f, 0x40, 0xab, 0x93, 0x05,\n\t0xcf, 0x2b, 0xc7, 0x68, 0xf0, 0x0b, 0xe1, 0xce, 0xed, 0xce, 0xf5, 0x0a, 0x98, 0xf8, 0x3f, 0x99,\n\t0xc8, 0x10, 0x6e, 0x9c, 0x92, 0x4d, 0xd8, 0xcd, 0x52, 0x57, 0x2b, 0xfb, 0xf0, 0xf4, 0xee, 
0x3e,\n\t0x74, 0x8a, 0x23, 0xce, 0xcb, 0x5d, 0xdd, 0x88, 0x7b, 0x9c, 0x7f, 0x84, 0x77, 0x05, 0xdc, 0xf0,\n\t0x18, 0xa6, 0xce, 0x92, 0xb8, 0xae, 0x84, 0xdf, 0xd1, 0x09, 0x7b, 0xc1, 0x53, 0xba, 0x26, 0x83,\n\t0xaf, 0x08, 0x37, 0x72, 0x0c, 0x21, 0xb8, 0x96, 0x1a, 0x49, 0xad, 0x5e, 0xd3, 0x56, 0x67, 0x62,\n\t0xe0, 0x4d, 0xea, 0x7b, 0x54, 0x82, 0xd4, 0x96, 0xca, 0xc3, 0x34, 0xa3, 0xfb, 0xd6, 0x2d, 0xe7,\n\t0x21, 0x79, 0x8c, 0x1b, 0x79, 0x3d, 0xfa, 0x17, 0x58, 0xbe, 0xf7, 0x0b, 0xd4, 0xc9, 0xdb, 0x6f,\n\t0xf3, 0x1e, 0xfa, 0x3e, 0xef, 0xa1, 0x1f, 0xf3, 0x1e, 0xfa, 0x39, 0xef, 0xa1, 0xdf, 0xf3, 0x1e,\n\t0xc2, 0x5d, 0x8f, 0x67, 0xf3, 0x09, 0x04, 0xff, 0x94, 0x14, 0x46, 0x75, 0xb2, 0xbd, 0xd0, 0xe3,\n\t0x3c, 0xa5, 0x3c, 0x47, 0xef, 0x2b, 0xf1, 0x68, 0x52, 0x57, 0xfc, 0x4f, 0xfe, 0x05, 0x00, 0x00,\n\t0xff, 0xff, 0x85, 0x0a, 0x39, 0xfc, 0x4d, 0x06, 0x00, 0x00,\n}\n\nfunc (this *DiscoveryRequest) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*DiscoveryRequest)\n\tif !ok {\n\t\tthat2, ok := that.(DiscoveryRequest)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.VersionInfo != that1.VersionInfo {\n\t\treturn false\n\t}\n\tif !this.Node.Equal(that1.Node) {\n\t\treturn false\n\t}\n\tif len(this.ResourceNames) != len(that1.ResourceNames) {\n\t\treturn false\n\t}\n\tfor i := range this.ResourceNames {\n\t\tif this.ResourceNames[i] != that1.ResourceNames[i] {\n\t\t\treturn false\n\t\t}\n\t}\n\tif this.TypeUrl != that1.TypeUrl {\n\t\treturn false\n\t}\n\tif this.ResponseNonce != that1.ResponseNonce {\n\t\treturn false\n\t}\n\tif !this.ErrorDetail.Equal(that1.ErrorDetail) {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *DiscoveryResponse) Equal(that interface{}) bool {\n\tif that == nil 
{\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*DiscoveryResponse)\n\tif !ok {\n\t\tthat2, ok := that.(DiscoveryResponse)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.VersionInfo != that1.VersionInfo {\n\t\treturn false\n\t}\n\tif len(this.Resources) != len(that1.Resources) {\n\t\treturn false\n\t}\n\tfor i := range this.Resources {\n\t\tif !this.Resources[i].Equal(that1.Resources[i]) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif this.Canary != that1.Canary {\n\t\treturn false\n\t}\n\tif this.TypeUrl != that1.TypeUrl {\n\t\treturn false\n\t}\n\tif this.Nonce != that1.Nonce {\n\t\treturn false\n\t}\n\tif !this.ControlPlane.Equal(that1.ControlPlane) {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *DeltaDiscoveryRequest) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*DeltaDiscoveryRequest)\n\tif !ok {\n\t\tthat2, ok := that.(DeltaDiscoveryRequest)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif !this.Node.Equal(that1.Node) {\n\t\treturn false\n\t}\n\tif this.TypeUrl != that1.TypeUrl {\n\t\treturn false\n\t}\n\tif len(this.ResourceNamesSubscribe) != len(that1.ResourceNamesSubscribe) {\n\t\treturn false\n\t}\n\tfor i := range this.ResourceNamesSubscribe {\n\t\tif this.ResourceNamesSubscribe[i] != that1.ResourceNamesSubscribe[i] {\n\t\t\treturn false\n\t\t}\n\t}\n\tif len(this.ResourceNamesUnsubscribe) != len(that1.ResourceNamesUnsubscribe) {\n\t\treturn false\n\t}\n\tfor i := range this.ResourceNamesUnsubscribe {\n\t\tif this.ResourceNamesUnsubscribe[i] != that1.ResourceNamesUnsubscribe[i] {\n\t\t\treturn false\n\t\t}\n\t}\n\tif 
len(this.InitialResourceVersions) != len(that1.InitialResourceVersions) {\n\t\treturn false\n\t}\n\tfor i := range this.InitialResourceVersions {\n\t\tif this.InitialResourceVersions[i] != that1.InitialResourceVersions[i] {\n\t\t\treturn false\n\t\t}\n\t}\n\tif this.ResponseNonce != that1.ResponseNonce {\n\t\treturn false\n\t}\n\tif !this.ErrorDetail.Equal(that1.ErrorDetail) {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *DeltaDiscoveryResponse) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*DeltaDiscoveryResponse)\n\tif !ok {\n\t\tthat2, ok := that.(DeltaDiscoveryResponse)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.SystemVersionInfo != that1.SystemVersionInfo {\n\t\treturn false\n\t}\n\tif len(this.Resources) != len(that1.Resources) {\n\t\treturn false\n\t}\n\tfor i := range this.Resources {\n\t\tif !this.Resources[i].Equal(that1.Resources[i]) {\n\t\t\treturn false\n\t\t}\n\t}\n\tif this.TypeUrl != that1.TypeUrl {\n\t\treturn false\n\t}\n\tif len(this.RemovedResources) != len(that1.RemovedResources) {\n\t\treturn false\n\t}\n\tfor i := range this.RemovedResources {\n\t\tif this.RemovedResources[i] != that1.RemovedResources[i] {\n\t\t\treturn false\n\t\t}\n\t}\n\tif this.Nonce != that1.Nonce {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *Resource) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*Resource)\n\tif !ok {\n\t\tthat2, ok := that.(Resource)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn 
false\n\t}\n\tif this.Name != that1.Name {\n\t\treturn false\n\t}\n\tif len(this.Aliases) != len(that1.Aliases) {\n\t\treturn false\n\t}\n\tfor i := range this.Aliases {\n\t\tif this.Aliases[i] != that1.Aliases[i] {\n\t\t\treturn false\n\t\t}\n\t}\n\tif this.Version != that1.Version {\n\t\treturn false\n\t}\n\tif !this.Resource.Equal(that1.Resource) {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (m *DiscoveryRequest) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *DiscoveryRequest) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.VersionInfo) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.VersionInfo)))\n\t\ti += copy(dAtA[i:], m.VersionInfo)\n\t}\n\tif m.Node != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(m.Node.Size()))\n\t\tn1, err1 := m.Node.MarshalTo(dAtA[i:])\n\t\tif err1 != nil {\n\t\t\treturn 0, err1\n\t\t}\n\t\ti += n1\n\t}\n\tif len(m.ResourceNames) > 0 {\n\t\tfor _, s := range m.ResourceNames {\n\t\t\tdAtA[i] = 0x1a\n\t\t\ti++\n\t\t\tl = len(s)\n\t\t\tfor l >= 1<<7 {\n\t\t\t\tdAtA[i] = uint8(uint64(l)&0x7f | 0x80)\n\t\t\t\tl >>= 7\n\t\t\t\ti++\n\t\t\t}\n\t\t\tdAtA[i] = uint8(l)\n\t\t\ti++\n\t\t\ti += copy(dAtA[i:], s)\n\t\t}\n\t}\n\tif len(m.TypeUrl) > 0 {\n\t\tdAtA[i] = 0x22\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.TypeUrl)))\n\t\ti += copy(dAtA[i:], m.TypeUrl)\n\t}\n\tif len(m.ResponseNonce) > 0 {\n\t\tdAtA[i] = 0x2a\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.ResponseNonce)))\n\t\ti += copy(dAtA[i:], m.ResponseNonce)\n\t}\n\tif m.ErrorDetail != nil {\n\t\tdAtA[i] = 0x32\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, 
uint64(m.ErrorDetail.Size()))\n\t\tn2, err2 := m.ErrorDetail.MarshalTo(dAtA[i:])\n\t\tif err2 != nil {\n\t\t\treturn 0, err2\n\t\t}\n\t\ti += n2\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *DiscoveryResponse) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *DiscoveryResponse) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.VersionInfo) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.VersionInfo)))\n\t\ti += copy(dAtA[i:], m.VersionInfo)\n\t}\n\tif len(m.Resources) > 0 {\n\t\tfor _, msg := range m.Resources {\n\t\t\tdAtA[i] = 0x12\n\t\t\ti++\n\t\t\ti = encodeVarintDiscovery(dAtA, i, uint64(msg.Size()))\n\t\t\tn, err := msg.MarshalTo(dAtA[i:])\n\t\t\tif err != nil {\n\t\t\t\treturn 0, err\n\t\t\t}\n\t\t\ti += n\n\t\t}\n\t}\n\tif m.Canary {\n\t\tdAtA[i] = 0x18\n\t\ti++\n\t\tif m.Canary {\n\t\t\tdAtA[i] = 1\n\t\t} else {\n\t\t\tdAtA[i] = 0\n\t\t}\n\t\ti++\n\t}\n\tif len(m.TypeUrl) > 0 {\n\t\tdAtA[i] = 0x22\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.TypeUrl)))\n\t\ti += copy(dAtA[i:], m.TypeUrl)\n\t}\n\tif len(m.Nonce) > 0 {\n\t\tdAtA[i] = 0x2a\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.Nonce)))\n\t\ti += copy(dAtA[i:], m.Nonce)\n\t}\n\tif m.ControlPlane != nil {\n\t\tdAtA[i] = 0x32\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(m.ControlPlane.Size()))\n\t\tn3, err3 := m.ControlPlane.MarshalTo(dAtA[i:])\n\t\tif err3 != nil {\n\t\t\treturn 0, err3\n\t\t}\n\t\ti += n3\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *DeltaDiscoveryRequest) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := 
m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *DeltaDiscoveryRequest) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Node != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(m.Node.Size()))\n\t\tn4, err4 := m.Node.MarshalTo(dAtA[i:])\n\t\tif err4 != nil {\n\t\t\treturn 0, err4\n\t\t}\n\t\ti += n4\n\t}\n\tif len(m.TypeUrl) > 0 {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.TypeUrl)))\n\t\ti += copy(dAtA[i:], m.TypeUrl)\n\t}\n\tif len(m.ResourceNamesSubscribe) > 0 {\n\t\tfor _, s := range m.ResourceNamesSubscribe {\n\t\t\tdAtA[i] = 0x1a\n\t\t\ti++\n\t\t\tl = len(s)\n\t\t\tfor l >= 1<<7 {\n\t\t\t\tdAtA[i] = uint8(uint64(l)&0x7f | 0x80)\n\t\t\t\tl >>= 7\n\t\t\t\ti++\n\t\t\t}\n\t\t\tdAtA[i] = uint8(l)\n\t\t\ti++\n\t\t\ti += copy(dAtA[i:], s)\n\t\t}\n\t}\n\tif len(m.ResourceNamesUnsubscribe) > 0 {\n\t\tfor _, s := range m.ResourceNamesUnsubscribe {\n\t\t\tdAtA[i] = 0x22\n\t\t\ti++\n\t\t\tl = len(s)\n\t\t\tfor l >= 1<<7 {\n\t\t\t\tdAtA[i] = uint8(uint64(l)&0x7f | 0x80)\n\t\t\t\tl >>= 7\n\t\t\t\ti++\n\t\t\t}\n\t\t\tdAtA[i] = uint8(l)\n\t\t\ti++\n\t\t\ti += copy(dAtA[i:], s)\n\t\t}\n\t}\n\tif len(m.InitialResourceVersions) > 0 {\n\t\tkeysForInitialResourceVersions := make([]string, 0, len(m.InitialResourceVersions))\n\t\tfor k := range m.InitialResourceVersions {\n\t\t\tkeysForInitialResourceVersions = append(keysForInitialResourceVersions, string(k))\n\t\t}\n\t\tgithub_com_gogo_protobuf_sortkeys.Strings(keysForInitialResourceVersions)\n\t\tfor _, k := range keysForInitialResourceVersions {\n\t\t\tdAtA[i] = 0x2a\n\t\t\ti++\n\t\t\tv := m.InitialResourceVersions[string(k)]\n\t\t\tmapSize := 1 + len(k) + sovDiscovery(uint64(len(k))) + 1 + len(v) + sovDiscovery(uint64(len(v)))\n\t\t\ti = encodeVarintDiscovery(dAtA, i, uint64(mapSize))\n\t\t\tdAtA[i] = 0xa\n\t\t\ti++\n\t\t\ti = encodeVarintDiscovery(dAtA, i, 
uint64(len(k)))\n\t\t\ti += copy(dAtA[i:], k)\n\t\t\tdAtA[i] = 0x12\n\t\t\ti++\n\t\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(v)))\n\t\t\ti += copy(dAtA[i:], v)\n\t\t}\n\t}\n\tif len(m.ResponseNonce) > 0 {\n\t\tdAtA[i] = 0x32\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.ResponseNonce)))\n\t\ti += copy(dAtA[i:], m.ResponseNonce)\n\t}\n\tif m.ErrorDetail != nil {\n\t\tdAtA[i] = 0x3a\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(m.ErrorDetail.Size()))\n\t\tn5, err5 := m.ErrorDetail.MarshalTo(dAtA[i:])\n\t\tif err5 != nil {\n\t\t\treturn 0, err5\n\t\t}\n\t\ti += n5\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *DeltaDiscoveryResponse) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *DeltaDiscoveryResponse) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.SystemVersionInfo) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.SystemVersionInfo)))\n\t\ti += copy(dAtA[i:], m.SystemVersionInfo)\n\t}\n\tif len(m.Resources) > 0 {\n\t\tfor _, msg := range m.Resources {\n\t\t\tdAtA[i] = 0x12\n\t\t\ti++\n\t\t\ti = encodeVarintDiscovery(dAtA, i, uint64(msg.Size()))\n\t\t\tn, err := msg.MarshalTo(dAtA[i:])\n\t\t\tif err != nil {\n\t\t\t\treturn 0, err\n\t\t\t}\n\t\t\ti += n\n\t\t}\n\t}\n\tif len(m.TypeUrl) > 0 {\n\t\tdAtA[i] = 0x22\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.TypeUrl)))\n\t\ti += copy(dAtA[i:], m.TypeUrl)\n\t}\n\tif len(m.Nonce) > 0 {\n\t\tdAtA[i] = 0x2a\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.Nonce)))\n\t\ti += copy(dAtA[i:], m.Nonce)\n\t}\n\tif len(m.RemovedResources) > 0 {\n\t\tfor _, s := range m.RemovedResources {\n\t\t\tdAtA[i] = 0x32\n\t\t\ti++\n\t\t\tl = len(s)\n\t\t\tfor l >= 1<<7 
{\n\t\t\t\tdAtA[i] = uint8(uint64(l)&0x7f | 0x80)\n\t\t\t\tl >>= 7\n\t\t\t\ti++\n\t\t\t}\n\t\t\tdAtA[i] = uint8(l)\n\t\t\ti++\n\t\t\ti += copy(dAtA[i:], s)\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *Resource) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *Resource) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.Version) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.Version)))\n\t\ti += copy(dAtA[i:], m.Version)\n\t}\n\tif m.Resource != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(m.Resource.Size()))\n\t\tn6, err6 := m.Resource.MarshalTo(dAtA[i:])\n\t\tif err6 != nil {\n\t\t\treturn 0, err6\n\t\t}\n\t\ti += n6\n\t}\n\tif len(m.Name) > 0 {\n\t\tdAtA[i] = 0x1a\n\t\ti++\n\t\ti = encodeVarintDiscovery(dAtA, i, uint64(len(m.Name)))\n\t\ti += copy(dAtA[i:], m.Name)\n\t}\n\tif len(m.Aliases) > 0 {\n\t\tfor _, s := range m.Aliases {\n\t\t\tdAtA[i] = 0x22\n\t\t\ti++\n\t\t\tl = len(s)\n\t\t\tfor l >= 1<<7 {\n\t\t\t\tdAtA[i] = uint8(uint64(l)&0x7f | 0x80)\n\t\t\t\tl >>= 7\n\t\t\t\ti++\n\t\t\t}\n\t\t\tdAtA[i] = uint8(l)\n\t\t\ti++\n\t\t\ti += copy(dAtA[i:], s)\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc encodeVarintDiscovery(dAtA []byte, offset int, v uint64) int {\n\tfor v >= 1<<7 {\n\t\tdAtA[offset] = uint8(v&0x7f | 0x80)\n\t\tv >>= 7\n\t\toffset++\n\t}\n\tdAtA[offset] = uint8(v)\n\treturn offset + 1\n}\nfunc (m *DiscoveryRequest) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.VersionInfo)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif m.Node != nil {\n\t\tl = m.Node.Size()\n\t\tn 
+= 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif len(m.ResourceNames) > 0 {\n\t\tfor _, s := range m.ResourceNames {\n\t\t\tl = len(s)\n\t\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t\t}\n\t}\n\tl = len(m.TypeUrl)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tl = len(m.ResponseNonce)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif m.ErrorDetail != nil {\n\t\tl = m.ErrorDetail.Size()\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *DiscoveryResponse) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.VersionInfo)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif len(m.Resources) > 0 {\n\t\tfor _, e := range m.Resources {\n\t\t\tl = e.Size()\n\t\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t\t}\n\t}\n\tif m.Canary {\n\t\tn += 2\n\t}\n\tl = len(m.TypeUrl)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tl = len(m.Nonce)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif m.ControlPlane != nil {\n\t\tl = m.ControlPlane.Size()\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *DeltaDiscoveryRequest) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Node != nil {\n\t\tl = m.Node.Size()\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tl = len(m.TypeUrl)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif len(m.ResourceNamesSubscribe) > 0 {\n\t\tfor _, s := range m.ResourceNamesSubscribe {\n\t\t\tl = len(s)\n\t\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t\t}\n\t}\n\tif len(m.ResourceNamesUnsubscribe) > 0 {\n\t\tfor _, s := range m.ResourceNamesUnsubscribe {\n\t\t\tl = len(s)\n\t\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t\t}\n\t}\n\tif len(m.InitialResourceVersions) > 0 {\n\t\tfor k, v := range m.InitialResourceVersions 
{\n\t\t\t_ = k\n\t\t\t_ = v\n\t\t\tmapEntrySize := 1 + len(k) + sovDiscovery(uint64(len(k))) + 1 + len(v) + sovDiscovery(uint64(len(v)))\n\t\t\tn += mapEntrySize + 1 + sovDiscovery(uint64(mapEntrySize))\n\t\t}\n\t}\n\tl = len(m.ResponseNonce)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif m.ErrorDetail != nil {\n\t\tl = m.ErrorDetail.Size()\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *DeltaDiscoveryResponse) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.SystemVersionInfo)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif len(m.Resources) > 0 {\n\t\tfor _, e := range m.Resources {\n\t\t\tl = e.Size()\n\t\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t\t}\n\t}\n\tl = len(m.TypeUrl)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tl = len(m.Nonce)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif len(m.RemovedResources) > 0 {\n\t\tfor _, s := range m.RemovedResources {\n\t\t\tl = len(s)\n\t\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *Resource) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.Version)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif m.Resource != nil {\n\t\tl = m.Resource.Size()\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tl = len(m.Name)\n\tif l > 0 {\n\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t}\n\tif len(m.Aliases) > 0 {\n\t\tfor _, s := range m.Aliases {\n\t\t\tl = len(s)\n\t\t\tn += 1 + l + sovDiscovery(uint64(l))\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc sovDiscovery(x uint64) (n int) {\n\treturn (math_bits.Len64(x|1) + 6) / 7\n}\nfunc sozDiscovery(x uint64) (n int) {\n\treturn sovDiscovery(uint64((x << 1) ^ 
uint64((int64(x) >> 63))))\n}\nfunc (m *DiscoveryRequest) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: DiscoveryRequest: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: DiscoveryRequest: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field VersionInfo\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.VersionInfo = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Node\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 
ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Node == nil {\n\t\t\t\tm.Node = &core.Node{}\n\t\t\t}\n\t\t\tif err := m.Node.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field ResourceNames\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.ResourceNames = append(m.ResourceNames, string(dAtA[iNdEx:postIndex]))\n\t\t\tiNdEx = postIndex\n\t\tcase 4:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field TypeUrl\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb 
:= dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.TypeUrl = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 5:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field ResponseNonce\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.ResponseNonce = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 6:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field ErrorDetail\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn 
ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.ErrorDetail == nil {\n\t\t\t\tm.ErrorDetail = &rpc.Status{}\n\t\t\t}\n\t\t\tif err := m.ErrorDetail.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipDiscovery(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *DiscoveryResponse) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: DiscoveryResponse: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: DiscoveryResponse: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field VersionInfo\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := 
uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.VersionInfo = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Resources\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Resources = append(m.Resources, &types.Any{})\n\t\t\tif err := m.Resources[len(m.Resources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Canary\", wireType)\n\t\t\t}\n\t\t\tvar v int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 
io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tv |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tm.Canary = bool(v != 0)\n\t\tcase 4:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field TypeUrl\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.TypeUrl = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 5:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Nonce\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Nonce = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 6:\n\t\t\tif 
wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field ControlPlane\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.ControlPlane == nil {\n\t\t\t\tm.ControlPlane = &core.ControlPlane{}\n\t\t\t}\n\t\t\tif err := m.ControlPlane.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipDiscovery(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *DeltaDiscoveryRequest) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 
{\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: DeltaDiscoveryRequest: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: DeltaDiscoveryRequest: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Node\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Node == nil {\n\t\t\t\tm.Node = &core.Node{}\n\t\t\t}\n\t\t\tif err := m.Node.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field TypeUrl\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + 
intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.TypeUrl = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field ResourceNamesSubscribe\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.ResourceNamesSubscribe = append(m.ResourceNamesSubscribe, string(dAtA[iNdEx:postIndex]))\n\t\t\tiNdEx = postIndex\n\t\tcase 4:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field ResourceNamesUnsubscribe\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn 
io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.ResourceNamesUnsubscribe = append(m.ResourceNamesUnsubscribe, string(dAtA[iNdEx:postIndex]))\n\t\t\tiNdEx = postIndex\n\t\tcase 5:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field InitialResourceVersions\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.InitialResourceVersions == nil {\n\t\t\t\tm.InitialResourceVersions = make(map[string]string)\n\t\t\t}\n\t\t\tvar mapkey string\n\t\t\tvar mapvalue string\n\t\t\tfor iNdEx < postIndex {\n\t\t\t\tentryPreIndex := iNdEx\n\t\t\t\tvar wire uint64\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfieldNum := int32(wire >> 3)\n\t\t\t\tif fieldNum == 1 {\n\t\t\t\t\tvar stringLenmapkey uint64\n\t\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t\t}\n\t\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\t\tiNdEx++\n\t\t\t\t\t\tstringLenmapkey |= uint64(b&0x7F) << 
shift\n\t\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tintStringLenmapkey := int(stringLenmapkey)\n\t\t\t\t\tif intStringLenmapkey < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t\t\t}\n\t\t\t\t\tpostStringIndexmapkey := iNdEx + intStringLenmapkey\n\t\t\t\t\tif postStringIndexmapkey < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t\t\t}\n\t\t\t\t\tif postStringIndexmapkey > l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tmapkey = string(dAtA[iNdEx:postStringIndexmapkey])\n\t\t\t\t\tiNdEx = postStringIndexmapkey\n\t\t\t\t} else if fieldNum == 2 {\n\t\t\t\t\tvar stringLenmapvalue uint64\n\t\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t\t}\n\t\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\t\tiNdEx++\n\t\t\t\t\t\tstringLenmapvalue |= uint64(b&0x7F) << shift\n\t\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tintStringLenmapvalue := int(stringLenmapvalue)\n\t\t\t\t\tif intStringLenmapvalue < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t\t\t}\n\t\t\t\t\tpostStringIndexmapvalue := iNdEx + intStringLenmapvalue\n\t\t\t\t\tif postStringIndexmapvalue < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t\t\t}\n\t\t\t\t\tif postStringIndexmapvalue > l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tmapvalue = string(dAtA[iNdEx:postStringIndexmapvalue])\n\t\t\t\t\tiNdEx = postStringIndexmapvalue\n\t\t\t\t} else {\n\t\t\t\t\tiNdEx = entryPreIndex\n\t\t\t\t\tskippy, err := skipDiscovery(dAtA[iNdEx:])\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif skippy < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t\t\t}\n\t\t\t\t\tif (iNdEx + skippy) > postIndex {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tiNdEx += 
skippy\n\t\t\t\t}\n\t\t\t}\n\t\t\tm.InitialResourceVersions[mapkey] = mapvalue\n\t\t\tiNdEx = postIndex\n\t\tcase 6:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field ResponseNonce\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.ResponseNonce = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 7:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field ErrorDetail\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.ErrorDetail == nil {\n\t\t\t\tm.ErrorDetail = &rpc.Status{}\n\t\t\t}\n\t\t\tif err := m.ErrorDetail.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = 
postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipDiscovery(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *DeltaDiscoveryResponse) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: DeltaDiscoveryResponse: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: DeltaDiscoveryResponse: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field SystemVersionInfo\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 
{\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.SystemVersionInfo = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Resources\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Resources = append(m.Resources, &Resource{})\n\t\t\tif err := m.Resources[len(m.Resources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 4:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field TypeUrl\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn 
ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.TypeUrl = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 5:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Nonce\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Nonce = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 6:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field RemovedResources\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.RemovedResources = append(m.RemovedResources, string(dAtA[iNdEx:postIndex]))\n\t\t\tiNdEx = 
postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipDiscovery(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *Resource) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: Resource: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: Resource: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Version\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn 
ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Version = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Resource\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Resource == nil {\n\t\t\t\tm.Resource = &types.Any{}\n\t\t\t}\n\t\t\tif err := m.Resource.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Name\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn 
ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Name = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 4:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Aliases\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Aliases = append(m.Aliases, string(dAtA[iNdEx:postIndex]))\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipDiscovery(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc skipDiscovery(dAtA []byte) (n int, err error) {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn 0, ErrIntOverflowDiscovery\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := 
dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= (uint64(b) & 0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\twireType := int(wire & 0x7)\n\t\tswitch wireType {\n\t\tcase 0:\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tiNdEx++\n\t\t\t\tif dAtA[iNdEx-1] < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 1:\n\t\t\tiNdEx += 8\n\t\t\treturn iNdEx, nil\n\t\tcase 2:\n\t\t\tvar length int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowDiscovery\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tlength |= (int(b) & 0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif length < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\tiNdEx += length\n\t\t\tif iNdEx < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthDiscovery\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 3:\n\t\t\tfor {\n\t\t\t\tvar innerWire uint64\n\t\t\t\tvar start int = iNdEx\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn 0, ErrIntOverflowDiscovery\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\tinnerWire |= (uint64(b) & 0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tinnerWireType := int(innerWire & 0x7)\n\t\t\t\tif innerWireType == 4 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tnext, err := skipDiscovery(dAtA[start:])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 0, err\n\t\t\t\t}\n\t\t\t\tiNdEx = start + next\n\t\t\t\tif iNdEx < 0 {\n\t\t\t\t\treturn 0, ErrInvalidLengthDiscovery\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 
4:\n\t\t\treturn iNdEx, nil\n\t\tcase 5:\n\t\t\tiNdEx += 4\n\t\t\treturn iNdEx, nil\n\t\tdefault:\n\t\t\treturn 0, fmt.Errorf(\"proto: illegal wireType %d\", wireType)\n\t\t}\n\t}\n\tpanic(\"unreachable\")\n}\n\nvar (\n\tErrInvalidLengthDiscovery = fmt.Errorf(\"proto: negative length found during unmarshaling\")\n\tErrIntOverflowDiscovery   = fmt.Errorf(\"proto: integer overflow\")\n)\n"
  },
  {
    "path": "third_party/generated/envoyproxy/data-plane-api/envoy/service/auth/v2/attribute_context.pb.go",
    "content": "// Code generated by protoc-gen-gogo. DO NOT EDIT.\n// source: envoy/service/auth/v2/attribute_context.proto\n\npackage v2\n\nimport (\n\tfmt \"fmt\"\n\t_ \"github.com/gogo/protobuf/gogoproto\"\n\tproto \"github.com/gogo/protobuf/proto\"\n\tgithub_com_gogo_protobuf_sortkeys \"github.com/gogo/protobuf/sortkeys\"\n\ttypes \"github.com/gogo/protobuf/types\"\n\tcore \"go.aporeto.io/trireme-lib/third_party/generated/envoyproxy/data-plane-api/envoy/api/v2/core\"\n\tio \"io\"\n\tmath \"math\"\n\tmath_bits \"math/bits\"\n)\n\n// Reference imports to suppress errors if they are not otherwise used.\nvar _ = proto.Marshal\nvar _ = fmt.Errorf\nvar _ = math.Inf\n\n// This is a compile-time assertion to ensure that this generated file\n// is compatible with the proto package it is being compiled against.\n// A compilation error at this line likely means your copy of the\n// proto package needs to be updated.\nconst _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package\n\n// An attribute is a piece of metadata that describes an activity on a network.\n// For example, the size of an HTTP request, or the status code of an HTTP response.\n//\n// Each attribute has a type and a name, which is logically defined as a proto message field\n// of the `AttributeContext`. 
The `AttributeContext` is a collection of individual attributes\n// supported by Envoy authorization system.\ntype AttributeContext struct {\n\t// The source of a network activity, such as starting a TCP connection.\n\t// In a multi hop network activity, the source represents the sender of the\n\t// last hop.\n\tSource *AttributeContext_Peer `protobuf:\"bytes,1,opt,name=source,proto3\" json:\"source,omitempty\"`\n\t// The destination of a network activity, such as accepting a TCP connection.\n\t// In a multi hop network activity, the destination represents the receiver of\n\t// the last hop.\n\tDestination *AttributeContext_Peer `protobuf:\"bytes,2,opt,name=destination,proto3\" json:\"destination,omitempty\"`\n\t// Represents a network request, such as an HTTP request.\n\tRequest *AttributeContext_Request `protobuf:\"bytes,4,opt,name=request,proto3\" json:\"request,omitempty\"`\n\t// This is analogous to http_request.headers, however these contents will not be sent to the\n\t// upstream server. Context_extensions provide an extension mechanism for sending additional\n\t// information to the auth server without modifying the proto definition. 
It maps to the\n\t// internal opaque context in the filter chain.\n\tContextExtensions    map[string]string `protobuf:\"bytes,10,rep,name=context_extensions,json=contextExtensions,proto3\" json:\"context_extensions,omitempty\" protobuf_key:\"bytes,1,opt,name=key,proto3\" protobuf_val:\"bytes,2,opt,name=value,proto3\"`\n\tXXX_NoUnkeyedLiteral struct{}          `json:\"-\"`\n\tXXX_unrecognized     []byte            `json:\"-\"`\n\tXXX_sizecache        int32             `json:\"-\"`\n}\n\nfunc (m *AttributeContext) Reset()         { *m = AttributeContext{} }\nfunc (m *AttributeContext) String() string { return proto.CompactTextString(m) }\nfunc (*AttributeContext) ProtoMessage()    {}\nfunc (*AttributeContext) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a6030c9468e3591b, []int{0}\n}\nfunc (m *AttributeContext) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *AttributeContext) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *AttributeContext) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_AttributeContext.Merge(m, src)\n}\nfunc (m *AttributeContext) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *AttributeContext) XXX_DiscardUnknown() {\n\txxx_messageInfo_AttributeContext.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_AttributeContext proto.InternalMessageInfo\n\nfunc (m *AttributeContext) GetSource() *AttributeContext_Peer {\n\tif m != nil {\n\t\treturn m.Source\n\t}\n\treturn nil\n}\n\nfunc (m *AttributeContext) GetDestination() *AttributeContext_Peer {\n\tif m != nil {\n\t\treturn m.Destination\n\t}\n\treturn nil\n}\n\nfunc (m *AttributeContext) GetRequest() *AttributeContext_Request {\n\tif m != nil {\n\t\treturn m.Request\n\t}\n\treturn nil\n}\n\nfunc (m *AttributeContext) GetContextExtensions() map[string]string {\n\tif m != nil {\n\t\treturn m.ContextExtensions\n\t}\n\treturn nil\n}\n\n// This 
message defines attributes for a node that handles a network request.\n// The node can be either a service or an application that sends, forwards,\n// or receives the request. Service peers should fill in the `service`,\n// `principal`, and `labels` as appropriate.\ntype AttributeContext_Peer struct {\n\t// The address of the peer, this is typically the IP address.\n\t// It can also be UDS path, or others.\n\tAddress *core.Address `protobuf:\"bytes,1,opt,name=address,proto3\" json:\"address,omitempty\"`\n\t// The canonical service name of the peer.\n\t// It should be set to :ref:`the HTTP x-envoy-downstream-service-cluster\n\t// <config_http_conn_man_headers_downstream-service-cluster>`\n\t// If a more trusted source of the service name is available through mTLS/secure naming, it\n\t// should be used.\n\tService string `protobuf:\"bytes,2,opt,name=service,proto3\" json:\"service,omitempty\"`\n\t// The labels associated with the peer.\n\t// These could be pod labels for Kubernetes or tags for VMs.\n\t// The source of the labels could be an X.509 certificate or other configuration.\n\tLabels map[string]string `protobuf:\"bytes,3,rep,name=labels,proto3\" json:\"labels,omitempty\" protobuf_key:\"bytes,1,opt,name=key,proto3\" protobuf_val:\"bytes,2,opt,name=value,proto3\"`\n\t// The authenticated identity of this peer.\n\t// For example, the identity associated with the workload such as a service account.\n\t// If an X.509 certificate is used to assert the identity this field should be sourced from\n\t// `Subject` or `Subject Alternative Names`. 
The primary identity should be the principal.\n\t// The principal format is issuer specific.\n\t//\n\t// Example:\n\t// *    SPIFFE format is `spiffe://trust-domain/path`\n\t// *    Google account format is `https://accounts.google.com/{userid}`\n\tPrincipal            string   `protobuf:\"bytes,4,opt,name=principal,proto3\" json:\"principal,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *AttributeContext_Peer) Reset()         { *m = AttributeContext_Peer{} }\nfunc (m *AttributeContext_Peer) String() string { return proto.CompactTextString(m) }\nfunc (*AttributeContext_Peer) ProtoMessage()    {}\nfunc (*AttributeContext_Peer) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a6030c9468e3591b, []int{0, 0}\n}\nfunc (m *AttributeContext_Peer) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *AttributeContext_Peer) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *AttributeContext_Peer) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_AttributeContext_Peer.Merge(m, src)\n}\nfunc (m *AttributeContext_Peer) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *AttributeContext_Peer) XXX_DiscardUnknown() {\n\txxx_messageInfo_AttributeContext_Peer.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_AttributeContext_Peer proto.InternalMessageInfo\n\nfunc (m *AttributeContext_Peer) GetAddress() *core.Address {\n\tif m != nil {\n\t\treturn m.Address\n\t}\n\treturn nil\n}\n\nfunc (m *AttributeContext_Peer) GetService() string {\n\tif m != nil {\n\t\treturn m.Service\n\t}\n\treturn \"\"\n}\n\nfunc (m *AttributeContext_Peer) GetLabels() map[string]string {\n\tif m != nil {\n\t\treturn m.Labels\n\t}\n\treturn nil\n}\n\nfunc (m *AttributeContext_Peer) GetPrincipal() string {\n\tif m != nil {\n\t\treturn 
m.Principal\n\t}\n\treturn \"\"\n}\n\n// Represents a network request, such as an HTTP request.\ntype AttributeContext_Request struct {\n\t// The timestamp when the proxy receives the first byte of the request.\n\tTime *types.Timestamp `protobuf:\"bytes,1,opt,name=time,proto3\" json:\"time,omitempty\"`\n\t// Represents an HTTP request or an HTTP-like request.\n\tHttp                 *AttributeContext_HttpRequest `protobuf:\"bytes,2,opt,name=http,proto3\" json:\"http,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}                      `json:\"-\"`\n\tXXX_unrecognized     []byte                        `json:\"-\"`\n\tXXX_sizecache        int32                         `json:\"-\"`\n}\n\nfunc (m *AttributeContext_Request) Reset()         { *m = AttributeContext_Request{} }\nfunc (m *AttributeContext_Request) String() string { return proto.CompactTextString(m) }\nfunc (*AttributeContext_Request) ProtoMessage()    {}\nfunc (*AttributeContext_Request) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a6030c9468e3591b, []int{0, 1}\n}\nfunc (m *AttributeContext_Request) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *AttributeContext_Request) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *AttributeContext_Request) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_AttributeContext_Request.Merge(m, src)\n}\nfunc (m *AttributeContext_Request) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *AttributeContext_Request) XXX_DiscardUnknown() {\n\txxx_messageInfo_AttributeContext_Request.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_AttributeContext_Request proto.InternalMessageInfo\n\nfunc (m *AttributeContext_Request) GetTime() *types.Timestamp {\n\tif m != nil {\n\t\treturn m.Time\n\t}\n\treturn nil\n}\n\nfunc (m *AttributeContext_Request) GetHttp() *AttributeContext_HttpRequest {\n\tif m != nil {\n\t\treturn 
m.Http\n\t}\n\treturn nil\n}\n\n// This message defines attributes for an HTTP request.\n// HTTP/1.x, HTTP/2, gRPC are all considered as HTTP requests.\ntype AttributeContext_HttpRequest struct {\n\t// The unique ID for a request, which can be propagated to downstream\n\t// systems. The ID should have low probability of collision\n\t// within a single day for a specific service.\n\t// For HTTP requests, it should be X-Request-ID or equivalent.\n\tId string `protobuf:\"bytes,1,opt,name=id,proto3\" json:\"id,omitempty\"`\n\t// The HTTP request method, such as `GET`, `POST`.\n\tMethod string `protobuf:\"bytes,2,opt,name=method,proto3\" json:\"method,omitempty\"`\n\t// The HTTP request headers. If multiple headers share the same key, they\n\t// must be merged according to the HTTP spec. All header keys must be\n\t// lowercased, because HTTP header keys are case-insensitive.\n\tHeaders map[string]string `protobuf:\"bytes,3,rep,name=headers,proto3\" json:\"headers,omitempty\" protobuf_key:\"bytes,1,opt,name=key,proto3\" protobuf_val:\"bytes,2,opt,name=value,proto3\"`\n\t// The request target, as it appears in the first line of the HTTP request. This includes\n\t// the URL path and query-string. No decoding is performed.\n\tPath string `protobuf:\"bytes,4,opt,name=path,proto3\" json:\"path,omitempty\"`\n\t// The HTTP request `Host` or 'Authority` header value.\n\tHost string `protobuf:\"bytes,5,opt,name=host,proto3\" json:\"host,omitempty\"`\n\t// The HTTP URL scheme, such as `http` and `https`.\n\tScheme string `protobuf:\"bytes,6,opt,name=scheme,proto3\" json:\"scheme,omitempty\"`\n\t// This field is always empty, and exists for compatibility reasons. The HTTP URL query is\n\t// included in `path` field.\n\tQuery string `protobuf:\"bytes,7,opt,name=query,proto3\" json:\"query,omitempty\"`\n\t// This field is always empty, and exists for compatibility reasons. 
The URL fragment is\n\t// not submitted as part of HTTP requests; it is unknowable.\n\tFragment string `protobuf:\"bytes,8,opt,name=fragment,proto3\" json:\"fragment,omitempty\"`\n\t// The HTTP request size in bytes. If unknown, it must be -1.\n\tSize_ int64 `protobuf:\"varint,9,opt,name=size,proto3\" json:\"size,omitempty\"`\n\t// The network protocol used with the request, such as \"HTTP/1.0\", \"HTTP/1.1\", or \"HTTP/2\".\n\t//\n\t// See :repo:`headers.h:ProtocolStrings <source/common/http/headers.h>` for a list of all\n\t// possible values.\n\tProtocol string `protobuf:\"bytes,10,opt,name=protocol,proto3\" json:\"protocol,omitempty\"`\n\t// The HTTP request body.\n\tBody                 string   `protobuf:\"bytes,11,opt,name=body,proto3\" json:\"body,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *AttributeContext_HttpRequest) Reset()         { *m = AttributeContext_HttpRequest{} }\nfunc (m *AttributeContext_HttpRequest) String() string { return proto.CompactTextString(m) }\nfunc (*AttributeContext_HttpRequest) ProtoMessage()    {}\nfunc (*AttributeContext_HttpRequest) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_a6030c9468e3591b, []int{0, 2}\n}\nfunc (m *AttributeContext_HttpRequest) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *AttributeContext_HttpRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tb = b[:cap(b)]\n\tn, err := m.MarshalTo(b)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn b[:n], nil\n}\nfunc (m *AttributeContext_HttpRequest) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_AttributeContext_HttpRequest.Merge(m, src)\n}\nfunc (m *AttributeContext_HttpRequest) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *AttributeContext_HttpRequest) XXX_DiscardUnknown() {\n\txxx_messageInfo_AttributeContext_HttpRequest.DiscardUnknown(m)\n}\n\nvar 
xxx_messageInfo_AttributeContext_HttpRequest proto.InternalMessageInfo\n\nfunc (m *AttributeContext_HttpRequest) GetId() string {\n\tif m != nil {\n\t\treturn m.Id\n\t}\n\treturn \"\"\n}\n\nfunc (m *AttributeContext_HttpRequest) GetMethod() string {\n\tif m != nil {\n\t\treturn m.Method\n\t}\n\treturn \"\"\n}\n\nfunc (m *AttributeContext_HttpRequest) GetHeaders() map[string]string {\n\tif m != nil {\n\t\treturn m.Headers\n\t}\n\treturn nil\n}\n\nfunc (m *AttributeContext_HttpRequest) GetPath() string {\n\tif m != nil {\n\t\treturn m.Path\n\t}\n\treturn \"\"\n}\n\nfunc (m *AttributeContext_HttpRequest) GetHost() string {\n\tif m != nil {\n\t\treturn m.Host\n\t}\n\treturn \"\"\n}\n\nfunc (m *AttributeContext_HttpRequest) GetScheme() string {\n\tif m != nil {\n\t\treturn m.Scheme\n\t}\n\treturn \"\"\n}\n\nfunc (m *AttributeContext_HttpRequest) GetQuery() string {\n\tif m != nil {\n\t\treturn m.Query\n\t}\n\treturn \"\"\n}\n\nfunc (m *AttributeContext_HttpRequest) GetFragment() string {\n\tif m != nil {\n\t\treturn m.Fragment\n\t}\n\treturn \"\"\n}\n\nfunc (m *AttributeContext_HttpRequest) GetSize_() int64 {\n\tif m != nil {\n\t\treturn m.Size_\n\t}\n\treturn 0\n}\n\nfunc (m *AttributeContext_HttpRequest) GetProtocol() string {\n\tif m != nil {\n\t\treturn m.Protocol\n\t}\n\treturn \"\"\n}\n\nfunc (m *AttributeContext_HttpRequest) GetBody() string {\n\tif m != nil {\n\t\treturn m.Body\n\t}\n\treturn \"\"\n}\n\nfunc init() {\n\tproto.RegisterType((*AttributeContext)(nil), \"envoy.service.auth.v2.AttributeContext\")\n\tproto.RegisterMapType((map[string]string)(nil), \"envoy.service.auth.v2.AttributeContext.ContextExtensionsEntry\")\n\tproto.RegisterType((*AttributeContext_Peer)(nil), \"envoy.service.auth.v2.AttributeContext.Peer\")\n\tproto.RegisterMapType((map[string]string)(nil), \"envoy.service.auth.v2.AttributeContext.Peer.LabelsEntry\")\n\tproto.RegisterType((*AttributeContext_Request)(nil), 
\"envoy.service.auth.v2.AttributeContext.Request\")\n\tproto.RegisterType((*AttributeContext_HttpRequest)(nil), \"envoy.service.auth.v2.AttributeContext.HttpRequest\")\n\tproto.RegisterMapType((map[string]string)(nil), \"envoy.service.auth.v2.AttributeContext.HttpRequest.HeadersEntry\")\n}\n\nfunc init() {\n\tproto.RegisterFile(\"envoy/service/auth/v2/attribute_context.proto\", fileDescriptor_a6030c9468e3591b)\n}\n\nvar fileDescriptor_a6030c9468e3591b = []byte{\n\t// 621 bytes of a gzipped FileDescriptorProto\n\t0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x53, 0x4d, 0x6f, 0xd3, 0x40,\n\t0x10, 0x95, 0x93, 0x34, 0x69, 0x26, 0x08, 0x95, 0x55, 0x5b, 0x59, 0x16, 0x4a, 0x2b, 0xb8, 0xf4,\n\t0x00, 0x6b, 0x29, 0xe5, 0x50, 0x7a, 0x40, 0xb4, 0xb4, 0xa2, 0x48, 0xa8, 0x8a, 0x2c, 0x4e, 0x5c,\n\t0xaa, 0x8d, 0x3d, 0x8d, 0x57, 0x24, 0x5e, 0x77, 0x77, 0x1d, 0x35, 0xdc, 0x80, 0x1f, 0xc5, 0x5f,\n\t0xe0, 0xc8, 0x91, 0x23, 0xca, 0x2f, 0x41, 0xfb, 0xe1, 0x12, 0x55, 0x3d, 0x34, 0x3d, 0x65, 0x66,\n\t0xf2, 0xe6, 0xed, 0x9b, 0x37, 0x63, 0x78, 0x89, 0xc5, 0x4c, 0xcc, 0x63, 0x85, 0x72, 0xc6, 0x53,\n\t0x8c, 0x59, 0xa5, 0xf3, 0x78, 0x36, 0x88, 0x99, 0xd6, 0x92, 0x8f, 0x2a, 0x8d, 0x17, 0xa9, 0x28,\n\t0x34, 0x5e, 0x6b, 0x5a, 0x4a, 0xa1, 0x05, 0xd9, 0xb2, 0x70, 0xea, 0xe1, 0xd4, 0xc0, 0xe9, 0x6c,\n\t0x10, 0xed, 0x38, 0x16, 0x56, 0x72, 0xd3, 0x9c, 0x0a, 0x89, 0x31, 0xcb, 0x32, 0x89, 0x4a, 0xb9,\n\t0xbe, 0x68, 0x67, 0x2c, 0xc4, 0x78, 0x82, 0xb1, 0xcd, 0x46, 0xd5, 0x65, 0xac, 0xf9, 0x14, 0x95,\n\t0x66, 0xd3, 0xd2, 0x03, 0x36, 0xc7, 0x62, 0x2c, 0x6c, 0x18, 0x9b, 0xc8, 0x55, 0x9f, 0xfd, 0xec,\n\t0xc2, 0xc6, 0x51, 0x2d, 0xe5, 0x9d, 0x53, 0x42, 0x4e, 0xa0, 0xad, 0x44, 0x25, 0x53, 0x0c, 0x83,\n\t0xdd, 0x60, 0xaf, 0x37, 0x78, 0x41, 0xef, 0x14, 0x45, 0x6f, 0x37, 0xd2, 0x21, 0xa2, 0x4c, 0x7c,\n\t0x2f, 0x39, 0x87, 0x5e, 0x86, 0x4a, 0xf3, 0x82, 0x69, 0x2e, 0x8a, 0xb0, 0xf1, 0x00, 0xaa, 0x65,\n\t0x02, 0xf2, 0x01, 0x3a, 0x12, 0xaf, 0x2a, 0x54, 0x3a, 0x6c, 0x59, 0xae, 0xf8, 0xbe, 0x5c, 
0x89,\n\t0x6b, 0x4b, 0xea, 0x7e, 0x32, 0x05, 0xe2, 0x5d, 0xbf, 0xc0, 0x6b, 0x8d, 0x85, 0xe2, 0xa2, 0x50,\n\t0x21, 0xec, 0x36, 0xf7, 0x7a, 0x83, 0x37, 0xf7, 0x65, 0xf5, 0xbf, 0xa7, 0x37, 0x04, 0xa7, 0x85,\n\t0x96, 0xf3, 0xe4, 0x49, 0x7a, 0xbb, 0x1e, 0x7d, 0x6b, 0x40, 0xcb, 0xcc, 0x43, 0x5e, 0x41, 0xc7,\n\t0x6f, 0xcd, 0x3b, 0x1b, 0xf9, 0xc7, 0x58, 0xc9, 0xcd, 0x1b, 0x66, 0xaf, 0xf4, 0xc8, 0x21, 0x92,\n\t0x1a, 0x4a, 0x42, 0xe8, 0x78, 0x31, 0xd6, 0xc4, 0x6e, 0x52, 0xa7, 0x64, 0x08, 0xed, 0x09, 0x1b,\n\t0xe1, 0x44, 0x85, 0x4d, 0xab, 0xfd, 0x60, 0x15, 0x77, 0xe9, 0x47, 0xdb, 0xea, 0x54, 0x7b, 0x1e,\n\t0xf2, 0x14, 0xba, 0xa5, 0xe4, 0x45, 0xca, 0x4b, 0x36, 0xb1, 0x36, 0x77, 0x93, 0xff, 0x85, 0xe8,\n\t0x35, 0xf4, 0x96, 0x9a, 0xc8, 0x06, 0x34, 0xbf, 0xe0, 0xdc, 0x8e, 0xd2, 0x4d, 0x4c, 0x48, 0x36,\n\t0x61, 0x6d, 0xc6, 0x26, 0x55, 0x2d, 0xd4, 0x25, 0x87, 0x8d, 0x83, 0x20, 0xfa, 0x1e, 0x40, 0xc7,\n\t0xef, 0x81, 0x50, 0x68, 0x99, 0xeb, 0xbc, 0xf1, 0xc0, 0x9d, 0x2e, 0xad, 0x4f, 0x97, 0x7e, 0xaa,\n\t0x4f, 0x37, 0xb1, 0x38, 0xf2, 0x1e, 0x5a, 0xb9, 0xd6, 0xa5, 0x3f, 0xa1, 0xfd, 0xfb, 0x0e, 0x79,\n\t0xa6, 0x75, 0x59, 0xaf, 0xde, 0x12, 0x44, 0x3f, 0x9a, 0xd0, 0x5b, 0xaa, 0x92, 0xc7, 0xd0, 0xe0,\n\t0x99, 0xd7, 0xdf, 0xe0, 0x19, 0xd9, 0x86, 0xf6, 0x14, 0x75, 0x2e, 0x32, 0xaf, 0xdf, 0x67, 0xe4,\n\t0x33, 0x74, 0x72, 0x64, 0x19, 0xca, 0xda, 0xe8, 0xb7, 0x0f, 0xd0, 0x40, 0xcf, 0x1c, 0x85, 0x33,\n\t0xbc, 0x26, 0x24, 0x04, 0x5a, 0x25, 0xd3, 0xb9, 0x37, 0xdb, 0xc6, 0xa6, 0x96, 0x0b, 0xa5, 0xc3,\n\t0x35, 0x57, 0x33, 0xb1, 0xd1, 0xa6, 0xd2, 0x1c, 0xa7, 0x18, 0xb6, 0x9d, 0x36, 0x97, 0x19, 0xcb,\n\t0xaf, 0x2a, 0x94, 0xf3, 0xb0, 0xe3, 0x2c, 0xb7, 0x09, 0x89, 0x60, 0xfd, 0x52, 0xb2, 0xf1, 0x14,\n\t0x0b, 0x1d, 0xae, 0xdb, 0x3f, 0x6e, 0x72, 0xc3, 0xae, 0xf8, 0x57, 0x0c, 0xbb, 0xbb, 0xc1, 0x5e,\n\t0x33, 0xb1, 0xb1, 0xc1, 0x5b, 0xfb, 0x53, 0x31, 0x09, 0xc1, 0xe1, 0xeb, 0xdc, 0xe0, 0x47, 0x22,\n\t0x9b, 0x87, 0x3d, 0xa7, 0xc6, 0xc4, 0xd1, 0x21, 0x3c, 0x5a, 0x1e, 0x67, 0xa5, 0x53, 0x38, 0x81,\n\t0xed, 
0xbb, 0xbf, 0x9d, 0x55, 0x58, 0x8e, 0xcf, 0x7f, 0x2d, 0xfa, 0xc1, 0xef, 0x45, 0x3f, 0xf8,\n\t0xb3, 0xe8, 0x07, 0x7f, 0x17, 0xfd, 0x00, 0x9e, 0x73, 0xe1, 0xd6, 0x52, 0x4a, 0x71, 0x3d, 0xbf,\n\t0x7b, 0x43, 0xc7, 0x5b, 0xb7, 0x57, 0x34, 0x34, 0x63, 0x0e, 0x83, 0x51, 0xdb, 0xce, 0xbb, 0xff,\n\t0x2f, 0x00, 0x00, 0xff, 0xff, 0xf8, 0x85, 0x21, 0xfa, 0xb0, 0x05, 0x00, 0x00,\n}\n\nfunc (m *AttributeContext) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *AttributeContext) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Source != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(m.Source.Size()))\n\t\tn1, err1 := m.Source.MarshalTo(dAtA[i:])\n\t\tif err1 != nil {\n\t\t\treturn 0, err1\n\t\t}\n\t\ti += n1\n\t}\n\tif m.Destination != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(m.Destination.Size()))\n\t\tn2, err2 := m.Destination.MarshalTo(dAtA[i:])\n\t\tif err2 != nil {\n\t\t\treturn 0, err2\n\t\t}\n\t\ti += n2\n\t}\n\tif m.Request != nil {\n\t\tdAtA[i] = 0x22\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(m.Request.Size()))\n\t\tn3, err3 := m.Request.MarshalTo(dAtA[i:])\n\t\tif err3 != nil {\n\t\t\treturn 0, err3\n\t\t}\n\t\ti += n3\n\t}\n\tif len(m.ContextExtensions) > 0 {\n\t\tkeysForContextExtensions := make([]string, 0, len(m.ContextExtensions))\n\t\tfor k, _ := range m.ContextExtensions {\n\t\t\tkeysForContextExtensions = append(keysForContextExtensions, string(k))\n\t\t}\n\t\tgithub_com_gogo_protobuf_sortkeys.Strings(keysForContextExtensions)\n\t\tfor _, k := range keysForContextExtensions {\n\t\t\tdAtA[i] = 0x52\n\t\t\ti++\n\t\t\tv := m.ContextExtensions[string(k)]\n\t\t\tmapSize := 1 + len(k) + sovAttributeContext(uint64(len(k))) + 1 + len(v) + 
sovAttributeContext(uint64(len(v)))\n\t\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(mapSize))\n\t\t\tdAtA[i] = 0xa\n\t\t\ti++\n\t\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(k)))\n\t\t\ti += copy(dAtA[i:], k)\n\t\t\tdAtA[i] = 0x12\n\t\t\ti++\n\t\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(v)))\n\t\t\ti += copy(dAtA[i:], v)\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *AttributeContext_Peer) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *AttributeContext_Peer) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Address != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(m.Address.Size()))\n\t\tn4, err4 := m.Address.MarshalTo(dAtA[i:])\n\t\tif err4 != nil {\n\t\t\treturn 0, err4\n\t\t}\n\t\ti += n4\n\t}\n\tif len(m.Service) > 0 {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(m.Service)))\n\t\ti += copy(dAtA[i:], m.Service)\n\t}\n\tif len(m.Labels) > 0 {\n\t\tkeysForLabels := make([]string, 0, len(m.Labels))\n\t\tfor k, _ := range m.Labels {\n\t\t\tkeysForLabels = append(keysForLabels, string(k))\n\t\t}\n\t\tgithub_com_gogo_protobuf_sortkeys.Strings(keysForLabels)\n\t\tfor _, k := range keysForLabels {\n\t\t\tdAtA[i] = 0x1a\n\t\t\ti++\n\t\t\tv := m.Labels[string(k)]\n\t\t\tmapSize := 1 + len(k) + sovAttributeContext(uint64(len(k))) + 1 + len(v) + sovAttributeContext(uint64(len(v)))\n\t\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(mapSize))\n\t\t\tdAtA[i] = 0xa\n\t\t\ti++\n\t\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(k)))\n\t\t\ti += copy(dAtA[i:], k)\n\t\t\tdAtA[i] = 0x12\n\t\t\ti++\n\t\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(v)))\n\t\t\ti += 
copy(dAtA[i:], v)\n\t\t}\n\t}\n\tif len(m.Principal) > 0 {\n\t\tdAtA[i] = 0x22\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(m.Principal)))\n\t\ti += copy(dAtA[i:], m.Principal)\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *AttributeContext_Request) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *AttributeContext_Request) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Time != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(m.Time.Size()))\n\t\tn5, err5 := m.Time.MarshalTo(dAtA[i:])\n\t\tif err5 != nil {\n\t\t\treturn 0, err5\n\t\t}\n\t\ti += n5\n\t}\n\tif m.Http != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(m.Http.Size()))\n\t\tn6, err6 := m.Http.MarshalTo(dAtA[i:])\n\t\tif err6 != nil {\n\t\t\treturn 0, err6\n\t\t}\n\t\ti += n6\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *AttributeContext_HttpRequest) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *AttributeContext_HttpRequest) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.Id) > 0 {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(m.Id)))\n\t\ti += copy(dAtA[i:], m.Id)\n\t}\n\tif len(m.Method) > 0 {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(m.Method)))\n\t\ti += copy(dAtA[i:], m.Method)\n\t}\n\tif len(m.Headers) > 0 {\n\t\tkeysForHeaders := make([]string, 0, len(m.Headers))\n\t\tfor k, _ := 
range m.Headers {\n\t\t\tkeysForHeaders = append(keysForHeaders, string(k))\n\t\t}\n\t\tgithub_com_gogo_protobuf_sortkeys.Strings(keysForHeaders)\n\t\tfor _, k := range keysForHeaders {\n\t\t\tdAtA[i] = 0x1a\n\t\t\ti++\n\t\t\tv := m.Headers[string(k)]\n\t\t\tmapSize := 1 + len(k) + sovAttributeContext(uint64(len(k))) + 1 + len(v) + sovAttributeContext(uint64(len(v)))\n\t\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(mapSize))\n\t\t\tdAtA[i] = 0xa\n\t\t\ti++\n\t\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(k)))\n\t\t\ti += copy(dAtA[i:], k)\n\t\t\tdAtA[i] = 0x12\n\t\t\ti++\n\t\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(v)))\n\t\t\ti += copy(dAtA[i:], v)\n\t\t}\n\t}\n\tif len(m.Path) > 0 {\n\t\tdAtA[i] = 0x22\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(m.Path)))\n\t\ti += copy(dAtA[i:], m.Path)\n\t}\n\tif len(m.Host) > 0 {\n\t\tdAtA[i] = 0x2a\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(m.Host)))\n\t\ti += copy(dAtA[i:], m.Host)\n\t}\n\tif len(m.Scheme) > 0 {\n\t\tdAtA[i] = 0x32\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(m.Scheme)))\n\t\ti += copy(dAtA[i:], m.Scheme)\n\t}\n\tif len(m.Query) > 0 {\n\t\tdAtA[i] = 0x3a\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(m.Query)))\n\t\ti += copy(dAtA[i:], m.Query)\n\t}\n\tif len(m.Fragment) > 0 {\n\t\tdAtA[i] = 0x42\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(m.Fragment)))\n\t\ti += copy(dAtA[i:], m.Fragment)\n\t}\n\tif m.Size_ != 0 {\n\t\tdAtA[i] = 0x48\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(m.Size_))\n\t}\n\tif len(m.Protocol) > 0 {\n\t\tdAtA[i] = 0x52\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(m.Protocol)))\n\t\ti += copy(dAtA[i:], m.Protocol)\n\t}\n\tif len(m.Body) > 0 {\n\t\tdAtA[i] = 0x5a\n\t\ti++\n\t\ti = encodeVarintAttributeContext(dAtA, i, uint64(len(m.Body)))\n\t\ti += copy(dAtA[i:], m.Body)\n\t}\n\tif m.XXX_unrecognized != nil 
{\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc encodeVarintAttributeContext(dAtA []byte, offset int, v uint64) int {\n\tfor v >= 1<<7 {\n\t\tdAtA[offset] = uint8(v&0x7f | 0x80)\n\t\tv >>= 7\n\t\toffset++\n\t}\n\tdAtA[offset] = uint8(v)\n\treturn offset + 1\n}\nfunc (m *AttributeContext) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Source != nil {\n\t\tl = m.Source.Size()\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tif m.Destination != nil {\n\t\tl = m.Destination.Size()\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tif m.Request != nil {\n\t\tl = m.Request.Size()\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tif len(m.ContextExtensions) > 0 {\n\t\tfor k, v := range m.ContextExtensions {\n\t\t\t_ = k\n\t\t\t_ = v\n\t\t\tmapEntrySize := 1 + len(k) + sovAttributeContext(uint64(len(k))) + 1 + len(v) + sovAttributeContext(uint64(len(v)))\n\t\t\tn += mapEntrySize + 1 + sovAttributeContext(uint64(mapEntrySize))\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *AttributeContext_Peer) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Address != nil {\n\t\tl = m.Address.Size()\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tl = len(m.Service)\n\tif l > 0 {\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tif len(m.Labels) > 0 {\n\t\tfor k, v := range m.Labels {\n\t\t\t_ = k\n\t\t\t_ = v\n\t\t\tmapEntrySize := 1 + len(k) + sovAttributeContext(uint64(len(k))) + 1 + len(v) + sovAttributeContext(uint64(len(v)))\n\t\t\tn += mapEntrySize + 1 + sovAttributeContext(uint64(mapEntrySize))\n\t\t}\n\t}\n\tl = len(m.Principal)\n\tif l > 0 {\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *AttributeContext_Request) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l 
int\n\t_ = l\n\tif m.Time != nil {\n\t\tl = m.Time.Size()\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tif m.Http != nil {\n\t\tl = m.Http.Size()\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *AttributeContext_HttpRequest) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tl = len(m.Id)\n\tif l > 0 {\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tl = len(m.Method)\n\tif l > 0 {\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tif len(m.Headers) > 0 {\n\t\tfor k, v := range m.Headers {\n\t\t\t_ = k\n\t\t\t_ = v\n\t\t\tmapEntrySize := 1 + len(k) + sovAttributeContext(uint64(len(k))) + 1 + len(v) + sovAttributeContext(uint64(len(v)))\n\t\t\tn += mapEntrySize + 1 + sovAttributeContext(uint64(mapEntrySize))\n\t\t}\n\t}\n\tl = len(m.Path)\n\tif l > 0 {\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tl = len(m.Host)\n\tif l > 0 {\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tl = len(m.Scheme)\n\tif l > 0 {\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tl = len(m.Query)\n\tif l > 0 {\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tl = len(m.Fragment)\n\tif l > 0 {\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tif m.Size_ != 0 {\n\t\tn += 1 + sovAttributeContext(uint64(m.Size_))\n\t}\n\tl = len(m.Protocol)\n\tif l > 0 {\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tl = len(m.Body)\n\tif l > 0 {\n\t\tn += 1 + l + sovAttributeContext(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc sovAttributeContext(x uint64) (n int) {\n\treturn (math_bits.Len64(x|1) + 6) / 7\n}\nfunc sozAttributeContext(x uint64) (n int) {\n\treturn sovAttributeContext(uint64((x << 1) ^ uint64((int64(x) >> 63))))\n}\nfunc (m *AttributeContext) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l 
{\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: AttributeContext: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: AttributeContext: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Source\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Source == nil {\n\t\t\t\tm.Source = &AttributeContext_Peer{}\n\t\t\t}\n\t\t\tif err := m.Source.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Destination\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx 
>= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Destination == nil {\n\t\t\t\tm.Destination = &AttributeContext_Peer{}\n\t\t\t}\n\t\t\tif err := m.Destination.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 4:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Request\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Request == nil {\n\t\t\t\tm.Request = &AttributeContext_Request{}\n\t\t\t}\n\t\t\tif err := m.Request.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 10:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field ContextExtensions\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx 
>= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.ContextExtensions == nil {\n\t\t\t\tm.ContextExtensions = make(map[string]string)\n\t\t\t}\n\t\t\tvar mapkey string\n\t\t\tvar mapvalue string\n\t\t\tfor iNdEx < postIndex {\n\t\t\t\tentryPreIndex := iNdEx\n\t\t\t\tvar wire uint64\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfieldNum := int32(wire >> 3)\n\t\t\t\tif fieldNum == 1 {\n\t\t\t\t\tvar stringLenmapkey uint64\n\t\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t\t}\n\t\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\t\tiNdEx++\n\t\t\t\t\t\tstringLenmapkey |= uint64(b&0x7F) << shift\n\t\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tintStringLenmapkey := int(stringLenmapkey)\n\t\t\t\t\tif intStringLenmapkey < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tpostStringIndexmapkey := iNdEx + intStringLenmapkey\n\t\t\t\t\tif postStringIndexmapkey < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif postStringIndexmapkey > l {\n\t\t\t\t\t\treturn 
io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tmapkey = string(dAtA[iNdEx:postStringIndexmapkey])\n\t\t\t\t\tiNdEx = postStringIndexmapkey\n\t\t\t\t} else if fieldNum == 2 {\n\t\t\t\t\tvar stringLenmapvalue uint64\n\t\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t\t}\n\t\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\t\tiNdEx++\n\t\t\t\t\t\tstringLenmapvalue |= uint64(b&0x7F) << shift\n\t\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tintStringLenmapvalue := int(stringLenmapvalue)\n\t\t\t\t\tif intStringLenmapvalue < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tpostStringIndexmapvalue := iNdEx + intStringLenmapvalue\n\t\t\t\t\tif postStringIndexmapvalue < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif postStringIndexmapvalue > l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tmapvalue = string(dAtA[iNdEx:postStringIndexmapvalue])\n\t\t\t\t\tiNdEx = postStringIndexmapvalue\n\t\t\t\t} else {\n\t\t\t\t\tiNdEx = entryPreIndex\n\t\t\t\t\tskippy, err := skipAttributeContext(dAtA[iNdEx:])\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif skippy < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif (iNdEx + skippy) > postIndex {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tiNdEx += skippy\n\t\t\t\t}\n\t\t\t}\n\t\t\tm.ContextExtensions[mapkey] = mapvalue\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipAttributeContext(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif (iNdEx + 
skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *AttributeContext_Peer) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: Peer: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: Peer: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Address\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Address == nil {\n\t\t\t\tm.Address = &core.Address{}\n\t\t\t}\n\t\t\tif err := m.Address.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn 
err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Service\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Service = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Labels\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Labels == nil {\n\t\t\t\tm.Labels = make(map[string]string)\n\t\t\t}\n\t\t\tvar mapkey string\n\t\t\tvar mapvalue string\n\t\t\tfor iNdEx < postIndex {\n\t\t\t\tentryPreIndex := iNdEx\n\t\t\t\tvar wire uint64\n\t\t\t\tfor shift := uint(0); ; shift += 7 
{\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfieldNum := int32(wire >> 3)\n\t\t\t\tif fieldNum == 1 {\n\t\t\t\t\tvar stringLenmapkey uint64\n\t\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t\t}\n\t\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\t\tiNdEx++\n\t\t\t\t\t\tstringLenmapkey |= uint64(b&0x7F) << shift\n\t\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tintStringLenmapkey := int(stringLenmapkey)\n\t\t\t\t\tif intStringLenmapkey < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tpostStringIndexmapkey := iNdEx + intStringLenmapkey\n\t\t\t\t\tif postStringIndexmapkey < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif postStringIndexmapkey > l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tmapkey = string(dAtA[iNdEx:postStringIndexmapkey])\n\t\t\t\t\tiNdEx = postStringIndexmapkey\n\t\t\t\t} else if fieldNum == 2 {\n\t\t\t\t\tvar stringLenmapvalue uint64\n\t\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t\t}\n\t\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\t\tiNdEx++\n\t\t\t\t\t\tstringLenmapvalue |= uint64(b&0x7F) << shift\n\t\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tintStringLenmapvalue := int(stringLenmapvalue)\n\t\t\t\t\tif intStringLenmapvalue < 0 {\n\t\t\t\t\t\treturn 
ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tpostStringIndexmapvalue := iNdEx + intStringLenmapvalue\n\t\t\t\t\tif postStringIndexmapvalue < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif postStringIndexmapvalue > l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tmapvalue = string(dAtA[iNdEx:postStringIndexmapvalue])\n\t\t\t\t\tiNdEx = postStringIndexmapvalue\n\t\t\t\t} else {\n\t\t\t\t\tiNdEx = entryPreIndex\n\t\t\t\t\tskippy, err := skipAttributeContext(dAtA[iNdEx:])\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif skippy < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif (iNdEx + skippy) > postIndex {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tiNdEx += skippy\n\t\t\t\t}\n\t\t\t}\n\t\t\tm.Labels[mapkey] = mapvalue\n\t\t\tiNdEx = postIndex\n\t\tcase 4:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Principal\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Principal = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipAttributeContext(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 
{\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *AttributeContext_Request) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: Request: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: Request: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Time\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn 
io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Time == nil {\n\t\t\t\tm.Time = &types.Timestamp{}\n\t\t\t}\n\t\t\tif err := m.Time.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Http\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Http == nil {\n\t\t\t\tm.Http = &AttributeContext_HttpRequest{}\n\t\t\t}\n\t\t\tif err := m.Http.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipAttributeContext(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *AttributeContext_HttpRequest) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift 
+= 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: HttpRequest: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: HttpRequest: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Id\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Id = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Method\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 
{\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Method = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Headers\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Headers == nil {\n\t\t\t\tm.Headers = make(map[string]string)\n\t\t\t}\n\t\t\tvar mapkey string\n\t\t\tvar mapvalue string\n\t\t\tfor iNdEx < postIndex {\n\t\t\t\tentryPreIndex := iNdEx\n\t\t\t\tvar wire uint64\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfieldNum := int32(wire >> 3)\n\t\t\t\tif fieldNum == 1 {\n\t\t\t\t\tvar stringLenmapkey uint64\n\t\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\t\tif shift >= 64 
{\n\t\t\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t\t}\n\t\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\t\tiNdEx++\n\t\t\t\t\t\tstringLenmapkey |= uint64(b&0x7F) << shift\n\t\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tintStringLenmapkey := int(stringLenmapkey)\n\t\t\t\t\tif intStringLenmapkey < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tpostStringIndexmapkey := iNdEx + intStringLenmapkey\n\t\t\t\t\tif postStringIndexmapkey < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif postStringIndexmapkey > l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tmapkey = string(dAtA[iNdEx:postStringIndexmapkey])\n\t\t\t\t\tiNdEx = postStringIndexmapkey\n\t\t\t\t} else if fieldNum == 2 {\n\t\t\t\t\tvar stringLenmapvalue uint64\n\t\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t\t}\n\t\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\t\tiNdEx++\n\t\t\t\t\t\tstringLenmapvalue |= uint64(b&0x7F) << shift\n\t\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tintStringLenmapvalue := int(stringLenmapvalue)\n\t\t\t\t\tif intStringLenmapvalue < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tpostStringIndexmapvalue := iNdEx + intStringLenmapvalue\n\t\t\t\t\tif postStringIndexmapvalue < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif postStringIndexmapvalue > l {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tmapvalue = string(dAtA[iNdEx:postStringIndexmapvalue])\n\t\t\t\t\tiNdEx = postStringIndexmapvalue\n\t\t\t\t} else {\n\t\t\t\t\tiNdEx = entryPreIndex\n\t\t\t\t\tskippy, err := 
skipAttributeContext(dAtA[iNdEx:])\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\treturn err\n\t\t\t\t\t}\n\t\t\t\t\tif skippy < 0 {\n\t\t\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif (iNdEx + skippy) > postIndex {\n\t\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tiNdEx += skippy\n\t\t\t\t}\n\t\t\t}\n\t\t\tm.Headers[mapkey] = mapvalue\n\t\t\tiNdEx = postIndex\n\t\tcase 4:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Path\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Path = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 5:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Host\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif 
postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Host = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 6:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Scheme\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Scheme = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 7:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Query\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Query = 
string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 8:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Fragment\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Fragment = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 9:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Size_\", wireType)\n\t\t\t}\n\t\t\tm.Size_ = 0\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tm.Size_ |= int64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\tcase 10:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Protocol\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 
{\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Protocol = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tcase 11:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Body\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Body = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipAttributeContext(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc skipAttributeContext(dAtA []byte) (n int, err error) {\n\tl := 
len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn 0, ErrIntOverflowAttributeContext\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= (uint64(b) & 0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\twireType := int(wire & 0x7)\n\t\tswitch wireType {\n\t\tcase 0:\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tiNdEx++\n\t\t\t\tif dAtA[iNdEx-1] < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 1:\n\t\t\tiNdEx += 8\n\t\t\treturn iNdEx, nil\n\t\tcase 2:\n\t\t\tvar length int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowAttributeContext\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tlength |= (int(b) & 0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif length < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\tiNdEx += length\n\t\t\tif iNdEx < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthAttributeContext\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 3:\n\t\t\tfor {\n\t\t\t\tvar innerWire uint64\n\t\t\t\tvar start int = iNdEx\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn 0, ErrIntOverflowAttributeContext\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\tinnerWire |= (uint64(b) & 0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tinnerWireType := int(innerWire & 0x7)\n\t\t\t\tif 
innerWireType == 4 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tnext, err := skipAttributeContext(dAtA[start:])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 0, err\n\t\t\t\t}\n\t\t\t\tiNdEx = start + next\n\t\t\t\tif iNdEx < 0 {\n\t\t\t\t\treturn 0, ErrInvalidLengthAttributeContext\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 4:\n\t\t\treturn iNdEx, nil\n\t\tcase 5:\n\t\t\tiNdEx += 4\n\t\t\treturn iNdEx, nil\n\t\tdefault:\n\t\t\treturn 0, fmt.Errorf(\"proto: illegal wireType %d\", wireType)\n\t\t}\n\t}\n\tpanic(\"unreachable\")\n}\n\nvar (\n\tErrInvalidLengthAttributeContext = fmt.Errorf(\"proto: negative length found during unmarshaling\")\n\tErrIntOverflowAttributeContext   = fmt.Errorf(\"proto: integer overflow\")\n)\n"
  },
  {
    "path": "third_party/generated/envoyproxy/data-plane-api/envoy/service/auth/v2/external_auth.pb.go",
    "content": "// Code generated by protoc-gen-gogo. DO NOT EDIT.\n// source: envoy/service/auth/v2/external_auth.proto\n\npackage v2\n\nimport (\n\tcontext \"context\"\n\tfmt \"fmt\"\n\t_ \"github.com/envoyproxy/protoc-gen-validate/validate\"\n\trpc \"github.com/gogo/googleapis/google/rpc\"\n\tproto \"github.com/gogo/protobuf/proto\"\n\tcore \"go.aporeto.io/trireme-lib/third_party/generated/envoyproxy/data-plane-api/envoy/api/v2/core\"\n\t_type \"go.aporeto.io/trireme-lib/third_party/generated/envoyproxy/data-plane-api/envoy/type\"\n\tgrpc \"google.golang.org/grpc\"\n\tcodes \"google.golang.org/grpc/codes\"\n\tstatus \"google.golang.org/grpc/status\"\n\tio \"io\"\n\tmath \"math\"\n\tmath_bits \"math/bits\"\n)\n\n// Reference imports to suppress errors if they are not otherwise used.\nvar _ = proto.Marshal\nvar _ = fmt.Errorf\nvar _ = math.Inf\n\n// This is a compile-time assertion to ensure that this generated file\n// is compatible with the proto package it is being compiled against.\n// A compilation error at this line likely means your copy of the\n// proto package needs to be updated.\nconst _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package\n\ntype CheckRequest struct {\n\t// The request attributes.\n\tAttributes           *AttributeContext `protobuf:\"bytes,1,opt,name=attributes,proto3\" json:\"attributes,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}          `json:\"-\"`\n\tXXX_unrecognized     []byte            `json:\"-\"`\n\tXXX_sizecache        int32             `json:\"-\"`\n}\n\nfunc (m *CheckRequest) Reset()         { *m = CheckRequest{} }\nfunc (m *CheckRequest) String() string { return proto.CompactTextString(m) }\nfunc (*CheckRequest) ProtoMessage()    {}\nfunc (*CheckRequest) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_5257cfee93a30acb, []int{0}\n}\nfunc (m *CheckRequest) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *CheckRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) 
{\n\tif deterministic {\n\t\treturn xxx_messageInfo_CheckRequest.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *CheckRequest) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_CheckRequest.Merge(m, src)\n}\nfunc (m *CheckRequest) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *CheckRequest) XXX_DiscardUnknown() {\n\txxx_messageInfo_CheckRequest.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_CheckRequest proto.InternalMessageInfo\n\nfunc (m *CheckRequest) GetAttributes() *AttributeContext {\n\tif m != nil {\n\t\treturn m.Attributes\n\t}\n\treturn nil\n}\n\n// HTTP attributes for a denied response.\ntype DeniedHttpResponse struct {\n\t// This field allows the authorization service to send a HTTP response status\n\t// code to the downstream client other than 403 (Forbidden).\n\tStatus *_type.HttpStatus `protobuf:\"bytes,1,opt,name=status,proto3\" json:\"status,omitempty\"`\n\t// This field allows the authorization service to send HTTP response headers\n\t// to the downstream client.\n\tHeaders []*core.HeaderValueOption `protobuf:\"bytes,2,rep,name=headers,proto3\" json:\"headers,omitempty\"`\n\t// This field allows the authorization service to send a response body data\n\t// to the downstream client.\n\tBody                 string   `protobuf:\"bytes,3,opt,name=body,proto3\" json:\"body,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *DeniedHttpResponse) Reset()         { *m = DeniedHttpResponse{} }\nfunc (m *DeniedHttpResponse) String() string { return proto.CompactTextString(m) }\nfunc (*DeniedHttpResponse) ProtoMessage()    {}\nfunc (*DeniedHttpResponse) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_5257cfee93a30acb, []int{1}\n}\nfunc (m *DeniedHttpResponse) XXX_Unmarshal(b []byte) error {\n\treturn 
m.Unmarshal(b)\n}\nfunc (m *DeniedHttpResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_DeniedHttpResponse.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *DeniedHttpResponse) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_DeniedHttpResponse.Merge(m, src)\n}\nfunc (m *DeniedHttpResponse) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *DeniedHttpResponse) XXX_DiscardUnknown() {\n\txxx_messageInfo_DeniedHttpResponse.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_DeniedHttpResponse proto.InternalMessageInfo\n\nfunc (m *DeniedHttpResponse) GetStatus() *_type.HttpStatus {\n\tif m != nil {\n\t\treturn m.Status\n\t}\n\treturn nil\n}\n\nfunc (m *DeniedHttpResponse) GetHeaders() []*core.HeaderValueOption {\n\tif m != nil {\n\t\treturn m.Headers\n\t}\n\treturn nil\n}\n\nfunc (m *DeniedHttpResponse) GetBody() string {\n\tif m != nil {\n\t\treturn m.Body\n\t}\n\treturn \"\"\n}\n\n// HTTP attributes for an ok response.\ntype OkHttpResponse struct {\n\t// HTTP entity headers in addition to the original request headers. This allows the authorization\n\t// service to append, to add or to override headers from the original request before\n\t// dispatching it to the upstream. By setting `append` field to `true` in the `HeaderValueOption`,\n\t// the filter will append the correspondent header value to the matched request header. 
Note that\n\t// by Leaving `append` as false, the filter will either add a new header, or override an existing\n\t// one if there is a match.\n\tHeaders              []*core.HeaderValueOption `protobuf:\"bytes,2,rep,name=headers,proto3\" json:\"headers,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}                  `json:\"-\"`\n\tXXX_unrecognized     []byte                    `json:\"-\"`\n\tXXX_sizecache        int32                     `json:\"-\"`\n}\n\nfunc (m *OkHttpResponse) Reset()         { *m = OkHttpResponse{} }\nfunc (m *OkHttpResponse) String() string { return proto.CompactTextString(m) }\nfunc (*OkHttpResponse) ProtoMessage()    {}\nfunc (*OkHttpResponse) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_5257cfee93a30acb, []int{2}\n}\nfunc (m *OkHttpResponse) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *OkHttpResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_OkHttpResponse.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *OkHttpResponse) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_OkHttpResponse.Merge(m, src)\n}\nfunc (m *OkHttpResponse) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *OkHttpResponse) XXX_DiscardUnknown() {\n\txxx_messageInfo_OkHttpResponse.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_OkHttpResponse proto.InternalMessageInfo\n\nfunc (m *OkHttpResponse) GetHeaders() []*core.HeaderValueOption {\n\tif m != nil {\n\t\treturn m.Headers\n\t}\n\treturn nil\n}\n\n// Intended for gRPC and Network Authorization servers `only`.\ntype CheckResponse struct {\n\t// Status `OK` allows the request. Any other status indicates the request should be denied.\n\tStatus *rpc.Status `protobuf:\"bytes,1,opt,name=status,proto3\" json:\"status,omitempty\"`\n\t// An message that contains HTTP response attributes. 
This message is\n\t// used when the authorization service needs to send custom responses to the\n\t// downstream client or, to modify/add request headers being dispatched to the upstream.\n\t//\n\t// Types that are valid to be assigned to HttpResponse:\n\t//\t*CheckResponse_DeniedResponse\n\t//\t*CheckResponse_OkResponse\n\tHttpResponse         isCheckResponse_HttpResponse `protobuf_oneof:\"http_response\"`\n\tXXX_NoUnkeyedLiteral struct{}                     `json:\"-\"`\n\tXXX_unrecognized     []byte                       `json:\"-\"`\n\tXXX_sizecache        int32                        `json:\"-\"`\n}\n\nfunc (m *CheckResponse) Reset()         { *m = CheckResponse{} }\nfunc (m *CheckResponse) String() string { return proto.CompactTextString(m) }\nfunc (*CheckResponse) ProtoMessage()    {}\nfunc (*CheckResponse) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_5257cfee93a30acb, []int{3}\n}\nfunc (m *CheckResponse) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *CheckResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_CheckResponse.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *CheckResponse) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_CheckResponse.Merge(m, src)\n}\nfunc (m *CheckResponse) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *CheckResponse) XXX_DiscardUnknown() {\n\txxx_messageInfo_CheckResponse.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_CheckResponse proto.InternalMessageInfo\n\ntype isCheckResponse_HttpResponse interface {\n\tisCheckResponse_HttpResponse()\n\tMarshalTo([]byte) (int, error)\n\tSize() int\n}\n\ntype CheckResponse_DeniedResponse struct {\n\tDeniedResponse *DeniedHttpResponse `protobuf:\"bytes,2,opt,name=denied_response,json=deniedResponse,proto3,oneof\"`\n}\ntype CheckResponse_OkResponse struct 
{\n\tOkResponse *OkHttpResponse `protobuf:\"bytes,3,opt,name=ok_response,json=okResponse,proto3,oneof\"`\n}\n\nfunc (*CheckResponse_DeniedResponse) isCheckResponse_HttpResponse() {}\nfunc (*CheckResponse_OkResponse) isCheckResponse_HttpResponse()     {}\n\nfunc (m *CheckResponse) GetHttpResponse() isCheckResponse_HttpResponse {\n\tif m != nil {\n\t\treturn m.HttpResponse\n\t}\n\treturn nil\n}\n\nfunc (m *CheckResponse) GetStatus() *rpc.Status {\n\tif m != nil {\n\t\treturn m.Status\n\t}\n\treturn nil\n}\n\nfunc (m *CheckResponse) GetDeniedResponse() *DeniedHttpResponse {\n\tif x, ok := m.GetHttpResponse().(*CheckResponse_DeniedResponse); ok {\n\t\treturn x.DeniedResponse\n\t}\n\treturn nil\n}\n\nfunc (m *CheckResponse) GetOkResponse() *OkHttpResponse {\n\tif x, ok := m.GetHttpResponse().(*CheckResponse_OkResponse); ok {\n\t\treturn x.OkResponse\n\t}\n\treturn nil\n}\n\n// XXX_OneofFuncs is for the internal use of the proto package.\nfunc (*CheckResponse) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {\n\treturn _CheckResponse_OneofMarshaler, _CheckResponse_OneofUnmarshaler, _CheckResponse_OneofSizer, []interface{}{\n\t\t(*CheckResponse_DeniedResponse)(nil),\n\t\t(*CheckResponse_OkResponse)(nil),\n\t}\n}\n\nfunc _CheckResponse_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {\n\tm := msg.(*CheckResponse)\n\t// http_response\n\tswitch x := m.HttpResponse.(type) {\n\tcase *CheckResponse_DeniedResponse:\n\t\t_ = b.EncodeVarint(2<<3 | proto.WireBytes)\n\t\tif err := b.EncodeMessage(x.DeniedResponse); err != nil {\n\t\t\treturn err\n\t\t}\n\tcase *CheckResponse_OkResponse:\n\t\t_ = b.EncodeVarint(3<<3 | proto.WireBytes)\n\t\tif err := b.EncodeMessage(x.OkResponse); err != nil {\n\t\t\treturn err\n\t\t}\n\tcase nil:\n\tdefault:\n\t\treturn fmt.Errorf(\"CheckResponse.HttpResponse has unexpected type %T\", x)\n\t}\n\treturn 
nil\n}\n\nfunc _CheckResponse_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {\n\tm := msg.(*CheckResponse)\n\tswitch tag {\n\tcase 2: // http_response.denied_response\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tmsg := new(DeniedHttpResponse)\n\t\terr := b.DecodeMessage(msg)\n\t\tm.HttpResponse = &CheckResponse_DeniedResponse{msg}\n\t\treturn true, err\n\tcase 3: // http_response.ok_response\n\t\tif wire != proto.WireBytes {\n\t\t\treturn true, proto.ErrInternalBadWireType\n\t\t}\n\t\tmsg := new(OkHttpResponse)\n\t\terr := b.DecodeMessage(msg)\n\t\tm.HttpResponse = &CheckResponse_OkResponse{msg}\n\t\treturn true, err\n\tdefault:\n\t\treturn false, nil\n\t}\n}\n\nfunc _CheckResponse_OneofSizer(msg proto.Message) (n int) {\n\tm := msg.(*CheckResponse)\n\t// http_response\n\tswitch x := m.HttpResponse.(type) {\n\tcase *CheckResponse_DeniedResponse:\n\t\ts := proto.Size(x.DeniedResponse)\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(s))\n\t\tn += s\n\tcase *CheckResponse_OkResponse:\n\t\ts := proto.Size(x.OkResponse)\n\t\tn += 1 // tag and wire\n\t\tn += proto.SizeVarint(uint64(s))\n\t\tn += s\n\tcase nil:\n\tdefault:\n\t\tpanic(fmt.Sprintf(\"proto: unexpected type %T in oneof\", x))\n\t}\n\treturn n\n}\n\nfunc init() {\n\tproto.RegisterType((*CheckRequest)(nil), \"envoy.service.auth.v2.CheckRequest\")\n\tproto.RegisterType((*DeniedHttpResponse)(nil), \"envoy.service.auth.v2.DeniedHttpResponse\")\n\tproto.RegisterType((*OkHttpResponse)(nil), \"envoy.service.auth.v2.OkHttpResponse\")\n\tproto.RegisterType((*CheckResponse)(nil), \"envoy.service.auth.v2.CheckResponse\")\n}\n\nfunc init() {\n\tproto.RegisterFile(\"envoy/service/auth/v2/external_auth.proto\", fileDescriptor_5257cfee93a30acb)\n}\n\nvar fileDescriptor_5257cfee93a30acb = []byte{\n\t// 492 bytes of a gzipped FileDescriptorProto\n\t0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x93, 0xc1, 
0x6e, 0xd3, 0x40,\n\t0x10, 0x86, 0xbb, 0x09, 0x2d, 0xb0, 0x21, 0x2d, 0xac, 0x04, 0x8d, 0x2a, 0x14, 0x45, 0x69, 0x11,\n\t0x29, 0x12, 0x6b, 0xc9, 0x5c, 0x38, 0x21, 0x35, 0x05, 0xe1, 0x0b, 0x6a, 0x64, 0x10, 0x48, 0x5c,\n\t0xa2, 0x8d, 0x3d, 0xaa, 0xad, 0x44, 0xde, 0x65, 0x3d, 0xb6, 0x12, 0x9e, 0x00, 0xf1, 0x18, 0x3c,\n\t0x0d, 0x47, 0x1e, 0x01, 0xe5, 0xcc, 0x13, 0x70, 0x42, 0xde, 0x5d, 0x87, 0x06, 0x12, 0x2e, 0xdc,\n\t0x56, 0x9e, 0xff, 0xff, 0x76, 0xfe, 0xd9, 0x31, 0x3d, 0x85, 0xac, 0x94, 0x0b, 0x2f, 0x07, 0x5d,\n\t0xa6, 0x11, 0x78, 0xa2, 0xc0, 0xc4, 0x2b, 0x7d, 0x0f, 0xe6, 0x08, 0x3a, 0x13, 0xb3, 0x71, 0xf5,\n\t0x81, 0x2b, 0x2d, 0x51, 0xb2, 0xbb, 0x46, 0xca, 0x9d, 0x94, 0x9b, 0x4a, 0xe9, 0x1f, 0xdd, 0xb7,\n\t0x04, 0xa1, 0xd2, 0xca, 0x18, 0x49, 0x0d, 0xde, 0x44, 0xe4, 0x60, 0x4d, 0x75, 0x15, 0x17, 0x0a,\n\t0xbc, 0x04, 0x51, 0x8d, 0x73, 0x14, 0x58, 0xe4, 0xae, 0xfa, 0x78, 0xf3, 0xed, 0x02, 0x51, 0xa7,\n\t0x93, 0x02, 0x61, 0x1c, 0xc9, 0x0c, 0x61, 0x8e, 0x4e, 0x7e, 0x78, 0x29, 0xe5, 0xe5, 0x0c, 0x3c,\n\t0xad, 0x22, 0x6f, 0x8d, 0x73, 0x58, 0x8a, 0x59, 0x1a, 0x0b, 0x04, 0xaf, 0x3e, 0xd8, 0x42, 0xff,\n\t0x1d, 0xbd, 0x75, 0x9e, 0x40, 0x34, 0x0d, 0xe1, 0x43, 0x01, 0x39, 0xb2, 0x97, 0x94, 0xae, 0xe0,\n\t0x79, 0x87, 0xf4, 0xc8, 0xa0, 0xe5, 0x3f, 0xe4, 0x1b, 0x83, 0xf1, 0xb3, 0x5a, 0x78, 0x6e, 0x9b,\n\t0x08, 0xaf, 0x58, 0xfb, 0x5f, 0x08, 0x65, 0xcf, 0x21, 0x4b, 0x21, 0x0e, 0x10, 0x55, 0x08, 0xb9,\n\t0x92, 0x59, 0x0e, 0xec, 0x29, 0xdd, 0xb3, 0x8d, 0x39, 0xf6, 0x3d, 0xc7, 0xae, 0xf2, 0xf3, 0x4a,\n\t0xf9, 0xda, 0x54, 0x87, 0x37, 0x7e, 0x0e, 0x77, 0x3f, 0x93, 0xc6, 0x6d, 0x12, 0x3a, 0x3d, 0x7b,\n\t0x46, 0xaf, 0x27, 0x20, 0x62, 0xd0, 0x79, 0xa7, 0xd1, 0x6b, 0x0e, 0x5a, 0xfe, 0x89, 0xb3, 0x0a,\n\t0x95, 0x56, 0xdd, 0x54, 0x83, 0xe5, 0x81, 0x51, 0xbc, 0x15, 0xb3, 0x02, 0x2e, 0x14, 0xa6, 0x32,\n\t0x0b, 0x6b, 0x13, 0x63, 0xf4, 0xda, 0x44, 0xc6, 0x8b, 0x4e, 0xb3, 0x47, 0x06, 0x37, 0x43, 0x73,\n\t0xee, 0x8f, 0xe8, 0xfe, 0xc5, 0x74, 0xad, 0xbf, 0xff, 0xbc, 0xa5, 0xff, 0x83, 0xd0, 0xb6, 
0x1b,\n\t0xa8, 0x23, 0x3e, 0xfa, 0x23, 0x31, 0xe3, 0xf6, 0x91, 0xb8, 0x56, 0x11, 0xb7, 0x69, 0x57, 0x19,\n\t0xdf, 0xd0, 0x83, 0xd8, 0xcc, 0x6c, 0xac, 0x9d, 0xbd, 0xd3, 0x30, 0xa6, 0xd3, 0x2d, 0x4f, 0xf0,\n\t0xf7, 0x84, 0x83, 0x9d, 0x70, 0xdf, 0x32, 0x56, 0x1d, 0x04, 0xb4, 0x25, 0xa7, 0xbf, 0x89, 0x4d,\n\t0x43, 0x7c, 0xb0, 0x85, 0xb8, 0x3e, 0x8f, 0x60, 0x27, 0xa4, 0x72, 0x95, 0x65, 0x78, 0x40, 0xdb,\n\t0x66, 0x47, 0x6b, 0x96, 0x1f, 0xd1, 0xf6, 0x59, 0x81, 0x89, 0xd4, 0xe9, 0x47, 0x51, 0x0d, 0x82,\n\t0x85, 0x74, 0xd7, 0xc4, 0x67, 0xc7, 0x5b, 0xf8, 0x57, 0xb7, 0xed, 0xe8, 0xe4, 0xdf, 0x22, 0x77,\n\t0xeb, 0xab, 0xaf, 0xcb, 0x2e, 0xf9, 0xb6, 0xec, 0x92, 0xef, 0xcb, 0x2e, 0xa1, 0xc7, 0xa9, 0xb4,\n\t0x2e, 0xa5, 0xe5, 0x7c, 0xb1, 0x19, 0x30, 0xbc, 0xf3, 0xc2, 0xfd, 0x9f, 0x55, 0x77, 0xa3, 0x6a,\n\t0xd3, 0x47, 0xe4, 0x7d, 0xa3, 0xf4, 0x3f, 0x11, 0x32, 0xd9, 0x33, 0x9b, 0xff, 0xe4, 0x57, 0x00,\n\t0x00, 0x00, 0xff, 0xff, 0x29, 0xe1, 0x8a, 0xc0, 0xda, 0x03, 0x00, 0x00,\n}\n\n// Reference imports to suppress errors if they are not otherwise used.\nvar _ context.Context\nvar _ grpc.ClientConn\n\n// This is a compile-time assertion to ensure that this generated file\n// is compatible with the grpc package it is being compiled against.\nconst _ = grpc.SupportPackageIsVersion4\n\n// AuthorizationClient is the client API for Authorization service.\n//\n// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.\ntype AuthorizationClient interface {\n\t// Performs authorization check based on the attributes associated with the\n\t// incoming request, and returns status `OK` or not `OK`.\n\tCheck(ctx context.Context, in *CheckRequest, opts ...grpc.CallOption) (*CheckResponse, error)\n}\n\ntype authorizationClient struct {\n\tcc *grpc.ClientConn\n}\n\nfunc NewAuthorizationClient(cc *grpc.ClientConn) AuthorizationClient {\n\treturn &authorizationClient{cc}\n}\n\nfunc (c *authorizationClient) Check(ctx 
context.Context, in *CheckRequest, opts ...grpc.CallOption) (*CheckResponse, error) {\n\tout := new(CheckResponse)\n\terr := c.cc.Invoke(ctx, \"/envoy.service.auth.v2.Authorization/Check\", in, out, opts...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn out, nil\n}\n\n// AuthorizationServer is the server API for Authorization service.\ntype AuthorizationServer interface {\n\t// Performs authorization check based on the attributes associated with the\n\t// incoming request, and returns status `OK` or not `OK`.\n\tCheck(context.Context, *CheckRequest) (*CheckResponse, error)\n}\n\n// UnimplementedAuthorizationServer can be embedded to have forward compatible implementations.\ntype UnimplementedAuthorizationServer struct {\n}\n\nfunc (*UnimplementedAuthorizationServer) Check(ctx context.Context, req *CheckRequest) (*CheckResponse, error) {\n\treturn nil, status.Errorf(codes.Unimplemented, \"method Check not implemented\")\n}\n\nfunc RegisterAuthorizationServer(s *grpc.Server, srv AuthorizationServer) {\n\ts.RegisterService(&_Authorization_serviceDesc, srv)\n}\n\nfunc _Authorization_Check_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {\n\tin := new(CheckRequest)\n\tif err := dec(in); err != nil {\n\t\treturn nil, err\n\t}\n\tif interceptor == nil {\n\t\treturn srv.(AuthorizationServer).Check(ctx, in)\n\t}\n\tinfo := &grpc.UnaryServerInfo{\n\t\tServer:     srv,\n\t\tFullMethod: \"/envoy.service.auth.v2.Authorization/Check\",\n\t}\n\thandler := func(ctx context.Context, req interface{}) (interface{}, error) {\n\t\treturn srv.(AuthorizationServer).Check(ctx, req.(*CheckRequest))\n\t}\n\treturn interceptor(ctx, in, info, handler)\n}\n\nvar _Authorization_serviceDesc = grpc.ServiceDesc{\n\tServiceName: \"envoy.service.auth.v2.Authorization\",\n\tHandlerType: (*AuthorizationServer)(nil),\n\tMethods: []grpc.MethodDesc{\n\t\t{\n\t\t\tMethodName: \"Check\",\n\t\t\tHandler:    
_Authorization_Check_Handler,\n\t\t},\n\t},\n\tStreams:  []grpc.StreamDesc{},\n\tMetadata: \"envoy/service/auth/v2/external_auth.proto\",\n}\n\nfunc (m *CheckRequest) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *CheckRequest) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Attributes != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintExternalAuth(dAtA, i, uint64(m.Attributes.Size()))\n\t\tn1, err1 := m.Attributes.MarshalTo(dAtA[i:])\n\t\tif err1 != nil {\n\t\t\treturn 0, err1\n\t\t}\n\t\ti += n1\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *DeniedHttpResponse) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *DeniedHttpResponse) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Status != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintExternalAuth(dAtA, i, uint64(m.Status.Size()))\n\t\tn2, err2 := m.Status.MarshalTo(dAtA[i:])\n\t\tif err2 != nil {\n\t\t\treturn 0, err2\n\t\t}\n\t\ti += n2\n\t}\n\tif len(m.Headers) > 0 {\n\t\tfor _, msg := range m.Headers {\n\t\t\tdAtA[i] = 0x12\n\t\t\ti++\n\t\t\ti = encodeVarintExternalAuth(dAtA, i, uint64(msg.Size()))\n\t\t\tn, err := msg.MarshalTo(dAtA[i:])\n\t\t\tif err != nil {\n\t\t\t\treturn 0, err\n\t\t\t}\n\t\t\ti += n\n\t\t}\n\t}\n\tif len(m.Body) > 0 {\n\t\tdAtA[i] = 0x1a\n\t\ti++\n\t\ti = encodeVarintExternalAuth(dAtA, i, uint64(len(m.Body)))\n\t\ti += copy(dAtA[i:], m.Body)\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *OkHttpResponse) Marshal() (dAtA []byte, err error) 
{\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *OkHttpResponse) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif len(m.Headers) > 0 {\n\t\tfor _, msg := range m.Headers {\n\t\t\tdAtA[i] = 0x12\n\t\t\ti++\n\t\t\ti = encodeVarintExternalAuth(dAtA, i, uint64(msg.Size()))\n\t\t\tn, err := msg.MarshalTo(dAtA[i:])\n\t\t\tif err != nil {\n\t\t\t\treturn 0, err\n\t\t\t}\n\t\t\ti += n\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *CheckResponse) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *CheckResponse) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Status != nil {\n\t\tdAtA[i] = 0xa\n\t\ti++\n\t\ti = encodeVarintExternalAuth(dAtA, i, uint64(m.Status.Size()))\n\t\tn3, err3 := m.Status.MarshalTo(dAtA[i:])\n\t\tif err3 != nil {\n\t\t\treturn 0, err3\n\t\t}\n\t\ti += n3\n\t}\n\tif m.HttpResponse != nil {\n\t\tnn4, err4 := m.HttpResponse.MarshalTo(dAtA[i:])\n\t\tif err4 != nil {\n\t\t\treturn 0, err4\n\t\t}\n\t\ti += nn4\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *CheckResponse_DeniedResponse) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tif m.DeniedResponse != nil {\n\t\tdAtA[i] = 0x12\n\t\ti++\n\t\ti = encodeVarintExternalAuth(dAtA, i, uint64(m.DeniedResponse.Size()))\n\t\tn5, err5 := m.DeniedResponse.MarshalTo(dAtA[i:])\n\t\tif err5 != nil {\n\t\t\treturn 0, err5\n\t\t}\n\t\ti += n5\n\t}\n\treturn i, nil\n}\nfunc (m *CheckResponse_OkResponse) MarshalTo(dAtA []byte) (int, error) {\n\ti := 0\n\tif m.OkResponse != nil {\n\t\tdAtA[i] = 0x1a\n\t\ti++\n\t\ti = 
encodeVarintExternalAuth(dAtA, i, uint64(m.OkResponse.Size()))\n\t\tn6, err6 := m.OkResponse.MarshalTo(dAtA[i:])\n\t\tif err6 != nil {\n\t\t\treturn 0, err6\n\t\t}\n\t\ti += n6\n\t}\n\treturn i, nil\n}\nfunc encodeVarintExternalAuth(dAtA []byte, offset int, v uint64) int {\n\tfor v >= 1<<7 {\n\t\tdAtA[offset] = uint8(v&0x7f | 0x80)\n\t\tv >>= 7\n\t\toffset++\n\t}\n\tdAtA[offset] = uint8(v)\n\treturn offset + 1\n}\nfunc (m *CheckRequest) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Attributes != nil {\n\t\tl = m.Attributes.Size()\n\t\tn += 1 + l + sovExternalAuth(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *DeniedHttpResponse) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Status != nil {\n\t\tl = m.Status.Size()\n\t\tn += 1 + l + sovExternalAuth(uint64(l))\n\t}\n\tif len(m.Headers) > 0 {\n\t\tfor _, e := range m.Headers {\n\t\t\tl = e.Size()\n\t\t\tn += 1 + l + sovExternalAuth(uint64(l))\n\t\t}\n\t}\n\tl = len(m.Body)\n\tif l > 0 {\n\t\tn += 1 + l + sovExternalAuth(uint64(l))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *OkHttpResponse) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif len(m.Headers) > 0 {\n\t\tfor _, e := range m.Headers {\n\t\t\tl = e.Size()\n\t\t\tn += 1 + l + sovExternalAuth(uint64(l))\n\t\t}\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *CheckResponse) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Status != nil {\n\t\tl = m.Status.Size()\n\t\tn += 1 + l + sovExternalAuth(uint64(l))\n\t}\n\tif m.HttpResponse != nil {\n\t\tn += m.HttpResponse.Size()\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *CheckResponse_DeniedResponse) Size() (n int) {\n\tif m == nil 
{\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.DeniedResponse != nil {\n\t\tl = m.DeniedResponse.Size()\n\t\tn += 1 + l + sovExternalAuth(uint64(l))\n\t}\n\treturn n\n}\nfunc (m *CheckResponse_OkResponse) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.OkResponse != nil {\n\t\tl = m.OkResponse.Size()\n\t\tn += 1 + l + sovExternalAuth(uint64(l))\n\t}\n\treturn n\n}\n\nfunc sovExternalAuth(x uint64) (n int) {\n\treturn (math_bits.Len64(x|1) + 6) / 7\n}\nfunc sozExternalAuth(x uint64) (n int) {\n\treturn sovExternalAuth(uint64((x << 1) ^ uint64((int64(x) >> 63))))\n}\nfunc (m *CheckRequest) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowExternalAuth\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: CheckRequest: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: CheckRequest: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Attributes\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowExternalAuth\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn 
ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Attributes == nil {\n\t\t\t\tm.Attributes = &AttributeContext{}\n\t\t\t}\n\t\t\tif err := m.Attributes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipExternalAuth(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *DeniedHttpResponse) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowExternalAuth\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: DeniedHttpResponse: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: DeniedHttpResponse: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Status\", wireType)\n\t\t\t}\n\t\t\tvar msglen 
int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowExternalAuth\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Status == nil {\n\t\t\t\tm.Status = &_type.HttpStatus{}\n\t\t\t}\n\t\t\tif err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Headers\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowExternalAuth\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Headers = append(m.Headers, &core.HeaderValueOption{})\n\t\t\tif err := m.Headers[len(m.Headers)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Body\", wireType)\n\t\t\t}\n\t\t\tvar stringLen uint64\n\t\t\tfor shift := uint(0); ; shift += 7 
{\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowExternalAuth\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tstringLen |= uint64(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tintStringLen := int(stringLen)\n\t\t\tif intStringLen < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tpostIndex := iNdEx + intStringLen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Body = string(dAtA[iNdEx:postIndex])\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipExternalAuth(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *OkHttpResponse) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowExternalAuth\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: OkHttpResponse: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: OkHttpResponse: 
illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Headers\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowExternalAuth\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.Headers = append(m.Headers, &core.HeaderValueOption{})\n\t\t\tif err := m.Headers[len(m.Headers)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipExternalAuth(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *CheckResponse) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowExternalAuth\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := 
dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: CheckResponse: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: CheckResponse: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Status\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowExternalAuth\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif m.Status == nil {\n\t\t\t\tm.Status = &rpc.Status{}\n\t\t\t}\n\t\t\tif err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tiNdEx = postIndex\n\t\tcase 2:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field DeniedResponse\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowExternalAuth\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn 
ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tv := &DeniedHttpResponse{}\n\t\t\tif err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tm.HttpResponse = &CheckResponse_DeniedResponse{v}\n\t\t\tiNdEx = postIndex\n\t\tcase 3:\n\t\t\tif wireType != 2 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field OkResponse\", wireType)\n\t\t\t}\n\t\t\tvar msglen int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowExternalAuth\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tmsglen |= int(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif msglen < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tpostIndex := iNdEx + msglen\n\t\t\tif postIndex < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif postIndex > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tv := &OkHttpResponse{}\n\t\t\tif err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tm.HttpResponse = &CheckResponse_OkResponse{v}\n\t\t\tiNdEx = postIndex\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipExternalAuth(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc 
skipExternalAuth(dAtA []byte) (n int, err error) {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn 0, ErrIntOverflowExternalAuth\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= (uint64(b) & 0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\twireType := int(wire & 0x7)\n\t\tswitch wireType {\n\t\tcase 0:\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowExternalAuth\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tiNdEx++\n\t\t\t\tif dAtA[iNdEx-1] < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 1:\n\t\t\tiNdEx += 8\n\t\t\treturn iNdEx, nil\n\t\tcase 2:\n\t\t\tvar length int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowExternalAuth\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tlength |= (int(b) & 0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif length < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\tiNdEx += length\n\t\t\tif iNdEx < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthExternalAuth\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 3:\n\t\t\tfor {\n\t\t\t\tvar innerWire uint64\n\t\t\t\tvar start int = iNdEx\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn 0, ErrIntOverflowExternalAuth\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\tinnerWire |= (uint64(b) & 0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tinnerWireType := 
int(innerWire & 0x7)\n\t\t\t\tif innerWireType == 4 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tnext, err := skipExternalAuth(dAtA[start:])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 0, err\n\t\t\t\t}\n\t\t\t\tiNdEx = start + next\n\t\t\t\tif iNdEx < 0 {\n\t\t\t\t\treturn 0, ErrInvalidLengthExternalAuth\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 4:\n\t\t\treturn iNdEx, nil\n\t\tcase 5:\n\t\t\tiNdEx += 4\n\t\t\treturn iNdEx, nil\n\t\tdefault:\n\t\t\treturn 0, fmt.Errorf(\"proto: illegal wireType %d\", wireType)\n\t\t}\n\t}\n\tpanic(\"unreachable\")\n}\n\nvar (\n\tErrInvalidLengthExternalAuth = fmt.Errorf(\"proto: negative length found during unmarshaling\")\n\tErrIntOverflowExternalAuth   = fmt.Errorf(\"proto: integer overflow\")\n)\n"
  },
  {
    "path": "third_party/generated/envoyproxy/data-plane-api/envoy/service/discovery/v2/sds.pb.go",
    "content": "// Code generated by protoc-gen-gogo. DO NOT EDIT.\n// source: envoy/service/discovery/v2/sds.proto\n\npackage envoy_service_discovery_v2\n\nimport (\n\tcontext \"context\"\n\tfmt \"fmt\"\n\t_ \"github.com/gogo/googleapis/google/api\"\n\tproto \"github.com/gogo/protobuf/proto\"\n\tv2 \"go.aporeto.io/trireme-lib/third_party/generated/envoyproxy/data-plane-api/envoy/api/v2\"\n\tgrpc \"google.golang.org/grpc\"\n\tcodes \"google.golang.org/grpc/codes\"\n\tstatus \"google.golang.org/grpc/status\"\n\tio \"io\"\n\tmath \"math\"\n\tmath_bits \"math/bits\"\n)\n\n// Reference imports to suppress errors if they are not otherwise used.\nvar _ = proto.Marshal\nvar _ = fmt.Errorf\nvar _ = math.Inf\n\n// This is a compile-time assertion to ensure that this generated file\n// is compatible with the proto package it is being compiled against.\n// A compilation error at this line likely means your copy of the\n// proto package needs to be updated.\nconst _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package\n\n// [#not-implemented-hide:] Not configuration. 
Workaround c++ protobuf issue with importing\n// services: https://github.com/google/protobuf/issues/4221\ntype SdsDummy struct {\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *SdsDummy) Reset()         { *m = SdsDummy{} }\nfunc (m *SdsDummy) String() string { return proto.CompactTextString(m) }\nfunc (*SdsDummy) ProtoMessage()    {}\nfunc (*SdsDummy) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_f2a4da2e99d9a3e6, []int{0}\n}\nfunc (m *SdsDummy) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *SdsDummy) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_SdsDummy.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *SdsDummy) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_SdsDummy.Merge(m, src)\n}\nfunc (m *SdsDummy) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *SdsDummy) XXX_DiscardUnknown() {\n\txxx_messageInfo_SdsDummy.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_SdsDummy proto.InternalMessageInfo\n\nfunc init() {\n\tproto.RegisterType((*SdsDummy)(nil), \"envoy.service.discovery.v2.SdsDummy\")\n}\n\nfunc init() {\n\tproto.RegisterFile(\"envoy/service/discovery/v2/sds.proto\", fileDescriptor_f2a4da2e99d9a3e6)\n}\n\nvar fileDescriptor_f2a4da2e99d9a3e6 = []byte{\n\t// 287 bytes of a gzipped FileDescriptorProto\n\t0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x91, 0xc1, 0x4a, 0xf3, 0x40,\n\t0x14, 0x85, 0xff, 0xe9, 0xe2, 0x47, 0x86, 0xba, 0x09, 0xe8, 0x22, 0x94, 0x28, 0xb1, 0x8b, 0xe2,\n\t0x62, 0x22, 0x71, 0xd7, 0x65, 0x08, 0xae, 0x8b, 0x01, 0xb7, 0x32, 0x26, 0x97, 0x3a, 0xd0, 0xe4,\n\t0xa6, 0x73, 0xa7, 0x83, 0xd9, 0xfa, 0x0a, 0xbe, 0x92, 0x0b, 0x97, 0x82, 0x2f, 0x20, 0xc1, 0x07,\n\t0x91, 0x64, 0xda, 0x4a, 0x0b, 0x75, 0xe5, 
0xfa, 0x3b, 0xe7, 0xbb, 0xc3, 0x1c, 0x3e, 0x86, 0xca,\n\t0x62, 0x13, 0x11, 0x68, 0xab, 0x72, 0x88, 0x0a, 0x45, 0x39, 0x5a, 0xd0, 0x4d, 0x64, 0xe3, 0x88,\n\t0x0a, 0x12, 0xb5, 0x46, 0x83, 0x9e, 0xdf, 0xa7, 0xc4, 0x3a, 0x25, 0xb6, 0x29, 0x61, 0x63, 0x7f,\n\t0xe4, 0x0c, 0xb2, 0x56, 0x5d, 0xe7, 0x07, 0xf5, 0x4d, 0x7f, 0x34, 0x47, 0x9c, 0x2f, 0xa0, 0xc7,\n\t0xb2, 0xaa, 0xd0, 0x48, 0xa3, 0xb0, 0x5a, 0x7b, 0x43, 0xce, 0x8f, 0xb2, 0x82, 0xd2, 0x55, 0x59,\n\t0x36, 0xf1, 0xeb, 0x80, 0x9f, 0x66, 0x90, 0x6b, 0x30, 0xe9, 0xc6, 0x91, 0xb9, 0x7b, 0xde, 0x3d,\n\t0x1f, 0xa6, 0xb0, 0x30, 0xd2, 0x61, 0xf2, 0x2e, 0x84, 0x7b, 0x8f, 0xac, 0x95, 0xb0, 0xb1, 0xe8,\n\t0xd9, 0xb6, 0x74, 0x0b, 0xcb, 0x15, 0x90, 0xf1, 0xc7, 0xbf, 0x87, 0xa8, 0xc6, 0x8a, 0x20, 0xfc,\n\t0x37, 0x61, 0x57, 0xcc, 0xbb, 0xe3, 0xc7, 0x99, 0xd1, 0x20, 0xcb, 0xcd, 0x85, 0x60, 0xaf, 0xbc,\n\t0x2f, 0x3f, 0x3b, 0xc8, 0x77, 0xbc, 0x4b, 0x3e, 0xbc, 0x01, 0x93, 0x3f, 0xfe, 0x99, 0xf6, 0xfc,\n\t0xf9, 0xe3, 0xeb, 0x65, 0xe0, 0x87, 0x27, 0x3b, 0x7f, 0x3d, 0x25, 0xe7, 0x9f, 0xb2, 0xcb, 0x24,\n\t0x79, 0x6b, 0x03, 0xf6, 0xde, 0x06, 0xec, 0xb3, 0x0d, 0x18, 0x9f, 0x28, 0x74, 0xca, 0x5a, 0xe3,\n\t0x53, 0x23, 0x0e, 0xcf, 0x98, 0x74, 0x43, 0xcc, 0xba, 0x51, 0x66, 0xec, 0xe1, 0x7f, 0xbf, 0xce,\n\t0xf5, 0x77, 0x00, 0x00, 0x00, 0xff, 0xff, 0x33, 0xae, 0xee, 0x26, 0x1d, 0x02, 0x00, 0x00,\n}\n\n// Reference imports to suppress errors if they are not otherwise used.\nvar _ context.Context\nvar _ grpc.ClientConn\n\n// This is a compile-time assertion to ensure that this generated file\n// is compatible with the grpc package it is being compiled against.\nconst _ = grpc.SupportPackageIsVersion4\n\n// SecretDiscoveryServiceClient is the client API for SecretDiscoveryService service.\n//\n// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.\ntype SecretDiscoveryServiceClient interface {\n\tDeltaSecrets(ctx context.Context, opts ...grpc.CallOption) 
(SecretDiscoveryService_DeltaSecretsClient, error)\n\tStreamSecrets(ctx context.Context, opts ...grpc.CallOption) (SecretDiscoveryService_StreamSecretsClient, error)\n\tFetchSecrets(ctx context.Context, in *v2.DiscoveryRequest, opts ...grpc.CallOption) (*v2.DiscoveryResponse, error)\n}\n\ntype secretDiscoveryServiceClient struct {\n\tcc *grpc.ClientConn\n}\n\nfunc NewSecretDiscoveryServiceClient(cc *grpc.ClientConn) SecretDiscoveryServiceClient {\n\treturn &secretDiscoveryServiceClient{cc}\n}\n\nfunc (c *secretDiscoveryServiceClient) DeltaSecrets(ctx context.Context, opts ...grpc.CallOption) (SecretDiscoveryService_DeltaSecretsClient, error) {\n\tstream, err := c.cc.NewStream(ctx, &_SecretDiscoveryService_serviceDesc.Streams[0], \"/envoy.service.discovery.v2.SecretDiscoveryService/DeltaSecrets\", opts...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tx := &secretDiscoveryServiceDeltaSecretsClient{stream}\n\treturn x, nil\n}\n\ntype SecretDiscoveryService_DeltaSecretsClient interface {\n\tSend(*v2.DeltaDiscoveryRequest) error\n\tRecv() (*v2.DeltaDiscoveryResponse, error)\n\tgrpc.ClientStream\n}\n\ntype secretDiscoveryServiceDeltaSecretsClient struct {\n\tgrpc.ClientStream\n}\n\nfunc (x *secretDiscoveryServiceDeltaSecretsClient) Send(m *v2.DeltaDiscoveryRequest) error {\n\treturn x.ClientStream.SendMsg(m)\n}\n\nfunc (x *secretDiscoveryServiceDeltaSecretsClient) Recv() (*v2.DeltaDiscoveryResponse, error) {\n\tm := new(v2.DeltaDiscoveryResponse)\n\tif err := x.ClientStream.RecvMsg(m); err != nil {\n\t\treturn nil, err\n\t}\n\treturn m, nil\n}\n\nfunc (c *secretDiscoveryServiceClient) StreamSecrets(ctx context.Context, opts ...grpc.CallOption) (SecretDiscoveryService_StreamSecretsClient, error) {\n\tstream, err := c.cc.NewStream(ctx, &_SecretDiscoveryService_serviceDesc.Streams[1], \"/envoy.service.discovery.v2.SecretDiscoveryService/StreamSecrets\", opts...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tx := 
&secretDiscoveryServiceStreamSecretsClient{stream}\n\treturn x, nil\n}\n\ntype SecretDiscoveryService_StreamSecretsClient interface {\n\tSend(*v2.DiscoveryRequest) error\n\tRecv() (*v2.DiscoveryResponse, error)\n\tgrpc.ClientStream\n}\n\ntype secretDiscoveryServiceStreamSecretsClient struct {\n\tgrpc.ClientStream\n}\n\nfunc (x *secretDiscoveryServiceStreamSecretsClient) Send(m *v2.DiscoveryRequest) error {\n\treturn x.ClientStream.SendMsg(m)\n}\n\nfunc (x *secretDiscoveryServiceStreamSecretsClient) Recv() (*v2.DiscoveryResponse, error) {\n\tm := new(v2.DiscoveryResponse)\n\tif err := x.ClientStream.RecvMsg(m); err != nil {\n\t\treturn nil, err\n\t}\n\treturn m, nil\n}\n\nfunc (c *secretDiscoveryServiceClient) FetchSecrets(ctx context.Context, in *v2.DiscoveryRequest, opts ...grpc.CallOption) (*v2.DiscoveryResponse, error) {\n\tout := new(v2.DiscoveryResponse)\n\terr := c.cc.Invoke(ctx, \"/envoy.service.discovery.v2.SecretDiscoveryService/FetchSecrets\", in, out, opts...)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn out, nil\n}\n\n// SecretDiscoveryServiceServer is the server API for SecretDiscoveryService service.\ntype SecretDiscoveryServiceServer interface {\n\tDeltaSecrets(SecretDiscoveryService_DeltaSecretsServer) error\n\tStreamSecrets(SecretDiscoveryService_StreamSecretsServer) error\n\tFetchSecrets(context.Context, *v2.DiscoveryRequest) (*v2.DiscoveryResponse, error)\n}\n\n// UnimplementedSecretDiscoveryServiceServer can be embedded to have forward compatible implementations.\ntype UnimplementedSecretDiscoveryServiceServer struct {\n}\n\nfunc (*UnimplementedSecretDiscoveryServiceServer) DeltaSecrets(srv SecretDiscoveryService_DeltaSecretsServer) error {\n\treturn status.Errorf(codes.Unimplemented, \"method DeltaSecrets not implemented\")\n}\nfunc (*UnimplementedSecretDiscoveryServiceServer) StreamSecrets(srv SecretDiscoveryService_StreamSecretsServer) error {\n\treturn status.Errorf(codes.Unimplemented, \"method StreamSecrets not 
implemented\")\n}\nfunc (*UnimplementedSecretDiscoveryServiceServer) FetchSecrets(ctx context.Context, req *v2.DiscoveryRequest) (*v2.DiscoveryResponse, error) {\n\treturn nil, status.Errorf(codes.Unimplemented, \"method FetchSecrets not implemented\")\n}\n\nfunc RegisterSecretDiscoveryServiceServer(s *grpc.Server, srv SecretDiscoveryServiceServer) {\n\ts.RegisterService(&_SecretDiscoveryService_serviceDesc, srv)\n}\n\nfunc _SecretDiscoveryService_DeltaSecrets_Handler(srv interface{}, stream grpc.ServerStream) error {\n\treturn srv.(SecretDiscoveryServiceServer).DeltaSecrets(&secretDiscoveryServiceDeltaSecretsServer{stream})\n}\n\ntype SecretDiscoveryService_DeltaSecretsServer interface {\n\tSend(*v2.DeltaDiscoveryResponse) error\n\tRecv() (*v2.DeltaDiscoveryRequest, error)\n\tgrpc.ServerStream\n}\n\ntype secretDiscoveryServiceDeltaSecretsServer struct {\n\tgrpc.ServerStream\n}\n\nfunc (x *secretDiscoveryServiceDeltaSecretsServer) Send(m *v2.DeltaDiscoveryResponse) error {\n\treturn x.ServerStream.SendMsg(m)\n}\n\nfunc (x *secretDiscoveryServiceDeltaSecretsServer) Recv() (*v2.DeltaDiscoveryRequest, error) {\n\tm := new(v2.DeltaDiscoveryRequest)\n\tif err := x.ServerStream.RecvMsg(m); err != nil {\n\t\treturn nil, err\n\t}\n\treturn m, nil\n}\n\nfunc _SecretDiscoveryService_StreamSecrets_Handler(srv interface{}, stream grpc.ServerStream) error {\n\treturn srv.(SecretDiscoveryServiceServer).StreamSecrets(&secretDiscoveryServiceStreamSecretsServer{stream})\n}\n\ntype SecretDiscoveryService_StreamSecretsServer interface {\n\tSend(*v2.DiscoveryResponse) error\n\tRecv() (*v2.DiscoveryRequest, error)\n\tgrpc.ServerStream\n}\n\ntype secretDiscoveryServiceStreamSecretsServer struct {\n\tgrpc.ServerStream\n}\n\nfunc (x *secretDiscoveryServiceStreamSecretsServer) Send(m *v2.DiscoveryResponse) error {\n\treturn x.ServerStream.SendMsg(m)\n}\n\nfunc (x *secretDiscoveryServiceStreamSecretsServer) Recv() (*v2.DiscoveryRequest, error) {\n\tm := new(v2.DiscoveryRequest)\n\tif err := 
x.ServerStream.RecvMsg(m); err != nil {\n\t\treturn nil, err\n\t}\n\treturn m, nil\n}\n\nfunc _SecretDiscoveryService_FetchSecrets_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {\n\tin := new(v2.DiscoveryRequest)\n\tif err := dec(in); err != nil {\n\t\treturn nil, err\n\t}\n\tif interceptor == nil {\n\t\treturn srv.(SecretDiscoveryServiceServer).FetchSecrets(ctx, in)\n\t}\n\tinfo := &grpc.UnaryServerInfo{\n\t\tServer:     srv,\n\t\tFullMethod: \"/envoy.service.discovery.v2.SecretDiscoveryService/FetchSecrets\",\n\t}\n\thandler := func(ctx context.Context, req interface{}) (interface{}, error) {\n\t\treturn srv.(SecretDiscoveryServiceServer).FetchSecrets(ctx, req.(*v2.DiscoveryRequest))\n\t}\n\treturn interceptor(ctx, in, info, handler)\n}\n\nvar _SecretDiscoveryService_serviceDesc = grpc.ServiceDesc{\n\tServiceName: \"envoy.service.discovery.v2.SecretDiscoveryService\",\n\tHandlerType: (*SecretDiscoveryServiceServer)(nil),\n\tMethods: []grpc.MethodDesc{\n\t\t{\n\t\t\tMethodName: \"FetchSecrets\",\n\t\t\tHandler:    _SecretDiscoveryService_FetchSecrets_Handler,\n\t\t},\n\t},\n\tStreams: []grpc.StreamDesc{\n\t\t{\n\t\t\tStreamName:    \"DeltaSecrets\",\n\t\t\tHandler:       _SecretDiscoveryService_DeltaSecrets_Handler,\n\t\t\tServerStreams: true,\n\t\t\tClientStreams: true,\n\t\t},\n\t\t{\n\t\t\tStreamName:    \"StreamSecrets\",\n\t\t\tHandler:       _SecretDiscoveryService_StreamSecrets_Handler,\n\t\t\tServerStreams: true,\n\t\t\tClientStreams: true,\n\t\t},\n\t},\n\tMetadata: \"envoy/service/discovery/v2/sds.proto\",\n}\n\nfunc (m *SdsDummy) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *SdsDummy) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.XXX_unrecognized != nil {\n\t\ti += 
copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc encodeVarintSds(dAtA []byte, offset int, v uint64) int {\n\tfor v >= 1<<7 {\n\t\tdAtA[offset] = uint8(v&0x7f | 0x80)\n\t\tv >>= 7\n\t\toffset++\n\t}\n\tdAtA[offset] = uint8(v)\n\treturn offset + 1\n}\nfunc (m *SdsDummy) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc sovSds(x uint64) (n int) {\n\treturn (math_bits.Len64(x|1) + 6) / 7\n}\nfunc sozSds(x uint64) (n int) {\n\treturn sovSds(uint64((x << 1) ^ uint64((int64(x) >> 63))))\n}\nfunc (m *SdsDummy) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowSds\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: SdsDummy: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: SdsDummy: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipSds(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthSds\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthSds\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc skipSds(dAtA []byte) (n int, err 
error) {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn 0, ErrIntOverflowSds\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= (uint64(b) & 0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\twireType := int(wire & 0x7)\n\t\tswitch wireType {\n\t\tcase 0:\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowSds\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tiNdEx++\n\t\t\t\tif dAtA[iNdEx-1] < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 1:\n\t\t\tiNdEx += 8\n\t\t\treturn iNdEx, nil\n\t\tcase 2:\n\t\t\tvar length int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowSds\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tlength |= (int(b) & 0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif length < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthSds\n\t\t\t}\n\t\t\tiNdEx += length\n\t\t\tif iNdEx < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthSds\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 3:\n\t\t\tfor {\n\t\t\t\tvar innerWire uint64\n\t\t\t\tvar start int = iNdEx\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn 0, ErrIntOverflowSds\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\tinnerWire |= (uint64(b) & 0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tinnerWireType := int(innerWire & 0x7)\n\t\t\t\tif innerWireType == 4 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tnext, err := 
skipSds(dAtA[start:])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 0, err\n\t\t\t\t}\n\t\t\t\tiNdEx = start + next\n\t\t\t\tif iNdEx < 0 {\n\t\t\t\t\treturn 0, ErrInvalidLengthSds\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 4:\n\t\t\treturn iNdEx, nil\n\t\tcase 5:\n\t\t\tiNdEx += 4\n\t\t\treturn iNdEx, nil\n\t\tdefault:\n\t\t\treturn 0, fmt.Errorf(\"proto: illegal wireType %d\", wireType)\n\t\t}\n\t}\n\tpanic(\"unreachable\")\n}\n\nvar (\n\tErrInvalidLengthSds = fmt.Errorf(\"proto: negative length found during unmarshaling\")\n\tErrIntOverflowSds   = fmt.Errorf(\"proto: integer overflow\")\n)\n"
  },
  {
    "path": "third_party/generated/envoyproxy/data-plane-api/envoy/type/http_status.pb.go",
    "content": "// Code generated by protoc-gen-gogo. DO NOT EDIT.\n// source: envoy/type/http_status.proto\n\npackage envoy_type\n\nimport (\n\tfmt \"fmt\"\n\t_ \"github.com/envoyproxy/protoc-gen-validate/validate\"\n\tproto \"github.com/gogo/protobuf/proto\"\n\tio \"io\"\n\tmath \"math\"\n\tmath_bits \"math/bits\"\n)\n\n// Reference imports to suppress errors if they are not otherwise used.\nvar _ = proto.Marshal\nvar _ = fmt.Errorf\nvar _ = math.Inf\n\n// This is a compile-time assertion to ensure that this generated file\n// is compatible with the proto package it is being compiled against.\n// A compilation error at this line likely means your copy of the\n// proto package needs to be updated.\nconst _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package\n\n// HTTP response codes supported in Envoy.\n// For more details: https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml\ntype StatusCode int32\n\nconst (\n\t// Empty - This code not part of the HTTP status code specification, but it is needed for proto\n\t// `enum` type.\n\tStatusCode_Empty                         StatusCode = 0\n\tStatusCode_Continue                      StatusCode = 100\n\tStatusCode_OK                            StatusCode = 200\n\tStatusCode_Created                       StatusCode = 201\n\tStatusCode_Accepted                      StatusCode = 202\n\tStatusCode_NonAuthoritativeInformation   StatusCode = 203\n\tStatusCode_NoContent                     StatusCode = 204\n\tStatusCode_ResetContent                  StatusCode = 205\n\tStatusCode_PartialContent                StatusCode = 206\n\tStatusCode_MultiStatus                   StatusCode = 207\n\tStatusCode_AlreadyReported               StatusCode = 208\n\tStatusCode_IMUsed                        StatusCode = 226\n\tStatusCode_MultipleChoices               StatusCode = 300\n\tStatusCode_MovedPermanently              StatusCode = 301\n\tStatusCode_Found                         StatusCode = 
302\n\tStatusCode_SeeOther                      StatusCode = 303\n\tStatusCode_NotModified                   StatusCode = 304\n\tStatusCode_UseProxy                      StatusCode = 305\n\tStatusCode_TemporaryRedirect             StatusCode = 307\n\tStatusCode_PermanentRedirect             StatusCode = 308\n\tStatusCode_BadRequest                    StatusCode = 400\n\tStatusCode_Unauthorized                  StatusCode = 401\n\tStatusCode_PaymentRequired               StatusCode = 402\n\tStatusCode_Forbidden                     StatusCode = 403\n\tStatusCode_NotFound                      StatusCode = 404\n\tStatusCode_MethodNotAllowed              StatusCode = 405\n\tStatusCode_NotAcceptable                 StatusCode = 406\n\tStatusCode_ProxyAuthenticationRequired   StatusCode = 407\n\tStatusCode_RequestTimeout                StatusCode = 408\n\tStatusCode_Conflict                      StatusCode = 409\n\tStatusCode_Gone                          StatusCode = 410\n\tStatusCode_LengthRequired                StatusCode = 411\n\tStatusCode_PreconditionFailed            StatusCode = 412\n\tStatusCode_PayloadTooLarge               StatusCode = 413\n\tStatusCode_URITooLong                    StatusCode = 414\n\tStatusCode_UnsupportedMediaType          StatusCode = 415\n\tStatusCode_RangeNotSatisfiable           StatusCode = 416\n\tStatusCode_ExpectationFailed             StatusCode = 417\n\tStatusCode_MisdirectedRequest            StatusCode = 421\n\tStatusCode_UnprocessableEntity           StatusCode = 422\n\tStatusCode_Locked                        StatusCode = 423\n\tStatusCode_FailedDependency              StatusCode = 424\n\tStatusCode_UpgradeRequired               StatusCode = 426\n\tStatusCode_PreconditionRequired          StatusCode = 428\n\tStatusCode_TooManyRequests               StatusCode = 429\n\tStatusCode_RequestHeaderFieldsTooLarge   StatusCode = 431\n\tStatusCode_InternalServerError           StatusCode = 500\n\tStatusCode_NotImplemented                
StatusCode = 501\n\tStatusCode_BadGateway                    StatusCode = 502\n\tStatusCode_ServiceUnavailable            StatusCode = 503\n\tStatusCode_GatewayTimeout                StatusCode = 504\n\tStatusCode_HTTPVersionNotSupported       StatusCode = 505\n\tStatusCode_VariantAlsoNegotiates         StatusCode = 506\n\tStatusCode_InsufficientStorage           StatusCode = 507\n\tStatusCode_LoopDetected                  StatusCode = 508\n\tStatusCode_NotExtended                   StatusCode = 510\n\tStatusCode_NetworkAuthenticationRequired StatusCode = 511\n)\n\nvar StatusCode_name = map[int32]string{\n\t0:   \"Empty\",\n\t100: \"Continue\",\n\t200: \"OK\",\n\t201: \"Created\",\n\t202: \"Accepted\",\n\t203: \"NonAuthoritativeInformation\",\n\t204: \"NoContent\",\n\t205: \"ResetContent\",\n\t206: \"PartialContent\",\n\t207: \"MultiStatus\",\n\t208: \"AlreadyReported\",\n\t226: \"IMUsed\",\n\t300: \"MultipleChoices\",\n\t301: \"MovedPermanently\",\n\t302: \"Found\",\n\t303: \"SeeOther\",\n\t304: \"NotModified\",\n\t305: \"UseProxy\",\n\t307: \"TemporaryRedirect\",\n\t308: \"PermanentRedirect\",\n\t400: \"BadRequest\",\n\t401: \"Unauthorized\",\n\t402: \"PaymentRequired\",\n\t403: \"Forbidden\",\n\t404: \"NotFound\",\n\t405: \"MethodNotAllowed\",\n\t406: \"NotAcceptable\",\n\t407: \"ProxyAuthenticationRequired\",\n\t408: \"RequestTimeout\",\n\t409: \"Conflict\",\n\t410: \"Gone\",\n\t411: \"LengthRequired\",\n\t412: \"PreconditionFailed\",\n\t413: \"PayloadTooLarge\",\n\t414: \"URITooLong\",\n\t415: \"UnsupportedMediaType\",\n\t416: \"RangeNotSatisfiable\",\n\t417: \"ExpectationFailed\",\n\t421: \"MisdirectedRequest\",\n\t422: \"UnprocessableEntity\",\n\t423: \"Locked\",\n\t424: \"FailedDependency\",\n\t426: \"UpgradeRequired\",\n\t428: \"PreconditionRequired\",\n\t429: \"TooManyRequests\",\n\t431: \"RequestHeaderFieldsTooLarge\",\n\t500: \"InternalServerError\",\n\t501: \"NotImplemented\",\n\t502: \"BadGateway\",\n\t503: \"ServiceUnavailable\",\n\t504: 
\"GatewayTimeout\",\n\t505: \"HTTPVersionNotSupported\",\n\t506: \"VariantAlsoNegotiates\",\n\t507: \"InsufficientStorage\",\n\t508: \"LoopDetected\",\n\t510: \"NotExtended\",\n\t511: \"NetworkAuthenticationRequired\",\n}\n\nvar StatusCode_value = map[string]int32{\n\t\"Empty\":                         0,\n\t\"Continue\":                      100,\n\t\"OK\":                            200,\n\t\"Created\":                       201,\n\t\"Accepted\":                      202,\n\t\"NonAuthoritativeInformation\":   203,\n\t\"NoContent\":                     204,\n\t\"ResetContent\":                  205,\n\t\"PartialContent\":                206,\n\t\"MultiStatus\":                   207,\n\t\"AlreadyReported\":               208,\n\t\"IMUsed\":                        226,\n\t\"MultipleChoices\":               300,\n\t\"MovedPermanently\":              301,\n\t\"Found\":                         302,\n\t\"SeeOther\":                      303,\n\t\"NotModified\":                   304,\n\t\"UseProxy\":                      305,\n\t\"TemporaryRedirect\":             307,\n\t\"PermanentRedirect\":             308,\n\t\"BadRequest\":                    400,\n\t\"Unauthorized\":                  401,\n\t\"PaymentRequired\":               402,\n\t\"Forbidden\":                     403,\n\t\"NotFound\":                      404,\n\t\"MethodNotAllowed\":              405,\n\t\"NotAcceptable\":                 406,\n\t\"ProxyAuthenticationRequired\":   407,\n\t\"RequestTimeout\":                408,\n\t\"Conflict\":                      409,\n\t\"Gone\":                          410,\n\t\"LengthRequired\":                411,\n\t\"PreconditionFailed\":            412,\n\t\"PayloadTooLarge\":               413,\n\t\"URITooLong\":                    414,\n\t\"UnsupportedMediaType\":          415,\n\t\"RangeNotSatisfiable\":           416,\n\t\"ExpectationFailed\":             417,\n\t\"MisdirectedRequest\":            421,\n\t\"UnprocessableEntity\":           422,\n\t\"Locked\":  
                      423,\n\t\"FailedDependency\":              424,\n\t\"UpgradeRequired\":               426,\n\t\"PreconditionRequired\":          428,\n\t\"TooManyRequests\":               429,\n\t\"RequestHeaderFieldsTooLarge\":   431,\n\t\"InternalServerError\":           500,\n\t\"NotImplemented\":                501,\n\t\"BadGateway\":                    502,\n\t\"ServiceUnavailable\":            503,\n\t\"GatewayTimeout\":                504,\n\t\"HTTPVersionNotSupported\":       505,\n\t\"VariantAlsoNegotiates\":         506,\n\t\"InsufficientStorage\":           507,\n\t\"LoopDetected\":                  508,\n\t\"NotExtended\":                   510,\n\t\"NetworkAuthenticationRequired\": 511,\n}\n\nfunc (x StatusCode) String() string {\n\treturn proto.EnumName(StatusCode_name, int32(x))\n}\n\nfunc (StatusCode) EnumDescriptor() ([]byte, []int) {\n\treturn fileDescriptor_7544d7adacd3389b, []int{0}\n}\n\n// HTTP status.\ntype HttpStatus struct {\n\t// Supplies HTTP response code.\n\tCode                 StatusCode `protobuf:\"varint,1,opt,name=code,proto3,enum=envoy.type.StatusCode\" json:\"code,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}   `json:\"-\"`\n\tXXX_unrecognized     []byte     `json:\"-\"`\n\tXXX_sizecache        int32      `json:\"-\"`\n}\n\nfunc (m *HttpStatus) Reset()         { *m = HttpStatus{} }\nfunc (m *HttpStatus) String() string { return proto.CompactTextString(m) }\nfunc (*HttpStatus) ProtoMessage()    {}\nfunc (*HttpStatus) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_7544d7adacd3389b, []int{0}\n}\nfunc (m *HttpStatus) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *HttpStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_HttpStatus.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *HttpStatus) 
XXX_Merge(src proto.Message) {\n\txxx_messageInfo_HttpStatus.Merge(m, src)\n}\nfunc (m *HttpStatus) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *HttpStatus) XXX_DiscardUnknown() {\n\txxx_messageInfo_HttpStatus.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_HttpStatus proto.InternalMessageInfo\n\nfunc (m *HttpStatus) GetCode() StatusCode {\n\tif m != nil {\n\t\treturn m.Code\n\t}\n\treturn StatusCode_Empty\n}\n\nfunc init() {\n\tproto.RegisterEnum(\"envoy.type.StatusCode\", StatusCode_name, StatusCode_value)\n\tproto.RegisterType((*HttpStatus)(nil), \"envoy.type.HttpStatus\")\n}\n\nfunc init() { proto.RegisterFile(\"envoy/type/http_status.proto\", fileDescriptor_7544d7adacd3389b) }\n\nvar fileDescriptor_7544d7adacd3389b = []byte{\n\t// 929 bytes of a gzipped FileDescriptorProto\n\t0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x54, 0x49, 0x6f, 0x5c, 0xc5,\n\t0x13, 0xcf, 0x9b, 0xce, 0xe6, 0x4e, 0x62, 0x57, 0x3a, 0x8b, 0xfd, 0xcf, 0x3f, 0x58, 0x56, 0x4e,\n\t0x88, 0x83, 0x2d, 0xc1, 0x09, 0x89, 0x8b, 0xed, 0xd8, 0xb1, 0xc1, 0x33, 0x19, 0x8d, 0x67, 0x72,\n\t0x45, 0xed, 0xd7, 0x35, 0x33, 0xad, 0xbc, 0xe9, 0x7a, 0xe9, 0x57, 0x33, 0xf6, 0xe3, 0xc8, 0x27,\n\t0x60, 0xdf, 0xd7, 0x03, 0x8b, 0x50, 0x42, 0x40, 0xc0, 0x77, 0x08, 0x7b, 0x3e, 0x02, 0xf2, 0x67,\n\t0x60, 0x0d, 0x08, 0x50, 0xf7, 0x2c, 0xf6, 0x85, 0x93, 0xfd, 0xaa, 0x6b, 0xf9, 0x2d, 0x35, 0x25,\n\t0x2f, 0xa3, 0x1b, 0x50, 0xb9, 0xc4, 0x65, 0x8e, 0x4b, 0x5d, 0xe6, 0xfc, 0xe9, 0x82, 0x35, 0xf7,\n\t0x8b, 0xc5, 0xdc, 0x13, 0x93, 0x92, 0xf1, 0x75, 0x31, 0xbc, 0x5e, 0x9a, 0x1d, 0xe8, 0xcc, 0x1a,\n\t0xcd, 0xb8, 0x34, 0xfe, 0x67, 0x98, 0x74, 0xe5, 0x49, 0x29, 0x37, 0x98, 0xf3, 0xed, 0x58, 0xa8,\n\t0x9e, 0x90, 0x47, 0x53, 0x32, 0x38, 0x97, 0x2c, 0x24, 0x0f, 0x4f, 0x3f, 0x7a, 0x71, 0xf1, 0xa0,\n\t0xc3, 0xe2, 0x30, 0x63, 0x95, 0x0c, 0xae, 0xc0, 0x83, 0x95, 0x63, 0xcf, 0x26, 0x95, 0x85, 0x23,\n\t0xc3, 0xbf, 0x90, 0x34, 0x62, 0xd5, 0x23, 0x5f, 0x4d, 0x49, 0x79, 0x90, 0xa6, 0xa6, 0xe4, 0xb1,\n\t0xb5, 0x5e, 0xce, 
0x25, 0x1c, 0x51, 0xa7, 0xe5, 0xc9, 0x55, 0x72, 0x6c, 0x5d, 0x1f, 0xc1, 0xa8,\n\t0x13, 0xb2, 0x72, 0xfd, 0x29, 0xb8, 0x97, 0xa8, 0xd3, 0xf2, 0xc4, 0xaa, 0x47, 0xcd, 0x68, 0xe0,\n\t0xeb, 0x44, 0x9d, 0x91, 0x27, 0x97, 0xd3, 0x14, 0xf3, 0xf0, 0xf9, 0x4d, 0xa2, 0x16, 0xe4, 0xff,\n\t0x6b, 0xe4, 0x96, 0xfb, 0xdc, 0x25, 0x6f, 0x59, 0xb3, 0x1d, 0xe0, 0xa6, 0x6b, 0x93, 0xef, 0x69,\n\t0xb6, 0xe4, 0xe0, 0xdb, 0x44, 0x4d, 0xcb, 0xa9, 0x1a, 0x85, 0xbe, 0xe8, 0x18, 0xbe, 0x4b, 0xd4,\n\t0x59, 0x79, 0xba, 0x81, 0x05, 0xf2, 0x38, 0xf4, 0x7d, 0xa2, 0xce, 0xc9, 0xe9, 0xba, 0xf6, 0x6c,\n\t0x75, 0x36, 0x0e, 0xfe, 0x90, 0x28, 0x90, 0xa7, 0xaa, 0xfd, 0x8c, 0xed, 0x10, 0x2b, 0xfc, 0x98,\n\t0xa8, 0xf3, 0x72, 0x66, 0x39, 0xf3, 0xa8, 0x4d, 0xd9, 0xc0, 0x9c, 0x7c, 0x40, 0x70, 0x3f, 0x51,\n\t0xa7, 0xe4, 0xf1, 0xcd, 0x6a, 0xab, 0x40, 0x03, 0xfb, 0x31, 0x25, 0x16, 0xe5, 0x19, 0xae, 0x76,\n\t0xc9, 0xa6, 0x58, 0xc0, 0xed, 0x8a, 0xba, 0x20, 0xa1, 0x4a, 0x03, 0x34, 0x75, 0xf4, 0x3d, 0xed,\n\t0xd0, 0x71, 0x56, 0xc2, 0x9d, 0x8a, 0x92, 0xf2, 0xd8, 0x3a, 0xf5, 0x9d, 0x81, 0x4f, 0x2b, 0x81,\n\t0xd6, 0x36, 0xe2, 0x75, 0xee, 0xa2, 0x87, 0xbb, 0x95, 0x30, 0xbc, 0x46, 0x5c, 0x25, 0x63, 0xdb,\n\t0x16, 0x0d, 0x7c, 0x16, 0x13, 0x5a, 0x05, 0xd6, 0x3d, 0xed, 0x95, 0xf0, 0x79, 0x45, 0x5d, 0x94,\n\t0x67, 0x9b, 0xd8, 0xcb, 0xc9, 0x6b, 0x5f, 0x36, 0xd0, 0x58, 0x8f, 0x29, 0xc3, 0x17, 0x31, 0x3e,\n\t0x99, 0x32, 0x89, 0x7f, 0x59, 0x51, 0x33, 0x52, 0xae, 0x68, 0xd3, 0xc0, 0x5b, 0x7d, 0x2c, 0x18,\n\t0x9e, 0x13, 0x41, 0x86, 0x96, 0xd3, 0x43, 0xdd, 0x9e, 0x41, 0x03, 0xcf, 0x8b, 0x00, 0xbe, 0xae,\n\t0xcb, 0x5e, 0xac, 0xbc, 0xd5, 0xb7, 0x1e, 0x0d, 0xbc, 0x20, 0x82, 0x7e, 0xeb, 0xe4, 0x77, 0xac,\n\t0x31, 0xe8, 0xe0, 0x45, 0x11, 0x80, 0xd4, 0x88, 0x87, 0xc0, 0x5f, 0x12, 0x91, 0x1b, 0x72, 0x97,\n\t0x4c, 0x8d, 0x78, 0x39, 0xcb, 0x68, 0x17, 0x0d, 0xbc, 0x2c, 0x94, 0x92, 0x67, 0x42, 0x20, 0x3a,\n\t0xa5, 0x77, 0x32, 0x84, 0x57, 0x44, 0xf0, 0x2a, 0xe2, 0x0f, 0x6e, 0xa1, 0x63, 0x9b, 0x46, 0x8f,\n\t0x26, 0xb3, 0x5e, 0x15, 0xc1, 0x88, 
0x11, 0xc4, 0xa6, 0xed, 0x21, 0xf5, 0x19, 0x5e, 0x8b, 0x03,\n\t0x57, 0xc9, 0xb5, 0x33, 0x9b, 0x32, 0xbc, 0x2e, 0xd4, 0x94, 0x3c, 0x7a, 0x8d, 0x1c, 0xc2, 0x1b,\n\t0x31, 0x7d, 0x0b, 0x5d, 0x87, 0xbb, 0x93, 0x1e, 0x6f, 0x0a, 0x35, 0x2b, 0x55, 0xdd, 0x63, 0x4a,\n\t0xce, 0xd8, 0xd0, 0x7e, 0x5d, 0xdb, 0x0c, 0x0d, 0xbc, 0x35, 0xa6, 0x97, 0x91, 0x36, 0x4d, 0xa2,\n\t0x2d, 0xed, 0x3b, 0x08, 0x6f, 0x8b, 0x20, 0x4c, 0xab, 0xb1, 0x19, 0x22, 0xe4, 0x3a, 0xf0, 0x8e,\n\t0x50, 0xff, 0x93, 0xe7, 0x5b, 0xae, 0xe8, 0xe7, 0x43, 0x87, 0xab, 0x68, 0xac, 0x6e, 0x96, 0x39,\n\t0xc2, 0xbb, 0x42, 0xcd, 0xc9, 0x73, 0x0d, 0xed, 0x3a, 0x58, 0x23, 0xde, 0xd6, 0x6c, 0x8b, 0xb6,\n\t0x8d, 0xd4, 0xde, 0x13, 0x41, 0xf6, 0xb5, 0xbd, 0x1c, 0x53, 0xd6, 0x87, 0x66, 0xbe, 0x1f, 0xc1,\n\t0x54, 0x6d, 0x31, 0xb4, 0x01, 0x27, 0xf2, 0x7f, 0x10, 0x5b, 0xb5, 0x5c, 0xee, 0x29, 0xc5, 0xa2,\n\t0x08, 0x4d, 0xd6, 0x1c, 0x5b, 0x2e, 0xe1, 0x43, 0x11, 0xf6, 0x69, 0x8b, 0xd2, 0x9b, 0x68, 0xe0,\n\t0xa3, 0xa8, 0xee, 0xb0, 0xd9, 0x55, 0xcc, 0xd1, 0x19, 0x74, 0x69, 0x09, 0x1f, 0x47, 0x2a, 0xad,\n\t0xbc, 0xe3, 0xb5, 0xc1, 0x09, 0xf3, 0x4f, 0x22, 0xf2, 0xc3, 0xcc, 0x27, 0x4f, 0xb7, 0x63, 0x41,\n\t0x93, 0xa8, 0xaa, 0x5d, 0x39, 0xc2, 0x50, 0xc0, 0x9d, 0x68, 0xc8, 0xe8, 0x73, 0x03, 0xb5, 0x41,\n\t0xbf, 0x6e, 0x31, 0x33, 0xc5, 0x44, 0x9d, 0xbb, 0x11, 0xe6, 0xa6, 0x63, 0xf4, 0x4e, 0x67, 0xdb,\n\t0xe8, 0x07, 0xe8, 0xd7, 0xbc, 0x27, 0x0f, 0x3f, 0x47, 0xed, 0x6b, 0xc4, 0x9b, 0xbd, 0x3c, 0xc3,\n\t0xb0, 0x31, 0x68, 0xe0, 0x17, 0x31, 0xda, 0xb2, 0x6b, 0x9a, 0x71, 0x57, 0x97, 0xf0, 0x6b, 0xe4,\n\t0x1f, 0xea, 0x6c, 0x8a, 0x2d, 0xa7, 0x07, 0xda, 0x66, 0x51, 0xb0, 0xdf, 0x62, 0xf9, 0x28, 0x6d,\n\t0xec, 0xf4, 0xef, 0x42, 0x5d, 0x96, 0xb3, 0x1b, 0xcd, 0x66, 0xfd, 0x06, 0xfa, 0xc2, 0x92, 0x0b,\n\t0x2a, 0x8f, 0x6d, 0x80, 0x3f, 0x84, 0xba, 0x24, 0x2f, 0xdc, 0xd0, 0xde, 0x6a, 0xc7, 0xcb, 0x59,\n\t0x41, 0x35, 0xec, 0x10, 0x5b, 0xcd, 0x58, 0xc0, 0x83, 0x11, 0xce, 0xa2, 0xdf, 0x6e, 0xdb, 0xd4,\n\t0xa2, 0xe3, 0x6d, 0x26, 0xaf, 0x3b, 0x08, 0x7f, 0xc6, 
0x3d, 0xdf, 0x22, 0xca, 0xaf, 0x22, 0x47,\n\t0x0b, 0xe0, 0x2f, 0x31, 0xfa, 0x71, 0xad, 0xed, 0x71, 0x50, 0xd4, 0xc0, 0xdf, 0x42, 0x5d, 0x91,\n\t0x0f, 0xd5, 0x90, 0x77, 0xc9, 0xdf, 0xfc, 0x8f, 0xdd, 0xfc, 0x47, 0xac, 0x3c, 0x7e, 0x6f, 0x7f,\n\t0x3e, 0xb9, 0xbf, 0x3f, 0x9f, 0xfc, 0xb4, 0x3f, 0x9f, 0xc8, 0x39, 0x4b, 0xc3, 0xbb, 0x97, 0x87,\n\t0x8d, 0x3e, 0x74, 0x02, 0x57, 0x66, 0x0e, 0x2e, 0x65, 0x3d, 0x1c, 0xcf, 0x7a, 0xb2, 0x73, 0x3c,\n\t0x5e, 0xd1, 0xc7, 0xfe, 0x0d, 0x00, 0x00, 0xff, 0xff, 0xaa, 0x49, 0xbb, 0xde, 0x8a, 0x05, 0x00,\n\t0x00,\n}\n\nfunc (m *HttpStatus) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *HttpStatus) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Code != 0 {\n\t\tdAtA[i] = 0x8\n\t\ti++\n\t\ti = encodeVarintHttpStatus(dAtA, i, uint64(m.Code))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc encodeVarintHttpStatus(dAtA []byte, offset int, v uint64) int {\n\tfor v >= 1<<7 {\n\t\tdAtA[offset] = uint8(v&0x7f | 0x80)\n\t\tv >>= 7\n\t\toffset++\n\t}\n\tdAtA[offset] = uint8(v)\n\treturn offset + 1\n}\nfunc (m *HttpStatus) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Code != 0 {\n\t\tn += 1 + sovHttpStatus(uint64(m.Code))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc sovHttpStatus(x uint64) (n int) {\n\treturn (math_bits.Len64(x|1) + 6) / 7\n}\nfunc sozHttpStatus(x uint64) (n int) {\n\treturn sovHttpStatus(uint64((x << 1) ^ uint64((int64(x) >> 63))))\n}\nfunc (m *HttpStatus) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn 
ErrIntOverflowHttpStatus\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: HttpStatus: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: HttpStatus: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Code\", wireType)\n\t\t\t}\n\t\t\tm.Code = 0\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowHttpStatus\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tm.Code |= StatusCode(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipHttpStatus(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthHttpStatus\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthHttpStatus\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc skipHttpStatus(dAtA []byte) (n int, err error) {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn 0, ErrIntOverflowHttpStatus\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire 
|= (uint64(b) & 0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\twireType := int(wire & 0x7)\n\t\tswitch wireType {\n\t\tcase 0:\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowHttpStatus\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tiNdEx++\n\t\t\t\tif dAtA[iNdEx-1] < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 1:\n\t\t\tiNdEx += 8\n\t\t\treturn iNdEx, nil\n\t\tcase 2:\n\t\t\tvar length int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowHttpStatus\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tlength |= (int(b) & 0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif length < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthHttpStatus\n\t\t\t}\n\t\t\tiNdEx += length\n\t\t\tif iNdEx < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthHttpStatus\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 3:\n\t\t\tfor {\n\t\t\t\tvar innerWire uint64\n\t\t\t\tvar start int = iNdEx\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn 0, ErrIntOverflowHttpStatus\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\tinnerWire |= (uint64(b) & 0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tinnerWireType := int(innerWire & 0x7)\n\t\t\t\tif innerWireType == 4 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tnext, err := skipHttpStatus(dAtA[start:])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 0, err\n\t\t\t\t}\n\t\t\t\tiNdEx = start + next\n\t\t\t\tif iNdEx < 0 {\n\t\t\t\t\treturn 0, ErrInvalidLengthHttpStatus\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 4:\n\t\t\treturn iNdEx, 
nil\n\t\tcase 5:\n\t\t\tiNdEx += 4\n\t\t\treturn iNdEx, nil\n\t\tdefault:\n\t\t\treturn 0, fmt.Errorf(\"proto: illegal wireType %d\", wireType)\n\t\t}\n\t}\n\tpanic(\"unreachable\")\n}\n\nvar (\n\tErrInvalidLengthHttpStatus = fmt.Errorf(\"proto: negative length found during unmarshaling\")\n\tErrIntOverflowHttpStatus   = fmt.Errorf(\"proto: integer overflow\")\n)\n"
  },
  {
    "path": "third_party/generated/envoyproxy/data-plane-api/envoy/type/percent.pb.go",
    "content": "// Code generated by protoc-gen-gogo. DO NOT EDIT.\n// source: envoy/type/percent.proto\n\npackage envoy_type\n\nimport (\n\tbytes \"bytes\"\n\tencoding_binary \"encoding/binary\"\n\tfmt \"fmt\"\n\t_ \"github.com/envoyproxy/protoc-gen-validate/validate\"\n\t_ \"github.com/gogo/protobuf/gogoproto\"\n\tproto \"github.com/gogo/protobuf/proto\"\n\tio \"io\"\n\tmath \"math\"\n\tmath_bits \"math/bits\"\n)\n\n// Reference imports to suppress errors if they are not otherwise used.\nvar _ = proto.Marshal\nvar _ = fmt.Errorf\nvar _ = math.Inf\n\n// This is a compile-time assertion to ensure that this generated file\n// is compatible with the proto package it is being compiled against.\n// A compilation error at this line likely means your copy of the\n// proto package needs to be updated.\nconst _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package\n\n// Fraction percentages support several fixed denominator values.\ntype FractionalPercent_DenominatorType int32\n\nconst (\n\t// 100.\n\t//\n\t// **Example**: 1/100 = 1%.\n\tFractionalPercent_HUNDRED FractionalPercent_DenominatorType = 0\n\t// 10,000.\n\t//\n\t// **Example**: 1/10000 = 0.01%.\n\tFractionalPercent_TEN_THOUSAND FractionalPercent_DenominatorType = 1\n\t// 1,000,000.\n\t//\n\t// **Example**: 1/1000000 = 0.0001%.\n\tFractionalPercent_MILLION FractionalPercent_DenominatorType = 2\n)\n\nvar FractionalPercent_DenominatorType_name = map[int32]string{\n\t0: \"HUNDRED\",\n\t1: \"TEN_THOUSAND\",\n\t2: \"MILLION\",\n}\n\nvar FractionalPercent_DenominatorType_value = map[string]int32{\n\t\"HUNDRED\":      0,\n\t\"TEN_THOUSAND\": 1,\n\t\"MILLION\":      2,\n}\n\nfunc (x FractionalPercent_DenominatorType) String() string {\n\treturn proto.EnumName(FractionalPercent_DenominatorType_name, int32(x))\n}\n\nfunc (FractionalPercent_DenominatorType) EnumDescriptor() ([]byte, []int) {\n\treturn fileDescriptor_89401f90eb07307e, []int{1, 0}\n}\n\n// Identifies a percentage, in the range [0.0, 
100.0].\ntype Percent struct {\n\tValue                float64  `protobuf:\"fixed64,1,opt,name=value,proto3\" json:\"value,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{} `json:\"-\"`\n\tXXX_unrecognized     []byte   `json:\"-\"`\n\tXXX_sizecache        int32    `json:\"-\"`\n}\n\nfunc (m *Percent) Reset()         { *m = Percent{} }\nfunc (m *Percent) String() string { return proto.CompactTextString(m) }\nfunc (*Percent) ProtoMessage()    {}\nfunc (*Percent) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_89401f90eb07307e, []int{0}\n}\nfunc (m *Percent) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *Percent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_Percent.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *Percent) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_Percent.Merge(m, src)\n}\nfunc (m *Percent) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *Percent) XXX_DiscardUnknown() {\n\txxx_messageInfo_Percent.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_Percent proto.InternalMessageInfo\n\nfunc (m *Percent) GetValue() float64 {\n\tif m != nil {\n\t\treturn m.Value\n\t}\n\treturn 0\n}\n\n// A fractional percentage is used in cases in which for performance reasons performing floating\n// point to integer conversions during randomness calculations is undesirable. The message includes\n// both a numerator and denominator that together determine the final fractional value.\n//\n// * **Example**: 1/100 = 1%.\n// * **Example**: 3/10000 = 0.03%.\ntype FractionalPercent struct {\n\t// Specifies the numerator. Defaults to 0.\n\tNumerator uint32 `protobuf:\"varint,1,opt,name=numerator,proto3\" json:\"numerator,omitempty\"`\n\t// Specifies the denominator. 
If the denominator specified is less than the numerator, the final\n\t// fractional percentage is capped at 1 (100%).\n\tDenominator          FractionalPercent_DenominatorType `protobuf:\"varint,2,opt,name=denominator,proto3,enum=envoy.type.FractionalPercent_DenominatorType\" json:\"denominator,omitempty\"`\n\tXXX_NoUnkeyedLiteral struct{}                          `json:\"-\"`\n\tXXX_unrecognized     []byte                            `json:\"-\"`\n\tXXX_sizecache        int32                             `json:\"-\"`\n}\n\nfunc (m *FractionalPercent) Reset()         { *m = FractionalPercent{} }\nfunc (m *FractionalPercent) String() string { return proto.CompactTextString(m) }\nfunc (*FractionalPercent) ProtoMessage()    {}\nfunc (*FractionalPercent) Descriptor() ([]byte, []int) {\n\treturn fileDescriptor_89401f90eb07307e, []int{1}\n}\nfunc (m *FractionalPercent) XXX_Unmarshal(b []byte) error {\n\treturn m.Unmarshal(b)\n}\nfunc (m *FractionalPercent) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {\n\tif deterministic {\n\t\treturn xxx_messageInfo_FractionalPercent.Marshal(b, m, deterministic)\n\t} else {\n\t\tb = b[:cap(b)]\n\t\tn, err := m.MarshalTo(b)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\treturn b[:n], nil\n\t}\n}\nfunc (m *FractionalPercent) XXX_Merge(src proto.Message) {\n\txxx_messageInfo_FractionalPercent.Merge(m, src)\n}\nfunc (m *FractionalPercent) XXX_Size() int {\n\treturn m.Size()\n}\nfunc (m *FractionalPercent) XXX_DiscardUnknown() {\n\txxx_messageInfo_FractionalPercent.DiscardUnknown(m)\n}\n\nvar xxx_messageInfo_FractionalPercent proto.InternalMessageInfo\n\nfunc (m *FractionalPercent) GetNumerator() uint32 {\n\tif m != nil {\n\t\treturn m.Numerator\n\t}\n\treturn 0\n}\n\nfunc (m *FractionalPercent) GetDenominator() FractionalPercent_DenominatorType {\n\tif m != nil {\n\t\treturn m.Denominator\n\t}\n\treturn FractionalPercent_HUNDRED\n}\n\nfunc init() {\n\tproto.RegisterEnum(\"envoy.type.FractionalPercent_DenominatorType\", 
FractionalPercent_DenominatorType_name, FractionalPercent_DenominatorType_value)\n\tproto.RegisterType((*Percent)(nil), \"envoy.type.Percent\")\n\tproto.RegisterType((*FractionalPercent)(nil), \"envoy.type.FractionalPercent\")\n}\n\nfunc init() { proto.RegisterFile(\"envoy/type/percent.proto\", fileDescriptor_89401f90eb07307e) }\n\nvar fileDescriptor_89401f90eb07307e = []byte{\n\t// 304 bytes of a gzipped FileDescriptorProto\n\t0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x48, 0xcd, 0x2b, 0xcb,\n\t0xaf, 0xd4, 0x2f, 0xa9, 0x2c, 0x48, 0xd5, 0x2f, 0x48, 0x2d, 0x4a, 0x4e, 0xcd, 0x2b, 0xd1, 0x2b,\n\t0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x02, 0xcb, 0xe8, 0x81, 0x64, 0xa4, 0xc4, 0xcb, 0x12, 0x73,\n\t0x32, 0x53, 0x12, 0x4b, 0x52, 0xf5, 0x61, 0x0c, 0x88, 0x22, 0x29, 0x91, 0xf4, 0xfc, 0xf4, 0x7c,\n\t0x30, 0x53, 0x1f, 0xc4, 0x82, 0x88, 0x2a, 0x59, 0x70, 0xb1, 0x07, 0x40, 0xcc, 0x12, 0xd2, 0xe5,\n\t0x62, 0x2d, 0x4b, 0xcc, 0x29, 0x4d, 0x95, 0x60, 0x54, 0x60, 0xd4, 0x60, 0x74, 0x12, 0xff, 0xe5,\n\t0x24, 0x22, 0x24, 0x24, 0xc9, 0x00, 0x06, 0x91, 0x0e, 0x9a, 0x0c, 0x50, 0x10, 0x04, 0x51, 0xa5,\n\t0x74, 0x9a, 0x91, 0x4b, 0xd0, 0xad, 0x28, 0x31, 0xb9, 0x24, 0x33, 0x3f, 0x2f, 0x31, 0x07, 0x66,\n\t0x88, 0x0c, 0x17, 0x67, 0x5e, 0x69, 0x6e, 0x6a, 0x51, 0x62, 0x49, 0x7e, 0x11, 0xd8, 0x20, 0xde,\n\t0x20, 0x84, 0x80, 0x50, 0x24, 0x17, 0x77, 0x4a, 0x6a, 0x5e, 0x7e, 0x6e, 0x66, 0x1e, 0x58, 0x9e,\n\t0x49, 0x81, 0x51, 0x83, 0xcf, 0x48, 0x57, 0x0f, 0xe1, 0x7c, 0x3d, 0x0c, 0x13, 0xf5, 0x5c, 0x10,\n\t0x1a, 0x42, 0x2a, 0x0b, 0x52, 0x9d, 0x38, 0x7e, 0x39, 0xb1, 0x36, 0x31, 0x32, 0x09, 0x30, 0x06,\n\t0x21, 0x9b, 0xa5, 0x64, 0xcb, 0xc5, 0x8f, 0xa6, 0x52, 0x88, 0x9b, 0x8b, 0xdd, 0x23, 0xd4, 0xcf,\n\t0x25, 0xc8, 0xd5, 0x45, 0x80, 0x41, 0x48, 0x80, 0x8b, 0x27, 0xc4, 0xd5, 0x2f, 0x3e, 0xc4, 0xc3,\n\t0x3f, 0x34, 0xd8, 0xd1, 0xcf, 0x45, 0x80, 0x11, 0x24, 0xed, 0xeb, 0xe9, 0xe3, 0xe3, 0xe9, 0xef,\n\t0x27, 0xc0, 0xe4, 0x64, 0xb5, 0xe2, 0x91, 0x1c, 0xe3, 0x89, 0x47, 0x72, 0x8c, 
0x17, 0x1e, 0xc9,\n\t0x31, 0x3e, 0x78, 0x24, 0xc7, 0xc8, 0x25, 0x91, 0x99, 0x0f, 0x71, 0x58, 0x41, 0x51, 0x7e, 0x45,\n\t0x25, 0x92, 0x1b, 0x9d, 0x78, 0xa0, 0x4e, 0x0b, 0x00, 0x85, 0x60, 0x00, 0x63, 0x12, 0x1b, 0x38,\n\t0x28, 0x8d, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0x5c, 0x32, 0x60, 0x45, 0xa1, 0x01, 0x00, 0x00,\n}\n\nfunc (this *Percent) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*Percent)\n\tif !ok {\n\t\tthat2, ok := that.(Percent)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Value != that1.Value {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (this *FractionalPercent) Equal(that interface{}) bool {\n\tif that == nil {\n\t\treturn this == nil\n\t}\n\n\tthat1, ok := that.(*FractionalPercent)\n\tif !ok {\n\t\tthat2, ok := that.(FractionalPercent)\n\t\tif ok {\n\t\t\tthat1 = &that2\n\t\t} else {\n\t\t\treturn false\n\t\t}\n\t}\n\tif that1 == nil {\n\t\treturn this == nil\n\t} else if this == nil {\n\t\treturn false\n\t}\n\tif this.Numerator != that1.Numerator {\n\t\treturn false\n\t}\n\tif this.Denominator != that1.Denominator {\n\t\treturn false\n\t}\n\tif !bytes.Equal(this.XXX_unrecognized, that1.XXX_unrecognized) {\n\t\treturn false\n\t}\n\treturn true\n}\nfunc (m *Percent) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *Percent) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Value != 0 {\n\t\tdAtA[i] = 0x9\n\t\ti++\n\t\tencoding_binary.LittleEndian.PutUint64(dAtA[i:], uint64(math.Float64bits(float64(m.Value))))\n\t\ti += 8\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += 
copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc (m *FractionalPercent) Marshal() (dAtA []byte, err error) {\n\tsize := m.Size()\n\tdAtA = make([]byte, size)\n\tn, err := m.MarshalTo(dAtA)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn dAtA[:n], nil\n}\n\nfunc (m *FractionalPercent) MarshalTo(dAtA []byte) (int, error) {\n\tvar i int\n\t_ = i\n\tvar l int\n\t_ = l\n\tif m.Numerator != 0 {\n\t\tdAtA[i] = 0x8\n\t\ti++\n\t\ti = encodeVarintPercent(dAtA, i, uint64(m.Numerator))\n\t}\n\tif m.Denominator != 0 {\n\t\tdAtA[i] = 0x10\n\t\ti++\n\t\ti = encodeVarintPercent(dAtA, i, uint64(m.Denominator))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\ti += copy(dAtA[i:], m.XXX_unrecognized)\n\t}\n\treturn i, nil\n}\n\nfunc encodeVarintPercent(dAtA []byte, offset int, v uint64) int {\n\tfor v >= 1<<7 {\n\t\tdAtA[offset] = uint8(v&0x7f | 0x80)\n\t\tv >>= 7\n\t\toffset++\n\t}\n\tdAtA[offset] = uint8(v)\n\treturn offset + 1\n}\nfunc (m *Percent) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Value != 0 {\n\t\tn += 9\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc (m *FractionalPercent) Size() (n int) {\n\tif m == nil {\n\t\treturn 0\n\t}\n\tvar l int\n\t_ = l\n\tif m.Numerator != 0 {\n\t\tn += 1 + sovPercent(uint64(m.Numerator))\n\t}\n\tif m.Denominator != 0 {\n\t\tn += 1 + sovPercent(uint64(m.Denominator))\n\t}\n\tif m.XXX_unrecognized != nil {\n\t\tn += len(m.XXX_unrecognized)\n\t}\n\treturn n\n}\n\nfunc sovPercent(x uint64) (n int) {\n\treturn (math_bits.Len64(x|1) + 6) / 7\n}\nfunc sozPercent(x uint64) (n int) {\n\treturn sovPercent(uint64((x << 1) ^ uint64((int64(x) >> 63))))\n}\nfunc (m *Percent) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowPercent\n\t\t\t}\n\t\t\tif iNdEx >= l 
{\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn fmt.Errorf(\"proto: Percent: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: Percent: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 1 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Value\", wireType)\n\t\t\t}\n\t\t\tvar v uint64\n\t\t\tif (iNdEx + 8) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tv = uint64(encoding_binary.LittleEndian.Uint64(dAtA[iNdEx:]))\n\t\t\tiNdEx += 8\n\t\t\tm.Value = float64(math.Float64frombits(v))\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipPercent(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthPercent\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthPercent\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc (m *FractionalPercent) Unmarshal(dAtA []byte) error {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tpreIndex := iNdEx\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn ErrIntOverflowPercent\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= uint64(b&0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tfieldNum := int32(wire >> 3)\n\t\twireType := int(wire & 0x7)\n\t\tif wireType == 4 {\n\t\t\treturn 
fmt.Errorf(\"proto: FractionalPercent: wiretype end group for non-group\")\n\t\t}\n\t\tif fieldNum <= 0 {\n\t\t\treturn fmt.Errorf(\"proto: FractionalPercent: illegal tag %d (wire type %d)\", fieldNum, wire)\n\t\t}\n\t\tswitch fieldNum {\n\t\tcase 1:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Numerator\", wireType)\n\t\t\t}\n\t\t\tm.Numerator = 0\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowPercent\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tm.Numerator |= uint32(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\tcase 2:\n\t\t\tif wireType != 0 {\n\t\t\t\treturn fmt.Errorf(\"proto: wrong wireType = %d for field Denominator\", wireType)\n\t\t\t}\n\t\t\tm.Denominator = 0\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn ErrIntOverflowPercent\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tm.Denominator |= FractionalPercent_DenominatorType(b&0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\tdefault:\n\t\t\tiNdEx = preIndex\n\t\t\tskippy, err := skipPercent(dAtA[iNdEx:])\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif skippy < 0 {\n\t\t\t\treturn ErrInvalidLengthPercent\n\t\t\t}\n\t\t\tif (iNdEx + skippy) < 0 {\n\t\t\t\treturn ErrInvalidLengthPercent\n\t\t\t}\n\t\t\tif (iNdEx + skippy) > l {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tm.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)\n\t\t\tiNdEx += skippy\n\t\t}\n\t}\n\n\tif iNdEx > l {\n\t\treturn io.ErrUnexpectedEOF\n\t}\n\treturn nil\n}\nfunc skipPercent(dAtA []byte) (n int, err error) {\n\tl := len(dAtA)\n\tiNdEx := 0\n\tfor iNdEx < l {\n\t\tvar wire uint64\n\t\tfor shift := uint(0); ; 
shift += 7 {\n\t\t\tif shift >= 64 {\n\t\t\t\treturn 0, ErrIntOverflowPercent\n\t\t\t}\n\t\t\tif iNdEx >= l {\n\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tb := dAtA[iNdEx]\n\t\t\tiNdEx++\n\t\t\twire |= (uint64(b) & 0x7F) << shift\n\t\t\tif b < 0x80 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\twireType := int(wire & 0x7)\n\t\tswitch wireType {\n\t\tcase 0:\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowPercent\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tiNdEx++\n\t\t\t\tif dAtA[iNdEx-1] < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 1:\n\t\t\tiNdEx += 8\n\t\t\treturn iNdEx, nil\n\t\tcase 2:\n\t\t\tvar length int\n\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\tif shift >= 64 {\n\t\t\t\t\treturn 0, ErrIntOverflowPercent\n\t\t\t\t}\n\t\t\t\tif iNdEx >= l {\n\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t}\n\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\tiNdEx++\n\t\t\t\tlength |= (int(b) & 0x7F) << shift\n\t\t\t\tif b < 0x80 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif length < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthPercent\n\t\t\t}\n\t\t\tiNdEx += length\n\t\t\tif iNdEx < 0 {\n\t\t\t\treturn 0, ErrInvalidLengthPercent\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 3:\n\t\t\tfor {\n\t\t\t\tvar innerWire uint64\n\t\t\t\tvar start int = iNdEx\n\t\t\t\tfor shift := uint(0); ; shift += 7 {\n\t\t\t\t\tif shift >= 64 {\n\t\t\t\t\t\treturn 0, ErrIntOverflowPercent\n\t\t\t\t\t}\n\t\t\t\t\tif iNdEx >= l {\n\t\t\t\t\t\treturn 0, io.ErrUnexpectedEOF\n\t\t\t\t\t}\n\t\t\t\t\tb := dAtA[iNdEx]\n\t\t\t\t\tiNdEx++\n\t\t\t\t\tinnerWire |= (uint64(b) & 0x7F) << shift\n\t\t\t\t\tif b < 0x80 {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tinnerWireType := int(innerWire & 0x7)\n\t\t\t\tif innerWireType == 4 {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\tnext, err := skipPercent(dAtA[start:])\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn 0, 
err\n\t\t\t\t}\n\t\t\t\tiNdEx = start + next\n\t\t\t\tif iNdEx < 0 {\n\t\t\t\t\treturn 0, ErrInvalidLengthPercent\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn iNdEx, nil\n\t\tcase 4:\n\t\t\treturn iNdEx, nil\n\t\tcase 5:\n\t\t\tiNdEx += 4\n\t\t\treturn iNdEx, nil\n\t\tdefault:\n\t\t\treturn 0, fmt.Errorf(\"proto: illegal wireType %d\", wireType)\n\t\t}\n\t}\n\tpanic(\"unreachable\")\n}\n\nvar (\n\tErrInvalidLengthPercent = fmt.Errorf(\"proto: negative length found during unmarshaling\")\n\tErrIntOverflowPercent   = fmt.Errorf(\"proto: integer overflow\")\n)\n"
  },
  {
    "path": "utils/README.md",
    "content": "# Utils directory\n\nThis directory contains utility libraries that should be treated as independent packages; they can also be used outside the scope of trireme.\n\nOn the roadmap, these will move out into separate libraries.\n"
  },
  {
    "path": "utils/allocator/allocator.go",
    "content": "package allocator\n\nimport (\n\t\"strconv\"\n)\n\n// allocator implements the Allocator interface on top of a buffered channel.\ntype allocator struct {\n\tallocate chan int\n}\n\n// New provides a new allocator\nfunc New(start, size int) Allocator {\n\ta := &allocator{\n\t\tallocate: make(chan int, size),\n\t}\n\n\tfor i := start; i < (start + size); i++ {\n\t\ta.allocate <- i\n\t}\n\n\treturn a\n}\n\n// Allocate allocates an item\nfunc (p *allocator) Allocate() string {\n\treturn strconv.Itoa(<-p.allocate)\n}\n\n// Release releases an item\nfunc (p *allocator) Release(item string) {\n\n\t// Do not release when the channel is full. This can happen when we resync\n\t// stopped containers.\n\tif len(p.allocate) == cap(p.allocate) {\n\t\treturn\n\t}\n\n\tintItem, err := strconv.Atoi(item)\n\tif err != nil {\n\t\treturn\n\t}\n\tp.allocate <- intItem\n}\n\n// AllocateInt allocates an integer.\nfunc (p *allocator) AllocateInt() int {\n\treturn <-p.allocate\n}\n\n// ReleaseInt releases an int.\nfunc (p *allocator) ReleaseInt(item int) {\n\tif len(p.allocate) == cap(p.allocate) || item == 0 {\n\t\treturn\n\t}\n\n\tp.allocate <- item\n}\n"
  },
  {
    "path": "utils/allocator/interfaces.go",
    "content": "package allocator\n\n// Allocator is an allocator interface\ntype Allocator interface {\n\n\t// Allocate allocates a string\n\tAllocate() string\n\n\t// Release releases a string\n\tRelease(item string)\n\n\t// AllocateInt allocates an int\n\tAllocateInt() int\n\n\t// ReleaseInt releases an item of type integer.\n\tReleaseInt(item int)\n}\n"
  },
  {
    "path": "utils/cache/cache.go",
    "content": "package cache\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"sync\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n)\n\n// ExpirationNotifier is a function that will be called every time a cache\n// expires an item\ntype ExpirationNotifier func(id interface{}, item interface{})\n\n// DataStore is the interface to a datastore.\ntype DataStore interface {\n\tAdd(u interface{}, value interface{}) (err error)\n\tAddOrUpdate(u interface{}, value interface{}) bool\n\tGet(u interface{}) (i interface{}, err error)\n\tGetReset(u interface{}, duration time.Duration) (interface{}, error)\n\tRemove(u interface{}) (err error)\n\tRemoveWithDelay(u interface{}, duration time.Duration) (err error)\n\tLockedModify(u interface{}, add func(a, b interface{}) interface{}, increment interface{}) (interface{}, error)\n\tSetTimeOut(u interface{}, timeout time.Duration) (err error)\n\tKeyList() []interface{}\n\tToString() string\n}\n\n// Cache is the structure that holds the map of entries. The cache\n// provides a sync mechanism and allows multiple concurrent clients.\ntype Cache struct {\n\tname     string\n\tdata     map[interface{}]entry\n\tlifetime time.Duration\n\tsync.RWMutex\n\texpirer ExpirationNotifier\n\tmax     int\n}\n\n// entry is a single line in the datastore that includes the actual entry\n// and the time that entry was created or updated\ntype entry struct {\n\tvalue     interface{}\n\ttimestamp time.Time\n\ttimer     *time.Timer\n\texpirer   ExpirationNotifier\n}\n\n// cacheRegistry keeps handles of all caches initialized through this library\n// for bookkeeping\ntype cacheRegistry struct {\n\tsync.RWMutex\n\titems map[string]*Cache\n}\n\nvar registry *cacheRegistry\n\nfunc init() {\n\n\tregistry = &cacheRegistry{\n\t\titems: make(map[string]*Cache),\n\t}\n}\n\n// Add adds a cache to the registry\nfunc (r *cacheRegistry) Add(c *Cache) {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tr.items[c.name] = c\n}\n\n// ToString generates information about all caches initialized 
through this lib\nfunc (r *cacheRegistry) ToString() string {\n\tr.Lock()\n\tdefer r.Unlock()\n\n\tbuffer := fmt.Sprintf(\"Cache Registry: %d\\n\", len(r.items))\n\tbuffer += fmt.Sprintf(\" %32s : %s\\n\\n\", \"Cache Name\", \"max/curr\")\n\tfor k, c := range r.items {\n\t\tbuffer += fmt.Sprintf(\" %32s : %s\\n\", k, c.ToString())\n\t}\n\treturn buffer\n}\n\n// NewCache creates a new data cache\nfunc NewCache(name string) *Cache {\n\n\treturn NewCacheWithExpirationNotifier(name, -1, nil)\n}\n\n// NewCacheWithExpiration creates a new data cache\nfunc NewCacheWithExpiration(name string, lifetime time.Duration) *Cache {\n\n\treturn NewCacheWithExpirationNotifier(name, lifetime, nil)\n}\n\n// NewCacheWithExpirationNotifier creates a new data cache with notifier\nfunc NewCacheWithExpirationNotifier(name string, lifetime time.Duration, expirer ExpirationNotifier) *Cache {\n\n\tc := &Cache{\n\t\tname:     name,\n\t\tdata:     make(map[interface{}]entry),\n\t\tlifetime: lifetime,\n\t\texpirer:  expirer,\n\t}\n\tc.max = len(c.data)\n\tregistry.Add(c)\n\treturn c\n}\n\n// ToString generates information about all caches initialized through this lib\nfunc ToString() string {\n\n\treturn registry.ToString()\n}\n\n// ToString provides statistics about this cache\nfunc (c *Cache) ToString() string {\n\tc.Lock()\n\tdefer c.Unlock()\n\n\treturn fmt.Sprintf(\"%d/%d\", c.max, len(c.data))\n}\n\n// Add stores an entry into the cache and updates the timestamp\nfunc (c *Cache) Add(u interface{}, value interface{}) (err error) {\n\n\tvar timer *time.Timer\n\tif c.lifetime != -1 {\n\t\ttimer = time.AfterFunc(c.lifetime, func() {\n\t\t\tif err := c.removeNotify(u, true); err != nil {\n\t\t\t\tzap.L().Warn(\"Failed to remove item\", zap.String(\"key\", fmt.Sprintf(\"%v\", u)))\n\t\t\t}\n\t\t})\n\t}\n\n\tt := time.Now()\n\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tif _, ok := c.data[u]; !ok {\n\n\t\tc.data[u] = entry{\n\t\t\tvalue:     value,\n\t\t\ttimestamp: t,\n\t\t\ttimer:     
timer,\n\t\t\texpirer:   c.expirer,\n\t\t}\n\t\tif len(c.data) > c.max {\n\t\t\tc.max = len(c.data)\n\t\t}\n\t\treturn nil\n\t}\n\n\treturn errors.New(\"item exists: use update\")\n}\n\n// GetReset retrieves the value of an entry and resets its expiration timer\nfunc (c *Cache) GetReset(u interface{}, duration time.Duration) (interface{}, error) {\n\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tif line, ok := c.data[u]; ok {\n\n\t\tif c.lifetime != -1 && line.timer != nil {\n\t\t\tif duration > 0 {\n\t\t\t\tline.timer.Reset(duration)\n\t\t\t} else {\n\t\t\t\tline.timer.Reset(c.lifetime)\n\t\t\t}\n\t\t}\n\n\t\treturn line.value, nil\n\t}\n\n\treturn nil, errors.New(\"cannot read item: not found\")\n}\n\n// Update changes the value of an entry in the cache and updates the timestamp\nfunc (c *Cache) Update(u interface{}, value interface{}) (err error) {\n\n\tvar timer *time.Timer\n\tif c.lifetime != -1 {\n\t\ttimer = time.AfterFunc(c.lifetime, func() {\n\t\t\tif err := c.removeNotify(u, true); err != nil {\n\t\t\t\tzap.L().Warn(\"Failed to remove item\", zap.String(\"key\", fmt.Sprintf(\"%v\", u)))\n\t\t\t}\n\t\t})\n\t}\n\n\tt := time.Now()\n\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tif _, ok := c.data[u]; ok {\n\n\t\tif c.data[u].timer != nil {\n\t\t\tc.data[u].timer.Stop()\n\t\t}\n\n\t\tc.data[u] = entry{\n\t\t\tvalue:     value,\n\t\t\ttimestamp: t,\n\t\t\ttimer:     timer,\n\t\t\texpirer:   c.expirer,\n\t\t}\n\n\t\treturn nil\n\t}\n\n\treturn errors.New(\"cannot update item: not found\")\n}\n\n// AddOrUpdate adds a new value in the cache or updates the existing value\n// if needed. 
If an update happens the timestamp is also updated.\n// Returns true if key was updated.\nfunc (c *Cache) AddOrUpdate(u interface{}, value interface{}) (updated bool) {\n\n\tvar timer *time.Timer\n\tif c.lifetime != -1 {\n\t\ttimer = time.AfterFunc(c.lifetime, func() {\n\t\t\tif err := c.removeNotify(u, true); err != nil {\n\t\t\t\tzap.L().Warn(\"Failed to remove item\", zap.String(\"key\", fmt.Sprintf(\"%v\", u)))\n\t\t\t}\n\t\t})\n\t}\n\n\tt := time.Now()\n\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tif _, updated = c.data[u]; updated {\n\t\tif c.data[u].timer != nil {\n\t\t\tc.data[u].timer.Stop()\n\t\t}\n\t}\n\n\tc.data[u] = entry{\n\t\tvalue:     value,\n\t\ttimestamp: t,\n\t\ttimer:     timer,\n\t\texpirer:   c.expirer,\n\t}\n\tif len(c.data) > c.max {\n\t\tc.max = len(c.data)\n\t}\n\n\treturn updated\n}\n\n// SetTimeOut sets the timeout of an entry to a new value\nfunc (c *Cache) SetTimeOut(u interface{}, timeout time.Duration) (err error) {\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tif _, ok := c.data[u]; !ok {\n\t\treturn errors.New(\"cannot set timeout: not found\")\n\t}\n\n\tif c.data[u].timer == nil {\n\t\treturn errors.New(\"cannot set timeout: entry has no expiration timer\")\n\t}\n\n\tc.data[u].timer.Reset(timeout)\n\n\treturn nil\n}\n\n// Get retrieves the entry from the cache\nfunc (c *Cache) Get(u interface{}) (i interface{}, err error) {\n\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tif _, ok := c.data[u]; !ok {\n\t\treturn nil, errors.New(\"not found\")\n\t}\n\n\treturn c.data[u].value, nil\n}\n\n// KeyList returns all the keys that are currently stored in the cache.\nfunc (c *Cache) KeyList() []interface{} {\n\tc.Lock()\n\tdefer c.Unlock()\n\n\tlist := []interface{}{}\n\tfor k := range c.data {\n\t\tlist = append(list, k)\n\t}\n\treturn list\n}\n\n// removeNotify removes the entry from the cache and optionally notifies.\n// returns error if not there\nfunc (c *Cache) removeNotify(u interface{}, notify bool) (err error) {\n\n\tc.Lock()\n\n\tval, ok := c.data[u]\n\tif !ok {\n\t\tc.Unlock()\n\t\treturn errors.New(\"not found\")\n\t}\n\n\tif val.timer != nil 
{\n\t\tval.timer.Stop()\n\t}\n\tdelete(c.data, u)\n\tc.Unlock()\n\n\tif notify && val.expirer != nil {\n\t\tval.expirer(u, val.value)\n\t}\n\treturn nil\n}\n\n// Remove removes the entry from the cache and returns error if not there\nfunc (c *Cache) Remove(u interface{}) (err error) {\n\n\treturn c.removeNotify(u, false)\n}\n\n// RemoveWithDelay removes the entry from the cache after a certain duration\nfunc (c *Cache) RemoveWithDelay(u interface{}, duration time.Duration) error {\n\tif duration == -1 {\n\t\treturn c.Remove(u)\n\t}\n\n\tc.Lock()\n\tdefer c.Unlock()\n\n\te, ok := c.data[u]\n\n\tif !ok {\n\t\treturn errors.New(\"cannot remove item with delay: not found\")\n\t}\n\n\ttimer := time.AfterFunc(duration, func() {\n\t\tif err := c.Remove(u); err != nil {\n\t\t\tzap.L().Warn(\"Failed to remove item with delay\", zap.String(\"key\", fmt.Sprintf(\"%v\", u)), zap.String(\"delay\", duration.String()))\n\t\t}\n\t})\n\n\tt := time.Now()\n\n\tif c.data[u].timer != nil {\n\t\tc.data[u].timer.Stop()\n\t}\n\n\tc.data[u] = entry{\n\t\tvalue:     e.value,\n\t\ttimestamp: t,\n\t\ttimer:     timer,\n\t\texpirer:   c.expirer,\n\t}\n\n\treturn nil\n}\n\n// SizeOf returns the number of elements in the cache\nfunc (c *Cache) SizeOf() int {\n\n\tc.Lock()\n\tdefer c.Unlock()\n\n\treturn len(c.data)\n}\n\n// LockedModify applies the add function to the value of an entry while holding the cache lock\nfunc (c *Cache) LockedModify(u interface{}, add func(a, b interface{}) interface{}, increment interface{}) (interface{}, error) {\n\n\tvar timer *time.Timer\n\tif c.lifetime != -1 {\n\t\ttimer = time.AfterFunc(c.lifetime, func() {\n\t\t\tif err := c.removeNotify(u, true); err != nil {\n\t\t\t\tzap.L().Warn(\"Failed to remove item\", zap.String(\"key\", fmt.Sprintf(\"%v\", u)))\n\t\t\t}\n\t\t})\n\t}\n\n\tt := time.Now()\n\n\tc.Lock()\n\tdefer c.Unlock()\n\n\te, ok := c.data[u]\n\tif !ok {\n\t\treturn nil, errors.New(\"not found\")\n\t}\n\n\tif e.timer != nil {\n\t\te.timer.Stop()\n\t}\n\n\te.value = add(e.value, increment)\n\te.timer = 
timer\n\te.timestamp = t\n\te.expirer = c.expirer\n\n\tc.data[u] = e\n\n\treturn e.value, nil\n}\n"
  },
  {
    "path": "utils/cache/cache_test.go",
    "content": "// +build !windows\n\npackage cache\n\nimport (\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/rs/xid\"\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestConstructorNewCache(t *testing.T) {\n\n\t//\tt.Parallel()\n\n\tConvey(\"Given I call the method NewCache, I should a new cache\", t, func() {\n\n\t\tc := &Cache{\n\t\t\tname: \"cache\",\n\t\t}\n\n\t\tSo(NewCache(\"cache\"), ShouldHaveSameTypeAs, c)\n\t})\n}\n\nfunc TestElements(t *testing.T) {\n\n\t//\tt.Parallel()\n\n\tc := NewCache(\"cache\")\n\tid := xid.New()\n\tfakeid := xid.New()\n\tnewid := xid.New()\n\tvalue := \"element\"\n\tsecondValue := \"element2\"\n\tthirdValue := \"element3\"\n\n\tConvey(\"Given that I want to test elemenets, I must initialize a cache\", t, func() {\n\n\t\t// Test Write\n\t\tConvey(\"Given that I add a new element in the cache, it should not have errors\", func() {\n\t\t\terr := c.Add(id, value)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"Given that I add the same element for a second time, I should get an error\", func() {\n\t\t\terr := c.Add(id, value)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\t// Test Read\n\t\tConvey(\"Given that I have an element in the cache, I should be able to read it\", func() {\n\t\t\tnewvalue, err := c.Get(id)\n\t\t\tSo(value, ShouldEqual, newvalue)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"Given that I try to read an element that is not there, I should get an error\", func() {\n\t\t\t_, err := c.Get(fakeid)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\t// Test Update\n\t\tConvey(\"Given that I want to update the element, I should be able to do it\", func() {\n\n\t\t\terr := c.Update(id, secondValue)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tnewvalue, err := c.Get(id)\n\t\t\tSo(newvalue, ShouldEqual, secondValue)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"Given that I try to update an element that doesn't exist, I should get an error \", func() {\n\t\t\tnextid := xid.New()\n\t\t\terr := c.Update(nextid, 
value)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\n\t\tConvey(\"Given that I try to add or update an element in the cache, I should not get an error\", func() {\n\n\t\t\tok := c.AddOrUpdate(newid, secondValue)\n\t\t\tnewvalue, err := c.Get(newid)\n\t\t\tSo(newvalue, ShouldEqual, secondValue)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(ok, ShouldBeFalse)\n\t\t})\n\n\t\tConvey(\"Given that I try to add or update an existing element in the cache, I should not get an error\", func() {\n\n\t\t\tok := c.AddOrUpdate(newid, thirdValue)\n\t\t\tnewvalue, err := c.Get(newid)\n\t\t\tSo(newvalue, ShouldEqual, thirdValue)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(ok, ShouldBeTrue)\n\t\t})\n\n\t\tConvey(\"Given that I have an element in the cache, I should be able to delete it\", func() {\n\t\t\terr := c.Remove(id)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"Given that I try to delete the same element twice, I should not be able to do it\", func() {\n\t\t\terr := c.Remove(id)\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t})\n}\n\nfunc Test_CacheTimer(t *testing.T) {\n\n\t//t.Parallel()\n\n\tConvey(\"Given a new cache with an expiration timer\", t, func() {\n\t\tc := NewCacheWithExpiration(\"cache\", 3*time.Second)\n\n\t\tConvey(\"When I create an item that has to exist for a second\", func() {\n\t\t\terr := c.Add(\"key\", \"value\")\n\t\t\tSo(err, ShouldBeNil)\n\n\t\t\tConvey(\"Then I should be able to get back the item\", func() {\n\t\t\t\tval, err := c.Get(\"key\")\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(val.(string), ShouldResemble, \"value\")\n\n\t\t\t\tConvey(\"When I wait for 1 second and update the time\", func() {\n\t\t\t\t\t<-time.After(1 * time.Second)\n\n\t\t\t\t\tc.AddOrUpdate(\"key\", \"value2\")\n\n\t\t\t\t\tConvey(\"I should be able to read the second item\", func() {\n\t\t\t\t\t\tval, err := c.Get(\"key\")\n\t\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t\t\tSo(val.(string), ShouldResemble, \"value2\")\n\n\t\t\t\t\t\tConvey(\"But when I wait for another second, the item should still 
exist\", func() {\n\t\t\t\t\t\t\t<-time.After(1 * time.Second)\n\t\t\t\t\t\t\tval, err := c.Get(\"key\")\n\t\t\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t\t\t\tSo(val.(string), ShouldResemble, \"value2\")\n\n\t\t\t\t\t\t\tConvey(\"But if I wait for two seconds after the update, the item must not exixt\", func() {\n\t\t\t\t\t\t\t\t<-time.After(4 * time.Second)\n\t\t\t\t\t\t\t\t_, err := c.Get(\"key\")\n\t\t\t\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\t\t\t})\n\t\t\t\t\t\t})\n\t\t\t\t\t})\n\n\t\t\t\t})\n\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc add(a, b interface{}) interface{} {\n\treturn a.(int) + b.(int)\n}\n\nfunc TestLockedModify(t *testing.T) {\n\n\t//t.Parallel()\n\n\tConvey(\"Given a new cache\", t, func() {\n\t\tc := NewCache(\"cache\")\n\n\t\tConvey(\"Given an element that is an integer\", func() {\n\t\t\terr := c.Add(\"key\", 1)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tConvey(\"Given an an incremental add function\", func() {\n\t\t\t\tvalue, err := c.LockedModify(\"key\", add, 1)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(value, ShouldNotBeNil)\n\t\t\t\tConvey(\"I should get the right value  \", func() {\n\t\t\t\t\tval, err := c.Get(\"key\")\n\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t\tSo(val.(int), ShouldEqual, 2)\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestTimerExpirationWithUpdate(t *testing.T) {\n\n\t//t.Parallel()\n\n\tConvey(\"Given that I instantiate 1 objects with 2 second timers\", t, func() {\n\t\ti := 1\n\t\tc := NewCacheWithExpiration(\"cache\", 2*time.Second)\n\t\terr := c.Add(i, i)\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"When I check the cache size After 1 seconds, the size should be 1\", func() {\n\t\t\t<-time.After(1 * time.Second)\n\t\t\t// So(c.SizeOf(), ShouldEqual, 1)\n\t\t\tConvey(\"When I update the object and check again after another 1 seconds, the size should be 1\", func() {\n\t\t\t\terr := c.Update(1, 1)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t<-time.After(1 * time.Second)\n\t\t\t\t// So(c.SizeOf(), ShouldEqual, 1) // @TODO: fix me it should 
be 1\n\t\t\t\tConvey(\"When I check the cache size after another 2 seconds, the size should be 0\", func() {\n\t\t\t\t\t<-time.After(2 * time.Second)\n\t\t\t\t\tSo(c.SizeOf(), ShouldBeZeroValue)\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestGetReset(t *testing.T) {\n\n\t//t.Parallel()\n\n\tConvey(\"Given that I instantiate 1 object with a 2 second timer\", t, func() {\n\t\tc := NewCacheWithExpiration(\"cache\", 2*time.Second)\n\t\terr := c.Add(\"test\", \"test\")\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"When I check the cache after 1 second, the element should be there\", func() {\n\t\t\t<-time.After(1 * time.Second)\n\t\t\td, err := c.Get(\"test\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(d.(string), ShouldResemble, \"test\")\n\n\t\t\tConvey(\"When I retrieve the data with get reset\", func() {\n\n\t\t\t\tval, err := c.GetReset(\"test\", 0)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(val.(string), ShouldResemble, \"test\")\n\n\t\t\t\tConvey(\"If I wait 1100ms, the data should still be there\", func() {\n\t\t\t\t\t<-time.After(1100 * time.Millisecond)\n\t\t\t\t\td, err := c.Get(\"test\")\n\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t\tSo(d.(string), ShouldResemble, \"test\")\n\n\t\t\t\t\tConvey(\"If I wait for another second, the data should be gone\", func() {\n\t\t\t\t\t\t<-time.After(1200 * time.Millisecond)\n\t\t\t\t\t\tval, err := c.Get(\"test\")\n\t\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\t\tSo(val, ShouldBeNil)\n\t\t\t\t\t})\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestSetTimeOut(t *testing.T) {\n\n\t//t.Parallel()\n\n\tConvey(\"Given that I instantiate 1 object with a 2 second timer\", t, func() {\n\t\tc := NewCacheWithExpiration(\"cache\", 2*time.Second)\n\t\terr := c.Add(\"test\", \"test\")\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"When I check the cache after 1 second, the element should be there\", func() {\n\t\t\t<-time.After(1 * time.Second)\n\t\t\td, err := c.Get(\"test\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\terr = c.SetTimeOut(\"test\", 
2*time.Second)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(d.(string), ShouldResemble, \"test\")\n\n\t\t\tConvey(\"When I reset the timer to two more seconds\", func() {\n\n\t\t\t\terr := c.SetTimeOut(\"test\", 2*time.Second)\n\t\t\t\tSo(err, ShouldBeNil)\n\n\t\t\t\tConvey(\"If I wait 1500ms, the data should still be there\", func() {\n\t\t\t\t\t<-time.After(1500 * time.Millisecond)\n\t\t\t\t\td, err := c.Get(\"test\")\n\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t\tSo(d.(string), ShouldResemble, \"test\")\n\n\t\t\t\t\tConvey(\"If I wait for another second, the data should be gone\", func() {\n\t\t\t\t\t\t<-time.After(1200 * time.Millisecond)\n\t\t\t\t\t\tval, err := c.Get(\"test\")\n\t\t\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t\t\t\tSo(val, ShouldBeNil)\n\t\t\t\t\t})\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestCacheWithExpirationNotifier(t *testing.T) {\n\n\t//t.Parallel()\n\n\tfinished := make(chan bool)\n\n\tConvey(\"Given a cache with an expiration notifier\", t, func() {\n\t\tc := NewCacheWithExpirationNotifier(\"cache\", 2*time.Second, func(id interface{}, item interface{}) {\n\t\t\tif id.(string) == \"test\" && item.(string) == \"test\" {\n\t\t\t\tfinished <- true\n\t\t\t} else {\n\t\t\t\tfinished <- false\n\t\t\t}\n\t\t})\n\n\t\tConvey(\"When I add an element\", func() {\n\t\t\toldtime := time.Now()\n\t\t\terr := c.Add(\"test\", \"test\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tConvey(\"I should receive a notification\", func() {\n\t\t\t\tr := <-finished\n\t\t\t\tduration := time.Since(oldtime)\n\t\t\t\tSo(r, ShouldBeTrue)\n\t\t\t\tSo(duration.Seconds(), ShouldBeGreaterThanOrEqualTo, 2.0)\n\t\t\t})\n\t\t})\n\t})\n}\n\n// func TestThousandsOfTimers(t *testing.T) {\n\n// \t//t.Parallel()\n\n// \tConvey(\"Given that I instantiate 10K objects with 2 second timers\", t, func() {\n// \t\tc := NewCacheWithExpiration(\"cache\", 2*time.Second)\n// \t\tfor i := 0; i < 1000; i++ {\n// \t\t\terr := c.Add(i, i)\n// \t\t\tSo(err, ShouldBeNil)\n// \t\t}\n// \t\tConvey(\"After I 
wait for 1 second and add 10K more objects with 2 second timers\", func() {\n// \t\t\t<-time.After(1 * time.Second)\n// \t\t\tfor i := 20000; i < 30000; i++ {\n// \t\t\t\terr := c.Add(i, i)\n// \t\t\t\tSo(err, ShouldBeNil)\n// \t\t\t}\n// \t\t\t//TODO: This test is failing if we wait 3 seconds\n// \t\t\tConvey(\"After I wait for another 4 seconds\", func() {\n// \t\t\t\t<-time.After(5 * time.Second)\n// \t\t\t\tConvey(\"I should have no objects in the cache\", func() {\n// \t\t\t\t\tSo(c.SizeOf(), ShouldEqual, 0)\n// \t\t\t\t})\n// \t\t\t})\n// \t\t})\n// \t})\n// }\n\nfunc TestRemoveWithDelay(t *testing.T) {\n\tConvey(\"Given an initial cache that is non-empty\", t, func() {\n\t\tc := NewCache(\"cache\")\n\t\tc.Add(\"info1\", \"info1\") // nolint\n\t\tc.Add(\"info2\", \"info2\") // nolint\n\t\tc.Add(\"info3\", \"info3\") // nolint\n\t\tc.Add(\"info4\", \"info4\") // nolint\n\n\t\tConvey(\"When I remove a valid entry with a duration of -1\", func() {\n\t\t\terr := c.RemoveWithDelay(\"info1\", -1)\n\t\t\tConvey(\"It should remove the entry right away\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(c.SizeOf(), ShouldEqual, 3)\n\t\t\t\t_, err := c.Get(\"info1\")\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I remove a valid entry with a duration of 2 seconds\", func() {\n\t\t\terr := c.RemoveWithDelay(\"info2\", 2*time.Second)\n\t\t\tConvey(\"It should remove the entry after the delay\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(c.SizeOf(), ShouldEqual, 4)\n\t\t\t\t<-time.After(3 * time.Second)\n\t\t\t\tSo(c.SizeOf(), ShouldEqual, 3)\n\t\t\t\t_, err := c.Get(\"info2\")\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"When I remove an unknown entry with a duration of 2 seconds\", func() {\n\t\t\terr := c.RemoveWithDelay(\"unknown\", 2*time.Second)\n\t\t\tConvey(\"It should return an error\", func() {\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\n\t})\n}\n"
  },
  {
    "path": "utils/cgnetcls/cgnetcls.go",
    "content": "// +build linux\n\n//Package cgnetcls implements functionality to manage classid for processes belonging to different cgroups\npackage cgnetcls\n\nimport (\n\t\"bufio\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync/atomic\"\n\t\"syscall\"\n\n\t\"github.com/kardianos/osext\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\tmarkconstants \"go.aporeto.io/enforcerd/trireme-lib/utils/constants\"\n\t\"go.uber.org/zap\"\n)\n\n// Creategroup creates a cgroup/net_cls structure and writes the allocated classid to the file.\n// To add a new process to this cgroup we need to write to the cgroup file\nfunc (s *netCls) Creategroup(cgroupname string) error {\n\n\tcgroupPath := filepath.Join(cgroupNetClsPath, s.TriremePath, cgroupname)\n\tif _, err := os.Stat(cgroupPath); err == nil {\n\t\treturn nil\n\t}\n\n\tif err := os.MkdirAll(cgroupPath, 0700); err != nil {\n\t\treturn err\n\t}\n\n\t// Write to the notify on release file and release agent files\n\n\tif s.ReleaseAgentPath != \"\" {\n\t\tif err := ioutil.WriteFile(filepath.Join(cgroupNetClsPath, releaseAgentConfFile), []byte(s.ReleaseAgentPath), 0644); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to register a release agent error: %s\", err)\n\t\t}\n\n\t\tif err := ioutil.WriteFile(filepath.Join(cgroupNetClsPath, notifyOnReleaseFile), []byte(\"1\"), 0644); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to write to the notify file: %s\", err)\n\t\t}\n\n\t\tif err := ioutil.WriteFile(filepath.Join(cgroupNetClsPath, s.TriremePath, notifyOnReleaseFile), []byte(\"1\"), 0644); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to write to the notify file: %s\", err)\n\t\t}\n\n\t\tif err := ioutil.WriteFile(filepath.Join(cgroupNetClsPath, s.TriremePath, cgroupname, notifyOnReleaseFile), []byte(\"1\"), 0644); err != nil {\n\t\t\treturn fmt.Errorf(\"unable to write to the notify file: %s\", err)\n\t\t}\n\t}\n\n\treturn nil\n\n}\n\n// AssignRootMark 
assigns the mark at the root of the file system.\nfunc (s *netCls) AssignRootMark(mark uint64) error {\n\n\tif _, err := os.Stat(cgroupNetClsPath); os.IsNotExist(err) {\n\t\treturn fmt.Errorf(\"cgroup does not exist: %s\", err)\n\t}\n\n\t//16 is the base since the mark file expects hexadecimal values\n\tmarkval := \"0x\" + (strconv.FormatUint(mark, 16))\n\n\tif err := ioutil.WriteFile(filepath.Join(cgroupNetClsPath, markFile), []byte(markval), 0644); err != nil {\n\t\treturn fmt.Errorf(\"failed to write to net_cls.classid file for the root cgroup: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// AssignMark writes the mark value to the net_cls.classid file.\nfunc (s *netCls) AssignMark(cgroupname string, mark uint64) error {\n\n\tif _, err := os.Stat(filepath.Join(cgroupNetClsPath, s.TriremePath, cgroupname)); os.IsNotExist(err) {\n\t\treturn fmt.Errorf(\"cgroup does not exist: %s\", err)\n\t}\n\n\t//16 is the base since the mark file expects hexadecimal values\n\tmarkval := \"0x\" + (strconv.FormatUint(mark, 16))\n\n\tif err := ioutil.WriteFile(filepath.Join(cgroupNetClsPath, s.TriremePath, cgroupname, markFile), []byte(markval), 0644); err != nil {\n\t\treturn fmt.Errorf(\"failed to write to net_cls.classid file for new cgroup: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// AddProcess adds the process to the net_cls group\nfunc (s *netCls) AddProcess(cgroupname string, pid int) error {\n\n\tif _, err := os.Stat(filepath.Join(cgroupNetClsPath, s.TriremePath, cgroupname)); os.IsNotExist(err) {\n\t\treturn fmt.Errorf(\"cannot add process. 
cgroup does not exist: %s\", err)\n\t}\n\n\tPID := []byte(strconv.Itoa(pid))\n\t// If the process no longer exists, there is nothing to add\n\tif err := syscall.Kill(pid, 0); err != nil {\n\t\treturn nil\n\t}\n\n\tif err := ioutil.WriteFile(filepath.Join(cgroupNetClsPath, s.TriremePath, cgroupname, procs), PID, 0644); err != nil {\n\t\treturn fmt.Errorf(\"cannot add process: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// RemoveProcess removes the process from the cgroup by writing its pid to the\n// cgroup.procs file of the root net_cls cgroup\nfunc (s *netCls) RemoveProcess(cgroupname string, pid int) error {\n\n\tif _, err := os.Stat(filepath.Join(cgroupNetClsPath, s.TriremePath, cgroupname)); os.IsNotExist(err) {\n\t\treturn fmt.Errorf(\"cannot clean up process. cgroup does not exist: %s\", err)\n\t}\n\n\tdata, err := ioutil.ReadFile(filepath.Join(cgroupNetClsPath, procs))\n\tif err != nil {\n\t\treturn fmt.Errorf(\"cannot cleanup process: %s\", err)\n\t}\n\tif !strings.Contains(string(data), strconv.Itoa(pid)) {\n\t\treturn errors.New(\"cannot cleanup process. process is not a part of this cgroup\")\n\t}\n\n\tif err := ioutil.WriteFile(filepath.Join(cgroupNetClsPath, procs), []byte(strconv.Itoa(pid)), 0644); err != nil {\n\t\treturn fmt.Errorf(\"cannot clean up process: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// DeleteCgroup assumes the cgroup is already empty and destroys the directory structure.\n// It will return an error if the group is not empty. 
Use RemoveProcess to remove all processes\n// before we try deletion.\nfunc (s *netCls) DeleteCgroup(cgroupname string) error {\n\n\tif _, err := os.Stat(filepath.Join(cgroupNetClsPath, s.TriremePath, cgroupname)); os.IsNotExist(err) {\n\t\tzap.L().Debug(\"Group already deleted\", zap.Error(err))\n\t\treturn nil\n\t}\n\n\tif err := os.Remove(filepath.Join(cgroupNetClsPath, s.TriremePath, cgroupname)); err != nil {\n\t\treturn fmt.Errorf(\"unable to delete cgroup %s: %s\", cgroupname, err)\n\t}\n\n\treturn nil\n}\n\n// Deletebasepath removes the base aporeto directory, whose removal arrives as a separate event when we are not managing any processes\nfunc (s *netCls) Deletebasepath(cgroupName string) bool {\n\n\tif cgroupName == s.TriremePath {\n\t\tif err := os.Remove(filepath.Join(cgroupNetClsPath, cgroupName)); err != nil {\n\t\t\tzap.L().Error(\"Error when removing Trireme Base Path\", zap.Error(err))\n\t\t}\n\t\treturn true\n\t}\n\n\treturn false\n}\n\n// ListCgroupProcesses returns the list of processes in the cgroup\nfunc (s *netCls) ListCgroupProcesses(cgroupname string) ([]string, error) {\n\n\tif _, err := os.Stat(filepath.Join(cgroupNetClsPath, s.TriremePath, cgroupname)); os.IsNotExist(err) {\n\t\treturn []string{}, fmt.Errorf(\"cgroup %s does not exist: %s\", cgroupname, err)\n\t}\n\n\tdata, err := ioutil.ReadFile(filepath.Join(cgroupNetClsPath, s.TriremePath, cgroupname, \"cgroup.procs\"))\n\tif err != nil {\n\t\treturn []string{}, fmt.Errorf(\"cannot read procs file: %s\", err)\n\t}\n\n\tprocs := []string{}\n\n\tfor _, line := range strings.Split(string(data), \"\\n\") {\n\t\tif len(line) > 0 {\n\t\t\tprocs = append(procs, line)\n\t\t}\n\t}\n\n\treturn procs, nil\n}\n\n// ListAllCgroups returns a list of the cgroups that are managed in the Trireme path\nfunc (s *netCls) ListAllCgroups(path string) []string {\n\n\tcgroups, err := ioutil.ReadDir(filepath.Join(cgroupNetClsPath, s.TriremePath, path))\n\tif err != nil {\n\t\treturn []string{}\n\t}\n\n\tnames := 
make([]string, len(cgroups))\n\tfor i := 0; i < len(cgroups); i++ {\n\t\tnames[i] = cgroups[i].Name()\n\t}\n\n\treturn names\n}\n\nfunc mountCgroupController() error {\n\tmounts, err := ioutil.ReadFile(\"/proc/mounts\")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"failed to read /proc/mounts: %s\", err)\n\t}\n\n\tsc := bufio.NewScanner(strings.NewReader(string(mounts)))\n\tvar cgroupMount string\n\tfor sc.Scan() {\n\t\tif strings.HasPrefix(sc.Text(), \"cgroup\") {\n\t\t\tcgroupMount = strings.Split(sc.Text(), \" \")[1]\n\t\t\tcgroupMount = cgroupMount[:strings.LastIndex(cgroupMount, \"/\")]\n\t\t\tif strings.Contains(sc.Text(), \"net_cls\") {\n\t\t\t\tcgroupNetClsPath = strings.Split(sc.Text(), \" \")[1]\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\t}\n\n\tif len(cgroupMount) == 0 {\n\t\treturn errors.New(\"failed to find cgroup mount points in /proc/mounts\")\n\t}\n\n\t// net_cls is not mounted yet: create the directory and mount it under the cgroup root\n\tcgroupNetClsPath = cgroupMount + \"/net_cls\"\n\n\tif err := os.MkdirAll(cgroupNetClsPath, 0700); err != nil {\n\t\treturn fmt.Errorf(\"failed to create net_cls directory: %s\", err)\n\t}\n\n\tif err := syscall.Mount(\"cgroup\", cgroupNetClsPath, \"cgroup\", 0, \"net_cls,net_prio\"); err != nil {\n\t\treturn fmt.Errorf(\"failed to mount net_cls group: %s\", err)\n\t}\n\n\treturn nil\n}\n\n// CgroupMemberCount returns the count of processes in a cgroup\n// TODO: looks like dead code\nfunc CgroupMemberCount(cgroupName string) int {\n\n\tif _, err := os.Stat(filepath.Join(cgroupNetClsPath, cgroupName)); os.IsNotExist(err) {\n\t\treturn 0\n\t}\n\n\tdata, err := ioutil.ReadFile(filepath.Join(cgroupNetClsPath, cgroupName, \"cgroup.procs\"))\n\tif err != nil {\n\t\treturn 0\n\t}\n\n\tcount := 0\n\tfor _, line := range strings.Split(string(data), \"\\n\") {\n\t\tif len(strings.TrimSpace(line)) > 0 {\n\t\t\tcount++\n\t\t}\n\t}\n\n\treturn count\n}\n\n// NewDockerCgroupNetController returns a handle to call functions on the cgroup net_cls controller\nfunc NewDockerCgroupNetController() Cgroupnetcls {\n\n\tcontroller := &netCls{\n\t\tmarkchan:         make(chan uint64),\n\t\tReleaseAgentPath: 
\"\",\n\t\tTriremePath:      common.TriremeDockerHostNetwork,\n\t}\n\n\treturn controller\n}\n\n//NewCgroupNetController returns a handle to call functions on the cgroup net_cls controller\nfunc NewCgroupNetController(triremepath string, releasePath string) Cgroupnetcls {\n\n\tbinpath, _ := osext.Executable()\n\tcontroller := &netCls{\n\t\tmarkchan:         make(chan uint64),\n\t\tReleaseAgentPath: binpath,\n\t\tTriremePath:      \"\",\n\t}\n\n\tif releasePath != \"\" {\n\t\tcontroller.ReleaseAgentPath = releasePath\n\t}\n\n\tif triremepath != \"\" {\n\t\tcontroller.TriremePath = triremepath\n\t}\n\n\treturn controller\n}\n\n// MarkVal returns a new Mark Value\nfunc MarkVal() uint64 {\n\tret := atomic.AddUint64(&markval, 1)\n\t// bound to happen if someone has a long running enforcer\n\t// and uses a lot of LinuxProcess PUs\n\tif ret == markconstants.EnforcerCgroupMark {\n\t\treturn atomic.AddUint64(&markval, 1)\n\t}\n\treturn ret\n}\n"
  },
  {
    "path": "utils/cgnetcls/cgnetcls_osx.go",
    "content": "// +build darwin\n\n//Package cgnetcls implements functionality to manage classid for processes belonging to different cgroups\npackage cgnetcls\n\n//Creategroup creates a cgroup/net_cls structure and writes the allocated classid to the file.\n//To add a new process to this cgroup we need to write to the cgroup file\nfunc (s *netCls) Creategroup(cgroupname string) error {\n\treturn nil\n}\n\n//AssignMark writes the mark value to net_cls.classid file.\nfunc (s *netCls) AssignMark(cgroupname string, mark uint64) error {\n\treturn nil\n}\n\n// AssignRootMark assings the value at the root.\nfunc (s *netCls) AssignRootMark(mark uint64) error {\n\treturn nil\n}\n\n//AddProcess adds the process to the net_cls group\nfunc (s *netCls) AddProcess(cgroupname string, pid int) error {\n\treturn nil\n}\n\n//RemoveProcess removes the process from the cgroup by writing the pid to the\n//top of net_cls cgroup cgroup.procs\nfunc (s *netCls) RemoveProcess(cgroupname string, pid int) error {\n\treturn nil\n}\n\n// DeleteCgroup removes the cgroup\nfunc (s *netCls) DeleteCgroup(cgroupname string) error {\n\treturn nil\n}\n\nfunc (s *netCls) Deletebasepath(contextID string) bool {\n\treturn true\n}\n\nfunc (s *netCls) GetCgroupList() {\n\n}\n\n// ListCgroupProcesses lists the processes of the cgroup\nfunc (s *netCls) ListCgroupProcesses(cgroupname string) ([]string, error) {\n\treturn []string{}, nil\n}\n\n// ListAllCgroups returns a list of the cgroups that are managed in the Trireme path\nfunc (s *netCls) ListAllCgroups(path string) []string {\n\treturn []string{}\n}\n\n//NewCgroupNetController returns a handle to call functions on the cgroup net_cls controller\nfunc NewCgroupNetController(triremepath string, releasePath string) Cgroupnetcls {\n\treturn &netCls{}\n}\n\n//NewDockerCgroupNetController returns a handle to call functions on the cgroup net_cls controller\nfunc NewDockerCgroupNetController() Cgroupnetcls {\n\treturn &netCls{}\n}\n\n// MarkVal returns a new 
Mark\nfunc MarkVal() uint64 {\n\treturn 103\n}\n"
  },
  {
    "path": "utils/cgnetcls/cgnetcls_test.go",
    "content": "// +build linux\n\npackage cgnetcls\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"math/rand\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\t\"syscall\"\n\t\"testing\"\n\n\tmarkconstants \"go.aporeto.io/enforcerd/trireme-lib/utils/constants\"\n)\n\n// This package does not use interfaces/objects from other trireme component so we don't need to mock anything here\n// We will create actual system objects\n// This can be tested only on linux since the directory structure will not exist anywhere else\n// Tests here will be skipped if you don't run as root\n\nconst (\n\ttestcgroupname       = \"/test\"\n\ttestcgroupnameformat = \"test\"\n\ttestmark             = 100\n\ttestRootUser         = \"root\"\n)\n\nfunc cleanupnetclsgroup() {\n\tdata, _ := ioutil.ReadFile(filepath.Join(cgroupNetClsPath, testcgroupname, procs))\n\tfmt.Println(string(data))\n\t_ = ioutil.WriteFile(filepath.Join(cgroupNetClsPath, procs), data, 0644)\n\t_ = os.RemoveAll(filepath.Join(cgroupNetClsPath, testcgroupname))\n}\n\nfunc TestCreategroup(t *testing.T) {\n\n\tif os.Getenv(\"USER\") != testRootUser {\n\t\tt.SkipNow()\n\t}\n\n\tcg := NewCgroupNetController(\"/tmp\", \"\")\n\tif err := cg.Creategroup(testcgroupnameformat); err != nil {\n\t\t//Check if all the files required are created\n\t\tt.Errorf(\"Failed to create group error returned %s\", err.Error())\n\t}\n\n\tdefer cleanupnetclsgroup()\n\n\tif _, err := ioutil.ReadFile(filepath.Join(cgroupNetClsPath, releaseAgentConfFile)); err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\tt.Errorf(\"ReleaseAgentConf File does not exist.Cgroup mount failed\")\n\t\t\tt.SkipNow()\n\t\t}\n\t}\n\n\tif val, err := ioutil.ReadFile(filepath.Join(cgroupNetClsPath, notifyOnReleaseFile)); err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\tt.Errorf(\"Notify on release file does not exist.Cgroup mount failed\")\n\t\t\tt.SkipNow()\n\t\t}\n\t} else {\n\t\tif strings.TrimSpace(string(val)) != \"1\" {\n\t\t\tt.Errorf(\"Notify release file in 
base net_cls not programmed\")\n\t\t\tt.SkipNow()\n\t\t}\n\t}\n\n\tif val, err := ioutil.ReadFile(filepath.Join(cgroupNetClsPath, notifyOnReleaseFile)); err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\tt.Errorf(\"Notify on release file does not exist. Cgroup mount failed\")\n\t\t\tt.SkipNow()\n\t\t}\n\t} else {\n\t\tif strings.TrimSpace(string(val)) != \"1\" {\n\t\t\tt.Errorf(\"Notify release file in aporeto base dir /sys/fs/cgroup/aporeto not programmed\")\n\t\t\tt.SkipNow()\n\t\t}\n\t}\n\n\tif val, err := ioutil.ReadFile(filepath.Join(cgroupNetClsPath, testcgroupname, notifyOnReleaseFile)); err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\tt.Errorf(\"Notify on release file does not exist. Cgroup mount failed\")\n\t\t\tt.SkipNow()\n\t\t}\n\t} else {\n\t\tif strings.TrimSpace(string(val)) != \"1\" {\n\t\t\tt.Errorf(\"Notify release file in cgroup not programmed\")\n\t\t\tt.SkipNow()\n\t\t}\n\t}\n}\n\nfunc TestAssignMark(t *testing.T) {\n\tcg := NewCgroupNetController(\"/tmp\", \"\")\n\tif os.Getenv(\"USER\") != testRootUser {\n\t\tt.SkipNow()\n\t}\n\t//Assigning mark before creating group\n\tif err := cg.AssignMark(testcgroupname, testmark); err == nil {\n\t\tt.Errorf(\"Assign mark succeeded without a valid group being present\")\n\t\tt.SkipNow()\n\t}\n\tif err := cg.Creategroup(testcgroupnameformat); err != nil {\n\t\tt.Errorf(\"Error creating cgroup %s\", err)\n\t\tt.SkipNow()\n\t}\n\n\tdefer cleanupnetclsgroup()\n\n\tif err := cg.AssignMark(testcgroupnameformat, testmark); err != nil {\n\t\tt.Errorf(\"Failed to assign mark, error = %s\", err.Error())\n\t\tt.SkipNow()\n\t} else {\n\t\tdata, _ := ioutil.ReadFile(filepath.Join(cgroupNetClsPath, testcgroupname, markFile))\n\t\tu, err := strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Non-integer mark value in classid file\")\n\t\t\tt.SkipNow()\n\t\t}\n\t\tif u != testmark {\n\t\t\tt.Errorf(\"Unexpected mark val expected %d, read %d\", testmark, 
u)\n\t\t\tt.SkipNow()\n\t\t}\n\t}\n}\n\nfunc TestAddProcess(t *testing.T) {\n\t//hopefully this pid does not exist\n\tpid := 1<<31 - 1\n\tr := rand.New(rand.NewSource(23))\n\tif os.Getenv(\"USER\") != testRootUser {\n\t\tt.SkipNow()\n\t}\n\tcg := NewCgroupNetController(\"/tmp\", \"\")\n\t//AddProcess to a non-existent group\n\tif err := cg.AddProcess(testcgroupname, os.Getpid()); err == nil {\n\t\tt.Errorf(\"Process successfully added to a non-existent group\")\n\t\tt.SkipNow()\n\t}\n\tif err := cg.Creategroup(testcgroupnameformat); err != nil {\n\t\tt.Errorf(\"Error creating cgroup\")\n\t\tt.SkipNow()\n\t}\n\n\tdefer cleanupnetclsgroup()\n\n\t//Add a non-existent process\n\t//loop to find non-existent pid\n\tfor {\n\t\tif err := syscall.Kill(pid, 0); err != nil {\n\t\t\tbreak\n\t\t}\n\t\tpid = r.Int()\n\n\t}\n\tif err := cg.AddProcess(testcgroupnameformat, pid); err != nil {\n\t\tt.Errorf(\"Unexpected error returned for non-existent process\")\n\t\tt.SkipNow()\n\t}\n\tpid = 1 //Guaranteed to be present\n\tif err := cg.AddProcess(testcgroupname, pid); err != nil {\n\t\tt.Errorf(\"Failed to add process %s\", err.Error())\n\t\tt.SkipNow()\n\t} else {\n\t\t//This directory structure should not be deleted\n\t\tif err := os.RemoveAll(filepath.Join(cgroupNetClsPath, testcgroupname)); err == nil {\n\t\t\tt.Errorf(\"Process not added to cgroup\")\n\t\t\tt.SkipNow()\n\t\t}\n\t}\n}\n\nfunc TestRemoveProcess(t *testing.T) {\n\tif os.Getenv(\"USER\") != testRootUser {\n\t\tt.SkipNow()\n\t}\n\tcg := NewCgroupNetController(\"/tmp\", \"\")\n\t//Removing process from non-existent group\n\tif err := cg.RemoveProcess(testcgroupname, 1); err == nil {\n\t\tt.Errorf(\"RemoveProcess succeeded without a valid group being present\")\n\t\tt.SkipNow()\n\t}\n\tif err := cg.Creategroup(testcgroupnameformat); err != nil {\n\t\tt.Errorf(\"Error creating cgroup\")\n\t\tt.SkipNow()\n\t}\n\n\tdefer cleanupnetclsgroup()\n\n\tif err := cg.AddProcess(testcgroupname, 1); err != nil 
{\n\t\tt.Errorf(\"Error adding process\")\n\t\tt.SkipNow()\n\t}\n\tif err := cg.RemoveProcess(testcgroupnameformat, 10); err == nil {\n\t\tt.Errorf(\"Removed process which was not a part of this cgroup\")\n\t\tt.SkipNow()\n\t}\n\tif err := cg.RemoveProcess(testcgroupname, 1); err != nil {\n\t\tt.Errorf(\"Failed to remove process %s\", err.Error())\n\t\tt.SkipNow()\n\t}\n}\n\nfunc TestDeleteCgroup(t *testing.T) {\n\tif os.Getenv(\"USER\") != testRootUser {\n\t\tt.SkipNow()\n\t}\n\tcg := NewCgroupNetController(\"/tmp\", \"\")\n\t//Removing process from non-existent group\n\tif err := cg.DeleteCgroup(testcgroupnameformat); err != nil {\n\t\tt.Errorf(\"Non-existent cgroup delelte returned an error\")\n\t\tt.SkipNow()\n\t}\n\tif err := cg.Creategroup(testcgroupname); err != nil {\n\t\tt.Errorf(\"Failed to create cgroup %s\", err.Error())\n\t\tt.SkipNow()\n\t}\n\n\tdefer cleanupnetclsgroup()\n\n\tif err := cg.DeleteCgroup(testcgroupname); err != nil {\n\t\tt.Errorf(\"Failed to delete cgroup %s\", err.Error())\n\t\tt.SkipNow()\n\t}\n\n}\n\nfunc TestDeleteBasePath(t *testing.T) {\n\tif os.Getenv(\"USER\") != testRootUser {\n\t\tt.SkipNow()\n\t}\n\tcg := NewCgroupNetController(\"/tmp\", \"\")\n\t//Removing process from non-existent group\n\tif err := cg.DeleteCgroup(testcgroupname); err != nil {\n\t\tt.Errorf(\"Delete of group failed %s\", err.Error())\n\t}\n\n\tdefer cleanupnetclsgroup()\n\n\tcg.Deletebasepath(testcgroupnameformat)\n\t_, err := os.Stat(filepath.Join(cgroupNetClsPath, testcgroupname))\n\tif err == nil {\n\t\tt.Errorf(\"Delete of cgroup from system failed\")\n\t\tt.SkipNow()\n\t}\n}\n\nfunc TestListCgroupProcesses(t *testing.T) {\n\tpid := 1<<31 - 1\n\tr := rand.New(rand.NewSource(23))\n\tif os.Getenv(\"USER\") != testRootUser {\n\t\tt.SkipNow()\n\t}\n\tcg := NewCgroupNetController(\"/tmp\", \"\")\n\n\t_, err := cg.ListCgroupProcesses(testcgroupname)\n\tif err == nil {\n\t\tt.Errorf(\"No process found but succeeded\")\n\t}\n\t//AddProcess to a non-existent 
group\n\tif err = cg.AddProcess(testcgroupname, os.Getpid()); err == nil {\n\t\tt.Errorf(\"Process successfully added to a non-existent group\")\n\t\tt.SkipNow()\n\t}\n\tif err = cg.Creategroup(testcgroupname); err != nil {\n\t\tt.Errorf(\"Error creating cgroup\")\n\t\tt.SkipNow()\n\t}\n\n\tdefer cleanupnetclsgroup()\n\n\t//Add a non-existent process\n\t//loop to find non-existent pid\n\tfor {\n\t\tif err = syscall.Kill(pid, 0); err != nil {\n\t\t\tbreak\n\t\t}\n\t\tpid = r.Int()\n\n\t}\n\n\tpid = 1 //Guaranteed to be present\n\tif err = cg.AddProcess(testcgroupname, pid); err != nil {\n\t\tt.Errorf(\"Failed to add process %s\", err.Error())\n\t\tt.SkipNow()\n\t} else {\n\t\t//This directory structure should not be deleted\n\t\tif err = os.RemoveAll(filepath.Join(cgroupNetClsPath, testcgroupname)); err == nil {\n\t\t\tt.Errorf(\"Process not added to cgroup\")\n\t\t\tt.SkipNow()\n\t\t}\n\t}\n\n\tprocs, err := cg.ListCgroupProcesses(testcgroupname)\n\tif err != nil || len(procs) == 0 || procs[0] != \"1\" {\n\t\tt.Errorf(\"No process found: %v\", err)\n\t}\n}\n\nfunc TestMarkVal(t *testing.T) {\n\ttype test struct {\n\t\tname string\n\t\twant uint64\n\t}\n\t// this is kind of silly, but as this is an atomic counter,\n\t// there is no other way than to really count up and compare\n\tgenerateTests := func() []test {\n\t\tret := []test{}\n\t\ti := 1\n\t\tfor {\n\t\t\twant := i + markconstants.Initialmarkval\n\t\t\t// this is the exception, in this case we should get plus one\n\t\t\tif want >= markconstants.EnforcerCgroupMark {\n\t\t\t\twant++\n\t\t\t}\n\t\t\tret = append(ret, test{\n\t\t\t\tname: strconv.Itoa(i + markconstants.Initialmarkval),\n\t\t\t\twant: uint64(want),\n\t\t\t})\n\t\t\t// abort a couple tests after that\n\t\t\tif want == markconstants.EnforcerCgroupMark+2 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\ti++\n\t\t}\n\t\treturn ret\n\t}\n\tfor _, tt := range generateTests() {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot := MarkVal()\n\t\t\tt.Logf(\"test %s, markVal %d\", tt.name, 
got)\n\t\t\tif got != tt.want {\n\t\t\t\tt.Errorf(\"MarkVal() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "utils/cgnetcls/cgnetcls_windows.go",
    "content": "// +build windows\n\npackage cgnetcls\n\nimport (\n\t\"sync/atomic\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/constants\"\n)\n\ntype netCls struct {\n}\n\nvar markval uint64 = constants.Initialmarkval\n\n// MarkVal returns a new Mark Value\nfunc MarkVal() uint64 {\n\treturn atomic.AddUint64(&markval, 1)\n}\n\n//Creategroup creates a cgroup/net_cls structure and writes the allocated classid to the file.\n//To add a new process to this cgroup we need to write to the cgroup file\nfunc (s *netCls) Creategroup(cgroupname string) error {\n\treturn nil\n}\n\n//AssignMark writes the mark value to net_cls.classid file.\nfunc (s *netCls) AssignMark(cgroupname string, mark uint64) error {\n\treturn nil\n}\n\n// AssignRootMark assings the value at the root.\nfunc (s *netCls) AssignRootMark(mark uint64) error {\n\treturn nil\n}\n\n//AddProcess adds the process to the net_cls group\nfunc (s *netCls) AddProcess(cgroupname string, pid int) error {\n\treturn nil\n}\n\n//RemoveProcess removes the process from the cgroup by writing the pid to the\n//top of net_cls cgroup cgroup.procs\nfunc (s *netCls) RemoveProcess(cgroupname string, pid int) error {\n\treturn nil\n}\n\n// DeleteCgroup removes the cgroup\nfunc (s *netCls) DeleteCgroup(cgroupname string) error {\n\treturn nil\n}\n\nfunc (s *netCls) Deletebasepath(contextID string) bool {\n\treturn true\n}\n\nfunc (s *netCls) GetCgroupList() {\n\n}\n\n// ListCgroupProcesses lists the processes of the cgroup\nfunc (s *netCls) ListCgroupProcesses(cgroupname string) ([]string, error) {\n\treturn []string{}, nil\n}\n\n// ListAllCgroups returns a list of the cgroups that are managed in the Trireme path\nfunc (s *netCls) ListAllCgroups(path string) []string {\n\treturn []string{}\n}\n\n//NewCgroupNetController returns a handle to call functions on the cgroup net_cls controller\nfunc NewCgroupNetController(triremepath string, releasePath string) Cgroupnetcls {\n\treturn &netCls{}\n}\n\n//NewDockerCgroupNetController 
returns a handle to call functions on the cgroup net_cls controller\nfunc NewDockerCgroupNetController() Cgroupnetcls {\n\treturn &netCls{}\n}\n\n// ConfigureNetClsPath does nothing for windows\nfunc ConfigureNetClsPath(path string) {\n}\n"
  },
  {
    "path": "utils/cgnetcls/constants.go",
    "content": "package cgnetcls\n\nconst (\n\t// CgroupNameTag  identifies the cgroup name\n\tCgroupNameTag = \"@cgroup_name\"\n\t// CgroupMarkTag identifies the cgroup mark value\n\tCgroupMarkTag = \"@cgroup_mark\"\n\t// PortTag is the tag for the port values\n\tPortTag = \"port\"\n)\n"
  },
  {
    "path": "utils/cgnetcls/constants_nonwindows.go",
    "content": "// +build !windows\n\npackage cgnetcls\n\nconst (\n\tmarkFile             = \"/net_cls.classid\"\n\tprocs                = \"/cgroup.procs\"\n\treleaseAgentConfFile = \"/release_agent\"     // nolint: varcheck\n\tnotifyOnReleaseFile  = \"/notify_on_release\" // nolint: varcheck\n)\n"
  },
  {
    "path": "utils/cgnetcls/interfaces.go",
    "content": "package cgnetcls\n\n//Cgroupnetcls interface exposing methods that can be called from outside to manage net_cls cgroups\ntype Cgroupnetcls interface {\n\tCreategroup(cgroupname string) error\n\tAssignMark(cgroupname string, mark uint64) error\n\tAssignRootMark(mark uint64) error\n\tAddProcess(cgroupname string, pid int) error\n\tRemoveProcess(cgroupname string, pid int) error\n\tDeleteCgroup(cgroupname string) error\n\tDeletebasepath(contextID string) bool\n\tListCgroupProcesses(cgroupname string) ([]string, error)\n\tListAllCgroups(path string) []string\n}\n"
  },
  {
    "path": "utils/cgnetcls/mockcgnetcls/mockcgnetcls.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: utils/cgnetcls/interfaces.go\n\n// Package mockcgnetcls is a generated GoMock package.\npackage mockcgnetcls\n\nimport (\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n)\n\n// MockCgroupnetcls is a mock of Cgroupnetcls interface\n// nolint\ntype MockCgroupnetcls struct {\n\tctrl     *gomock.Controller\n\trecorder *MockCgroupnetclsMockRecorder\n}\n\n// MockCgroupnetclsMockRecorder is the mock recorder for MockCgroupnetcls\n// nolint\ntype MockCgroupnetclsMockRecorder struct {\n\tmock *MockCgroupnetcls\n}\n\n// NewMockCgroupnetcls creates a new mock instance\n// nolint\nfunc NewMockCgroupnetcls(ctrl *gomock.Controller) *MockCgroupnetcls {\n\tmock := &MockCgroupnetcls{ctrl: ctrl}\n\tmock.recorder = &MockCgroupnetclsMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockCgroupnetcls) EXPECT() *MockCgroupnetclsMockRecorder {\n\treturn m.recorder\n}\n\n// Creategroup mocks base method\n// nolint\nfunc (m *MockCgroupnetcls) Creategroup(cgroupname string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Creategroup\", cgroupname)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// Creategroup indicates an expected call of Creategroup\n// nolint\nfunc (mr *MockCgroupnetclsMockRecorder) Creategroup(cgroupname interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Creategroup\", reflect.TypeOf((*MockCgroupnetcls)(nil).Creategroup), cgroupname)\n}\n\n// AssignMark mocks base method\n// nolint\nfunc (m *MockCgroupnetcls) AssignMark(cgroupname string, mark uint64) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"AssignMark\", cgroupname, mark)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// AssignMark indicates an expected call of AssignMark\n// nolint\nfunc (mr *MockCgroupnetclsMockRecorder) AssignMark(cgroupname, mark 
interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AssignMark\", reflect.TypeOf((*MockCgroupnetcls)(nil).AssignMark), cgroupname, mark)\n}\n\n// AssignRootMark mocks base method\n// nolint\nfunc (m *MockCgroupnetcls) AssignRootMark(mark uint64) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"AssignRootMark\", mark)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// AssignRootMark indicates an expected call of AssignRootMark\n// nolint\nfunc (mr *MockCgroupnetclsMockRecorder) AssignRootMark(mark interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AssignRootMark\", reflect.TypeOf((*MockCgroupnetcls)(nil).AssignRootMark), mark)\n}\n\n// AddProcess mocks base method\n// nolint\nfunc (m *MockCgroupnetcls) AddProcess(cgroupname string, pid int) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"AddProcess\", cgroupname, pid)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// AddProcess indicates an expected call of AddProcess\n// nolint\nfunc (mr *MockCgroupnetclsMockRecorder) AddProcess(cgroupname, pid interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"AddProcess\", reflect.TypeOf((*MockCgroupnetcls)(nil).AddProcess), cgroupname, pid)\n}\n\n// RemoveProcess mocks base method\n// nolint\nfunc (m *MockCgroupnetcls) RemoveProcess(cgroupname string, pid int) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"RemoveProcess\", cgroupname, pid)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// RemoveProcess indicates an expected call of RemoveProcess\n// nolint\nfunc (mr *MockCgroupnetclsMockRecorder) RemoveProcess(cgroupname, pid interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RemoveProcess\", reflect.TypeOf((*MockCgroupnetcls)(nil).RemoveProcess), cgroupname, pid)\n}\n\n// DeleteCgroup mocks base 
method\n// nolint\nfunc (m *MockCgroupnetcls) DeleteCgroup(cgroupname string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"DeleteCgroup\", cgroupname)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// DeleteCgroup indicates an expected call of DeleteCgroup\n// nolint\nfunc (mr *MockCgroupnetclsMockRecorder) DeleteCgroup(cgroupname interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"DeleteCgroup\", reflect.TypeOf((*MockCgroupnetcls)(nil).DeleteCgroup), cgroupname)\n}\n\n// Deletebasepath mocks base method\n// nolint\nfunc (m *MockCgroupnetcls) Deletebasepath(contextID string) bool {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Deletebasepath\", contextID)\n\tret0, _ := ret[0].(bool)\n\treturn ret0\n}\n\n// Deletebasepath indicates an expected call of Deletebasepath\n// nolint\nfunc (mr *MockCgroupnetclsMockRecorder) Deletebasepath(contextID interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Deletebasepath\", reflect.TypeOf((*MockCgroupnetcls)(nil).Deletebasepath), contextID)\n}\n\n// ListCgroupProcesses mocks base method\n// nolint\nfunc (m *MockCgroupnetcls) ListCgroupProcesses(cgroupname string) ([]string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ListCgroupProcesses\", cgroupname)\n\tret0, _ := ret[0].([]string)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ListCgroupProcesses indicates an expected call of ListCgroupProcesses\n// nolint\nfunc (mr *MockCgroupnetclsMockRecorder) ListCgroupProcesses(cgroupname interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ListCgroupProcesses\", reflect.TypeOf((*MockCgroupnetcls)(nil).ListCgroupProcesses), cgroupname)\n}\n\n// ListAllCgroups mocks base method\n// nolint\nfunc (m *MockCgroupnetcls) ListAllCgroups(path string) []string {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ListAllCgroups\", 
path)\n\tret0, _ := ret[0].([]string)\n\treturn ret0\n}\n\n// ListAllCgroups indicates an expected call of ListAllCgroups\n// nolint\nfunc (mr *MockCgroupnetclsMockRecorder) ListAllCgroups(path interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ListAllCgroups\", reflect.TypeOf((*MockCgroupnetcls)(nil).ListAllCgroups), path)\n}\n"
  },
  {
    "path": "utils/cgnetcls/netcls.go",
    "content": "// +build !windows\n\npackage cgnetcls\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/common\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/constants\"\n\t\"go.uber.org/zap\"\n)\n\n// receiver definition.\ntype netCls struct {\n\tmarkchan         chan uint64 // nolint: structcheck\n\tReleaseAgentPath string\n\tTriremePath      string\n}\n\nvar (\n\tcgroupNetClsPath string\n\tmarkval          uint64 = constants.Initialmarkval // nolint: varcheck\n)\n\n// ConfigureNetClsPath updates the cgroupNetCls path\nfunc ConfigureNetClsPath(path string) {\n\tcgroupNetClsPath = path\n}\n\n// GetCgroupList geta list of all cgroup names\n// TODO: only used in autoport detection, and a bad usage as well\nfunc GetCgroupList() []string {\n\tvar cgroupList []string\n\n\t// iterate over our different base paths from the different cgroup base paths\n\tfor _, baseCgroupPath := range []string{common.TriremeCgroupPath, common.TriremeDockerHostNetwork} {\n\t\tfilelist, err := ioutil.ReadDir(filepath.Join(cgroupNetClsPath, baseCgroupPath))\n\t\tif err == nil {\n\t\t\tfor _, file := range filelist {\n\t\t\t\tif file.IsDir() {\n\t\t\t\t\tcgroupList = append(cgroupList, filepath.Join(baseCgroupPath, file.Name()))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn cgroupList\n}\n\n// ListCgroupProcesses lists the cgroups that trireme has created\n// TODO: only used in autoport detection, and a bad usage as well\nfunc ListCgroupProcesses(cgroupname string) ([]string, error) {\n\n\tif _, err := os.Stat(filepath.Join(cgroupNetClsPath, cgroupname)); os.IsNotExist(err) {\n\t\treturn []string{}, fmt.Errorf(\"cgroup %s does not exist: %s\", cgroupname, err)\n\t}\n\n\tdata, err := ioutil.ReadFile(filepath.Join(cgroupNetClsPath, cgroupname, \"cgroup.procs\"))\n\tif err != nil {\n\t\treturn []string{}, fmt.Errorf(\"cannot read procs file: %s\", err)\n\t}\n\n\tprocs := []string{}\n\n\tfor _, line := range 
strings.Split(string(data), \"\\n\") {\n\t\tif len(line) > 0 {\n\t\t\tprocs = append(procs, line)\n\t\t}\n\t}\n\n\treturn procs, nil\n}\n\n// GetAssignedMarkVal -- returns the mark val assigned to the group\n// TODO: looks like dead code\nfunc GetAssignedMarkVal(cgroupName string) string {\n\tmark, err := ioutil.ReadFile(filepath.Join(cgroupNetClsPath, cgroupName, markFile))\n\n\tif err != nil || len(mark) < 1 {\n\t\tzap.L().Error(\"Unable to read markval for cgroup\", zap.String(\"Cgroup Name\", cgroupName), zap.Error(err))\n\t\treturn \"\"\n\t}\n\treturn string(mark[:len(mark)-1])\n}\n"
  },
  {
    "path": "utils/constants/constants.go",
    "content": "package constants\n\nconst (\n\t//Initialmarkval is the start of mark values we assign to cgroup\n\tInitialmarkval = 100\n\t// EnforcerCgroupMark is the net_cls.classid that is programmed for the cgroup that all enforcer processes belong to\n\tEnforcerCgroupMark = 1536\n\t//PacketMarkToSetConnmark is used to set mark on packet when repeating a packet through nfq.\n\tPacketMarkToSetConnmark = uint32(0x42)\n\t//DefaultInputMark is used to set mark on packet when repeating a packet through nfq.\n\tDefaultInputMark = uint32(0x43)\n\t// DefaultConnMark is the default conn mark for all data packets\n\tDefaultConnMark = uint32(0xEEEE)\n\t// DefaultExternalConnMark is the default conn mark for all data packets\n\tDefaultExternalConnMark = uint32(0xEEEF)\n\t// DeleteConnmark is the mark used to trigger udp handshake.\n\tDeleteConnmark = uint32(0xABCD)\n\t// DropConnmark is used to drop packets identified by acl's\n\tDropConnmark = uint32(0xEEED)\n\t// HandshakeConnmark is used to drop response packets\n\tHandshakeConnmark = uint32(0xEEEC)\n\t// IstioPacketMark is a mark that we use so that we don't loop in the Istio Chain forever.\n\tIstioPacketMark = 0x44\n)\n"
  },
  {
    "path": "utils/constants/constants_linux.go",
    "content": "// +build !rhel6\n\npackage constants\n\n// QueueBalanceFactor is always one for non-RHEL6 Linux\nconst QueueBalanceFactor = 1\n"
  },
  {
    "path": "utils/constants/constants_rhel6.go",
    "content": "// +build rhel6\n\npackage constants\n\n// QueueBalanceFactor is zero on RHEL6 because the mark should not be adjusted\nconst QueueBalanceFactor = 0\n"
  },
  {
    "path": "utils/cri/common.go",
    "content": "package cri\n\n// Type is the type to be given at startup\ntype Type string\n\n// Different enforcer types\nconst (\n\tTypeNone       Type = \"none\"       // TypeNone is the default enforcer type\n\tTypeDocker     Type = \"docker\"     // TypeDocker is enforcerd which uses CRI docker\n\tTypeCRIO       Type = \"crio\"       // TypeDaemonset is enforcerd which uses CRIO CRI\n\tTypeContainerD Type = \"containerd\" // TypeContainerD is a enforcerd which uses containerD CRI\n)\n\n// Container returns true iff the enforcer supports containers\nfunc (d Type) Container() bool {\n\treturn d.Docker() || d.CRIO() || d.ContainerD()\n}\n\n// CRIO returns true if the enforcer is using CRI for container management\nfunc (d Type) CRIO() bool {\n\treturn d == TypeCRIO\n}\n\n// Docker returns true if the enforcer supports docker\nfunc (d Type) Docker() bool {\n\treturn d == TypeDocker\n}\n\n// ContainerD returns true if enforcerd is using ContainerD CRI\nfunc (d Type) ContainerD() bool {\n\treturn d == TypeContainerD\n}\n\n// SupportRuncProxy returns true iff the enforcer supports runc proxy\nfunc (d Type) SupportRuncProxy() bool {\n\treturn d.Docker() || d.CRIO() || d.ContainerD()\n}\n"
  },
  {
    "path": "utils/cri/cri_client_setup.go",
    "content": "package cri\n\nimport \"time\"\n\n// maxMsgSize use 16MB as the default message size limit.\n// grpc library default is 4MB\n// NOTE: this should be the exact same constant as used in the kubelet\n//       this used to be here: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cri/remote/utils.go#L29\nconst maxMsgSize = 1024 * 1024 * 16 // nolint: varcheck\n\nvar (\n\t// connectTimeout is used for establishing the initial grpc dial context\n\tconnectTimeout = time.Second * 30\n\n\t// callTimeout is used for every single call to CRI\n\tcallTimeout = time.Second * 5 // nolint: varcheck\n)\n"
  },
  {
    "path": "utils/cri/cri_client_setup_linux.go",
    "content": "// +build linux\n\npackage cri\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/url\"\n\t\"os\"\n\t\"path\"\n\t\"regexp\"\n\t\"strings\"\n\n\t\"go.aporeto.io/enforcerd/internal/utils\"\n\t\"go.uber.org/zap\"\n\t\"google.golang.org/grpc\"\n\tcriruntimev1alpha2 \"k8s.io/cri-api/pkg/apis/runtime/v1alpha2\"\n)\n\n// These are defined like this by Kubernetes. The kubelet will search for them exactly like this.\nconst (\n\tcriDockerShimEndpoint = \"/var/run/dockershim.sock\"\n\tcriContainerdEndpoint = \"/run/containerd/containerd.sock\"\n\tcriCrioEndpoint       = \"/var/run/crio/crio.sock\"\n)\n\nvar (\n\tnopFunc = func(path string) string { return path }\n\n\t// the following function was earlier utils.GetPathOnHostViaProcRoot but bow that we have proper mounts\n\t// we will make it a NOP operation.\n\tgetHostPath = nopFunc\n)\n\n// ParseStringFlag parses a flag from a given command\nfunc ParseStringFlag(cmd string, flagRegexp string) *string {\n\tflags := ParseStringFlags(cmd, flagRegexp)\n\tif len(flags) > 0 {\n\t\treturn &flags[0]\n\t}\n\treturn nil\n}\n\n// flagTemplate captures CLI flags (i.e., 'cmd --some-flag=value', 'cmd --some-flag value', 'cmd --some-flag=valA --some-flag=valB')\nconst flagTemplate = `(?:%s)(?:=|\\s+)(\\S+)`\n\n// ParseStringFlags parses a list of flags from a given command\nfunc ParseStringFlags(cmd string, flagRegexp string) []string {\n\tvar res []string\n\texpression := fmt.Sprintf(flagTemplate, flagRegexp)\n\tmatches := regexp.MustCompile(expression).FindAllStringSubmatch(cmd, -1)\n\tfor _, tokens := range matches {\n\t\tif len(tokens) > 1 {\n\t\t\tres = append(res, strings.Trim(tokens[1], `\"'`))\n\t\t}\n\t}\n\treturn res\n}\n\n// BuildProcessRegex returns a regex that should match processes with a name matching the given process regular\n// expression\n// Remark: procExpression can be a regular expression\nfunc BuildProcessRegex(procExpression string) *regexp.Regexp {\n\t// Expressions that should be matched by 
a given procname are:\n\t// procname -flag1 -flag2\n\t// /bin/procname -flag1 -flag2\n\t// /procname -flag1 -flag2\n\t//\n\t// Expressions that should NOT be matched are:\n\t// notprocname -flag1 -flag2\n\t// /bin/notprocname -flag1 -flag2\n\t// /bin/procname/notprocname -flag1 -flag2\n\t// notprocname -flag1 procname\n\t// notprocname -flag1 -procname\n\treturn regexp.MustCompile(fmt.Sprintf(`^(\\S*/)?(%s)( |$)`, procExpression))\n}\n\n// KubeletProcessRegex is the kubelet process regex used to find the kubelet process\n// Sometimes it is not the kubelet binary that is used in the system (e.g. Openshift4) but k8s' all-in-one binary: https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube\n// The following is an example of a kubelet cmdline in Openshift4:\n// /usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --rotate-certificates --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --allow-privileged --node-labels=node-role.kubernetes.io/master --minimum-container-ttl-duration=6m0s --client-ca-file=/etc/kubernetes/ca.crt --cloud-provider=aws --anonymous-auth=false --register-with-taints=node-role.kubernetes.io/master=:NoSchedule\nvar KubeletProcessRegex = BuildProcessRegex(\"(hyperkube )?kubelet\")\n\n// CriSocket returns the CRI socket path used by kubelet\nfunc CriSocket() (string, error) { //nolint\n\tprocs, err := utils.Processes()\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"failed to list processes: %v\", err)\n\t}\n\tfor _, proc := range procs {\n\t\tif KubeletProcessRegex.MatchString(proc.Cmdline) {\n\t\t\tif sock := ParseStringFlag(proc.Cmdline, \"--container-runtime-endpoint\"); sock != nil {\n\t\t\t\treturn strings.TrimPrefix(*sock, \"unix://\"), nil\n\t\t\t}\n\t\t}\n\t}\n\treturn \"\", nil\n}\n\n// DetectCRIRuntimeEndpoint checks if the unix socket paths are present for CRI\nfunc 
DetectCRIRuntimeEndpoint() (string, Type, error) {\n\tvar retErr error\n\tisUDSocket := func(path string) (string, error) {\n\t\tfileInfo, err := os.Stat(path)\n\t\tif err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"cannot stat %s: %s\", path, err)\n\t\t}\n\t\tif !(fileInfo.Mode()&os.ModeSocket == os.ModeSocket) {\n\t\t\treturn \"\", fmt.Errorf(\"%s not a socket\", path)\n\t\t}\n\t\treturn \"unix://\" + path, nil\n\t}\n\truntimes := []string{criDockerShimEndpoint, criContainerdEndpoint, criCrioEndpoint}\n\texistingCriSockets := []string{}\n\n\tfor _, p := range runtimes {\n\t\tpath := getHostPath(p)\n\t\tif addr, err := isUDSocket(path); err == nil {\n\t\t\tzap.L().Debug(\"cri: detected CRI runtime service socket address\", zap.String(\"socketPathAddress\", addr))\n\t\t\texistingCriSockets = append(existingCriSockets, addr)\n\t\t} else {\n\t\t\tretErr = err\n\t\t\tzap.L().Debug(\"cri: socket path unavailable/inaccessible\", zap.String(\"socketPath\", path), zap.Error(err))\n\t\t}\n\t}\n\tzap.L().Debug(\"The CRI sockets found are:\", zap.Strings(\"paths\", existingCriSockets))\n\tif len(existingCriSockets) > 1 {\n\t\t// this should ideally not happen but if it happens then get the kubelet cmdLine CRI\n\t\t// now check for the kubelet runtime\n\t\tsockaddr, err := CriSocket()\n\t\tif err != nil {\n\t\t\treturn sockaddr, getCRISocketAddrType(sockaddr), fmt.Errorf(\"multiple CRI runtime endpoints detected, failed to get socketPath from kubelet\")\n\t\t}\n\t\t// If there is no CRI EP on kubelet that means docker is the default, because kubelet's default CRI is docker.\n\t\tif sockaddr == \"\" {\n\t\t\treturn getHostPath(criDockerShimEndpoint), TypeDocker, nil\n\t\t}\n\t\treturn sockaddr, getCRISocketAddrType(sockaddr), nil\n\t} else if len(existingCriSockets) == 1 {\n\t\treturn existingCriSockets[0], getCRISocketAddrType(existingCriSockets[0]), nil\n\t}\n\t// no CRI endpoint present on the system/node, this can happen during restarts\n\treturn \"\", TypeNone, 
fmt.Errorf(\"auto detection of CRI runtime endpoints failed, tested common locations %s, %s\", strings.Join(runtimes, \", \"), retErr)\n}\n\nfunc getCRISocketAddrType(sockaddr string) Type {\n\tif strings.Contains(sockaddr, \"crio\") {\n\t\treturn TypeCRIO\n\t}\n\tif strings.Contains(sockaddr, \"containerd\") {\n\t\treturn TypeContainerD\n\t}\n\tif strings.Contains(sockaddr, \"docker\") {\n\t\treturn TypeDocker\n\t}\n\treturn TypeNone\n}\n\nfunc getCRISocketAddr(criRuntimeEndpoint string) (string, error) {\n\tvar err error\n\taddr := criRuntimeEndpoint\n\tif addr == \"\" {\n\t\taddr, _, err = DetectCRIRuntimeEndpoint()\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t}\n\tif strings.HasPrefix(addr, \"tcp:\") {\n\t\treturn \"\", fmt.Errorf(\"tcp endpoints are not supported\")\n\t}\n\tif !strings.HasPrefix(addr, \"unix:\") {\n\t\taddr = \"unix://\" + addr\n\t}\n\taddr = path.Clean(addr)\n\n\tif strings.Contains(addr, \"frakti\") {\n\t\treturn \"\", fmt.Errorf(\"frakti runtime is not supported\")\n\t}\n\n\tu, err := url.Parse(addr)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tif u.Scheme != \"unix\" {\n\t\treturn \"\", fmt.Errorf(\"only unix sockets are supported\")\n\t}\n\n\t// NOTE: convoluted, but makes unix socket paths and abstract unix socket socket URLs in gRPC connotation both work.\n\t// The trouble is that u.Path is only set for \"proper\" unix socket, and the rest is in u.Opaque\n\t// This strips \"unix:\" from the URL again as well as any following potential \"//\",\n\t// leaving a single '/' if this was a 'unix:///var/run/...' 
address\n\treturn strings.TrimPrefix(strings.TrimPrefix(addr, \"unix:\"), \"//\"), nil\n}\n\nfunc connectCRISocket(ctx context.Context, addr string) (*grpc.ClientConn, error) {\n\tvar err error\n\tvar connection *grpc.ClientConn\n\n\tctx, cancel := context.WithTimeout(ctx, connectTimeout)\n\tdefer cancel()\n\n\tconnection, err = grpc.DialContext(\n\t\tctx,\n\t\taddr,\n\t\t// we want to wait for an initial connection\n\t\tgrpc.WithBlock(),\n\t\t// we do everything like the kubelet: we bump this up to 16MB\n\t\tgrpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxMsgSize)),\n\t\t// unix socket connection, disable transport security\n\t\tgrpc.WithInsecure(),\n\t\tgrpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {\n\t\t\treturn (&net.Dialer{}).DialContext(ctx, \"unix\", addr)\n\t\t}),\n\t)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"connection to CRI runtime service failed: %s\", err.Error())\n\t}\n\treturn connection, nil\n}\n\n// NewCRIRuntimeServiceClient takes a CRI socket path and tries to establish a grpc connection to the CRI runtime service.\n// On success it is returning an ExtendedRuntimeService interface which is an extended CRI runtime service interface.\nfunc NewCRIRuntimeServiceClient(ctx context.Context, criRuntimeEndpoint string) (ExtendedRuntimeService, error) {\n\t// build the socket path URL\n\taddr, err := getCRISocketAddr(criRuntimeEndpoint)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"cri: failed to get socket address: %s\", err)\n\t}\n\n\t// establish the CRI connection\n\t// once this connection has been established\n\t// gRPC will take care of reconnections, etc.\n\t// connections are very much hands-off after that point\n\tconnection, err := connectCRISocket(ctx, addr)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"cri: failed to connect to CRI socket: %s\", err)\n\t}\n\n\t// finally create the extended wrapper\n\tsvc, err := 
NewCRIExtendedRuntimeServiceWrapper(\n\t\tctx,\n\t\tcallTimeout,\n\t\tcriruntimev1alpha2.NewRuntimeServiceClient(connection),\n\t)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to create extended runtime service wrapper: %s\", err.Error())\n\t}\n\n\t// and return with it\n\treturn svc, nil\n}\n"
  },
  {
    "path": "utils/cri/cri_client_setup_linux_test.go",
    "content": "// +build linux\n\npackage cri\n\nimport (\n\t\"context\"\n\t\"net\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"google.golang.org/grpc\"\n)\n\nfunc Test_DetectCRIRuntimeEndpoint(t *testing.T) {\n\twd, err := os.Getwd()\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tpath := filepath.Join(wd, \"testdata\", \"var\", \"run\", \"crio\", \"crio.sock\")\n\n\tif err := os.RemoveAll(path); err != nil {\n\t\tpanic(err)\n\t}\n\tif err := os.MkdirAll(filepath.Dir(path), 0777); err != nil {\n\t\tpanic(err)\n\t}\n\tl, err := net.Listen(\"unix\", path)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer l.Close() // nolint\n\n\toldGetHostPath := getHostPath\n\tdefer func() {\n\t\tgetHostPath = oldGetHostPath\n\t}()\n\ttests := []struct {\n\t\tname        string\n\t\tgetHostPath func(string) string\n\t\twant        string\n\t\trunType     Type\n\t\twantErr     bool\n\t}{\n\t\t{\n\t\t\tname: \"failed to detect a runtime\",\n\t\t\tgetHostPath: func(path string) string {\n\t\t\t\treturn filepath.Join(wd, \"does-not-exist\", path)\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\trunType: TypeNone,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"detected a runtime\",\n\t\t\tgetHostPath: func(path string) string {\n\t\t\t\treturn filepath.Join(wd, \"testdata\", path)\n\t\t\t},\n\t\t\twant:    \"unix://\" + filepath.Join(wd, \"testdata\", \"var\", \"run\", \"crio\", \"crio.sock\"),\n\t\t\trunType: TypeCRIO,\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgetHostPath = tt.getHostPath\n\t\t\tgot, rtype, err := DetectCRIRuntimeEndpoint()\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"DetectCRIRuntimeEndpoint() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif got != tt.want {\n\t\t\t\tt.Errorf(\"DetectCRIRuntimeEndpoint() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t\tif rtype != tt.runType {\n\t\t\t\tt.Errorf(\"DetectCRIRuntimeEndpoint() = %v, 
want %v\", rtype, tt.runType)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_getCRISocketAddr(t *testing.T) {\n\twd, err := os.Getwd()\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tpath := filepath.Join(wd, \"testdata\", \"var\", \"run\", \"crio\", \"crio.sock\")\n\n\tif err := os.RemoveAll(path); err != nil {\n\t\tpanic(err)\n\t}\n\tif err := os.MkdirAll(filepath.Dir(path), 0777); err != nil {\n\t\tpanic(err)\n\t}\n\tl, err := net.Listen(\"unix\", path)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer l.Close() // nolint\n\n\toldGetHostPath := getHostPath\n\tdefer func() {\n\t\tgetHostPath = oldGetHostPath\n\t}()\n\ttype args struct {\n\t\tcriRuntimeEndpoint string\n\t}\n\ttests := []struct {\n\t\tname        string\n\t\tgetHostPath func(string) string\n\t\targs        args\n\t\twant        string\n\t\twantErr     bool\n\t}{\n\t\t{\n\t\t\tname: \"auto-detected runtime should return without any error if it succeeds\",\n\t\t\targs: args{\n\t\t\t\tcriRuntimeEndpoint: \"\", // empty string enables auto-detection\n\t\t\t},\n\t\t\tgetHostPath: func(path string) string {\n\t\t\t\treturn filepath.Join(wd, \"testdata\", path)\n\t\t\t},\n\t\t\twant:    filepath.Join(wd, \"testdata\", \"var\", \"run\", \"crio\", \"crio.sock\"),\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"if auto-detection is enabled and fails, we must fail\",\n\t\t\targs: args{\n\t\t\t\tcriRuntimeEndpoint: \"\", // empty string enables auto-detection\n\t\t\t},\n\t\t\tgetHostPath: func(path string) string {\n\t\t\t\treturn filepath.Join(wd, \"does-not-exist\", path)\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"we fail on tcp endpoints\",\n\t\t\targs: args{\n\t\t\t\tcriRuntimeEndpoint: \"tcp://127.0.0.1:1234\",\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"correct file paths to a unix socket should work\",\n\t\t\targs: args{\n\t\t\t\tcriRuntimeEndpoint: filepath.Join(wd, \"testdata\", \"var\", \"run\", \"crio\", 
\"crio.sock\"),\n\t\t\t},\n\t\t\twant:    filepath.Join(wd, \"testdata\", \"var\", \"run\", \"crio\", \"crio.sock\"),\n\t\t\twantErr: false,\n\t\t},\n\t\t{\n\t\t\tname: \"frakti is not supported\",\n\t\t\targs: args{\n\t\t\t\tcriRuntimeEndpoint: \"/var/run/frakti.sock\",\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"URL parsing of endpoint fails\",\n\t\t\targs: args{\n\t\t\t\tcriRuntimeEndpoint: string([]byte{0x7f}),\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\twantErr: true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgetHostPath = tt.getHostPath\n\t\t\tgot, err := getCRISocketAddr(tt.args.criRuntimeEndpoint)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"getCRISocketAddr() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif got != tt.want {\n\t\t\t\tt.Errorf(\"getCRISocketAddr() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_connectCRISocket(t *testing.T) {\n\toldConnectTimeout := connectTimeout\n\tdefer func() {\n\t\tconnectTimeout = oldConnectTimeout\n\t}()\n\ttype args struct {\n\t\tctx  context.Context\n\t\taddr string\n\t}\n\ttests := []struct {\n\t\tname           string\n\t\targs           args\n\t\tconnectTimeout time.Duration\n\t\trunServer      bool\n\t\twantErr        bool\n\t}{\n\t\t{\n\t\t\tname: \"no timeout produces a canceled context which must always error\",\n\t\t\targs: args{\n\t\t\t\tctx:  context.Background(),\n\t\t\t\taddr: \"\",\n\t\t\t},\n\t\t\tconnectTimeout: 0,\n\t\t\twantErr:        true,\n\t\t},\n\t\t{\n\t\t\tname: \"successful connection to a unix server listening\",\n\t\t\targs: args{\n\t\t\t\tctx:  context.Background(),\n\t\t\t\taddr: \"@aporeto_cri_grpc_connect_test\",\n\t\t\t},\n\t\t\trunServer:      
true,\n\t\t\tconnectTimeout: time.Second * 10,\n\t\t\twantErr:        false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tconnectTimeout = tt.connectTimeout\n\t\t\tctx, cancel := context.WithCancel(tt.args.ctx)\n\t\t\tdefer cancel()\n\t\t\tif tt.runServer {\n\t\t\t\ts := grpc.NewServer()\n\t\t\t\tdefer s.Stop()\n\t\t\t\tgo func() {\n\t\t\t\t\tl, err := (&net.ListenConfig{}).Listen(ctx, \"unix\", tt.args.addr)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tpanic(err)\n\t\t\t\t\t}\n\t\t\t\t\ts.Serve(l) // nolint: errcheck\n\t\t\t\t}()\n\t\t\t}\n\t\t\t_, err := connectCRISocket(ctx, tt.args.addr)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"connectCRISocket() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestNewCRIRuntimeServiceClient(t *testing.T) {\n\toldConnectTimeout := connectTimeout\n\toldCallTimeout := callTimeout\n\tdefer func() {\n\t\tconnectTimeout = oldConnectTimeout\n\t\tcallTimeout = oldCallTimeout\n\t}()\n\ttype args struct {\n\t\tctx                context.Context\n\t\tcriRuntimeEndpoint string\n\t}\n\ttests := []struct {\n\t\tname           string\n\t\targs           args\n\t\tconnectTimeout time.Duration\n\t\tcallTimeout    time.Duration\n\t\trunServer      bool\n\t\twantErr        bool\n\t}{\n\t\t{\n\t\t\tname: \"fails on getting socket path\",\n\t\t\targs: args{\n\t\t\t\tctx:                context.Background(),\n\t\t\t\tcriRuntimeEndpoint: string([]byte{0x7f}),\n\t\t\t},\n\t\t\trunServer: false,\n\t\t\twantErr:   true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tctx:                context.Background(),\n\t\t\t\tcriRuntimeEndpoint: \"unix:@aporeto_cri_grpc_connect_test1\",\n\t\t\t},\n\t\t\tconnectTimeout: time.Second * 10,\n\t\t\tcallTimeout:    time.Second * 5,\n\t\t\trunServer:      true,\n\t\t\twantErr:        false,\n\t\t},\n\t\t{\n\t\t\tname: \"fails creating the ExtendedRuntimeService\",\n\t\t\targs: args{\n\t\t\t\tctx:  
              context.Background(),\n\t\t\t\tcriRuntimeEndpoint: \"unix:@aporeto_cri_grpc_connect_test2\",\n\t\t\t},\n\t\t\tconnectTimeout: time.Second * 10,\n\t\t\tcallTimeout:    0, // call timeout must not be 0\n\t\t\trunServer:      true,\n\t\t\twantErr:        true,\n\t\t},\n\t\t{\n\t\t\tname: \"fails connecting to the grpc socket\",\n\t\t\targs: args{\n\t\t\t\tctx:                context.Background(),\n\t\t\t\tcriRuntimeEndpoint: \"unix:@aporeto_cri_grpc_connect_test3\",\n\t\t\t},\n\t\t\tconnectTimeout: 0,\n\t\t\trunServer:      true,\n\t\t\twantErr:        true,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tconnectTimeout = tt.connectTimeout\n\t\t\tcallTimeout = tt.callTimeout\n\t\t\tctx, cancel := context.WithCancel(tt.args.ctx)\n\t\t\tdefer cancel()\n\t\t\tif tt.runServer {\n\t\t\t\ts := grpc.NewServer()\n\t\t\t\tdefer s.Stop()\n\t\t\t\tgo func() {\n\t\t\t\t\tl, err := (&net.ListenConfig{}).Listen(ctx, \"unix\", strings.TrimPrefix(strings.TrimPrefix(tt.args.criRuntimeEndpoint, \"unix:\"), \"//\"))\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tpanic(err)\n\t\t\t\t\t}\n\t\t\t\t\ts.Serve(l) // nolint: errcheck\n\t\t\t\t}()\n\t\t\t}\n\t\t\t_, err := NewCRIRuntimeServiceClient(ctx, tt.args.criRuntimeEndpoint)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"NewCRIRuntimeServiceClient() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "utils/cri/cri_client_setup_unsupported.go",
    "content": "// +build !linux,!windows\n\npackage cri\n\nimport (\n\t\"context\"\n\t\"errors\"\n)\n\n// NewCRIRuntimeServiceClient is not supported on this platform\nfunc NewCRIRuntimeServiceClient(ctx context.Context, criRuntimeEndpoint string) (ExtendedRuntimeService, error) {\n\treturn nil, errors.New(\"unsupported platform\")\n}\n\n// DetectCRIRuntimeEndpoint checks if a unix socket path is present for CRI\nfunc DetectCRIRuntimeEndpoint() (string, Type, error) {\n\treturn \"\", TypeNone, errors.New(\"unsupported platform\")\n}\n"
  },
  {
    "path": "utils/cri/cri_client_setup_windows.go",
    "content": "// +build windows\n\npackage cri\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/url\"\n\t\"os\"\n\t\"path\"\n\t\"regexp\"\n\t\"strings\"\n\n\t\"github.com/Microsoft/go-winio\"\n\t\"go.aporeto.io/enforcerd/internal/utils\"\n\t\"go.uber.org/zap\"\n\t\"google.golang.org/grpc\"\n\tcriruntimev1alpha2 \"k8s.io/cri-api/pkg/apis/runtime/v1alpha2\"\n)\n\nconst (\n\tcriDockerShimEndpoint    = \"//./pipe/dockershim\"\n\tcriContainerdOldEndpoint = \"//./pipe/containerd\"\n\tcriContainerdEndpoint    = \"//./pipe/containerd-containerd\"\n\tcriCrioEndpoint          = \"//./pipe/crio\"\n)\n\n// ParseStringFlag parses a flag from a given command\nfunc ParseStringFlag(cmd string, flagRegexp string) *string {\n\tflags := ParseStringFlags(cmd, flagRegexp)\n\tif len(flags) > 0 {\n\t\treturn &flags[0]\n\t}\n\treturn nil\n}\n\n// flagTemplate captures CLI flags (i.e., 'cmd --some-flag=value', 'cmd --some-flag value', 'cmd --some-flag=valA --some-flag=valB')\nconst flagTemplate = `(?:%s)(?:=|\\s+)(\\S+)`\n\n// ParseStringFlags parses a list of flags from a given command\nfunc ParseStringFlags(cmd string, flagRegexp string) []string {\n\tvar res []string\n\texpression := fmt.Sprintf(flagTemplate, flagRegexp)\n\tmatches := regexp.MustCompile(expression).FindAllStringSubmatch(cmd, -1)\n\tfor _, tokens := range matches {\n\t\tif len(tokens) > 1 {\n\t\t\tres = append(res, strings.Trim(tokens[1], `\"'`))\n\t\t}\n\t}\n\treturn res\n}\n\n// BuildProcessRegex returns a regex that should match processes with a name matching the given process regular\n// expression\n// Remark: procExpression can be a regular expression\nfunc BuildProcessRegex(procExpression string) *regexp.Regexp {\n\t// Expressions that should be matched by a given procname are:\n\t// procname -flag1 -flag2\n\t// /bin/procname -flag1 -flag2\n\t// /procname -flag1 -flag2\n\t//\n\t// Expressions that should NOT be matched are:\n\t// notprocname -flag1 -flag2\n\t// /bin/notprocname -flag1 -flag2\n\t// 
/bin/procname/notprocname -flag1 -flag2\n\t// notprocname -flag1 procname\n\t// notprocname -flag1 -procname\n\treturn regexp.MustCompile(fmt.Sprintf(`^(\\S*/)?(%s)( |$)`, procExpression))\n}\n\n// KubeletProcessRegex is the kubelet process regex used to find the kubelet process\n// Sometimes it is not the kubelet binary that is used in the system (e.g. OpenShift 4) but k8s' all-in-one binary: https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube\n// The following is an example of a kubelet cmdline in OpenShift 4:\n// /usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --rotate-certificates --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --allow-privileged --node-labels=node-role.kubernetes.io/master --minimum-container-ttl-duration=6m0s --client-ca-file=/etc/kubernetes/ca.crt --cloud-provider=aws --anonymous-auth=false --register-with-taints=node-role.kubernetes.io/master=:NoSchedule\nvar KubeletProcessRegex = BuildProcessRegex(\"(hyperkube )?kubelet\")\n\n// CriSocket returns the CRI socket path used by kubelet\nfunc CriSocket() (string, error) { //nolint\n\tprocs, err := utils.Processes()\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"failed to list processes: %v\", err)\n\t}\n\tfor _, proc := range procs {\n\t\tif KubeletProcessRegex.MatchString(proc.Cmdline) {\n\t\t\tif sock := ParseStringFlag(proc.Cmdline, \"--container-runtime-endpoint\"); sock != nil {\n\t\t\t\treturn strings.TrimPrefix(*sock, \"npipe://\"), nil\n\t\t\t}\n\t\t}\n\t}\n\treturn \"\", nil\n}\n\n// DetectCRIRuntimeEndpoint checks if a named pipe path is present for CRI\nfunc DetectCRIRuntimeEndpoint() (string, Type, error) {\n\tvar retErr error\n\tisNamedPipe := func(path string) (string, error) {\n\t\t_, err := os.Stat(path)\n\t\tif err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"%s not a named pipe\", path)\n\t\t}\n\t\treturn path, nil\n\t}\n\truntimes := []string{criDockerShimEndpoint, criContainerdEndpoint, criContainerdOldEndpoint, criCrioEndpoint}\n\texistingCriSockets := []string{}\n\tfor _, p := range runtimes {\n\t\tif addr, err := isNamedPipe(p); err == nil {\n\t\t\tzap.L().Debug(\"cri: detected CRI runtime service socket address\", zap.String(\"socketPathAddress\", addr))\n\t\t\texistingCriSockets = append(existingCriSockets, addr)\n\t\t} else {\n\t\t\tretErr = err\n\t\t\tzap.L().Debug(\"cri: socket path unavailable/inaccessible\", zap.String(\"socketPath\", p), zap.Error(err))\n\t\t}\n\t}\n\n\tzap.L().Debug(\"cri: sockets found\", zap.Strings(\"paths\", existingCriSockets))\n\tif len(existingCriSockets) > 1 {\n\t\t// this should ideally not happen; if it does, fall back to the CRI endpoint from the kubelet command line\n\t\tsockaddr, err := CriSocket()\n\t\tif err != nil {\n\t\t\treturn sockaddr, getCRISocketAddrType(sockaddr), fmt.Errorf(\"multiple CRI runtime endpoints detected, failed to get socket path from kubelet\")\n\t\t}\n\t\t// If the kubelet specifies no CRI endpoint, docker is in use, because kubelet's default CRI is docker.\n\t\tif sockaddr == \"\" {\n\t\t\treturn criDockerShimEndpoint, TypeDocker, nil\n\t\t}\n\t\treturn sockaddr, getCRISocketAddrType(sockaddr), nil\n\t} else if len(existingCriSockets) == 1 {\n\t\treturn existingCriSockets[0], getCRISocketAddrType(existingCriSockets[0]), nil\n\t}\n\t// no CRI endpoint present on the system/node, this can happen during restarts\n\treturn \"\", TypeNone, fmt.Errorf(\"auto detection of CRI runtime endpoints failed, tested common locations %s: %s\", strings.Join(runtimes, \", \"), retErr)\n}\n\nfunc getCRISocketAddrType(sockaddr string) Type {\n\tif strings.Contains(sockaddr, \"crio\") {\n\t\treturn TypeCRIO\n\t}\n\tif strings.Contains(sockaddr, \"containerd\") {\n\t\treturn TypeContainerD\n\t}\n\tif strings.Contains(sockaddr, \"docker\") {\n\t\treturn TypeDocker\n\t}\n\treturn TypeNone\n}\n\nfunc getCRISocketAddr(criRuntimeEndpoint string) (string, error) {\n\tvar err error\n\t// url.Parse only supports forward slashes\n\taddr := strings.Replace(criRuntimeEndpoint, \"\\\\\", \"/\", -1)\n\tif addr == \"\" {\n\t\taddr, _, err = DetectCRIRuntimeEndpoint()\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t}\n\tif strings.HasPrefix(addr, \"tcp:\") {\n\t\treturn \"\", fmt.Errorf(\"tcp endpoints are not supported\")\n\t}\n\tif !strings.HasPrefix(addr, \"npipe:\") {\n\t\taddr = \"npipe://\" + addr\n\t}\n\taddr = path.Clean(addr)\n\n\tif strings.Contains(addr, \"frakti\") {\n\t\treturn \"\", fmt.Errorf(\"frakti runtime is not supported\")\n\t}\n\n\tu, err := url.Parse(addr)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tif u.Scheme != \"npipe\" {\n\t\treturn \"\", fmt.Errorf(\"only named pipes are supported\")\n\t}\n\tif u.Host != \".\" {\n\t\treturn \"\", fmt.Errorf(\"only named pipes on the local host are supported\")\n\t}\n\n\treturn fmt.Sprintf(\"//%s%s\", u.Host, u.Path), nil\n}\n\nfunc connectCRISocket(ctx context.Context, addr string) (*grpc.ClientConn, error) {\n\tvar err error\n\tvar connection *grpc.ClientConn\n\n\tctx, cancel := context.WithTimeout(ctx, connectTimeout)\n\tdefer cancel()\n\n\tconnection, err = grpc.DialContext(\n\t\tctx,\n\t\taddr,\n\t\t// we want to wait for an initial connection\n\t\tgrpc.WithBlock(),\n\t\t// we do everything like the kubelet: we bump this up to 16MB\n\t\tgrpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxMsgSize)),\n\t\t// named pipe connection, disable transport security\n\t\tgrpc.WithInsecure(),\n\t\tgrpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {\n\t\t\treturn winio.DialPipeContext(ctx, addr)\n\t\t}),\n\t)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"connection to CRI runtime service failed: %s\", err.Error())\n\t}\n\treturn connection, nil\n}\n\n// NewCRIRuntimeServiceClient takes a CRI socket path and tries to establish a gRPC connection to the CRI runtime service.\n// On success it returns an ExtendedRuntimeService, an extended version of the CRI runtime service interface.\nfunc NewCRIRuntimeServiceClient(ctx context.Context, criRuntimeEndpoint string) (ExtendedRuntimeService, error) {\n\t// build the socket path URL\n\taddr, err := getCRISocketAddr(criRuntimeEndpoint)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"cri: failed to get socket address: %s\", err)\n\t}\n\n\t// establish the CRI connection\n\t// once this connection has been established\n\t// gRPC will take care of reconnections, etc.\n\t// connections are very much hands-off after that point\n\tconnection, err := connectCRISocket(ctx, addr)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"cri: failed to connect to CRI socket: %s\", err)\n\t}\n\n\t// finally create the extended wrapper\n\tsvc, err := NewCRIExtendedRuntimeServiceWrapper(\n\t\tctx,\n\t\tcallTimeout,\n\t\tcriruntimev1alpha2.NewRuntimeServiceClient(connection),\n\t)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to create extended runtime service wrapper: %s\", err.Error())\n\t}\n\n\t// and return with it\n\treturn svc, nil\n}\n"
  },
  {
    "path": "utils/cri/cri_runtime_wrapper.go",
    "content": "package cri\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"time\"\n\n\tcriruntimev1alpha2 \"k8s.io/cri-api/pkg/apis/runtime/v1alpha2\"\n)\n\n// NewCRIExtendedRuntimeServiceWrapper creates an ExtendedRuntimeService from a v1alpha2 runtime service client\n// NOTE: the passed context is used for every subsequent call on the interface as the parent context with a timeout\n// as passed through the argument. If the parent context gets canceled, this client becomes useless.\nfunc NewCRIExtendedRuntimeServiceWrapper(ctx context.Context, timeout time.Duration, client criruntimev1alpha2.RuntimeServiceClient) (ExtendedRuntimeService, error) {\n\tif client == nil {\n\t\treturn nil, fmt.Errorf(\"client cannot be nil\")\n\t}\n\tif timeout == time.Duration(0) {\n\t\treturn nil, fmt.Errorf(\"timeout cannot be 0\")\n\t}\n\treturn &extendedServiceRuntimeWrapper{\n\t\tctx:     ctx,\n\t\ttimeout: timeout,\n\t\trs:      client,\n\t}, nil\n}\n\ntype extendedServiceRuntimeWrapper struct {\n\t// well, this is stupid:\n\t// the criapi.RuntimeService should take a context as first argument everywhere\n\t// as it doesn't, the only sensible way is to be able to pass it from here\n\t// however, be careful with this: if that passed context gets canceled, nothing will work anymore\n\tctx     context.Context\n\ttimeout time.Duration\n\trs      criruntimev1alpha2.RuntimeServiceClient\n}\n\n// Version returns the runtime name, runtime version and runtime API version\nfunc (w *extendedServiceRuntimeWrapper) Version(apiVersion string) (*criruntimev1alpha2.VersionResponse, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\treturn w.rs.Version(ctx, &criruntimev1alpha2.VersionRequest{Version: apiVersion})\n}\n\n// CreateContainer creates a new container in specified PodSandbox.\nfunc (w *extendedServiceRuntimeWrapper) CreateContainer(podSandboxID string, config *criruntimev1alpha2.ContainerConfig, sandboxConfig *criruntimev1alpha2.PodSandboxConfig) (string, 
error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.CreateContainer(ctx, &criruntimev1alpha2.CreateContainerRequest{\n\t\tPodSandboxId:  podSandboxID,\n\t\tConfig:        config,\n\t\tSandboxConfig: sandboxConfig,\n\t})\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn resp.GetContainerId(), nil\n}\n\n// StartContainer starts the container.\nfunc (w *extendedServiceRuntimeWrapper) StartContainer(containerID string) error {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\t_, err := w.rs.StartContainer(ctx, &criruntimev1alpha2.StartContainerRequest{\n\t\tContainerId: containerID,\n\t})\n\treturn err\n}\n\n// StopContainer stops a running container with a grace period (i.e., timeout).\nfunc (w *extendedServiceRuntimeWrapper) StopContainer(containerID string, timeout int64) error {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\t_, err := w.rs.StopContainer(ctx, &criruntimev1alpha2.StopContainerRequest{\n\t\tContainerId: containerID,\n\t\tTimeout:     timeout,\n\t})\n\treturn err\n}\n\n// RemoveContainer removes the container.\nfunc (w *extendedServiceRuntimeWrapper) RemoveContainer(containerID string) error {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\t_, err := w.rs.RemoveContainer(ctx, &criruntimev1alpha2.RemoveContainerRequest{\n\t\tContainerId: containerID,\n\t})\n\treturn err\n}\n\n// ListContainers lists all containers by filters.\nfunc (w *extendedServiceRuntimeWrapper) ListContainers(filter *criruntimev1alpha2.ContainerFilter) ([]*criruntimev1alpha2.Container, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.ListContainers(ctx, &criruntimev1alpha2.ListContainersRequest{\n\t\tFilter: filter,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn resp.GetContainers(), nil\n}\n\n// ContainerStatus returns the status of the container.\nfunc (w 
*extendedServiceRuntimeWrapper) ContainerStatus(containerID string) (*criruntimev1alpha2.ContainerStatus, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.ContainerStatus(ctx, &criruntimev1alpha2.ContainerStatusRequest{\n\t\tContainerId: containerID,\n\t\tVerbose:     false,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn resp.GetStatus(), nil\n}\n\n// ContainerStatusVerbose returns the status of the container.\nfunc (w *extendedServiceRuntimeWrapper) ContainerStatusVerbose(containerID string) (*criruntimev1alpha2.ContainerStatus, map[string]string, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.ContainerStatus(ctx, &criruntimev1alpha2.ContainerStatusRequest{\n\t\tContainerId: containerID,\n\t\tVerbose:     true,\n\t})\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\treturn resp.GetStatus(), resp.GetInfo(), nil\n}\n\n// UpdateContainerResources updates the cgroup resources for the container.\nfunc (w *extendedServiceRuntimeWrapper) UpdateContainerResources(containerID string, resources *criruntimev1alpha2.LinuxContainerResources) error {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\t_, err := w.rs.UpdateContainerResources(ctx, &criruntimev1alpha2.UpdateContainerResourcesRequest{\n\t\tContainerId: containerID,\n\t\tLinux:       resources,\n\t})\n\treturn err\n}\n\n// ExecSync executes a command in the container, and returns the stdout output.\n// If command exits with a non-zero exit code, an error is returned.\nfunc (w *extendedServiceRuntimeWrapper) ExecSync(containerID string, cmd []string, timeout time.Duration) (stdout []byte, stderr []byte, err error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.ExecSync(ctx, &criruntimev1alpha2.ExecSyncRequest{\n\t\tContainerId: containerID,\n\t\tCmd:         cmd,\n\t\tTimeout:     
int64(timeout.Seconds()),\n\t})\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tif resp.GetExitCode() != int32(0) {\n\t\treturn resp.GetStdout(), resp.GetStderr(), fmt.Errorf(\"exit code: %d\", resp.GetExitCode())\n\t}\n\treturn resp.GetStdout(), resp.GetStderr(), nil\n}\n\n// Exec prepares a streaming endpoint to execute a command in the container, and returns the address.\nfunc (w *extendedServiceRuntimeWrapper) Exec(req *criruntimev1alpha2.ExecRequest) (*criruntimev1alpha2.ExecResponse, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\treturn w.rs.Exec(ctx, req)\n}\n\n// Attach prepares a streaming endpoint to attach to a running container, and returns the address.\nfunc (w *extendedServiceRuntimeWrapper) Attach(req *criruntimev1alpha2.AttachRequest) (*criruntimev1alpha2.AttachResponse, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\treturn w.rs.Attach(ctx, req)\n}\n\n// ReopenContainerLog asks runtime to reopen the stdout/stderr log file\n// for the container. If it returns error, new container log file MUST NOT\n// be created.\nfunc (w *extendedServiceRuntimeWrapper) ReopenContainerLog(containerID string) error {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\t_, err := w.rs.ReopenContainerLog(ctx, &criruntimev1alpha2.ReopenContainerLogRequest{\n\t\tContainerId: containerID,\n\t})\n\treturn err\n}\n\n// RunPodSandbox creates and starts a pod-level sandbox. 
Runtimes should ensure\n// the sandbox is in ready state.\nfunc (w *extendedServiceRuntimeWrapper) RunPodSandbox(config *criruntimev1alpha2.PodSandboxConfig, runtimeHandler string) (string, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.RunPodSandbox(ctx, &criruntimev1alpha2.RunPodSandboxRequest{\n\t\tConfig:         config,\n\t\tRuntimeHandler: runtimeHandler,\n\t})\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn resp.GetPodSandboxId(), nil\n}\n\n// StopPodSandbox stops the sandbox. If there are any running containers in the\n// sandbox, they should be force terminated.\nfunc (w *extendedServiceRuntimeWrapper) StopPodSandbox(podSandboxID string) error {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\t_, err := w.rs.StopPodSandbox(ctx, &criruntimev1alpha2.StopPodSandboxRequest{\n\t\tPodSandboxId: podSandboxID,\n\t})\n\treturn err\n}\n\n// RemovePodSandbox removes the sandbox. If there are running containers in the\n// sandbox, they should be forcibly removed.\nfunc (w *extendedServiceRuntimeWrapper) RemovePodSandbox(podSandboxID string) error {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\t_, err := w.rs.RemovePodSandbox(ctx, &criruntimev1alpha2.RemovePodSandboxRequest{\n\t\tPodSandboxId: podSandboxID,\n\t})\n\treturn err\n}\n\n// PodSandboxStatus returns the Status of the PodSandbox.\nfunc (w *extendedServiceRuntimeWrapper) PodSandboxStatus(podSandboxID string) (*criruntimev1alpha2.PodSandboxStatus, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.PodSandboxStatus(ctx, &criruntimev1alpha2.PodSandboxStatusRequest{\n\t\tPodSandboxId: podSandboxID,\n\t\tVerbose:      false,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn resp.GetStatus(), nil\n}\n\n// PodSandboxStatusVerbose returns the Status of the PodSandbox.\nfunc (w *extendedServiceRuntimeWrapper) 
PodSandboxStatusVerbose(podSandboxID string) (*criruntimev1alpha2.PodSandboxStatus, map[string]string, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.PodSandboxStatus(ctx, &criruntimev1alpha2.PodSandboxStatusRequest{\n\t\tPodSandboxId: podSandboxID,\n\t\tVerbose:      true,\n\t})\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\treturn resp.GetStatus(), resp.GetInfo(), nil\n}\n\n// ListPodSandbox returns a list of Sandbox.\nfunc (w *extendedServiceRuntimeWrapper) ListPodSandbox(filter *criruntimev1alpha2.PodSandboxFilter) ([]*criruntimev1alpha2.PodSandbox, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.ListPodSandbox(ctx, &criruntimev1alpha2.ListPodSandboxRequest{\n\t\tFilter: filter,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn resp.GetItems(), nil\n}\n\n// PortForward prepares a streaming endpoint to forward ports from a PodSandbox, and returns the address.\nfunc (w *extendedServiceRuntimeWrapper) PortForward(req *criruntimev1alpha2.PortForwardRequest) (*criruntimev1alpha2.PortForwardResponse, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\treturn w.rs.PortForward(ctx, req)\n}\n\n// ContainerStats returns stats of the container. 
If the container does not\n// exist, the call returns an error.\nfunc (w *extendedServiceRuntimeWrapper) ContainerStats(containerID string) (*criruntimev1alpha2.ContainerStats, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.ContainerStats(ctx, &criruntimev1alpha2.ContainerStatsRequest{\n\t\tContainerId: containerID,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn resp.GetStats(), nil\n}\n\n// ListContainerStats returns stats of all running containers.\nfunc (w *extendedServiceRuntimeWrapper) ListContainerStats(filter *criruntimev1alpha2.ContainerStatsFilter) ([]*criruntimev1alpha2.ContainerStats, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.ListContainerStats(ctx, &criruntimev1alpha2.ListContainerStatsRequest{\n\t\tFilter: filter,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn resp.GetStats(), nil\n}\n\n// UpdateRuntimeConfig updates runtime configuration if specified\nfunc (w *extendedServiceRuntimeWrapper) UpdateRuntimeConfig(runtimeConfig *criruntimev1alpha2.RuntimeConfig) error {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\t_, err := w.rs.UpdateRuntimeConfig(ctx, &criruntimev1alpha2.UpdateRuntimeConfigRequest{\n\t\tRuntimeConfig: runtimeConfig,\n\t})\n\treturn err\n}\n\n// Status returns the status of the runtime.\nfunc (w *extendedServiceRuntimeWrapper) Status() (*criruntimev1alpha2.RuntimeStatus, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.Status(ctx, &criruntimev1alpha2.StatusRequest{\n\t\tVerbose: false,\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn resp.GetStatus(), nil\n}\n\n// StatusVerbose returns the status of the runtime along with verbose runtime information.\nfunc (w *extendedServiceRuntimeWrapper) StatusVerbose() (*criruntimev1alpha2.RuntimeStatus, map[string]string, error) {\n\tctx, cancel := context.WithTimeout(w.ctx, w.timeout)\n\tdefer cancel()\n\tresp, err := w.rs.Status(ctx, &criruntimev1alpha2.StatusRequest{\n\t\tVerbose: true,\n\t})\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\treturn resp.GetStatus(), resp.GetInfo(), nil\n}\n"
  },
  {
    "path": "utils/cri/cri_runtime_wrapper_test.go",
    "content": "package cri\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"reflect\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/golang/mock/gomock\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cri/mockcri\"\n\tcriruntimev1alpha2 \"k8s.io/cri-api/pkg/apis/runtime/v1alpha2\"\n)\n\nfunc TestNewCRIExtendedRuntimeServiceWrapper(t *testing.T) {\n\ttype args struct {\n\t\tctx     context.Context\n\t\ttimeout time.Duration\n\t\tclient  criruntimev1alpha2.RuntimeServiceClient\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\twant    ExtendedRuntimeService\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"client is nil\",\n\t\t\targs: args{\n\t\t\t\tctx:     context.Background(),\n\t\t\t\ttimeout: connectTimeout,\n\t\t\t\tclient:  nil,\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"timeout is 0\",\n\t\t\targs: args{\n\t\t\t\tctx:     context.Background(),\n\t\t\t\ttimeout: 0,\n\t\t\t\tclient:  criruntimev1alpha2.NewRuntimeServiceClient(nil),\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tctx:     context.Background(),\n\t\t\t\ttimeout: connectTimeout,\n\t\t\t\tclient:  criruntimev1alpha2.NewRuntimeServiceClient(nil),\n\t\t\t},\n\t\t\twant: &extendedServiceRuntimeWrapper{\n\t\t\t\tctx:     context.Background(),\n\t\t\t\ttimeout: connectTimeout,\n\t\t\t\trs:      criruntimev1alpha2.NewRuntimeServiceClient(nil),\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tgot, err := NewCRIExtendedRuntimeServiceWrapper(tt.args.ctx, tt.args.timeout, tt.args.client)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"NewCRIExtendedRuntimeServiceWrapper() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"NewCRIExtendedRuntimeServiceWrapper() = %v, want %v\", got, 
tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc newUnitTestCRIExtendedRuntimeServiceWrapper(t *testing.T) (*gomock.Controller, *mockcri.MockRuntimeServiceClient, context.CancelFunc, ExtendedRuntimeService) {\n\tctrl := gomock.NewController(t)\n\tclient := mockcri.NewMockRuntimeServiceClient(ctrl)\n\tctx, cancel := context.WithCancel(context.Background())\n\tw, err := NewCRIExtendedRuntimeServiceWrapper(ctx, connectTimeout, client)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn ctrl, client, cancel, w\n}\n\ntype prepareFunc func(*testing.T, *mockcri.MockRuntimeServiceClient)\n\nvar errMock = errors.New(\"mocked error has occurred\")\n\nfunc Test_extendedServiceRuntimeWrapper_Version(t *testing.T) {\n\ttype args struct {\n\t\tapiVersion string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\tprepare prepareFunc\n\t\twant    *criruntimev1alpha2.VersionResponse\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tapiVersion: \"version\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().Version(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.VersionRequest{Version: \"version\"}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tapiVersion: \"version\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().Version(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.VersionRequest{Version: \"version\"}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.VersionResponse{}, nil)\n\t\t\t},\n\t\t\twant:    &criruntimev1alpha2.VersionResponse{},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer 
cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.Version(tt.args.apiVersion)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.Version() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.Version() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_CreateContainer(t *testing.T) {\n\ttype args struct {\n\t\tpodSandboxID  string\n\t\tconfig        *criruntimev1alpha2.ContainerConfig\n\t\tsandboxConfig *criruntimev1alpha2.PodSandboxConfig\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\tprepare prepareFunc\n\t\twant    string\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tpodSandboxID:  \"sandboxID\",\n\t\t\t\tconfig:        &criruntimev1alpha2.ContainerConfig{},\n\t\t\t\tsandboxConfig: &criruntimev1alpha2.PodSandboxConfig{},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().CreateContainer(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.CreateContainerRequest{\n\t\t\t\t\t\tPodSandboxId:  \"sandboxID\",\n\t\t\t\t\t\tConfig:        &criruntimev1alpha2.ContainerConfig{},\n\t\t\t\t\t\tSandboxConfig: &criruntimev1alpha2.PodSandboxConfig{},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tpodSandboxID:  \"sandboxID\",\n\t\t\t\tconfig:        &criruntimev1alpha2.ContainerConfig{},\n\t\t\t\tsandboxConfig: &criruntimev1alpha2.PodSandboxConfig{},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) 
{\n\t\t\t\tc.EXPECT().CreateContainer(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.CreateContainerRequest{\n\t\t\t\t\t\tPodSandboxId:  \"sandboxID\",\n\t\t\t\t\t\tConfig:        &criruntimev1alpha2.ContainerConfig{},\n\t\t\t\t\t\tSandboxConfig: &criruntimev1alpha2.PodSandboxConfig{},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.CreateContainerResponse{\n\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant:    \"containerID\",\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.CreateContainer(tt.args.podSandboxID, tt.args.config, tt.args.sandboxConfig)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.CreateContainer() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif got != tt.want {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.CreateContainer() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_StartContainer(t *testing.T) {\n\ttype args struct {\n\t\tcontainerID string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\targs    args\n\t\tprepare prepareFunc\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().StartContainer(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.StartContainerRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: 
args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().StartContainer(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.StartContainerRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.StartContainerResponse{}, nil)\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tif err := w.StartContainer(tt.args.containerID); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.StartContainer() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_StopContainer(t *testing.T) {\n\ttype args struct {\n\t\tcontainerID string\n\t\ttimeout     int64\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t\ttimeout:     42,\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().StopContainer(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.StopContainerRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tTimeout:     42,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t\ttimeout:     42,\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) 
{\n\t\t\t\tc.EXPECT().StopContainer(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.StopContainerRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tTimeout:     42,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.StopContainerResponse{}, nil)\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tif err := w.StopContainer(tt.args.containerID, tt.args.timeout); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.StopContainer() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_RemoveContainer(t *testing.T) {\n\ttype args struct {\n\t\tcontainerID string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().RemoveContainer(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.RemoveContainerRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().RemoveContainer(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.RemoveContainerRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.RemoveContainerResponse{}, 
nil)\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tif err := w.RemoveContainer(tt.args.containerID); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.RemoveContainer() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_ListContainers(t *testing.T) {\n\ttype args struct {\n\t\tfilter *criruntimev1alpha2.ContainerFilter\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twant    []*criruntimev1alpha2.Container\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tfilter: &criruntimev1alpha2.ContainerFilter{\n\t\t\t\t\tLabelSelector: map[string]string{\n\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ListContainers(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ListContainersRequest{\n\t\t\t\t\t\tFilter: &criruntimev1alpha2.ContainerFilter{\n\t\t\t\t\t\t\tLabelSelector: map[string]string{\n\t\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tfilter: &criruntimev1alpha2.ContainerFilter{\n\t\t\t\t\tLabelSelector: map[string]string{\n\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) 
{\n\t\t\t\tc.EXPECT().ListContainers(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ListContainersRequest{\n\t\t\t\t\t\tFilter: &criruntimev1alpha2.ContainerFilter{\n\t\t\t\t\t\t\tLabelSelector: map[string]string{\n\t\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.ListContainersResponse{\n\t\t\t\t\tContainers: []*criruntimev1alpha2.Container{\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tId: \"one\",\n\t\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tId: \"two\",\n\t\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: []*criruntimev1alpha2.Container{\n\t\t\t\t{\n\t\t\t\t\tId: \"one\",\n\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tId: \"two\",\n\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.ListContainers(tt.args.filter)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ListContainers() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ListContainers() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_ContainerStatus(t *testing.T) {\n\ttype args struct {\n\t\tcontainerID string\n\t}\n\ttests := []struct {\n\t\tname    
string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twant    *criruntimev1alpha2.ContainerStatus\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ContainerStatus(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ContainerStatusRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tVerbose:     false,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ContainerStatus(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ContainerStatusRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tVerbose:     false,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.ContainerStatusResponse{\n\t\t\t\t\tStatus: &criruntimev1alpha2.ContainerStatus{\n\t\t\t\t\t\tId:    \"containerID\",\n\t\t\t\t\t\tState: criruntimev1alpha2.ContainerState_CONTAINER_RUNNING,\n\t\t\t\t\t},\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: &criruntimev1alpha2.ContainerStatus{\n\t\t\t\tId:    \"containerID\",\n\t\t\t\tState: criruntimev1alpha2.ContainerState_CONTAINER_RUNNING,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.ContainerStatus(tt.args.containerID)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ContainerStatus() error = %v, wantErr %v\", err, 
tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ContainerStatus() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_ContainerStatusVerbose(t *testing.T) {\n\n\ttype args struct {\n\t\tcontainerID string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twant    *criruntimev1alpha2.ContainerStatus\n\t\twant1   map[string]string\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ContainerStatus(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ContainerStatusRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tVerbose:     true,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ContainerStatus(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ContainerStatusRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tVerbose:     true,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.ContainerStatusResponse{\n\t\t\t\t\tStatus: &criruntimev1alpha2.ContainerStatus{\n\t\t\t\t\t\tId:    \"containerID\",\n\t\t\t\t\t\tState: criruntimev1alpha2.ContainerState_CONTAINER_RUNNING,\n\t\t\t\t\t},\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: &criruntimev1alpha2.ContainerStatus{\n\t\t\t\tId:    \"containerID\",\n\t\t\t\tState: criruntimev1alpha2.ContainerState_CONTAINER_RUNNING,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every 
test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, got1, err := w.ContainerStatusVerbose(tt.args.containerID)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ContainerStatusVerbose() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ContainerStatusVerbose() got = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got1, tt.want1) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ContainerStatusVerbose() got1 = %v, want %v\", got1, tt.want1)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_UpdateContainerResources(t *testing.T) {\n\n\ttype args struct {\n\t\tcontainerID string\n\t\tresources   *criruntimev1alpha2.LinuxContainerResources\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t\tresources: &criruntimev1alpha2.LinuxContainerResources{\n\t\t\t\t\tCpuPeriod: 42,\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().UpdateContainerResources(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.UpdateContainerResourcesRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tLinux: &criruntimev1alpha2.LinuxContainerResources{\n\t\t\t\t\t\t\tCpuPeriod: 42,\n\t\t\t\t\t\t},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t\tresources: &criruntimev1alpha2.LinuxContainerResources{\n\t\t\t\t\tCpuPeriod: 
42,\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().UpdateContainerResources(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.UpdateContainerResourcesRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tLinux: &criruntimev1alpha2.LinuxContainerResources{\n\t\t\t\t\t\t\tCpuPeriod: 42,\n\t\t\t\t\t\t},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.UpdateContainerResourcesResponse{}, nil)\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tif err := w.UpdateContainerResources(tt.args.containerID, tt.args.resources); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.UpdateContainerResources() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_ExecSync(t *testing.T) {\n\n\ttype args struct {\n\t\tcontainerID string\n\t\tcmd         []string\n\t\ttimeout     time.Duration\n\t}\n\ttests := []struct {\n\t\tname       string\n\t\tprepare    prepareFunc\n\t\targs       args\n\t\twantStdout []byte\n\t\twantStderr []byte\n\t\twantErr    bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t\tcmd:         []string{\"/bin/bash\", \"-c\", \"echo hello world\"},\n\t\t\t\ttimeout:     time.Minute * 1,\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ExecSync(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ExecSyncRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tCmd:         []string{\"/bin/bash\", \"-c\", \"echo hello 
world\"},\n\t\t\t\t\t\tTimeout:     int64((time.Minute * 1).Seconds()),\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twantStdout: nil,\n\t\t\twantStderr: nil,\n\t\t\twantErr:    true,\n\t\t},\n\t\t{\n\t\t\tname: \"command execution failed\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t\tcmd:         []string{\"/bin/bash\", \"-c\", \"echo hello world\"},\n\t\t\t\ttimeout:     time.Minute * 1,\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ExecSync(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ExecSyncRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tCmd:         []string{\"/bin/bash\", \"-c\", \"echo hello world\"},\n\t\t\t\t\t\tTimeout:     int64((time.Minute * 1).Seconds()),\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.ExecSyncResponse{\n\t\t\t\t\tStdout:   []byte(\"stdout\"),\n\t\t\t\t\tStderr:   []byte(\"stderr\"),\n\t\t\t\t\tExitCode: 42,\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twantStdout: []byte(\"stdout\"),\n\t\t\twantStderr: []byte(\"stderr\"),\n\t\t\twantErr:    true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t\tcmd:         []string{\"/bin/bash\", \"-c\", \"echo hello world\"},\n\t\t\t\ttimeout:     time.Minute * 1,\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ExecSync(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ExecSyncRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tCmd:         []string{\"/bin/bash\", \"-c\", \"echo hello world\"},\n\t\t\t\t\t\tTimeout:     int64((time.Minute * 1).Seconds()),\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.ExecSyncResponse{\n\t\t\t\t\tStdout:   []byte(\"stdout\"),\n\t\t\t\t\tStderr:   []byte(\"stderr\"),\n\t\t\t\t\tExitCode: 0,\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twantStdout: 
[]byte(\"stdout\"),\n\t\t\twantStderr: []byte(\"stderr\"),\n\t\t\twantErr:    false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgotStdout, gotStderr, err := w.ExecSync(tt.args.containerID, tt.args.cmd, tt.args.timeout)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ExecSync() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(gotStdout, tt.wantStdout) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ExecSync() gotStdout = %v, want %v\", gotStdout, tt.wantStdout)\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(gotStderr, tt.wantStderr) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ExecSync() gotStderr = %v, want %v\", gotStderr, tt.wantStderr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_Exec(t *testing.T) {\n\ttype args struct {\n\t\treq *criruntimev1alpha2.ExecRequest\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twant    *criruntimev1alpha2.ExecResponse\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\treq: &criruntimev1alpha2.ExecRequest{\n\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\tCmd:         []string{\"/bin/bash\", \"-c\", \"echo hello world\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().Exec(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ExecRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tCmd:         []string{\"/bin/bash\", \"-c\", \"echo hello world\"},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: 
true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\treq: &criruntimev1alpha2.ExecRequest{\n\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\tCmd:         []string{\"/bin/bash\", \"-c\", \"echo hello world\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().Exec(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ExecRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tCmd:         []string{\"/bin/bash\", \"-c\", \"echo hello world\"},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.ExecResponse{\n\t\t\t\t\tUrl: \"pick up status of exec request here\",\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: &criruntimev1alpha2.ExecResponse{\n\t\t\t\tUrl: \"pick up status of exec request here\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.Exec(tt.args.req)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.Exec() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.Exec() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_Attach(t *testing.T) {\n\ttype args struct {\n\t\treq *criruntimev1alpha2.AttachRequest\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twant    *criruntimev1alpha2.AttachResponse\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\treq: &criruntimev1alpha2.AttachRequest{\n\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\tStdout:      
true,\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().Attach(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.AttachRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tStdout:      true,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\treq: &criruntimev1alpha2.AttachRequest{\n\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\tStdout:      true,\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().Attach(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.AttachRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t\tStdout:      true,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.AttachResponse{\n\t\t\t\t\tUrl: \"pick up status of attach request here\",\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: &criruntimev1alpha2.AttachResponse{\n\t\t\t\tUrl: \"pick up status of attach request here\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.Attach(tt.args.req)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.Attach() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.Attach() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_ReopenContainerLog(t *testing.T) {\n\ttype args struct {\n\t\tcontainerID string\n\t}\n\ttests := []struct 
{\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ReopenContainerLog(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ReopenContainerLogRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ReopenContainerLog(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ReopenContainerLogRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.ReopenContainerLogResponse{}, nil)\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tif err := w.ReopenContainerLog(tt.args.containerID); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ReopenContainerLog() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_RunPodSandbox(t *testing.T) {\n\ttype args struct {\n\t\tconfig         *criruntimev1alpha2.PodSandboxConfig\n\t\truntimeHandler string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twant    string\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tconfig: &criruntimev1alpha2.PodSandboxConfig{\n\t\t\t\t\tMetadata: 
&criruntimev1alpha2.PodSandboxMetadata{\n\t\t\t\t\t\tName:      \"pod-name\",\n\t\t\t\t\t\tNamespace: \"default\",\n\t\t\t\t\t\tUid:       \"b924b248-6395-4415-8603-4f3562e44418\",\n\t\t\t\t\t\tAttempt:   0,\n\t\t\t\t\t},\n\t\t\t\t\tHostname: \"sandboxHostname\",\n\t\t\t\t},\n\t\t\t\truntimeHandler: \"runtimeHandler\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().RunPodSandbox(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.RunPodSandboxRequest{\n\t\t\t\t\t\tRuntimeHandler: \"runtimeHandler\",\n\t\t\t\t\t\tConfig: &criruntimev1alpha2.PodSandboxConfig{\n\t\t\t\t\t\t\tMetadata: &criruntimev1alpha2.PodSandboxMetadata{\n\t\t\t\t\t\t\t\tName:      \"pod-name\",\n\t\t\t\t\t\t\t\tNamespace: \"default\",\n\t\t\t\t\t\t\t\tUid:       \"b924b248-6395-4415-8603-4f3562e44418\",\n\t\t\t\t\t\t\t\tAttempt:   0,\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\tHostname: \"sandboxHostname\",\n\t\t\t\t\t\t},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    \"\",\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tconfig: &criruntimev1alpha2.PodSandboxConfig{\n\t\t\t\t\tMetadata: &criruntimev1alpha2.PodSandboxMetadata{\n\t\t\t\t\t\tName:      \"pod-name\",\n\t\t\t\t\t\tNamespace: \"default\",\n\t\t\t\t\t\tUid:       \"b924b248-6395-4415-8603-4f3562e44418\",\n\t\t\t\t\t\tAttempt:   0,\n\t\t\t\t\t},\n\t\t\t\t\tHostname: \"sandboxHostname\",\n\t\t\t\t},\n\t\t\t\truntimeHandler: \"runtimeHandler\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().RunPodSandbox(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.RunPodSandboxRequest{\n\t\t\t\t\t\tRuntimeHandler: \"runtimeHandler\",\n\t\t\t\t\t\tConfig: &criruntimev1alpha2.PodSandboxConfig{\n\t\t\t\t\t\t\tMetadata: &criruntimev1alpha2.PodSandboxMetadata{\n\t\t\t\t\t\t\t\tName:      \"pod-name\",\n\t\t\t\t\t\t\t\tNamespace: 
\"default\",\n\t\t\t\t\t\t\t\tUid:       \"b924b248-6395-4415-8603-4f3562e44418\",\n\t\t\t\t\t\t\t\tAttempt:   0,\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\tHostname: \"sandboxHostname\",\n\t\t\t\t\t\t},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.RunPodSandboxResponse{\n\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant:    \"podSandboxID\",\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.RunPodSandbox(tt.args.config, tt.args.runtimeHandler)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.RunPodSandbox() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif got != tt.want {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.RunPodSandbox() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_StopPodSandbox(t *testing.T) {\n\ttype args struct {\n\t\tpodSandboxID string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tpodSandboxID: \"podSandboxID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().StopPodSandbox(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.StopPodSandboxRequest{\n\t\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tpodSandboxID: \"podSandboxID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) 
{\n\t\t\t\tc.EXPECT().StopPodSandbox(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.StopPodSandboxRequest{\n\t\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.StopPodSandboxResponse{}, nil)\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tif err := w.StopPodSandbox(tt.args.podSandboxID); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.StopPodSandbox() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_RemovePodSandbox(t *testing.T) {\n\ttype args struct {\n\t\tpodSandboxID string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tpodSandboxID: \"podSandboxID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().RemovePodSandbox(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.RemovePodSandboxRequest{\n\t\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tpodSandboxID: \"podSandboxID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().RemovePodSandbox(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.RemovePodSandboxRequest{\n\t\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.RemovePodSandboxResponse{}, nil)\n\t\t\t},\n\t\t\twantErr: 
false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tif err := w.RemovePodSandbox(tt.args.podSandboxID); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.RemovePodSandbox() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_PodSandboxStatus(t *testing.T) {\n\ttype args struct {\n\t\tpodSandboxID string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twant    *criruntimev1alpha2.PodSandboxStatus\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tpodSandboxID: \"podSandboxID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().PodSandboxStatus(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.PodSandboxStatusRequest{\n\t\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t\t\tVerbose:      false,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tpodSandboxID: \"podSandboxID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().PodSandboxStatus(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.PodSandboxStatusRequest{\n\t\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t\t\tVerbose:      false,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.PodSandboxStatusResponse{\n\t\t\t\t\tStatus: &criruntimev1alpha2.PodSandboxStatus{\n\t\t\t\t\t\tId:    \"podSandboxID\",\n\t\t\t\t\t\tState: 
criruntimev1alpha2.PodSandboxState_SANDBOX_READY,\n\t\t\t\t\t},\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: &criruntimev1alpha2.PodSandboxStatus{\n\t\t\t\tId:    \"podSandboxID\",\n\t\t\t\tState: criruntimev1alpha2.PodSandboxState_SANDBOX_READY,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.PodSandboxStatus(tt.args.podSandboxID)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.PodSandboxStatus() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.PodSandboxStatus() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_PodSandboxStatusVerbose(t *testing.T) {\n\n\ttype args struct {\n\t\tpodSandboxID string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twant    *criruntimev1alpha2.PodSandboxStatus\n\t\twant1   map[string]string\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tpodSandboxID: \"podSandboxID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().PodSandboxStatus(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.PodSandboxStatusRequest{\n\t\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t\t\tVerbose:      true,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tpodSandboxID: \"podSandboxID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) 
{\n\t\t\t\tc.EXPECT().PodSandboxStatus(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.PodSandboxStatusRequest{\n\t\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t\t\tVerbose:      true,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.PodSandboxStatusResponse{\n\t\t\t\t\tStatus: &criruntimev1alpha2.PodSandboxStatus{\n\t\t\t\t\t\tId:    \"podSandboxID\",\n\t\t\t\t\t\tState: criruntimev1alpha2.PodSandboxState_SANDBOX_READY,\n\t\t\t\t\t},\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: &criruntimev1alpha2.PodSandboxStatus{\n\t\t\t\tId:    \"podSandboxID\",\n\t\t\t\tState: criruntimev1alpha2.PodSandboxState_SANDBOX_READY,\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, got1, err := w.PodSandboxStatusVerbose(tt.args.podSandboxID)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.PodSandboxStatusVerbose() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.PodSandboxStatusVerbose() got = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got1, tt.want1) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.PodSandboxStatusVerbose() got1 = %v, want %v\", got1, tt.want1)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_ListPodSandbox(t *testing.T) {\n\ttype args struct {\n\t\tfilter *criruntimev1alpha2.PodSandboxFilter\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twant    []*criruntimev1alpha2.PodSandbox\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tfilter: 
&criruntimev1alpha2.PodSandboxFilter{\n\t\t\t\t\tLabelSelector: map[string]string{\n\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ListPodSandbox(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ListPodSandboxRequest{\n\t\t\t\t\t\tFilter: &criruntimev1alpha2.PodSandboxFilter{\n\t\t\t\t\t\t\tLabelSelector: map[string]string{\n\t\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tfilter: &criruntimev1alpha2.PodSandboxFilter{\n\t\t\t\t\tLabelSelector: map[string]string{\n\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ListPodSandbox(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ListPodSandboxRequest{\n\t\t\t\t\t\tFilter: &criruntimev1alpha2.PodSandboxFilter{\n\t\t\t\t\t\t\tLabelSelector: map[string]string{\n\t\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.ListPodSandboxResponse{\n\t\t\t\t\tItems: []*criruntimev1alpha2.PodSandbox{\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tId:    \"one\",\n\t\t\t\t\t\t\tState: criruntimev1alpha2.PodSandboxState_SANDBOX_READY,\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tId:    \"two\",\n\t\t\t\t\t\t\tState: criruntimev1alpha2.PodSandboxState_SANDBOX_NOTREADY,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: []*criruntimev1alpha2.PodSandbox{\n\t\t\t\t{\n\t\t\t\t\tId:    \"one\",\n\t\t\t\t\tState: criruntimev1alpha2.PodSandboxState_SANDBOX_READY,\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tId:    \"two\",\n\t\t\t\t\tState: criruntimev1alpha2.PodSandboxState_SANDBOX_NOTREADY,\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: 
false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.ListPodSandbox(tt.args.filter)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ListPodSandbox() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ListPodSandbox() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_PortForward(t *testing.T) {\n\ttype args struct {\n\t\treq *criruntimev1alpha2.PortForwardRequest\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twant    *criruntimev1alpha2.PortForwardResponse\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\treq: &criruntimev1alpha2.PortForwardRequest{\n\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t\tPort:         []int32{42},\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().PortForward(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.PortForwardRequest{\n\t\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t\t\tPort:         []int32{42},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\treq: &criruntimev1alpha2.PortForwardRequest{\n\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t\tPort:         []int32{42},\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) 
{\n\t\t\t\tc.EXPECT().PortForward(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.PortForwardRequest{\n\t\t\t\t\t\tPodSandboxId: \"podSandboxID\",\n\t\t\t\t\t\tPort:         []int32{42},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.PortForwardResponse{\n\t\t\t\t\tUrl: \"pick up port forward request here\",\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: &criruntimev1alpha2.PortForwardResponse{\n\t\t\t\tUrl: \"pick up port forward request here\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.PortForward(tt.args.req)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.PortForward() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.PortForward() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_ContainerStats(t *testing.T) {\n\ttype args struct {\n\t\tcontainerID string\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twant    *criruntimev1alpha2.ContainerStats\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ContainerStats(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ContainerStatsRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: 
args{\n\t\t\t\tcontainerID: \"containerID\",\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ContainerStats(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ContainerStatsRequest{\n\t\t\t\t\t\tContainerId: \"containerID\",\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.ContainerStatsResponse{\n\t\t\t\t\tStats: &criruntimev1alpha2.ContainerStats{\n\t\t\t\t\t\tAttributes: &criruntimev1alpha2.ContainerAttributes{\n\t\t\t\t\t\t\tId: \"containerID\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: &criruntimev1alpha2.ContainerStats{\n\t\t\t\tAttributes: &criruntimev1alpha2.ContainerAttributes{\n\t\t\t\t\tId: \"containerID\",\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.ContainerStats(tt.args.containerID)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ContainerStats() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ContainerStats() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_ListContainerStats(t *testing.T) {\n\ttype args struct {\n\t\tfilter *criruntimev1alpha2.ContainerStatsFilter\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twant    []*criruntimev1alpha2.ContainerStats\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\tfilter: &criruntimev1alpha2.ContainerStatsFilter{\n\t\t\t\t\tLabelSelector: map[string]string{\n\t\t\t\t\t\t\"a\": 
\"b\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ListContainerStats(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ListContainerStatsRequest{\n\t\t\t\t\t\tFilter: &criruntimev1alpha2.ContainerStatsFilter{\n\t\t\t\t\t\t\tLabelSelector: map[string]string{\n\t\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\tfilter: &criruntimev1alpha2.ContainerStatsFilter{\n\t\t\t\t\tLabelSelector: map[string]string{\n\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().ListContainerStats(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.ListContainerStatsRequest{\n\t\t\t\t\t\tFilter: &criruntimev1alpha2.ContainerStatsFilter{\n\t\t\t\t\t\t\tLabelSelector: map[string]string{\n\t\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.ListContainerStatsResponse{\n\t\t\t\t\tStats: []*criruntimev1alpha2.ContainerStats{\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tAttributes: &criruntimev1alpha2.ContainerAttributes{\n\t\t\t\t\t\t\t\tId: \"one\",\n\t\t\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tAttributes: &criruntimev1alpha2.ContainerAttributes{\n\t\t\t\t\t\t\t\tId: \"two\",\n\t\t\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: []*criruntimev1alpha2.ContainerStats{\n\t\t\t\t{\n\t\t\t\t\tAttributes: &criruntimev1alpha2.ContainerAttributes{\n\t\t\t\t\t\tId: \"one\",\n\t\t\t\t\t\tLabels: 
map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tAttributes: &criruntimev1alpha2.ContainerAttributes{\n\t\t\t\t\t\tId: \"two\",\n\t\t\t\t\t\tLabels: map[string]string{\n\t\t\t\t\t\t\t\"a\": \"b\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.ListContainerStats(tt.args.filter)\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ListContainerStats() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.ListContainerStats() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_UpdateRuntimeConfig(t *testing.T) {\n\ttype args struct {\n\t\truntimeConfig *criruntimev1alpha2.RuntimeConfig\n\t}\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\targs    args\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\targs: args{\n\t\t\t\truntimeConfig: &criruntimev1alpha2.RuntimeConfig{\n\t\t\t\t\tNetworkConfig: &criruntimev1alpha2.NetworkConfig{\n\t\t\t\t\t\tPodCidr: \"10.10.10.0/24\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().UpdateRuntimeConfig(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.UpdateRuntimeConfigRequest{\n\t\t\t\t\t\tRuntimeConfig: &criruntimev1alpha2.RuntimeConfig{\n\t\t\t\t\t\t\tNetworkConfig: &criruntimev1alpha2.NetworkConfig{\n\t\t\t\t\t\t\t\tPodCidr: 
\"10.10.10.0/24\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\targs: args{\n\t\t\t\truntimeConfig: &criruntimev1alpha2.RuntimeConfig{\n\t\t\t\t\tNetworkConfig: &criruntimev1alpha2.NetworkConfig{\n\t\t\t\t\t\tPodCidr: \"10.10.10.0/24\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().UpdateRuntimeConfig(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.UpdateRuntimeConfigRequest{\n\t\t\t\t\t\tRuntimeConfig: &criruntimev1alpha2.RuntimeConfig{\n\t\t\t\t\t\t\tNetworkConfig: &criruntimev1alpha2.NetworkConfig{\n\t\t\t\t\t\t\t\tPodCidr: \"10.10.10.0/24\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.UpdateRuntimeConfigResponse{}, nil)\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tif err := w.UpdateRuntimeConfig(tt.args.runtimeConfig); (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.UpdateRuntimeConfig() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_Status(t *testing.T) {\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\twant    *criruntimev1alpha2.RuntimeStatus\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().Status(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.StatusRequest{\n\t\t\t\t\t\tVerbose: false,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, 
errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().Status(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.StatusRequest{\n\t\t\t\t\t\tVerbose: false,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.StatusResponse{\n\t\t\t\t\tStatus: &criruntimev1alpha2.RuntimeStatus{\n\t\t\t\t\t\tConditions: []*criruntimev1alpha2.RuntimeCondition{\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tType:    \"RuntimeReady\",\n\t\t\t\t\t\t\t\tStatus:  true,\n\t\t\t\t\t\t\t\tReason:  \"\",\n\t\t\t\t\t\t\t\tMessage: \"\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tType:    \"NetworkReady\",\n\t\t\t\t\t\t\t\tStatus:  true,\n\t\t\t\t\t\t\t\tReason:  \"\",\n\t\t\t\t\t\t\t\tMessage: \"\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: &criruntimev1alpha2.RuntimeStatus{\n\t\t\t\tConditions: []*criruntimev1alpha2.RuntimeCondition{\n\t\t\t\t\t{\n\t\t\t\t\t\tType:    \"RuntimeReady\",\n\t\t\t\t\t\tStatus:  true,\n\t\t\t\t\t\tReason:  \"\",\n\t\t\t\t\t\tMessage: \"\",\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tType:    \"NetworkReady\",\n\t\t\t\t\t\tStatus:  true,\n\t\t\t\t\t\tReason:  \"\",\n\t\t\t\t\t\tMessage: \"\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, err := w.Status()\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.Status() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.Status() = %v, 
want %v\", got, tt.want)\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc Test_extendedServiceRuntimeWrapper_StatusVerbose(t *testing.T) {\n\ttests := []struct {\n\t\tname    string\n\t\tprepare prepareFunc\n\t\twant    *criruntimev1alpha2.RuntimeStatus\n\t\twant1   map[string]string\n\t\twantErr bool\n\t}{\n\t\t{\n\t\t\tname: \"error\",\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().Status(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.StatusRequest{\n\t\t\t\t\t\tVerbose: true,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(nil, errMock)\n\t\t\t},\n\t\t\twant:    nil,\n\t\t\twantErr: true,\n\t\t},\n\t\t{\n\t\t\tname: \"success\",\n\t\t\tprepare: func(t *testing.T, c *mockcri.MockRuntimeServiceClient) {\n\t\t\t\tc.EXPECT().Status(\n\t\t\t\t\tgomock.Any(),\n\t\t\t\t\tgomock.Eq(&criruntimev1alpha2.StatusRequest{\n\t\t\t\t\t\tVerbose: true,\n\t\t\t\t\t}),\n\t\t\t\t).Times(1).Return(&criruntimev1alpha2.StatusResponse{\n\t\t\t\t\tStatus: &criruntimev1alpha2.RuntimeStatus{\n\t\t\t\t\t\tConditions: []*criruntimev1alpha2.RuntimeCondition{\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tType:    \"RuntimeReady\",\n\t\t\t\t\t\t\t\tStatus:  true,\n\t\t\t\t\t\t\t\tReason:  \"\",\n\t\t\t\t\t\t\t\tMessage: \"\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tType:    \"NetworkReady\",\n\t\t\t\t\t\t\t\tStatus:  true,\n\t\t\t\t\t\t\t\tReason:  \"\",\n\t\t\t\t\t\t\t\tMessage: \"\",\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tInfo: map[string]string{\n\t\t\t\t\t\t\"verbose\": \"output\",\n\t\t\t\t\t},\n\t\t\t\t}, nil)\n\t\t\t},\n\t\t\twant: &criruntimev1alpha2.RuntimeStatus{\n\t\t\t\tConditions: []*criruntimev1alpha2.RuntimeCondition{\n\t\t\t\t\t{\n\t\t\t\t\t\tType:    \"RuntimeReady\",\n\t\t\t\t\t\tStatus:  true,\n\t\t\t\t\t\tReason:  \"\",\n\t\t\t\t\t\tMessage: \"\",\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tType:    \"NetworkReady\",\n\t\t\t\t\t\tStatus:  true,\n\t\t\t\t\t\tReason:  \"\",\n\t\t\t\t\t\tMessage: 
\"\",\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\twant1: map[string]string{\n\t\t\t\t\"verbose\": \"output\",\n\t\t\t},\n\t\t\twantErr: false,\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\t// the same in every test\n\t\t\tctrl, client, cancel, w := newUnitTestCRIExtendedRuntimeServiceWrapper(t)\n\t\t\tdefer cancel()\n\t\t\tdefer ctrl.Finish()\n\t\t\tif tt.prepare != nil {\n\t\t\t\ttt.prepare(t, client)\n\t\t\t}\n\t\t\tgot, got1, err := w.StatusVerbose()\n\t\t\tif (err != nil) != tt.wantErr {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.StatusVerbose() error = %v, wantErr %v\", err, tt.wantErr)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got, tt.want) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.StatusVerbose() got = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t\tif !reflect.DeepEqual(got1, tt.want1) {\n\t\t\t\tt.Errorf(\"extendedServiceRuntimeWrapper.StatusVerbose() got1 = %v, want %v\", got1, tt.want1)\n\t\t\t}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "utils/cri/interface.go",
    "content": "package cri\n\nimport (\n\tcriapi \"k8s.io/cri-api/pkg/apis\"\n\tcriruntimev1alpha2 \"k8s.io/cri-api/pkg/apis/runtime/v1alpha2\"\n)\n\n// ExtendedRuntimeService extends the CRI RuntimeService by some verbose functions that are otherwise inaccessible\ntype ExtendedRuntimeService interface {\n\tcriapi.RuntimeService\n\tContainerStatusVerbose(containerID string) (*criruntimev1alpha2.ContainerStatus, map[string]string, error)\n\tPodSandboxStatusVerbose(podSandboxID string) (*criruntimev1alpha2.PodSandboxStatus, map[string]string, error)\n\tStatusVerbose() (*criruntimev1alpha2.RuntimeStatus, map[string]string, error)\n}\n"
  },
  {
    "path": "utils/cri/mockcri/mock_runtime_service.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: k8s.io/cri-api/pkg/apis/runtime/v1alpha2 (interfaces: RuntimeServiceClient)\n\n// Package mockcri is a generated GoMock package.\npackage mockcri\n\nimport (\n\tcontext \"context\"\n\treflect \"reflect\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tgrpc \"google.golang.org/grpc\"\n\tv1alpha2 \"k8s.io/cri-api/pkg/apis/runtime/v1alpha2\"\n)\n\n// MockRuntimeServiceClient is a mock of RuntimeServiceClient interface\n// nolint\ntype MockRuntimeServiceClient struct {\n\tctrl     *gomock.Controller\n\trecorder *MockRuntimeServiceClientMockRecorder\n}\n\n// MockRuntimeServiceClientMockRecorder is the mock recorder for MockRuntimeServiceClient\n// nolint\ntype MockRuntimeServiceClientMockRecorder struct {\n\tmock *MockRuntimeServiceClient\n}\n\n// NewMockRuntimeServiceClient creates a new mock instance\n// nolint\nfunc NewMockRuntimeServiceClient(ctrl *gomock.Controller) *MockRuntimeServiceClient {\n\tmock := &MockRuntimeServiceClient{ctrl: ctrl}\n\tmock.recorder = &MockRuntimeServiceClientMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockRuntimeServiceClient) EXPECT() *MockRuntimeServiceClientMockRecorder {\n\treturn m.recorder\n}\n\n// Attach mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) Attach(arg0 context.Context, arg1 *v1alpha2.AttachRequest, arg2 ...grpc.CallOption) (*v1alpha2.AttachResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"Attach\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.AttachResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Attach indicates an expected call of Attach\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) Attach(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs 
:= append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Attach\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).Attach), varargs...)\n}\n\n// ContainerStats mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) ContainerStats(arg0 context.Context, arg1 *v1alpha2.ContainerStatsRequest, arg2 ...grpc.CallOption) (*v1alpha2.ContainerStatsResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"ContainerStats\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.ContainerStatsResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerStats indicates an expected call of ContainerStats\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) ContainerStats(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStats\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).ContainerStats), varargs...)\n}\n\n// ContainerStatus mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) ContainerStatus(arg0 context.Context, arg1 *v1alpha2.ContainerStatusRequest, arg2 ...grpc.CallOption) (*v1alpha2.ContainerStatusResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"ContainerStatus\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.ContainerStatusResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerStatus indicates an expected call of ContainerStatus\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) ContainerStatus(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStatus\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).ContainerStatus), varargs...)\n}\n\n// CreateContainer mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) CreateContainer(arg0 context.Context, arg1 *v1alpha2.CreateContainerRequest, arg2 ...grpc.CallOption) (*v1alpha2.CreateContainerResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"CreateContainer\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.CreateContainerResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// CreateContainer indicates an expected call of CreateContainer\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) CreateContainer(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateContainer\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).CreateContainer), varargs...)\n}\n\n// Exec mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) Exec(arg0 context.Context, arg1 *v1alpha2.ExecRequest, arg2 ...grpc.CallOption) (*v1alpha2.ExecResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"Exec\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.ExecResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Exec indicates an expected call of Exec\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) Exec(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Exec\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).Exec), varargs...)\n}\n\n// ExecSync 
mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) ExecSync(arg0 context.Context, arg1 *v1alpha2.ExecSyncRequest, arg2 ...grpc.CallOption) (*v1alpha2.ExecSyncResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"ExecSync\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.ExecSyncResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ExecSync indicates an expected call of ExecSync\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) ExecSync(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ExecSync\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).ExecSync), varargs...)\n}\n\n// ListContainerStats mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) ListContainerStats(arg0 context.Context, arg1 *v1alpha2.ListContainerStatsRequest, arg2 ...grpc.CallOption) (*v1alpha2.ListContainerStatsResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"ListContainerStats\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.ListContainerStatsResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ListContainerStats indicates an expected call of ListContainerStats\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) ListContainerStats(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ListContainerStats\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).ListContainerStats), varargs...)\n}\n\n// ListContainers mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) 
ListContainers(arg0 context.Context, arg1 *v1alpha2.ListContainersRequest, arg2 ...grpc.CallOption) (*v1alpha2.ListContainersResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"ListContainers\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.ListContainersResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ListContainers indicates an expected call of ListContainers\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) ListContainers(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ListContainers\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).ListContainers), varargs...)\n}\n\n// ListPodSandbox mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) ListPodSandbox(arg0 context.Context, arg1 *v1alpha2.ListPodSandboxRequest, arg2 ...grpc.CallOption) (*v1alpha2.ListPodSandboxResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"ListPodSandbox\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.ListPodSandboxResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ListPodSandbox indicates an expected call of ListPodSandbox\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) ListPodSandbox(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ListPodSandbox\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).ListPodSandbox), varargs...)\n}\n\n// PodSandboxStatus mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) PodSandboxStatus(arg0 context.Context, arg1 
*v1alpha2.PodSandboxStatusRequest, arg2 ...grpc.CallOption) (*v1alpha2.PodSandboxStatusResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"PodSandboxStatus\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.PodSandboxStatusResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PodSandboxStatus indicates an expected call of PodSandboxStatus\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) PodSandboxStatus(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PodSandboxStatus\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).PodSandboxStatus), varargs...)\n}\n\n// PortForward mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) PortForward(arg0 context.Context, arg1 *v1alpha2.PortForwardRequest, arg2 ...grpc.CallOption) (*v1alpha2.PortForwardResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"PortForward\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.PortForwardResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PortForward indicates an expected call of PortForward\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) PortForward(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PortForward\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).PortForward), varargs...)\n}\n\n// RemoveContainer mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) RemoveContainer(arg0 context.Context, arg1 *v1alpha2.RemoveContainerRequest, arg2 ...grpc.CallOption) 
(*v1alpha2.RemoveContainerResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"RemoveContainer\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.RemoveContainerResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// RemoveContainer indicates an expected call of RemoveContainer\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) RemoveContainer(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RemoveContainer\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).RemoveContainer), varargs...)\n}\n\n// RemovePodSandbox mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) RemovePodSandbox(arg0 context.Context, arg1 *v1alpha2.RemovePodSandboxRequest, arg2 ...grpc.CallOption) (*v1alpha2.RemovePodSandboxResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"RemovePodSandbox\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.RemovePodSandboxResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// RemovePodSandbox indicates an expected call of RemovePodSandbox\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) RemovePodSandbox(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RemovePodSandbox\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).RemovePodSandbox), varargs...)\n}\n\n// ReopenContainerLog mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) ReopenContainerLog(arg0 context.Context, arg1 *v1alpha2.ReopenContainerLogRequest, arg2 ...grpc.CallOption) 
(*v1alpha2.ReopenContainerLogResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"ReopenContainerLog\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.ReopenContainerLogResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ReopenContainerLog indicates an expected call of ReopenContainerLog\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) ReopenContainerLog(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ReopenContainerLog\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).ReopenContainerLog), varargs...)\n}\n\n// RunPodSandbox mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) RunPodSandbox(arg0 context.Context, arg1 *v1alpha2.RunPodSandboxRequest, arg2 ...grpc.CallOption) (*v1alpha2.RunPodSandboxResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"RunPodSandbox\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.RunPodSandboxResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// RunPodSandbox indicates an expected call of RunPodSandbox\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) RunPodSandbox(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RunPodSandbox\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).RunPodSandbox), varargs...)\n}\n\n// StartContainer mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) StartContainer(arg0 context.Context, arg1 *v1alpha2.StartContainerRequest, arg2 ...grpc.CallOption) (*v1alpha2.StartContainerResponse, error) 
{\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"StartContainer\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.StartContainerResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// StartContainer indicates an expected call of StartContainer\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) StartContainer(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"StartContainer\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).StartContainer), varargs...)\n}\n\n// Status mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) Status(arg0 context.Context, arg1 *v1alpha2.StatusRequest, arg2 ...grpc.CallOption) (*v1alpha2.StatusResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"Status\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.StatusResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Status indicates an expected call of Status\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) Status(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Status\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).Status), varargs...)\n}\n\n// StopContainer mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) StopContainer(arg0 context.Context, arg1 *v1alpha2.StopContainerRequest, arg2 ...grpc.CallOption) (*v1alpha2.StopContainerResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, 
\"StopContainer\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.StopContainerResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// StopContainer indicates an expected call of StopContainer\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) StopContainer(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"StopContainer\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).StopContainer), varargs...)\n}\n\n// StopPodSandbox mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) StopPodSandbox(arg0 context.Context, arg1 *v1alpha2.StopPodSandboxRequest, arg2 ...grpc.CallOption) (*v1alpha2.StopPodSandboxResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"StopPodSandbox\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.StopPodSandboxResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// StopPodSandbox indicates an expected call of StopPodSandbox\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) StopPodSandbox(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"StopPodSandbox\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).StopPodSandbox), varargs...)\n}\n\n// UpdateContainerResources mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) UpdateContainerResources(arg0 context.Context, arg1 *v1alpha2.UpdateContainerResourcesRequest, arg2 ...grpc.CallOption) (*v1alpha2.UpdateContainerResourcesResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"UpdateContainerResources\", 
varargs...)\n\tret0, _ := ret[0].(*v1alpha2.UpdateContainerResourcesResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// UpdateContainerResources indicates an expected call of UpdateContainerResources\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) UpdateContainerResources(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateContainerResources\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).UpdateContainerResources), varargs...)\n}\n\n// UpdateRuntimeConfig mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) UpdateRuntimeConfig(arg0 context.Context, arg1 *v1alpha2.UpdateRuntimeConfigRequest, arg2 ...grpc.CallOption) (*v1alpha2.UpdateRuntimeConfigResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, \"UpdateRuntimeConfig\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.UpdateRuntimeConfigResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// UpdateRuntimeConfig indicates an expected call of UpdateRuntimeConfig\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) UpdateRuntimeConfig(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateRuntimeConfig\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).UpdateRuntimeConfig), varargs...)\n}\n\n// Version mocks base method\n// nolint\nfunc (m *MockRuntimeServiceClient) Version(arg0 context.Context, arg1 *v1alpha2.VersionRequest, arg2 ...grpc.CallOption) (*v1alpha2.VersionResponse, error) {\n\tm.ctrl.T.Helper()\n\tvarargs := []interface{}{arg0, arg1}\n\tfor _, a := range arg2 {\n\t\tvarargs = append(varargs, a)\n\t}\n\tret := m.ctrl.Call(m, 
\"Version\", varargs...)\n\tret0, _ := ret[0].(*v1alpha2.VersionResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Version indicates an expected call of Version\n// nolint\nfunc (mr *MockRuntimeServiceClientMockRecorder) Version(arg0, arg1 interface{}, arg2 ...interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\tvarargs := append([]interface{}{arg0, arg1}, arg2...)\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Version\", reflect.TypeOf((*MockRuntimeServiceClient)(nil).Version), varargs...)\n}\n"
  },
  {
    "path": "utils/cri/mockcri/mockinterface.go",
    "content": "// Code generated by MockGen. DO NOT EDIT.\n// Source: go.aporeto.io/enforcerd/trireme-lib/utils/cri (interfaces: ExtendedRuntimeService)\n\n// Package mockcri is a generated GoMock package.\npackage mockcri\n\nimport (\n\treflect \"reflect\"\n\ttime \"time\"\n\n\tgomock \"github.com/golang/mock/gomock\"\n\tv1alpha2 \"k8s.io/cri-api/pkg/apis/runtime/v1alpha2\"\n)\n\n// MockExtendedRuntimeService is a mock of ExtendedRuntimeService interface\n// nolint\ntype MockExtendedRuntimeService struct {\n\tctrl     *gomock.Controller\n\trecorder *MockExtendedRuntimeServiceMockRecorder\n}\n\n// MockExtendedRuntimeServiceMockRecorder is the mock recorder for MockExtendedRuntimeService\n// nolint\ntype MockExtendedRuntimeServiceMockRecorder struct {\n\tmock *MockExtendedRuntimeService\n}\n\n// NewMockExtendedRuntimeService creates a new mock instance\n// nolint\nfunc NewMockExtendedRuntimeService(ctrl *gomock.Controller) *MockExtendedRuntimeService {\n\tmock := &MockExtendedRuntimeService{ctrl: ctrl}\n\tmock.recorder = &MockExtendedRuntimeServiceMockRecorder{mock}\n\treturn mock\n}\n\n// EXPECT returns an object that allows the caller to indicate expected use\n// nolint\nfunc (m *MockExtendedRuntimeService) EXPECT() *MockExtendedRuntimeServiceMockRecorder {\n\treturn m.recorder\n}\n\n// Attach mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) Attach(arg0 *v1alpha2.AttachRequest) (*v1alpha2.AttachResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Attach\", arg0)\n\tret0, _ := ret[0].(*v1alpha2.AttachResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Attach indicates an expected call of Attach\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) Attach(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Attach\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).Attach), arg0)\n}\n\n// ContainerStats mocks base method\n// nolint\nfunc (m 
*MockExtendedRuntimeService) ContainerStats(arg0 string) (*v1alpha2.ContainerStats, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStats\", arg0)\n\tret0, _ := ret[0].(*v1alpha2.ContainerStats)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerStats indicates an expected call of ContainerStats\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) ContainerStats(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStats\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).ContainerStats), arg0)\n}\n\n// ContainerStatus mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) ContainerStatus(arg0 string) (*v1alpha2.ContainerStatus, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStatus\", arg0)\n\tret0, _ := ret[0].(*v1alpha2.ContainerStatus)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ContainerStatus indicates an expected call of ContainerStatus\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) ContainerStatus(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStatus\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).ContainerStatus), arg0)\n}\n\n// ContainerStatusVerbose mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) ContainerStatusVerbose(arg0 string) (*v1alpha2.ContainerStatus, map[string]string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ContainerStatusVerbose\", arg0)\n\tret0, _ := ret[0].(*v1alpha2.ContainerStatus)\n\tret1, _ := ret[1].(map[string]string)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// ContainerStatusVerbose indicates an expected call of ContainerStatusVerbose\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) ContainerStatusVerbose(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn 
mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ContainerStatusVerbose\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).ContainerStatusVerbose), arg0)\n}\n\n// CreateContainer mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) CreateContainer(arg0 string, arg1 *v1alpha2.ContainerConfig, arg2 *v1alpha2.PodSandboxConfig) (string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"CreateContainer\", arg0, arg1, arg2)\n\tret0, _ := ret[0].(string)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// CreateContainer indicates an expected call of CreateContainer\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) CreateContainer(arg0, arg1, arg2 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"CreateContainer\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).CreateContainer), arg0, arg1, arg2)\n}\n\n// Exec mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) Exec(arg0 *v1alpha2.ExecRequest) (*v1alpha2.ExecResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Exec\", arg0)\n\tret0, _ := ret[0].(*v1alpha2.ExecResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Exec indicates an expected call of Exec\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) Exec(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Exec\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).Exec), arg0)\n}\n\n// ExecSync mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) ExecSync(arg0 string, arg1 []string, arg2 time.Duration) ([]byte, []byte, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ExecSync\", arg0, arg1, arg2)\n\tret0, _ := ret[0].([]byte)\n\tret1, _ := ret[1].([]byte)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// ExecSync indicates an expected call of ExecSync\n// nolint\nfunc (mr 
*MockExtendedRuntimeServiceMockRecorder) ExecSync(arg0, arg1, arg2 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ExecSync\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).ExecSync), arg0, arg1, arg2)\n}\n\n// ListContainerStats mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) ListContainerStats(arg0 *v1alpha2.ContainerStatsFilter) ([]*v1alpha2.ContainerStats, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ListContainerStats\", arg0)\n\tret0, _ := ret[0].([]*v1alpha2.ContainerStats)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ListContainerStats indicates an expected call of ListContainerStats\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) ListContainerStats(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ListContainerStats\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).ListContainerStats), arg0)\n}\n\n// ListContainers mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) ListContainers(arg0 *v1alpha2.ContainerFilter) ([]*v1alpha2.Container, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ListContainers\", arg0)\n\tret0, _ := ret[0].([]*v1alpha2.Container)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ListContainers indicates an expected call of ListContainers\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) ListContainers(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ListContainers\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).ListContainers), arg0)\n}\n\n// ListPodSandbox mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) ListPodSandbox(arg0 *v1alpha2.PodSandboxFilter) ([]*v1alpha2.PodSandbox, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ListPodSandbox\", arg0)\n\tret0, _ := 
ret[0].([]*v1alpha2.PodSandbox)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// ListPodSandbox indicates an expected call of ListPodSandbox\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) ListPodSandbox(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ListPodSandbox\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).ListPodSandbox), arg0)\n}\n\n// PodSandboxStatus mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) PodSandboxStatus(arg0 string) (*v1alpha2.PodSandboxStatus, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PodSandboxStatus\", arg0)\n\tret0, _ := ret[0].(*v1alpha2.PodSandboxStatus)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PodSandboxStatus indicates an expected call of PodSandboxStatus\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) PodSandboxStatus(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PodSandboxStatus\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).PodSandboxStatus), arg0)\n}\n\n// PodSandboxStatusVerbose mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) PodSandboxStatusVerbose(arg0 string) (*v1alpha2.PodSandboxStatus, map[string]string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PodSandboxStatusVerbose\", arg0)\n\tret0, _ := ret[0].(*v1alpha2.PodSandboxStatus)\n\tret1, _ := ret[1].(map[string]string)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// PodSandboxStatusVerbose indicates an expected call of PodSandboxStatusVerbose\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) PodSandboxStatusVerbose(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PodSandboxStatusVerbose\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).PodSandboxStatusVerbose), arg0)\n}\n\n// PortForward mocks base 
method\n// nolint\nfunc (m *MockExtendedRuntimeService) PortForward(arg0 *v1alpha2.PortForwardRequest) (*v1alpha2.PortForwardResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"PortForward\", arg0)\n\tret0, _ := ret[0].(*v1alpha2.PortForwardResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// PortForward indicates an expected call of PortForward\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) PortForward(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"PortForward\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).PortForward), arg0)\n}\n\n// RemoveContainer mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) RemoveContainer(arg0 string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"RemoveContainer\", arg0)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// RemoveContainer indicates an expected call of RemoveContainer\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) RemoveContainer(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RemoveContainer\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).RemoveContainer), arg0)\n}\n\n// RemovePodSandbox mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) RemovePodSandbox(arg0 string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"RemovePodSandbox\", arg0)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// RemovePodSandbox indicates an expected call of RemovePodSandbox\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) RemovePodSandbox(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RemovePodSandbox\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).RemovePodSandbox), arg0)\n}\n\n// ReopenContainerLog mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) ReopenContainerLog(arg0 
string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"ReopenContainerLog\", arg0)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// ReopenContainerLog indicates an expected call of ReopenContainerLog\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) ReopenContainerLog(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"ReopenContainerLog\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).ReopenContainerLog), arg0)\n}\n\n// RunPodSandbox mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) RunPodSandbox(arg0 *v1alpha2.PodSandboxConfig, arg1 string) (string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"RunPodSandbox\", arg0, arg1)\n\tret0, _ := ret[0].(string)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// RunPodSandbox indicates an expected call of RunPodSandbox\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) RunPodSandbox(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"RunPodSandbox\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).RunPodSandbox), arg0, arg1)\n}\n\n// StartContainer mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) StartContainer(arg0 string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"StartContainer\", arg0)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// StartContainer indicates an expected call of StartContainer\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) StartContainer(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"StartContainer\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).StartContainer), arg0)\n}\n\n// Status mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) Status() (*v1alpha2.RuntimeStatus, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Status\")\n\tret0, _ := 
ret[0].(*v1alpha2.RuntimeStatus)\n\tret1, _ := ret[1].(error)\n\treturn ret0, ret1\n}\n\n// Status indicates an expected call of Status\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) Status() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Status\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).Status))\n}\n\n// StatusVerbose mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) StatusVerbose() (*v1alpha2.RuntimeStatus, map[string]string, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"StatusVerbose\")\n\tret0, _ := ret[0].(*v1alpha2.RuntimeStatus)\n\tret1, _ := ret[1].(map[string]string)\n\tret2, _ := ret[2].(error)\n\treturn ret0, ret1, ret2\n}\n\n// StatusVerbose indicates an expected call of StatusVerbose\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) StatusVerbose() *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"StatusVerbose\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).StatusVerbose))\n}\n\n// StopContainer mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) StopContainer(arg0 string, arg1 int64) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"StopContainer\", arg0, arg1)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// StopContainer indicates an expected call of StopContainer\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) StopContainer(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"StopContainer\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).StopContainer), arg0, arg1)\n}\n\n// StopPodSandbox mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) StopPodSandbox(arg0 string) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"StopPodSandbox\", arg0)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// StopPodSandbox indicates an expected call of 
StopPodSandbox\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) StopPodSandbox(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"StopPodSandbox\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).StopPodSandbox), arg0)\n}\n\n// UpdateContainerResources mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) UpdateContainerResources(arg0 string, arg1 *v1alpha2.LinuxContainerResources) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UpdateContainerResources\", arg0, arg1)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UpdateContainerResources indicates an expected call of UpdateContainerResources\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) UpdateContainerResources(arg0, arg1 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateContainerResources\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).UpdateContainerResources), arg0, arg1)\n}\n\n// UpdateRuntimeConfig mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) UpdateRuntimeConfig(arg0 *v1alpha2.RuntimeConfig) error {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"UpdateRuntimeConfig\", arg0)\n\tret0, _ := ret[0].(error)\n\treturn ret0\n}\n\n// UpdateRuntimeConfig indicates an expected call of UpdateRuntimeConfig\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) UpdateRuntimeConfig(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"UpdateRuntimeConfig\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).UpdateRuntimeConfig), arg0)\n}\n\n// Version mocks base method\n// nolint\nfunc (m *MockExtendedRuntimeService) Version(arg0 string) (*v1alpha2.VersionResponse, error) {\n\tm.ctrl.T.Helper()\n\tret := m.ctrl.Call(m, \"Version\", arg0)\n\tret0, _ := ret[0].(*v1alpha2.VersionResponse)\n\tret1, _ := ret[1].(error)\n\treturn ret0, 
ret1\n}\n\n// Version indicates an expected call of Version\n// nolint\nfunc (mr *MockExtendedRuntimeServiceMockRecorder) Version(arg0 interface{}) *gomock.Call {\n\tmr.mock.ctrl.T.Helper()\n\treturn mr.mock.ctrl.RecordCallWithMethodType(mr.mock, \"Version\", reflect.TypeOf((*MockExtendedRuntimeService)(nil).Version), arg0)\n}\n"
  },
  {
    "path": "utils/crypto/crypto.go",
    "content": "package crypto\n\nimport (\n\t\"bytes\"\n\t\"crypto/ecdsa\"\n\t\"crypto/elliptic\"\n\t\"crypto/hmac\"\n\t\"crypto/rand\"\n\t\"crypto/sha256\"\n\t\"crypto/x509\"\n\t\"encoding/base64\"\n\t\"encoding/binary\"\n\t\"encoding/pem\"\n\t\"errors\"\n\t\"fmt\"\n\t\"math/big\"\n\tmrand \"math/rand\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/ugorji/go/codec\"\n\t\"go.aporeto.io/tg/tglib/windowscertbug\"\n\t\"go.uber.org/zap\"\n)\n\ntype nonce struct {\n\tr *mrand.Rand\n\tsync.Mutex\n}\n\n// PublicKey is an intermediate structure to create gobs\ntype PublicKey struct {\n\tX *big.Int\n\tY *big.Int\n}\n\n//Nonce16Byte interface generates 16 byte nonce\ntype Nonce16Byte interface {\n\tGenerateNonce16Bytes([]byte)\n}\n\nvar doOnce sync.Once\nvar n nonce\n\n// Nonce initializes and returns nonce of type Nonce16Byte.\nfunc Nonce() Nonce16Byte {\n\tdoOnce.Do(func() {\n\t\tn.r = mrand.New(mrand.NewSource(time.Now().UnixNano()))\n\t})\n\n\treturn &n\n}\n\nfunc (n *nonce) GenerateNonce16Bytes(b []byte) {\n\tn.Lock()\n\tlow := n.r.Uint64()\n\thigh := n.r.Uint64()\n\tn.Unlock()\n\n\tbinary.LittleEndian.PutUint64(b[:8], low)\n\tbinary.LittleEndian.PutUint64(b[8:], high)\n}\n\n// ComputeHmac256 computes the HMAC256 of the message\nfunc ComputeHmac256(tags []byte, key []byte) ([]byte, error) {\n\n\tvar buffer bytes.Buffer\n\tif err := binary.Write(&buffer, binary.BigEndian, tags); err != nil {\n\t\treturn []byte{}, err\n\t}\n\n\th := hmac.New(sha256.New, key)\n\n\tif _, err := h.Write(buffer.Bytes()); err != nil {\n\t\treturn []byte{}, err\n\t}\n\n\treturn h.Sum(nil), nil\n\n}\n\n// VerifyHmac verifies if the HMAC of the message matches the one provided\nfunc VerifyHmac(tags []byte, expectedMAC []byte, key []byte) bool {\n\tmessageMAC, err := ComputeHmac256(tags, key)\n\tif err != nil {\n\t\treturn false\n\t}\n\n\treturn hmac.Equal(messageMAC, expectedMAC)\n}\n\n// GenerateRandomBytes returns securely generated random bytes.\n// It will return an error if the system's secure 
random\n// number generator fails to function correctly, in which\n// case the caller should not continue.\nfunc GenerateRandomBytes(n int) ([]byte, error) {\n\tb := make([]byte, n)\n\n\tif _, err := rand.Read(b); err != nil {\n\t\tzap.L().Debug(\"GenerateRandomBytes failed\", zap.Error(err))\n\t\treturn nil, err\n\t}\n\n\ts := base64.StdEncoding.EncodeToString(b)\n\n\treturn []byte(s[:n]), nil\n}\n\n// GenerateRandomString returns a URL-safe, base64 encoded\n// securely generated random string.\n// It will return an error if the system's secure random\n// number generator fails to function correctly, in which\n// case the caller should not continue.\nfunc GenerateRandomString(s int) (string, error) {\n\tb, err := GenerateRandomBytes(s)\n\treturn base64.URLEncoding.EncodeToString(b), err\n}\n\n// CreateEphemeralKey creates an ephemeral private/public key based on the\n// provided public key and the corresponding elliptic curve\nfunc CreateEphemeralKey(curve func() elliptic.Curve, pub *ecdsa.PublicKey) (*ecdsa.PrivateKey, []byte) {\n\n\tephemeral, err := ecdsa.GenerateKey(curve(), rand.Reader)\n\tif err != nil {\n\t\tzap.L().Error(\"CreateEphemeralKey failed, returning empty array of bytes\", zap.Error(err))\n\t\treturn nil, []byte{}\n\t}\n\n\tephPub := elliptic.Marshal(pub.Curve, ephemeral.PublicKey.X, ephemeral.PublicKey.Y)\n\n\treturn ephemeral, ephPub\n\n}\n\n// LoadRootCertificates loads the certificates in the provided PEM buffer into a CertPool\nfunc LoadRootCertificates(rootPEM []byte) *x509.CertPool {\n\n\troots := x509.NewCertPool()\n\n\tok := roots.AppendCertsFromPEM(rootPEM)\n\tif !ok {\n\t\tzap.L().Error(\"AppendCertsFromPEM failed\", zap.ByteString(\"rootPEM\", rootPEM))\n\t\treturn nil\n\t}\n\n\treturn roots\n\n}\n\n// LoadEllipticCurveKey parses and creates an EC key\nfunc LoadEllipticCurveKey(keyPEM []byte) (*ecdsa.PrivateKey, error) {\n\n\tblock, _ := pem.Decode(keyPEM)\n\tif block == nil {\n\t\treturn nil, fmt.Errorf(\"LoadElliticCurveKey bad pem 
block: %s\", string(keyPEM))\n\t}\n\n\t// Parse the key\n\tkey, err := x509.ParseECPrivateKey(block.Bytes)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn key, nil\n}\n\n// LoadAndVerifyCertificate parses, validates, and creates a certificate structure from a PEM buffer\n// It must be provided with a CertPool\nfunc LoadAndVerifyCertificate(certPEM []byte, roots *x509.CertPool) (*x509.Certificate, error) {\n\n\tcert, err := LoadCertificate(certPEM)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\topts := x509.VerifyOptions{\n\t\tRoots: roots,\n\t}\n\n\tif _, err := windowscertbug.VerifyCertificate(cert, opts); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn cert, nil\n\n}\n\n// LoadAndVerifyECSecrets loads all the certificates and keys to memory in the right data structures\nfunc LoadAndVerifyECSecrets(keyPEM, certPEM, caCertPEM []byte) (key *ecdsa.PrivateKey, cert *x509.Certificate, rootCertPool *x509.CertPool, err error) {\n\n\t// Parse the key\n\tkey, err = LoadEllipticCurveKey(keyPEM)\n\tif err != nil {\n\t\treturn nil, nil, nil, err\n\t}\n\n\trootCertPool = LoadRootCertificates(caCertPEM)\n\tif rootCertPool == nil {\n\t\treturn nil, nil, nil, errors.New(\"unable to load root certificate pool\")\n\t}\n\n\tcert, err = LoadAndVerifyCertificate(certPEM, rootCertPool)\n\tif err != nil {\n\t\treturn nil, nil, nil, err\n\t}\n\n\treturn key, cert, rootCertPool, nil\n\n}\n\n// LoadCertificate loads a certificate from a PEM file without verifying\n// Should only be used for loading a root CA certificate. 
It will only read\n// the first certificate\nfunc LoadCertificate(certPEM []byte) (*x509.Certificate, error) {\n\n\t// Decode the certificate\n\tcertBlock, _ := pem.Decode(certPEM)\n\tif certBlock == nil {\n\t\treturn nil, fmt.Errorf(\"unable to parse pem block: %s\", string(certPEM))\n\t}\n\n\t// Create the certificate structure\n\tcert, err := x509.ParseCertificate(certBlock.Bytes)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn cert, nil\n}\n\n//EncodePublicKeyV1 encodes the public key to a byte slice\nfunc EncodePublicKeyV1(publicKey *ecdsa.PublicKey) []byte {\n\n\tp := &PublicKey{X: publicKey.X, Y: publicKey.Y}\n\n\tbuf := make([]byte, 0, 1400)\n\tvar h codec.Handle = new(codec.CborHandle)\n\tenc := codec.NewEncoderBytes(&buf, h)\n\n\tif err := enc.Encode(p); err != nil {\n\t\treturn nil\n\t}\n\n\treturn buf\n\n}\n\n// DecodePublicKeyV1 decodes the provided public key\nfunc DecodePublicKeyV1(key []byte) (*ecdsa.PublicKey, error) {\n\tvar p PublicKey\n\n\tvar h codec.Handle = new(codec.CborHandle)\n\n\tdec := codec.NewDecoderBytes(key, h)\n\tif err := dec.Decode(&p); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &ecdsa.PublicKey{\n\t\tCurve: elliptic.P256(),\n\t\tX:     p.X,\n\t\tY:     p.Y,\n\t}, nil\n}\n\n//EncodePublicKeyV2 encodes the public key to a byte slice\nfunc EncodePublicKeyV2(publicKey *ecdsa.PublicKey) []byte {\n\treturn elliptic.Marshal(publicKey.Curve, publicKey.X, publicKey.Y)\n}\n\n// DecodePublicKeyV2 decodes the provided public key\nfunc DecodePublicKeyV2(key []byte) (*ecdsa.PublicKey, error) {\n\n\tx, y := elliptic.Unmarshal(elliptic.P256(), key)\n\tif x == nil || y == nil {\n\t\treturn nil, fmt.Errorf(\"Failed to decode public key\")\n\t}\n\n\treturn &ecdsa.PublicKey{\n\t\tCurve: elliptic.P256(),\n\t\tX:     x,\n\t\tY:     y,\n\t}, nil\n}\n\n//EncodePrivateKey encodes the private key to a byte slice.\nfunc EncodePrivateKey(privateKey *ecdsa.PrivateKey) []byte {\n\treturn elliptic.Marshal(privateKey.PublicKey.Curve, 
privateKey.D, privateKey.PublicKey.X)\n}\n"
  },
  {
    "path": "utils/crypto/crypto_test.go",
    "content": "// +build !windows\n\npackage crypto\n\nimport (\n\t\"errors\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nconst (\n\tcaPool = `-----BEGIN CERTIFICATE-----\nMIIBhTCCASwCCQC8b53yGlcQazAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABJxneTUqhbtgEIwpKUUzwz3h92SqcOdIw3mfQkMjg3Vobvr6JKlpXYe9xhsN\nrygJmLhMAN9gjF9qM9ybdbe+m3owCgYIKoZIzj0EAwIDRwAwRAIgC1fVMqdBy/o3\njNUje/Hx0fZF9VDyUK4ld+K/wF3QdK4CID1ONj/Kqinrq2OpjYdkgIjEPuXoOoR1\ntCym8dnq4wtH\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIB3jCCAYOgAwIBAgIJALsW7pyC2ERQMAoGCCqGSM49BAMCMEsxCzAJBgNVBAYT\nAlVTMQswCQYDVQQIDAJDQTEMMAoGA1UEBwwDU0pDMRAwDgYDVQQKDAdUcmlyZW1l\nMQ8wDQYDVQQDDAZ1YnVudHUwHhcNMTYwOTI3MjI0OTAwWhcNMjYwOTI1MjI0OTAw\nWjBLMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4G\nA1UECgwHVHJpcmVtZTEPMA0GA1UEAwwGdWJ1bnR1MFkwEwYHKoZIzj0CAQYIKoZI\nzj0DAQcDQgAE4c2Fd7XeIB1Vfs51fWwREfLLDa55J+NBalV12CH7YEAnEXjl47aV\ncmNqcAtdMUpf2oz9nFVI81bgO+OSudr3CqNQME4wHQYDVR0OBBYEFOBftuI09mmu\nrXjqDyIta1gT8lqvMB8GA1UdIwQYMBaAFOBftuI09mmurXjqDyIta1gT8lqvMAwG\nA1UdEwQFMAMBAf8wCgYIKoZIzj0EAwIDSQAwRgIhAMylAHhbFA0KqhXIFiXNpEbH\nJKaELL6UXXdeQ5yup8q+AiEAh5laB9rbgTymjaANcZ2YzEZH4VFS3CKoSdVqgnwC\ndW4=\n-----END CERTIFICATE-----`\n\tcertPEM = `-----BEGIN 
CERTIFICATE-----\nMIIBhjCCASwCCQCPCdgp39gHJTAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABAHwC/gHz4/w58a1OrVJBMsxMgsZ35TlsYaLstW3c4Se78TG9b0N9fGW1v1r\nzDf6IyF6l2KxKSEqKI854+OsOuMwCgYIKoZIzj0EAwIDSAAwRQIgQwQn0jnK/XvD\nKxgQd/0pW5FOAaB41cMcw4/XVlphO1oCIQDlGie+WlOMjCzrV0Xz+XqIIi1pIgPT\nIG7Nv+YlTVp5qA==\n-----END CERTIFICATE-----`\n)\n\n// TestComputeVerifyHMAC tests the compute and verify of HMAC functions\nfunc TestComputeVerifyHMAC(t *testing.T) {\n\tConvey(\"Given a token and a key\", t, func() {\n\n\t\ttoken := make([]byte, 256)\n\t\tfor i := uint8(0); i < 255; i++ {\n\t\t\ttoken[i] = i\n\t\t}\n\n\t\tkey := make([]byte, 32)\n\t\tfor i := uint8(0); i < 32; i++ {\n\t\t\tkey[i] = i\n\t\t}\n\n\t\tConvey(\"When I sign the token with the key\", func() {\n\t\t\texpectedMac, err := ComputeHmac256(token, key)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(expectedMac, ShouldNotBeNil)\n\n\t\t\tConvey(\"I should be able to verify the token with the same key\", func() {\n\t\t\t\tverified := VerifyHmac(token, expectedMac, key)\n\t\t\t\tSo(verified, ShouldBeTrue)\n\t\t\t})\n\n\t\t\tConvey(\"If I provide the wrong key, I should fail verification\", func() {\n\t\t\t\tfakeKey := make([]byte, 32)\n\t\t\t\tverified := VerifyHmac(token, expectedMac, fakeKey)\n\t\t\t\tSo(verified, ShouldBeFalse)\n\t\t\t})\n\n\t\t\tConvey(\"If I provide the right key, but I have the wrong signature, I should fail verification\", func() {\n\t\t\t\tfailedMac := make([]byte, 32)\n\t\t\t\tverified := VerifyHmac(token, failedMac, key)\n\t\t\t\tSo(verified, ShouldBeFalse)\n\t\t\t})\n\t\t})\n\t})\n}\n\n// TestRandomString tests the random string generation function and the random byte generation\nfunc TestRandomString(t *testing.T) {\n\tConvey(\"Given a 
string length of 16\", t, func() {\n\t\tlength := 16\n\t\tConvey(\"I should be able to generate two random strings that are not equal\", func() {\n\t\t\tstring1, err1 := GenerateRandomString(length)\n\t\t\tstring2, err2 := GenerateRandomString(length)\n\t\t\tSo(err1, ShouldBeNil)\n\t\t\tSo(err2, ShouldBeNil)\n\t\t\tSo(string1, ShouldNotEqual, string2)\n\t\t})\n\t})\n\tConvey(\"Given a string length of 32\", t, func() {\n\t\tlength := 32\n\t\tConvey(\"I should be able to generate two random strings that are not equal\", func() {\n\t\t\tstring1, err1 := GenerateRandomString(length)\n\t\t\tstring2, err2 := GenerateRandomString(length)\n\t\t\tSo(err1, ShouldBeNil)\n\t\t\tSo(err2, ShouldBeNil)\n\t\t\tSo(string1, ShouldNotEqual, string2)\n\t\t})\n\t})\n}\n\n// TestFuncLoadEllipticCurve\nfunc TestFuncLoadEllipticCurve(t *testing.T) {\n\tConvey(\"Given a valid EC key\", t, func() {\n\t\tkeyPEM := `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIPkiHqtH372JJdAG/IxJlE1gv03cdwa8Lhg2b3m/HmbyoAoGCCqGSM49\nAwEHoUQDQgAEAfAL+AfPj/DnxrU6tUkEyzEyCxnflOWxhouy1bdzhJ7vxMb1vQ31\n8ZbW/WvMN/ojIXqXYrEpISoojznj46w64w==\n-----END EC PRIVATE KEY-----`\n\t\tConvey(\"I should be able to load the key\", func() {\n\t\t\tkey, err := LoadEllipticCurveKey([]byte(keyPEM))\n\t\t\tSo(key, ShouldNotBeNil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\t})\n\n\tConvey(\"Given an invalid PEM BLOCK\", t, func() {\n\t\tkeyPEM := \"\"\n\t\tConvey(\"I should get an error\", func() {\n\t\t\t_, err := LoadEllipticCurveKey([]byte(keyPEM))\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t})\n\n\tConvey(\"Given an invalid Key file\", t, func() {\n\t\tkeyPEM := `-----BEGIN EC PRIVATE KEY-----\n-----END EC PRIVATE KEY-----`\n\t\tConvey(\"I should get an error\", func() {\n\t\t\t_, err := LoadEllipticCurveKey([]byte(keyPEM))\n\t\t\tSo(err, ShouldNotBeNil)\n\t\t})\n\t})\n}\n\n// TestFuncLoadRootCertificates tests the loading of root certs in a cert pool\nfunc TestFuncLoadRootCertificates(t *testing.T) {\n\tConvey(\"Given a valid certificate 
chain\", t, func() {\n\n\t\tConvey(\"I should be able to get a valid certificate chain\", func() {\n\t\t\troots := LoadRootCertificates([]byte(caPool))\n\t\t\tSo(roots, ShouldNotBeNil)\n\t\t\tSo(len(roots.Subjects()), ShouldEqual, 2)\n\t\t})\n\t})\n}\n\n// TestLoadAndVerifyCertificate\nfunc TestLoadAndVerifyCertificate(t *testing.T) {\n\tConvey(\"Given a valid certificate chain\", t, func() {\n\t\troots := LoadRootCertificates([]byte(caPool))\n\t\tSo(roots, ShouldNotBeNil)\n\t\tSo(len(roots.Subjects()), ShouldEqual, 2)\n\n\t\tConvey(\"Given a certificate signed by the intermediary\", func() {\n\t\t\tConvey(\"I should be able to load and verify the certificate\", func() {\n\t\t\t\tcert, err := LoadAndVerifyCertificate([]byte(certPEM), roots)\n\t\t\t\tSo(cert, ShouldNotBeNil)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n\n\tConvey(\"Given the root CA certificate only \", t, func() {\n\t\trootCA := `\n-----BEGIN CERTIFICATE-----\nMIIB3jCCAYOgAwIBAgIJALsW7pyC2ERQMAoGCCqGSM49BAMCMEsxCzAJBgNVBAYT\nAlVTMQswCQYDVQQIDAJDQTEMMAoGA1UEBwwDU0pDMRAwDgYDVQQKDAdUcmlyZW1l\nMQ8wDQYDVQQDDAZ1YnVudHUwHhcNMTYwOTI3MjI0OTAwWhcNMjYwOTI1MjI0OTAw\nWjBLMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4G\nA1UECgwHVHJpcmVtZTEPMA0GA1UEAwwGdWJ1bnR1MFkwEwYHKoZIzj0CAQYIKoZI\nzj0DAQcDQgAE4c2Fd7XeIB1Vfs51fWwREfLLDa55J+NBalV12CH7YEAnEXjl47aV\ncmNqcAtdMUpf2oz9nFVI81bgO+OSudr3CqNQME4wHQYDVR0OBBYEFOBftuI09mmu\nrXjqDyIta1gT8lqvMB8GA1UdIwQYMBaAFOBftuI09mmurXjqDyIta1gT8lqvMAwG\nA1UdEwQFMAMBAf8wCgYIKoZIzj0EAwIDSQAwRgIhAMylAHhbFA0KqhXIFiXNpEbH\nJKaELL6UXXdeQ5yup8q+AiEAh5laB9rbgTymjaANcZ2YzEZH4VFS3CKoSdVqgnwC\ndW4=\n-----END CERTIFICATE-----`\n\t\troots := LoadRootCertificates([]byte(rootCA))\n\t\tSo(roots, ShouldNotBeNil)\n\t\tSo(len(roots.Subjects()), ShouldEqual, 1)\n\n\t\tConvey(\"Given a certificate signed by the intermediary\", func() {\n\t\t\tConvey(\"I should be able to fail to verify the certificate \", func() {\n\t\t\t\tcert, err := LoadAndVerifyCertificate([]byte(certPEM), 
roots)\n\t\t\t\tSo(cert, ShouldBeNil)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n\n\tConvey(\"Given a good CA \", t, func() {\n\t\tgoodCA := `-----BEGIN CERTIFICATE-----\nMIIBhTCCASwCCQC8b53yGlcQazAKBggqhkjOPQQDAjBLMQswCQYDVQQGEwJVUzEL\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABJxneTUqhbtgEIwpKUUzwz3h92SqcOdIw3mfQkMjg3Vobvr6JKlpXYe9xhsN\nrygJmLhMAN9gjF9qM9ybdbe+m3owCgYIKoZIzj0EAwIDRwAwRAIgC1fVMqdBy/o3\njNUje/Hx0fZF9VDyUK4ld+K/wF3QdK4CID1ONj/Kqinrq2OpjYdkgIjEPuXoOoR1\ntCym8dnq4wtH\n-----END CERTIFICATE-----`\n\n\t\troots := LoadRootCertificates([]byte(goodCA))\n\t\tSo(roots, ShouldNotBeNil)\n\t\tSo(len(roots.Subjects()), ShouldEqual, 1)\n\n\t\tConvey(\"Given a bad certificate \", func() {\n\t\t\tbadCA := `-----BEGIN CERTIFICATE-----\nMAkGA1UECAwCQ0ExDDAKBgNVBAcMA1NKQzEQMA4GA1UECgwHVHJpcmVtZTEPMA0G\nA1UEAwwGdWJ1bnR1MB4XDTE2MDkyNzIyNDkwMFoXDTI2MDkyNTIyNDkwMFowSzEL\nMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMQwwCgYDVQQHDANTSkMxEDAOBgNVBAoM\nB1RyaXJlbWUxDzANBgNVBAMMBnVidW50dTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABAHwC/gHz4/w58a1OrVJBMsxMgsZ35TlsYaLstW3c4Se78TG9b0N9fGW1v1r\nzDf6IyF6l2KxKSEqKI854+OsOuMwCgYIKoZIzj0EAwIDSAAwRQIgQwQn0jnK/XvD\nKxgQd/0pW5FOAaB41cMcw4/XVlphO1oCIQDlGie+WlOMjCzrV0Xz+XqIIi1pIgPT\nIG7Nv+YlTVp5qA==\n-----END CERTIFICATE-----`\n\t\t\tConvey(\"I should be able to fail to verify the certificate \", func() {\n\t\t\t\tcert, err := LoadAndVerifyCertificate([]byte(badCA), roots)\n\t\t\t\tSo(cert, ShouldBeNil)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n\n\tConvey(\"Given bad certificate   \", t, func() {\n\n\t\tConvey(\"Where the certificate block is bad  \", func() {\n\t\t\temptyCA := ``\n\t\t\tConvey(\"I should be able to fail to verify the certificate \", func() {\n\t\t\t\tcert, err := 
LoadAndVerifyCertificate([]byte(emptyCA), nil)\n\t\t\t\tSo(cert, ShouldBeNil)\n\t\t\t\tSo(err, ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\n// TestLoadAndVerifyECSecrets\nfunc TestLoadAndVerifyECSecrets(t *testing.T) {\n\n\tConvey(\"Given a valid EC key\", t, func() {\n\t\tkeyPEM := `-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIPkiHqtH372JJdAG/IxJlE1gv03cdwa8Lhg2b3m/HmbyoAoGCCqGSM49\nAwEHoUQDQgAEAfAL+AfPj/DnxrU6tUkEyzEyCxnflOWxhouy1bdzhJ7vxMb1vQ31\n8ZbW/WvMN/ojIXqXYrEpISoojznj46w64w==\n-----END EC PRIVATE KEY-----`\n\t\tConvey(\"I should be able to load the key\", func() {\n\t\t\tkey, err := LoadEllipticCurveKey([]byte(keyPEM))\n\t\t\tSo(key, ShouldNotBeNil)\n\t\t\tSo(err, ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"Given a valid certificate chain\", func() {\n\t\t\troots := LoadRootCertificates([]byte(caPool))\n\t\t\tSo(roots, ShouldNotBeNil)\n\t\t\tSo(len(roots.Subjects()), ShouldEqual, 2)\n\n\t\t\tConvey(\"Given a certificate signed by the intermediary\", func() {\n\t\t\t\tConvey(\"I should be able to load and verify the certificate\", func() {\n\t\t\t\t\tcert, err := LoadAndVerifyCertificate([]byte(certPEM), roots)\n\t\t\t\t\tSo(cert, ShouldNotBeNil)\n\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t})\n\n\t\t\t\tConvey(\"Given I have valid EC key, certificate chain and signed certificate\", func() {\n\t\t\t\t\tkey, cert, certPool, err := LoadAndVerifyECSecrets([]byte(keyPEM), []byte(certPEM), []byte(caPool))\n\n\t\t\t\t\tConvey(\"I should be able to load and verify all the certificates and keys in the right data structures\", func() {\n\t\t\t\t\t\tSo(key, ShouldNotBeNil)\n\t\t\t\t\t\tSo(cert, ShouldNotBeNil)\n\t\t\t\t\t\tSo(certPool, ShouldNotBeNil)\n\t\t\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\t\t})\n\t\t\t\t})\n\n\t\t\t\tConvey(\"Given I have invalid EC key, valid certificate chain and signed certificate\", func() {\n\t\t\t\t\tinvalidKeyPEM := `-----BEGIN EC PRIVATE KEY-----\n\t\t\t-----END EC PRIVATE KEY-----`\n\t\t\t\t\tkey, cert, certPool, err := 
LoadAndVerifyECSecrets([]byte(invalidKeyPEM), []byte(certPEM), []byte(caPool))\n\n\t\t\t\t\tConvey(\"I should be able to fail verifying the EC key\", func() {\n\t\t\t\t\t\tSo(key, ShouldBeNil)\n\t\t\t\t\t\tSo(cert, ShouldBeNil)\n\t\t\t\t\t\tSo(certPool, ShouldBeNil)\n\t\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"LoadElliticCurveKey bad pem block: -----BEGIN EC PRIVATE KEY-----\\n\\t\\t\\t-----END EC PRIVATE KEY-----\"))\n\t\t\t\t\t})\n\t\t\t\t})\n\n\t\t\t\tConvey(\"Given I have valid EC key, invalid certificate chain and valid signed certificate\", func() {\n\t\t\t\t\tinvalidCaPool := `-----BEGIN CERTIFICATE-----\n\t\t\t-----END CERTIFICATE-----`\n\t\t\t\t\tkey, cert, certPool, err := LoadAndVerifyECSecrets([]byte(keyPEM), []byte(certPEM), []byte(invalidCaPool))\n\n\t\t\t\t\tConvey(\"I should be able to fail loading the certificate pool\", func() {\n\t\t\t\t\t\tSo(key, ShouldBeNil)\n\t\t\t\t\t\tSo(cert, ShouldBeNil)\n\t\t\t\t\t\tSo(certPool, ShouldBeNil)\n\t\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"unable to load root certificate pool\"))\n\t\t\t\t\t})\n\t\t\t\t})\n\n\t\t\t\tConvey(\"Given I have valid EC key, certificate chain and invalid signed certificate\", func() {\n\t\t\t\t\tinvalidCertPEM := `-----BEGIN CERTIFICATE-----\n\t\t-----END CERTIFICATE-----`\n\t\t\t\t\tkey, cert, certPool, err := LoadAndVerifyECSecrets([]byte(keyPEM), []byte(invalidCertPEM), []byte(caPool))\n\n\t\t\t\t\tConvey(\"I should be able to fail verifying certificate (bad certificate)\", func() {\n\t\t\t\t\t\tSo(key, ShouldBeNil)\n\t\t\t\t\t\tSo(cert, ShouldBeNil)\n\t\t\t\t\t\tSo(certPool, ShouldBeNil)\n\t\t\t\t\t\tSo(err, ShouldResemble, errors.New(\"unable to parse pem block: -----BEGIN CERTIFICATE-----\\n\\t\\t-----END CERTIFICATE-----\"))\n\t\t\t\t\t})\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\n// func TestPublicKeyEncDec(t *testing.T) {\n\n// \tprivateKey1, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)\n// \tencPub1 := EncodePublicKey(&privateKey1.PublicKey)\n// 
\tdecPub1, _ := DecodePublicKey(encPub1)\n\n// \tpubKeyEq1 := reflect.DeepEqual(privateKey1.PublicKey, *decPub1)\n\n// \tassert.Equal(t, pubKeyEq1, true, \"public key1 should be equal\")\n// }\n"
  },
  {
    "path": "utils/fqdn/fqdn.go",
    "content": "package fqdn\n\nimport (\n\t\"net\"\n\t\"os\"\n\t\"strings\"\n\t\"sync\"\n)\n\n// defining os and net package functions in their own variables\n// so that we can mock them for unit tests\nvar (\n\tosHostname    = os.Hostname\n\tnetLookupIP   = net.LookupIP\n\tnetLookupAddr = net.LookupAddr\n)\n\nconst unknownHostname = \"unknown\"\n\n// InitializeAlternativeHostname can be used to set an alternative hostname that is used by `Find`.\n// The enforcer can use this during startup to provide an alternative value.\nfunc InitializeAlternativeHostname(hostname string) {\n\tif hostname != \"\" {\n\t\talternativeHostnameOnce.Do(func() {\n\t\t\talternativeHostnameLock.Lock()\n\n\t\t\talternativeHostname = hostname\n\n\t\t\talternativeHostnameLock.Unlock()\n\t\t})\n\t}\n}\n\nfunc getAlternativeHostname() string {\n\talternativeHostnameLock.RLock()\n\tdefer alternativeHostnameLock.RUnlock()\n\treturn alternativeHostname\n}\n\nvar (\n\talternativeHostnameOnce = &sync.Once{}\n\talternativeHostnameLock sync.RWMutex\n\talternativeHostname     string\n)\n\n// Find returns fqdn. 
It uses the following algorithm:\n// First of all, it will return the globally set alternative hostname if it has been initialized previously with `InitializeAlternativeHostname`.\n// If this is not set, it will try to determine the hostname, resolve the hostname to an IP,\n// and based on the hostname it will perform a reverse DNS lookup for the IP.\n// The first entry of the reverse DNS lookup will be returned.\n// If there are any errors during this process, this function will return \"unknown\".\n// It will never return an empty string.\nfunc Find() string {\n\t// return with the global alternative hostname if this is what we really want to do\n\talternativeHostname := getAlternativeHostname()\n\tif alternativeHostname != \"\" {\n\t\treturn alternativeHostname\n\t}\n\n\t// for some cloud providers (like AWS at some point) we prefer different FQDNs\n\n\thostnameRaw, err := osHostname()\n\tif err != nil {\n\t\treturn unknownHostname\n\t}\n\n\t// net.LookupIP will actually error if hostname is empty\n\t// so if there is no hostname set in the kernel, also return unknown\n\t// as in all other error cases we want to return a valid string\n\t// make sure that it is set to either os.Hostname or \"unknown\", but is never empty\n\thostname := hostnameRaw\n\tif hostnameRaw == \"\" {\n\t\thostname = unknownHostname\n\t}\n\n\taddrs, err := netLookupIP(hostnameRaw)\n\tif err != nil {\n\t\treturn hostname\n\t}\n\n\tfor _, addr := range addrs {\n\t\tif ipv4 := addr.To4(); ipv4 != nil {\n\t\t\tip, err := ipv4.MarshalText()\n\t\t\tif err != nil {\n\t\t\t\t// impossible case and only possible if there is a bug in golang:\n\t\t\t\t// this will only error if this is not a valid IP address\n\t\t\t\t// To4() already proves that\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\thosts, err := netLookupAddr(string(ip))\n\t\t\tif err != nil || len(hosts) == 0 {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tfqdn := hosts[0]\n\t\t\tret := strings.TrimSuffix(fqdn, \".\") // return fqdn without 
trailing dot\n\t\t\tif ret != \"\" {\n\t\t\t\treturn ret\n\t\t\t}\n\t\t}\n\t\tif ipv6 := addr.To16(); ipv6 != nil {\n\t\t\tip, err := ipv6.MarshalText()\n\t\t\tif err != nil {\n\t\t\t\t// impossible case and only possible if there is a bug in golang:\n\t\t\t\t// this will only error if this is not a valid IP address\n\t\t\t\t// To16() already proves that\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\thosts, err := netLookupAddr(string(ip))\n\t\t\tif err != nil || len(hosts) == 0 {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tfqdn := hosts[0]\n\t\t\tret := strings.TrimSuffix(fqdn, \".\") // return fqdn without trailing dot\n\t\t\tif ret != \"\" {\n\t\t\t\treturn ret\n\t\t\t}\n\t\t}\n\t}\n\n\t// fall back to os.Hostname or unknown if none of that worked\n\treturn hostname\n}\n"
  },
  {
    "path": "utils/fqdn/fqdn_test.go",
    "content": "package fqdn\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"os\"\n\t\"sync\"\n\t\"testing\"\n)\n\nfunc TestFind(t *testing.T) {\n\tip41 := net.ParseIP(\"192.0.2.1\")\n\tif ip41 == nil {\n\t\tpanic(\"failed to parse ip41\")\n\t}\n\tip42 := net.ParseIP(\"192.0.2.2\")\n\tif ip42 == nil {\n\t\tpanic(\"failed to parse ip42\")\n\t}\n\tip61 := net.ParseIP(\"2001:db8::68\")\n\tif ip61 == nil {\n\t\tpanic(\"failed to parse ip61\")\n\t}\n\tip62 := net.ParseIP(\"2001:db8::69\")\n\tif ip62 == nil {\n\t\tpanic(\"failed to parse ip62\")\n\t}\n\tinvalidIP := (net.IP)([]byte(\"012345678901234567890\"))\n\ttests := []struct {\n\t\tname          string\n\t\twant          string\n\t\tosHostname    func() (string, error)\n\t\tnetLookupIP   func(string) ([]net.IP, error)\n\t\tnetLookupAddr func(string) ([]string, error)\n\t\tpre           func()\n\t}{\n\t\t{\n\t\t\tname: \"os.Hostname errors\",\n\t\t\twant: unknownHostname,\n\t\t\tosHostname: func() (string, error) {\n\t\t\t\treturn \"\", fmt.Errorf(\"failed to get hostname from the kernel\")\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"os.Hostname does not error but returns empty\",\n\t\t\twant: unknownHostname,\n\t\t\tosHostname: func() (string, error) {\n\t\t\t\treturn \"\", nil\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"net.LookupIP errors\",\n\t\t\twant: constMyhostname,\n\t\t\tosHostname: func() (string, error) {\n\t\t\t\treturn constMyhostname, nil\n\t\t\t},\n\t\t\tnetLookupIP: func(string) ([]net.IP, error) {\n\t\t\t\treturn nil, fmt.Errorf(\"massive error\")\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"net.LookupIP returns empty\",\n\t\t\twant: constMyhostname,\n\t\t\tosHostname: func() (string, error) {\n\t\t\t\treturn constMyhostname, nil\n\t\t\t},\n\t\t\tnetLookupIP: func(string) ([]net.IP, error) {\n\t\t\t\treturn []net.IP{}, nil\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"net.LookupIP returns addresses which cannot be reversed looked up\",\n\t\t\twant: constMyhostname,\n\t\t\tosHostname: func() (string, error) {\n\t\t\t\treturn 
constMyhostname, nil\n\t\t\t},\n\t\t\tnetLookupIP: func(string) ([]net.IP, error) {\n\t\t\t\treturn []net.IP{ip41, ip42}, nil\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\t// NOTE: this will not cover the branch if `ipv4.MarshalText()` errors, because that is an impossible branch\n\t\t\tname: \"net.LookupIP returns an invalid IP address\",\n\t\t\twant: constMyhostname,\n\t\t\tosHostname: func() (string, error) {\n\t\t\t\treturn constMyhostname, nil\n\t\t\t},\n\t\t\tnetLookupIP: func(string) ([]net.IP, error) {\n\t\t\t\treturn []net.IP{invalidIP}, nil\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"net.LookupIP returns one address which can be reversed looked up\",\n\t\t\twant: \"myhostname.local\",\n\t\t\tosHostname: func() (string, error) {\n\t\t\t\treturn constMyhostname, nil\n\t\t\t},\n\t\t\tnetLookupIP: func(string) ([]net.IP, error) {\n\t\t\t\treturn []net.IP{ip41, ip42}, nil\n\t\t\t},\n\t\t\tnetLookupAddr: func(addr string) ([]string, error) {\n\t\t\t\tif addr == \"192.0.2.2\" {\n\t\t\t\t\treturn []string{\"myhostname.local.\"}, nil\n\t\t\t\t}\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"net.LookupIP returns IPv6 addresses which cannot be reversed looked up\",\n\t\t\twant: constMyhostname,\n\t\t\tosHostname: func() (string, error) {\n\t\t\t\treturn constMyhostname, nil\n\t\t\t},\n\t\t\tnetLookupIP: func(string) ([]net.IP, error) {\n\t\t\t\treturn []net.IP{ip61, ip62}, nil\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"net.LookupIP returns one IPv6 address which can be reversed looked up\",\n\t\t\twant: \"myhostname.local\",\n\t\t\tosHostname: func() (string, error) {\n\t\t\t\treturn constMyhostname, nil\n\t\t\t},\n\t\t\tnetLookupIP: func(string) ([]net.IP, error) {\n\t\t\t\treturn []net.IP{ip61, ip62}, nil\n\t\t\t},\n\t\t\tnetLookupAddr: func(addr string) ([]string, error) {\n\t\t\t\tif addr == \"2001:db8::69\" {\n\t\t\t\t\treturn []string{\"myhostname.local.\"}, nil\n\t\t\t\t}\n\t\t\t\treturn nil, nil\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"if an alternative hostname 
is set, return with this instead\",\n\t\t\twant: constHostname,\n\t\t\tpre: func() {\n\t\t\t\tInitializeAlternativeHostname(constHostname)\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif tt.pre != nil {\n\t\t\t\ttt.pre()\n\t\t\t}\n\t\t\tif tt.osHostname != nil {\n\t\t\t\tosHostname = tt.osHostname\n\t\t\t} else {\n\t\t\t\tosHostname = os.Hostname\n\t\t\t}\n\t\t\tif tt.netLookupIP != nil {\n\t\t\t\tnetLookupIP = tt.netLookupIP\n\t\t\t} else {\n\t\t\t\tnetLookupIP = net.LookupIP\n\t\t\t}\n\t\t\tif tt.netLookupAddr != nil {\n\t\t\t\tnetLookupAddr = tt.netLookupAddr\n\t\t\t} else {\n\t\t\t\tnetLookupAddr = net.LookupAddr\n\t\t\t}\n\t\t\tif got := Find(); got != tt.want {\n\t\t\t\tt.Errorf(\"Find() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t\t// reset everything after each run\n\t\t\talternativeHostnameLock.Lock()\n\t\t\tdefer alternativeHostnameLock.Unlock()\n\t\t\talternativeHostname = \"\"\n\t\t\talternativeHostnameOnce = &sync.Once{}\n\t\t})\n\t}\n}\n\nconst (\n\tconstHostname   = \"alternative.hostname\"\n\tconstMyhostname = \"myhostname\"\n)\n\nfunc TestInitializeAlternativeHostname(t *testing.T) {\n\ttype args struct {\n\t\thostname string\n\t}\n\ttests := []struct {\n\t\tname string\n\t\targs args\n\t\twant string\n\t\tpre  func()\n\t}{\n\t\t{\n\t\t\tname: \"set and success\",\n\t\t\targs: args{hostname: constHostname},\n\t\t\twant: constHostname,\n\t\t},\n\t\t{\n\t\t\tname: \"second initialize call will not override\",\n\t\t\targs: args{hostname: \"hostname2\"},\n\t\t\twant: constHostname,\n\t\t\tpre: func() {\n\t\t\t\tInitializeAlternativeHostname(constHostname)\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"an empty initialize call does not initialize it\",\n\t\t\targs: args{hostname: constHostname},\n\t\t\twant: constHostname,\n\t\t\tpre: func() {\n\t\t\t\tInitializeAlternativeHostname(\"\")\n\t\t\t},\n\t\t},\n\t}\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\tif tt.pre !=
 nil {\n\t\t\t\ttt.pre()\n\t\t\t}\n\t\t\tInitializeAlternativeHostname(tt.args.hostname)\n\t\t\tif got := getAlternativeHostname(); got != tt.want {\n\t\t\t\tt.Errorf(\"InitializeAlternativeHostname() = %v, want %v\", got, tt.want)\n\t\t\t}\n\t\t\t// reset everything after each run\n\t\t\talternativeHostnameLock.Lock()\n\t\t\tdefer alternativeHostnameLock.Unlock()\n\t\t\talternativeHostname = \"\"\n\t\t\talternativeHostnameOnce = &sync.Once{}\n\t\t})\n\t}\n}\n"
  },
  {
    "path": "utils/frontman/driver_windows.go",
    "content": "// +build windows\n\npackage frontman\n\nimport (\n\t\"syscall\"\n)\n\n// ABI represents the 'application binary interface' to the Frontman dll\ntype ABI interface {\n\tFrontmanOpenShared() (uintptr, error)\n\tGetDestInfo(driverHandle, socket, destInfo uintptr) (uintptr, error)\n\tApplyDestHandle(socket, destHandle uintptr) (uintptr, error)\n\tFreeDestHandle(destHandle uintptr) (uintptr, error)\n\tNewIpset(driverHandle, name, ipsetType, ipset uintptr) (uintptr, error)\n\tGetIpset(driverHandle, name, ipset uintptr) (uintptr, error)\n\tDestroyAllIpsets(driverHandle, prefix uintptr) (uintptr, error)\n\tListIpsets(driverHandle, ipsetNames, ipsetNamesSize, bytesReturned uintptr) (uintptr, error)\n\tListIpsetsDetail(driverHandle, format, ipsetNames, ipsetNamesSize, bytesReturned uintptr) (uintptr, error)\n\tIpsetAdd(driverHandle, ipset, entry, timeout uintptr) (uintptr, error)\n\tIpsetAddOption(driverHandle, ipset, entry, option, timeout uintptr) (uintptr, error)\n\tIpsetDelete(driverHandle, ipset, entry uintptr) (uintptr, error)\n\tIpsetDestroy(driverHandle, ipset uintptr) (uintptr, error)\n\tIpsetFlush(driverHandle, ipset uintptr) (uintptr, error)\n\tIpsetTest(driverHandle, ipset, entry uintptr) (uintptr, error)\n\tPacketFilterStart(frontman, firewallName, receiveCallback, loggingCallback uintptr) (uintptr, error)\n\tPacketFilterClose() (uintptr, error)\n\tPacketFilterForward(info, packet uintptr) (uintptr, error)\n\tAppendFilter(driverHandle, outbound, filterName, isGotoFilter uintptr) (uintptr, error)\n\tInsertFilter(driverHandle, outbound, priority, filterName, isGotoFilter uintptr) (uintptr, error)\n\tDestroyFilter(driverHandle, filterName uintptr) (uintptr, error)\n\tEmptyFilter(driverHandle, filterName uintptr) (uintptr, error)\n\tGetFilterList(driverHandle, outbound, buffer, bufferSize, bytesReturned uintptr) (uintptr, error)\n\tAppendFilterCriteria(driverHandle, filterName, criteriaName, ruleSpec, ipsetRuleSpecs, ipsetRuleSpecCount uintptr) 
(uintptr, error)\n\tDeleteFilterCriteria(driverHandle, filterName, criteriaName uintptr) (uintptr, error)\n\tGetCriteriaList(driverHandle, format, criteriaList, criteriaListSize, bytesReturned uintptr) (uintptr, error)\n}\n\ntype driver struct {\n}\n\n// Driver is actually the concrete calls into the Frontman dll, which call into the driver\nvar Driver = ABI(&driver{})\n\nfunc (d *driver) FrontmanOpenShared() (uintptr, error) {\n\tret, _, err := frontManOpenProc.Call()\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) GetDestInfo(driverHandle, socket, destInfo uintptr) (uintptr, error) {\n\tret, _, err := getDestInfoProc.Call(driverHandle, socket, destInfo)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) ApplyDestHandle(socket, destHandle uintptr) (uintptr, error) {\n\tret, _, err := applyDestHandleProc.Call(socket, destHandle)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) FreeDestHandle(destHandle uintptr) (uintptr, error) {\n\tret, _, err := freeDestHandleProc.Call(destHandle)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) NewIpset(driverHandle, name, ipsetType, ipset uintptr) (uintptr, error) {\n\tret, _, err := newIpsetProc.Call(driverHandle, name, ipsetType, ipset)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) GetIpset(driverHandle, name, ipset uintptr) (uintptr, error) {\n\tret, _, err := getIpsetProc.Call(driverHandle, name, ipset)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) DestroyAllIpsets(driverHandle, prefix uintptr) (uintptr, error) {\n\tret, _, err := destroyAllIpsetsProc.Call(driverHandle, prefix)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) ListIpsets(driverHandle, ipsetNames, ipsetNamesSize, bytesReturned uintptr) (uintptr, 
error) {\n\tret, _, err := listIpsetsProc.Call(driverHandle, ipsetNames, ipsetNamesSize, bytesReturned)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) ListIpsetsDetail(driverHandle, format, ipsetNames, ipsetNamesSize, bytesReturned uintptr) (uintptr, error) {\n\tret, _, err := listIpsetsDetailProc.Call(driverHandle, format, ipsetNames, ipsetNamesSize, bytesReturned)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) IpsetAdd(driverHandle, ipset, entry, timeout uintptr) (uintptr, error) {\n\tret, _, err := ipsetAddProc.Call(driverHandle, ipset, entry, timeout)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) IpsetAddOption(driverHandle, ipset, entry, option, timeout uintptr) (uintptr, error) {\n\tret, _, err := ipsetAddOptionProc.Call(driverHandle, ipset, entry, option, timeout)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) IpsetDelete(driverHandle, ipset, entry uintptr) (uintptr, error) {\n\tret, _, err := ipsetDeleteProc.Call(driverHandle, ipset, entry)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) IpsetDestroy(driverHandle, ipset uintptr) (uintptr, error) {\n\tret, _, err := ipsetDestroyProc.Call(driverHandle, ipset)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) IpsetFlush(driverHandle, ipset uintptr) (uintptr, error) {\n\tret, _, err := ipsetFlushProc.Call(driverHandle, ipset)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) IpsetTest(driverHandle, ipset, entry uintptr) (uintptr, error) {\n\tret, _, err := ipsetTestProc.Call(driverHandle, ipset, entry)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) PacketFilterStart(frontman, firewallName, receiveCallback, loggingCallback uintptr) (uintptr, error) 
{\n\tret, _, err := packetFilterStartProc.Call(frontman, firewallName, receiveCallback, loggingCallback)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) PacketFilterClose() (uintptr, error) {\n\tret, _, err := packetFilterCloseProc.Call()\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) PacketFilterForward(info, packet uintptr) (uintptr, error) {\n\tret, _, err := packetFilterForwardProc.Call(info, packet)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) AppendFilter(driverHandle, outbound, filterName, isGotoFilter uintptr) (uintptr, error) {\n\tret, _, err := appendFilterProc.Call(driverHandle, outbound, filterName, isGotoFilter)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) InsertFilter(driverHandle, outbound, priority, filterName, isGotoFilter uintptr) (uintptr, error) {\n\tret, _, err := insertFilterProc.Call(driverHandle, outbound, priority, filterName, isGotoFilter)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) DestroyFilter(driverHandle, filterName uintptr) (uintptr, error) {\n\tret, _, err := destroyFilterProc.Call(driverHandle, filterName)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) EmptyFilter(driverHandle, filterName uintptr) (uintptr, error) {\n\tret, _, err := emptyFilterProc.Call(driverHandle, filterName)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) GetFilterList(driverHandle, outbound, buffer, bufferSize, bytesReturned uintptr) (uintptr, error) {\n\tret, _, err := getFilterListProc.Call(driverHandle, outbound, buffer, bufferSize, bytesReturned)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) AppendFilterCriteria(driverHandle, filterName, criteriaName, ruleSpec, ipsetRuleSpecs, 
ipsetRuleSpecCount uintptr) (uintptr, error) {\n\tret, _, err := appendFilterCriteriaProc.Call(driverHandle, filterName, criteriaName, ruleSpec, ipsetRuleSpecs, ipsetRuleSpecCount)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) DeleteFilterCriteria(driverHandle, filterName, criteriaName uintptr) (uintptr, error) {\n\tret, _, err := deleteFilterCriteriaProc.Call(driverHandle, filterName, criteriaName)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\nfunc (d *driver) GetCriteriaList(driverHandle, format, criteriaList, criteriaListSize, bytesReturned uintptr) (uintptr, error) {\n\tret, _, err := getCriteriaListProc.Call(driverHandle, format, criteriaList, criteriaListSize, bytesReturned)\n\tif err == syscall.Errno(0) {\n\t\terr = nil\n\t}\n\treturn ret, err\n}\n\n// Frontman.dll procs to be called from Go\nvar (\n\tdriverDll        = syscall.NewLazyDLL(\"Frontman.dll\")\n\tfrontManOpenProc = driverDll.NewProc(\"FrontmanOpenShared\")\n\n\t// Frontman procs needed for app proxy. 
The pattern to follow is\n\t// - call FrontmanGetDestInfo to get original ip/port\n\t// - create new proxy socket\n\t// - call FrontmanApplyDestHandle to update WFP redirect data\n\t// - connect on the new proxy socket\n\t// - free kernel data by calling FrontmanFreeDestHandle\n\tgetDestInfoProc     = driverDll.NewProc(\"FrontmanGetDestInfo\")\n\tapplyDestHandleProc = driverDll.NewProc(\"FrontmanApplyDestHandle\")\n\tfreeDestHandleProc  = driverDll.NewProc(\"FrontmanFreeDestHandle\")\n\n\tnewIpsetProc         = driverDll.NewProc(\"IpsetProvider_NewIpset\")\n\tgetIpsetProc         = driverDll.NewProc(\"IpsetProvider_GetIpset\")\n\tdestroyAllIpsetsProc = driverDll.NewProc(\"IpsetProvider_DestroyAll\")\n\tlistIpsetsProc       = driverDll.NewProc(\"IpsetProvider_ListIPSets\")\n\tlistIpsetsDetailProc = driverDll.NewProc(\"IpsetProvider_ListIPSetsDetail\")\n\tipsetAddProc         = driverDll.NewProc(\"Ipset_Add\")\n\tipsetAddOptionProc   = driverDll.NewProc(\"Ipset_AddOption\")\n\tipsetDeleteProc      = driverDll.NewProc(\"Ipset_Delete\")\n\tipsetDestroyProc     = driverDll.NewProc(\"Ipset_Destroy\")\n\tipsetFlushProc       = driverDll.NewProc(\"Ipset_Flush\")\n\tipsetTestProc        = driverDll.NewProc(\"Ipset_Test\")\n\n\tpacketFilterStartProc   = driverDll.NewProc(\"PacketFilterStart\")\n\tpacketFilterCloseProc   = driverDll.NewProc(\"PacketFilterClose\")\n\tpacketFilterForwardProc = driverDll.NewProc(\"PacketFilterForwardPacket\")\n\n\tappendFilterProc         = driverDll.NewProc(\"AppendFilter\")\n\tinsertFilterProc         = driverDll.NewProc(\"InsertFilter\")\n\tdestroyFilterProc        = driverDll.NewProc(\"DestroyFilter\")\n\temptyFilterProc          = driverDll.NewProc(\"EmptyFilter\")\n\tgetFilterListProc        = driverDll.NewProc(\"GetFilterList\")\n\tappendFilterCriteriaProc = driverDll.NewProc(\"AppendFilterCriteria\")\n\tdeleteFilterCriteriaProc = driverDll.NewProc(\"DeleteFilterCriteria\")\n\tgetCriteriaListProc      = 
driverDll.NewProc(\"GetCriteriaList\")\n)\n\n// See frontmanIO.h for #defines\nconst (\n\tFilterActionContinue = iota\n\tFilterActionAllow\n\tFilterActionBlock\n\tFilterActionProxy\n\tFilterActionNfq\n\tFilterActionForceNfq\n\tFilterActionAllowOnce\n\tFilterActionGotoFilter\n\tFilterActionSetMark\n)\n\n// See frontmanIO.h for #defines\nconst (\n\tBytesMatchStartIPHeader = iota + 1\n\tBytesMatchStartProtocolHeader\n\tBytesMatchStartPayload\n)\n\n// ProcessMatch constants\nconst (\n\tProcessMatchProcess  = iota + 1 // Match the process id\n\tProcessMatchChildren            // Match the child processes\n)\n\n// See Filter_set.h\nconst (\n\tCriteriaListFormatString = iota + 1\n\tCriteriaListFormatJSON\n)\n\n// See Ipset.h\nconst (\n\tIpsetsDetailFormatString = iota + 1\n\tIpsetsDetailFormatJSON\n)\n\n// See frontmanIO.h\nconst (\n\tMatchTypeMatch   = uint8(1)\n\tMatchTypeNoMatch = uint8(2)\n)\n\n// See frontmanIO.h\nconst (\n\tIPVersionAny = uint8(0) // Rule is for Ipv4 or Ipv6\n\tIPVersion4   = uint8(1) // Rule is just for Ipv4\n\tIPVersion6   = uint8(2) // Rule is just for Ipv6\n)\n\n// DestInfo mirrors frontman's DEST_INFO struct\ntype DestInfo struct {\n\tIPAddr     *uint16 // WCHAR* IPAddress\t\tDestination address allocated and will be free by FrontmanFreeDestHandle\n\tPort       uint16  // USHORT Port\t\t\tDestination port\n\tOutbound   int32   // INT32 Outbound\t\tWhether or not this is an outbound or inbound connection\n\tProcessID  uint64  // UINT64 ProcessId\t\tProcess id.  
Only available for outbound connections\n\tDestHandle uintptr // LPVOID DestHandle\t\tHandle to memory that must be freed by called ProxyDestConnected when connection is established.\n}\n\n// PacketInfo mirrors frontman's FRONTMAN_PACKET_INFO struct\ntype PacketInfo struct {\n\tIpv4                         uint8\n\tProtocol                     uint8\n\tOutbound                     uint8\n\tDrop                         uint8\n\tIgnoreFlow                   uint8\n\tHandleLoopback               uint8 // Not to be set by go code, but is for outbound loopback packets\n\tNewPacket                    uint8 // Set to 1 if packet did not originate from the driver.\n\tNoPidMatchOnFlow             uint8 // Set to 1 to ignore process ID rule matches.\n\tDropFlow                     uint8\n\tSetMark                      uint8\n\tReserved                     [2]uint8\n\tSetMarkValue                 uint32\n\tLocalPort                    uint16\n\tRemotePort                   uint16\n\tLocalAddr                    [4]uint32\n\tRemoteAddr                   [4]uint32\n\tIfIdx                        uint32\n\tSubIfIdx                     uint32\n\tCompartmentID                uint32\n\tPacketSize                   uint32\n\tMark                         uint32\n\tStartTimeReceivedFromNetwork uint64\n\tStartTimeSentToUserLand      uint64\n}\n\n// LogPacketInfo mirrors frontman's FRONTMAN_LOG_PACKET_INFO struct\ntype LogPacketInfo struct {\n\tIpv4       uint8\n\tProtocol   uint8\n\tOutbound   uint8\n\tReserved1  uint8\n\tLocalPort  uint16\n\tRemotePort uint16\n\tLocalAddr  [4]uint32\n\tRemoteAddr [4]uint32\n\tPacketSize uint32\n\tGroupID    uint32\n\tLogPrefix  [64]uint16\n}\n\n// IpsetRuleSpec mirrors frontman's IPSET_RULE_SPEC struct\ntype IpsetRuleSpec struct {\n\tNotIpset     uint8\n\tIpsetDstIP   uint8\n\tIpsetDstPort uint8\n\tIpsetSrcIP   uint8\n\tIpsetSrcPort uint8\n\tReserved1    uint8\n\tReserved2    uint8\n\tReserved3    uint8\n\tIpsetName    uintptr // const 
wchar_t*\n}\n\n// PortRange mirrors frontman's PORT_RANGE struct\ntype PortRange struct {\n\tPortStart uint16\n\tPortEnd   uint16\n}\n\n// IcmpRange mirrors frontman's ICMP_RANGE struct\ntype IcmpRange struct {\n\tIcmpTypeSpecified uint8\n\tIcmpType          uint8\n\tIcmpCodeSpecified uint8\n\tIcmpCodeLower     uint8\n\tIcmpCodeUpper     uint8\n}\n\n// RuleSpec mirrors frontman's RULE_SPEC struct\ntype RuleSpec struct {\n\tAction                 uint8\n\tLog                    uint8\n\tProtocol               uint8\n\tProtocolSpecified      uint8\n\tAleAuthConnect         uint8 // not used by us\n\tProcessFlags           uint8 // See frontmanIO.h bit mask PROCESS_MATCH_PROCESS and/or PROCESS_MATCH_CHILDREN\n\tTCPFlags               uint8\n\tTCPFlagsMask           uint8\n\tTCPFlagsSpecified      uint8\n\tTCPOption              uint8\n\tTCPOptionSpecified     uint8\n\tCompartmentIDSpecified uint8\n\tBytesNoMatch           uint8\n\tFlowMarkMatchType      uint8 // MATCH_TYPE_MATCH = 1 MATCH_TYPE_NOMATCH = 2\n\tIPVersionMatch         uint8 // IP_VERSION_ANY, IP_VERSION_4, or IP_VERSION_6\n\tReserved               uint8\n\tFlowMark               uint32\n\tCompartmentID          uint32\n\tIcmpRanges             *IcmpRange\n\tIcmpRangeCount         int32\n\tProxyPort              uint16\n\tBytesMatchStart        int16 // See frontmanIO.h for BYTESMATCH defines.\n\tBytesMatchOffset       int32\n\tBytesMatchSize         int32\n\tBytesMatch             *byte\n\tMark                   uint32\n\tGroupID                uint32\n\tSrcPortCount           int32\n\tDstPortCount           int32\n\tSrcPorts               *PortRange\n\tDstPorts               *PortRange\n\tLogPrefix              uintptr // const wchar_t*\n\tApplication            uintptr // const wchar_t*\n\tProcessID              uint64\n\tGotoFilterName         uintptr // const wchar_t*\n}\n"
  },
  {
    "path": "utils/frontman/driver_windows_test.go",
    "content": "// +build windows,ignore\n\npackage frontman\n\n// TODO(windows): these tests need to be moved to integration tests when possible\n\nimport (\n\t\"testing\"\n\t\"unsafe\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestFrontmanStructLayout(t *testing.T) {\n\n\tConvey(\"Given a Frontman PDB\", t, func() {\n\n\t\tpdb, err := abi.FindFrontmanPdb()\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"The layout of DestInfo and DEST_INFO should be the same\", func() {\n\t\t\tlayout, err := pdb.GetStructLayout(\"_DEST_INFO\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(unsafe.Sizeof(DestInfo{}), ShouldEqual, layout.Size)\n\t\t\t// WCHAR* IPAddress\n\t\t\tindex := 0\n\t\t\tSo(\"IPAddress\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(DestInfo{}.IpAddr), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// USHORT Port\n\t\t\tindex++\n\t\t\tSo(\"Port\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(DestInfo{}.Port), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// INT32 Outbound\n\t\t\tindex++\n\t\t\tSo(\"Outbound\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(DestInfo{}.Outbound), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT64 ProcessId\n\t\t\tindex++\n\t\t\tSo(\"ProcessId\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(DestInfo{}.ProcessId), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// LPVOID DestHandle\n\t\t\tindex++\n\t\t\tSo(\"DestHandle\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(DestInfo{}.DestHandle), ShouldEqual, layout.Members[index].Offset)\n\t\t})\n\n\t\tConvey(\"The layout of PacketInfo and FRONTMAN_PACKET_INFO should be the same\", func() {\n\t\t\tlayout, err := pdb.GetStructLayout(\"FRONTMAN_PACKET_INFO\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(unsafe.Sizeof(PacketInfo{}), ShouldEqual, layout.Size)\n\t\t\t// UINT8 Ipv4\n\t\t\tindex := 0\n\t\t\tSo(\"Ipv4\", ShouldEqual, 
layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.Ipv4), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 Protocol\n\t\t\tindex++\n\t\t\tSo(\"Protocol\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.Protocol), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 Outbound\n\t\t\tindex++\n\t\t\tSo(\"Outbound\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.Outbound), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 Drop\n\t\t\tindex++\n\t\t\tSo(\"Drop\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.Drop), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 IngoreFlow [sic]\n\t\t\tindex++\n\t\t\tSo(\"IngoreFlow\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.IgnoreFlow), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// skip reserved\n\t\t\tindex += 3\n\t\t\t// UINT16 LocalPort\n\t\t\tindex++\n\t\t\tSo(\"LocalPort\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.LocalPort), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT16 RemotePort\n\t\t\tindex++\n\t\t\tSo(\"RemotePort\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.RemotePort), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT32 LocalAddr[4]\n\t\t\tindex++\n\t\t\tSo(\"LocalAddr\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.LocalAddr), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT32 RemoteAddr[4]\n\t\t\tindex++\n\t\t\tSo(\"RemoteAddr\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.RemoteAddr), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT32 IfIdx\n\t\t\tindex++\n\t\t\tSo(\"IfIdx\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.IfIdx), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT32 
SubIfIdx\n\t\t\tindex++\n\t\t\tSo(\"SubIfIdx\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.SubIfIdx), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT32 PacketSize\n\t\t\tindex++\n\t\t\tSo(\"PacketSize\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.PacketSize), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT32 Mark\n\t\t\tindex++\n\t\t\tSo(\"Mark\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.Mark), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT64 StartTimeReceivedFromNetwork\n\t\t\tindex++\n\t\t\tSo(\"StartTimeReceivedFromNetwork\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.StartTimeReceivedFromNetwork), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT64 StartTimeSentToUserLand\n\t\t\tindex++\n\t\t\tSo(\"StartTimeSentToUserLand\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PacketInfo{}.StartTimeSentToUserLand), ShouldEqual, layout.Members[index].Offset)\n\t\t})\n\n\t\tConvey(\"The layout of RuleSpec and RULE_SPEC should be the same\", func() {\n\t\t\tlayout, err := pdb.GetStructLayout(\"RULE_SPEC\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(unsafe.Sizeof(RuleSpec{}), ShouldEqual, layout.Size)\n\t\t\t// UINT8 Action\n\t\t\tindex := 0\n\t\t\tSo(\"Action\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.Action), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 Log\n\t\t\tindex++\n\t\t\tSo(\"Log\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.Log), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 Protocol\n\t\t\tindex++\n\t\t\tSo(\"Protocol\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.Protocol), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 ProtocolSpecified\n\t\t\tindex++\n\t\t\tSo(\"ProtocolSpecified\", ShouldEqual, 
layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.ProtocolSpecified), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 IcmpType\n\t\t\tindex++\n\t\t\tSo(\"IcmpType\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.IcmpType), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 IcmpTypeSpecified\n\t\t\tindex++\n\t\t\tSo(\"IcmpTypeSpecified\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.IcmpTypeSpecified), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 IcmpCode\n\t\t\tindex++\n\t\t\tSo(\"IcmpCode\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.IcmpCode), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 IcmpCodeSpecified\n\t\t\tindex++\n\t\t\tSo(\"IcmpCodeSpecified\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.IcmpCodeSpecified), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 AleAuthConnect\n\t\t\tindex++\n\t\t\tSo(\"AleAuthConnect\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.AleAuthConnect), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// skip reserved\n\t\t\tindex++\n\t\t\t// UINT16 ProxyPort\n\t\t\tindex++\n\t\t\tSo(\"ProxyPort\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.ProxyPort), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// INT16 BytesMatchStart\n\t\t\tindex++\n\t\t\tSo(\"BytesMatchStart\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.BytesMatchStart), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// INT32 BytesMatchOffset\n\t\t\tindex++\n\t\t\tSo(\"BytesMatchOffset\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.BytesMatchOffset), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// INT32 BytesMatchSize\n\t\t\tindex++\n\t\t\tSo(\"BytesMatchSize\", ShouldEqual, 
layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.BytesMatchSize), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// PBYTE BytesMatch\n\t\t\tindex++\n\t\t\tSo(\"BytesMatch\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.BytesMatch), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT32 Mark\n\t\t\tindex++\n\t\t\tSo(\"Mark\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.Mark), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT32 GroupId\n\t\t\tindex++\n\t\t\tSo(\"GroupId\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.GroupId), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// INT32 SrcPortCount\n\t\t\tindex++\n\t\t\tSo(\"SrcPortCount\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.SrcPortCount), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// INT32 DstPortCount\n\t\t\tindex++\n\t\t\tSo(\"DstPortCount\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.DstPortCount), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// PPORT_RANGE SrcPorts\n\t\t\tindex++\n\t\t\tSo(\"SrcPorts\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.SrcPorts), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// PPORT_RANGE DstPorts\n\t\t\tindex++\n\t\t\tSo(\"DstPorts\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.DstPorts), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// LPCWCH LogPrefix\n\t\t\tindex++\n\t\t\tSo(\"LogPrefix\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.LogPrefix), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// LPCWCH Application\n\t\t\tindex++\n\t\t\tSo(\"Application\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(RuleSpec{}.Application), ShouldEqual, layout.Members[index].Offset)\n\t\t})\n\n\t\tConvey(\"The layout of IpsetRuleSpec and 
IPSET_RULE_SPEC should be the same\", func() {\n\t\t\tlayout, err := pdb.GetStructLayout(\"IPSET_RULE_SPEC\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(unsafe.Sizeof(IpsetRuleSpec{}), ShouldEqual, layout.Size)\n\t\t\t// UINT8 NotIpset\n\t\t\tindex := 0\n\t\t\tSo(\"NotIpset\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(IpsetRuleSpec{}.NotIpset), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 IpsetDstIp\n\t\t\tindex++\n\t\t\tSo(\"IpsetDstIp\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(IpsetRuleSpec{}.IpsetDstIp), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 IpsetDstPort\n\t\t\tindex++\n\t\t\tSo(\"IpsetDstPort\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(IpsetRuleSpec{}.IpsetDstPort), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 IpsetSrcIp\n\t\t\tindex++\n\t\t\tSo(\"IpsetSrcIp\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(IpsetRuleSpec{}.IpsetSrcIp), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 IpsetSrcPort\n\t\t\tindex++\n\t\t\tSo(\"IpsetSrcPort\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(IpsetRuleSpec{}.IpsetSrcPort), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// skip reserved\n\t\t\tindex++\n\t\t\t// LPCWCH IpsetName\n\t\t\tindex++\n\t\t\tSo(\"IpsetName\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(IpsetRuleSpec{}.IpsetName), ShouldEqual, layout.Members[index].Offset)\n\t\t})\n\n\t\tConvey(\"The layout of PortRange and PORT_RANGE should be the same\", func() {\n\t\t\tlayout, err := pdb.GetStructLayout(\"PORT_RANGE\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(unsafe.Sizeof(PortRange{}), ShouldEqual, layout.Size)\n\t\t\t// UINT16 PortStart\n\t\t\tindex := 0\n\t\t\tSo(\"PortStart\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PortRange{}.PortStart), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT16 
PortEnd\n\t\t\tindex++\n\t\t\tSo(\"PortEnd\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(PortRange{}.PortEnd), ShouldEqual, layout.Members[index].Offset)\n\t\t})\n\n\t\tConvey(\"The layout of LogPacketInfo and FRONTMAN_LOG_PACKET_INFO should be the same\", func() {\n\t\t\tlayout, err := pdb.GetStructLayout(\"FRONTMAN_LOG_PACKET_INFO\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(unsafe.Sizeof(LogPacketInfo{}), ShouldEqual, layout.Size)\n\t\t\t// UINT8 Ipv4\n\t\t\tindex := 0\n\t\t\tSo(\"Ipv4\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(LogPacketInfo{}.Ipv4), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 Protocol\n\t\t\tindex++\n\t\t\tSo(\"Protocol\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(LogPacketInfo{}.Protocol), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT8 Outbound\n\t\t\tindex++\n\t\t\tSo(\"Outbound\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(LogPacketInfo{}.Outbound), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// skip reserved\n\t\t\tindex++\n\t\t\t// UINT16 LocalPort\n\t\t\tindex++\n\t\t\tSo(\"LocalPort\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(LogPacketInfo{}.LocalPort), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT16 RemotePort\n\t\t\tindex++\n\t\t\tSo(\"RemotePort\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(LogPacketInfo{}.RemotePort), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT32 LocalAddr[4]\n\t\t\tindex++\n\t\t\tSo(\"LocalAddr\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(LogPacketInfo{}.LocalAddr), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT32 RemoteAddr[4]\n\t\t\tindex++\n\t\t\tSo(\"RemoteAddr\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(LogPacketInfo{}.RemoteAddr), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT32 PacketSize\n\t\t\tindex++\n\t\t\tSo(\"PacketSize\", 
ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(LogPacketInfo{}.PacketSize), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// UINT32 GroupId\n\t\t\tindex++\n\t\t\tSo(\"GroupId\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(LogPacketInfo{}.GroupId), ShouldEqual, layout.Members[index].Offset)\n\t\t\t// WCHAR LogPrefix[LOGPREFIX_MAX_LENGTH]\n\t\t\tindex++\n\t\t\tSo(\"LogPrefix\", ShouldEqual, layout.Members[index].Name)\n\t\t\tSo(unsafe.Offsetof(LogPacketInfo{}.LogPrefix), ShouldEqual, layout.Members[index].Offset)\n\t\t})\n\n\t})\n\n}\n\nfunc TestFrontmanFunctionArguments(t *testing.T) {\n\n\tconst pointerSize = unsafe.Sizeof(uintptr(0))\n\n\tConvey(\"Given a Frontman PDB\", t, func() {\n\n\t\tpdb, err := abi.FindFrontmanPdb()\n\t\tSo(err, ShouldBeNil)\n\n\t\tConvey(\"The arguments to FrontmanGetDestInfo should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"FrontmanGetDestInfo\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 3)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to FrontmanApplyDestHandle should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"FrontmanApplyDestHandle\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 2)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to FrontmanFreeDestHandle should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"FrontmanFreeDestHandle\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 1)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to IpsetProvider_NewIpset should be as expected\", func() {\n\t\t\tfuncArgs, err := 
pdb.GetFunctionArguments(\"IpsetProvider_NewIpset\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 4)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[3].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to IpsetProvider_GetIpset should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"IpsetProvider_GetIpset\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 3)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to IpsetProvider_DestroyAll should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"IpsetProvider_DestroyAll\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 2)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to IpsetProvider_ListIPSets should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"IpsetProvider_ListIPSets\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 4)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, unsafe.Sizeof(uint32(0)))\n\t\t\tSo(funcArgs[3].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to IpsetProvider_ListIPSetsDetail should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"IpsetProvider_ListIPSetsDetail\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 5)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, unsafe.Sizeof(int32(0)))\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, 
pointerSize)\n\t\t\tSo(funcArgs[3].Size, ShouldEqual, unsafe.Sizeof(uint32(0)))\n\t\t\tSo(funcArgs[4].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to Ipset_Add should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"Ipset_Add\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 4)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[3].Size, ShouldEqual, unsafe.Sizeof(uint32(0)))\n\t\t})\n\n\t\tConvey(\"The arguments to Ipset_AddOption should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"Ipset_AddOption\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 5)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[3].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[4].Size, ShouldEqual, unsafe.Sizeof(uint32(0)))\n\t\t})\n\n\t\tConvey(\"The arguments to Ipset_Delete should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"Ipset_Delete\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 3)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to Ipset_Destroy should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"Ipset_Destroy\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 2)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to Ipset_Flush should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"Ipset_Flush\")\n\t\t\tSo(err, 
ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 2)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to Ipset_Test should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"Ipset_Test\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 3)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to PacketFilterStart should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"PacketFilterStart\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 4)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[3].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to PacketFilterClose should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"PacketFilterClose\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 0)\n\t\t})\n\n\t\tConvey(\"The arguments to PacketFilterForwardPacket should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"PacketFilterForwardPacket\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 2)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to AppendFilter should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"AppendFilter\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 3)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, unsafe.Sizeof(uint32(0)))\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, 
pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to InsertFilter should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"InsertFilter\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 4)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, unsafe.Sizeof(uint32(0)))\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, unsafe.Sizeof(uint32(0)))\n\t\t\tSo(funcArgs[3].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to DestroyFilter should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"DestroyFilter\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 2)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to EmptyFilter should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"EmptyFilter\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 2)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to GetFilterList should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"GetFilterList\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 5)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, unsafe.Sizeof(uint32(0)))\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[3].Size, ShouldEqual, unsafe.Sizeof(uint32(0)))\n\t\t\tSo(funcArgs[4].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to AppendFilterCriteria should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"AppendFilterCriteria\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 6)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, 
pointerSize)\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[3].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[4].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[5].Size, ShouldEqual, unsafe.Sizeof(uint32(0)))\n\t\t})\n\n\t\tConvey(\"The arguments to DeleteFilterCriteria should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"DeleteFilterCriteria\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 3)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t\tConvey(\"The arguments to GetCriteriaList should be as expected\", func() {\n\t\t\tfuncArgs, err := pdb.GetFunctionArguments(\"GetCriteriaList\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(funcArgs), ShouldEqual, 5)\n\t\t\tSo(funcArgs[0].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[1].Size, ShouldEqual, unsafe.Sizeof(int32(0)))\n\t\t\tSo(funcArgs[2].Size, ShouldEqual, pointerSize)\n\t\t\tSo(funcArgs[3].Size, ShouldEqual, unsafe.Sizeof(uint32(0)))\n\t\t\tSo(funcArgs[4].Size, ShouldEqual, pointerSize)\n\t\t})\n\n\t})\n\n}\n"
  },
  {
    "path": "utils/frontman/rulecleanup_windows.go",
"content": "// +build windows\n\npackage frontman\n\nimport (\n\t\"strings\"\n)\n\ntype ruleCleanup interface {\n\tmapIpsetToRule(ipsetName string, filterName, criteriaName string)\n\tdeleteRulesForIpset(wrapDriver WrapDriver, ipsetName string) error\n\tdeleteRuleForIpsetByPrefix(wrapDriver WrapDriver, prefix string) error\n\tdeleteRuleFromIpsetMap(filterName, criteriaName string)\n\tgetRulesForIpset(ipsetName string) []*filterRulePair\n}\n\ntype ruleCleaner struct {\n\tipsetToRule map[string][]*filterRulePair\n}\n\ntype filterRulePair struct {\n\tfilterName   string\n\truleCriteria string\n}\n\nfunc newRuleCleanup() ruleCleanup {\n\treturn &ruleCleaner{\n\t\tipsetToRule: make(map[string][]*filterRulePair),\n\t}\n}\n\n// We keep a map of ipset name to rules. This is a safeguard to ensure that rules are deleted.\n// Normally, during policy update, rules and ipsets are deleted explicitly and all works well.\n// There are some instances though (Discovery Mode) where criteria name strings used in deletion\n// do not align with criteria name strings used during addition. 
This will result in rules\n// stranded in our driver, which is really bad for multiple reasons.\n// Since deletion of ipsets is more orderly than deletion of rules by criteria name, and since\n// the rules that are subject to being stranded are all so far associated with an ephemeral ipset,\n// we can use the deletion of an ipset to trigger a cleanup of all rules associated with it.\n\nfunc (r *ruleCleaner) mapIpsetToRule(ipsetName string, filterName, criteriaName string) {\n\tr.ipsetToRule[ipsetName] = append(r.ipsetToRule[ipsetName], &filterRulePair{filterName: filterName, ruleCriteria: criteriaName})\n}\n\nfunc (r *ruleCleaner) deleteRulesForIpset(wrapDriver WrapDriver, ipsetName string) error {\n\tfor _, frpair := range r.ipsetToRule[ipsetName] {\n\t\t// In recent changes, these rules are already gone, so don't log an error if this fails.\n\t\twrapDriver.DeleteFilterCriteria(frpair.filterName, frpair.ruleCriteria) //nolint\n\t}\n\tdelete(r.ipsetToRule, ipsetName)\n\treturn nil\n}\n\nfunc (r *ruleCleaner) deleteRuleForIpsetByPrefix(wrapDriver WrapDriver, prefix string) error {\n\n\tipsets, err := wrapDriver.ListIpsets()\n\tif err != nil {\n\t\treturn err\n\t}\n\tfor _, ipsetName := range ipsets {\n\t\tif strings.HasPrefix(ipsetName, prefix) {\n\t\t\tif err := r.deleteRulesForIpset(wrapDriver, ipsetName); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (r *ruleCleaner) deleteRuleFromIpsetMap(filterName, criteriaName string) {\n\t// quadratic for now, optimize later if we need to\n\tfor ipsetName, frpairs := range r.ipsetToRule {\n\t\tfor i, frpair := range frpairs {\n\t\t\tif frpair.filterName == filterName && frpair.ruleCriteria == criteriaName {\n\t\t\t\t// delete pair from slice\n\t\t\t\tfrpairs[i] = frpairs[len(frpairs)-1]\n\t\t\t\tr.ipsetToRule[ipsetName] = frpairs[:len(frpairs)-1]\n\t\t\t\t// rule can be mapped from multiple ipsets, so continue our outer loop\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc (r *ruleCleaner) 
getRulesForIpset(ipsetName string) []*filterRulePair {\n\tfrpairs := r.ipsetToRule[ipsetName]\n\tresult := make([]*filterRulePair, len(frpairs))\n\tcopy(result, frpairs)\n\treturn result\n}\n"
  },
  {
    "path": "utils/frontman/rulecleanup_windows_test.go",
    "content": "// +build windows\n\npackage frontman\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n)\n\nconst (\n\tipset1  = \"ipset-1\"\n\tipset2  = \"ipset-2\"\n\tipset3  = \"ipset-3\"\n\tipset4  = \"ipset-4\"\n\tipset5  = \"ipset-5\"\n\tipset6a = \"ipset-6a\"\n\tipset6b = \"ipset-6b\"\n\tfilter1 = \"filter 1\"\n\tfilter2 = \"filter 2\"\n\tfilter3 = \"filter 3\"\n\tfilter4 = \"filter 4\"\n)\n\nvar (\n\trule1 = fmt.Sprintf(\"rule 1 for %s\", ipset1)\n\trule2 = fmt.Sprintf(\"rule 2 for %s and %s\", ipset1, ipset2)\n\trule3 = fmt.Sprintf(\"rule 3 for %s\", ipset2)\n\trule4 = fmt.Sprintf(\"rule 4 for %s\", ipset3)\n\trule5 = fmt.Sprintf(\"rule 5 for %s\", ipset4)\n\trule6 = fmt.Sprintf(\"rule 6 for %s and %s and two different filters\", ipset4, ipset5)\n\trule7 = fmt.Sprintf(\"rule 7 for %s and two different filters\", ipset4)\n\trule8 = fmt.Sprintf(\"rule 8 for %s\", ipset6a)\n\trule9 = fmt.Sprintf(\"rule 9 for %s\", ipset6b)\n)\n\ntype wrapperForTest struct {\n\tcleaner ruleCleanup\n\tipsets  map[string]bool\n\trules   map[string]map[string]bool\n}\n\nfunc (w *wrapperForTest) addRule(ipsetName, filterName, ruleCriteria string) {\n\tw.cleaner.mapIpsetToRule(ipsetName, filterName, ruleCriteria)\n\tw.ipsets[ipsetName] = true\n\tif _, ok := w.rules[filterName]; !ok {\n\t\tw.rules[filterName] = make(map[string]bool)\n\t}\n\tw.rules[filterName][ruleCriteria] = true\n}\n\nfunc (w *wrapperForTest) deleteIpset(ipsetName string) error {\n\tif err := w.cleaner.deleteRulesForIpset(w, ipsetName); err != nil {\n\t\treturn err\n\t}\n\tdelete(w.ipsets, ipsetName)\n\treturn nil\n}\n\nfunc (w *wrapperForTest) deleteIpsetByPrefix(ipsetNamePrefix string) error {\n\tif err := w.cleaner.deleteRuleForIpsetByPrefix(w, ipsetNamePrefix); err != nil {\n\t\treturn err\n\t}\n\tfor ipsetName := range w.ipsets {\n\t\tif strings.HasPrefix(ipsetName, ipsetNamePrefix) {\n\t\t\tdelete(w.ipsets, ipsetName)\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (w *wrapperForTest) ruleExists(filterName, 
ruleCriteria string) bool {\n\t_, ok := w.rules[filterName][ruleCriteria]\n\treturn ok\n}\n\nfunc (w *wrapperForTest) deleteRule(filterName, ruleCriteria string) {\n\tw.cleaner.deleteRuleFromIpsetMap(filterName, ruleCriteria)\n\tdelete(w.rules[filterName], ruleCriteria)\n}\n\nfunc Test_ruleCleaner(t *testing.T) {\n\n\twrapper := &wrapperForTest{\n\t\tipsets:  make(map[string]bool),\n\t\trules:   make(map[string]map[string]bool),\n\t\tcleaner: newRuleCleanup(),\n\t}\n\n\t// Add some ipsets and rules\n\twrapper.addRule(ipset1, filter1, rule1)\n\twrapper.addRule(ipset1, filter2, rule2)\n\twrapper.addRule(ipset2, filter2, rule2)\n\twrapper.addRule(ipset2, filter1, rule3)\n\twrapper.addRule(ipset2, filter2, rule3)\n\twrapper.addRule(ipset3, filter3, rule4)\n\twrapper.addRule(ipset1, filter3, rule1)\n\twrapper.addRule(ipset1, filter4, rule1)\n\twrapper.addRule(ipset4, filter1, rule5)\n\twrapper.addRule(ipset4, filter1, rule6)\n\twrapper.addRule(ipset5, filter1, rule6)\n\twrapper.addRule(ipset4, filter2, rule7)\n\twrapper.addRule(ipset4, filter2, rule6)\n\twrapper.addRule(ipset5, filter2, rule6)\n\twrapper.addRule(ipset6a, filter3, rule8)\n\twrapper.addRule(ipset6b, filter3, rule9)\n\n\t// I delete an ipset. 
Its rules should be deleted.\n\tif err := wrapper.deleteIpset(ipset2); err != nil {\n\t\tt.Errorf(\"deleteIpset failed: %v\", err)\n\t}\n\tif wrapper.ruleExists(filter2, rule2) || wrapper.ruleExists(filter1, rule3) || wrapper.ruleExists(filter2, rule3) {\n\t\tt.Errorf(\"deleting ipset 2 did not delete all its rules\")\n\t}\n\tif !wrapper.ruleExists(filter1, rule1) {\n\t\tt.Errorf(\"deleting ipset 2 deleted extra rules\")\n\t}\n\n\t// Delete by prefix\n\tif err := wrapper.deleteIpsetByPrefix(\"ipset-6\"); err != nil {\n\t\tt.Errorf(\"deleteIpsetByPrefix failed: %v\", err)\n\t}\n\tif wrapper.ruleExists(filter3, rule8) || wrapper.ruleExists(filter3, rule9) {\n\t\tt.Errorf(\"deleting by prefix did not work\")\n\t}\n\n\t// Delete another ipset associated with different filters and other ipsets\n\tif err := wrapper.deleteIpset(ipset4); err != nil {\n\t\tt.Errorf(\"deleteIpset failed: %v\", err)\n\t}\n\tif wrapper.ruleExists(filter1, rule5) || wrapper.ruleExists(filter1, rule6) || wrapper.ruleExists(filter2, rule6) || wrapper.ruleExists(filter2, rule7) {\n\t\tt.Errorf(\"deleting ipset 4 did not delete all its rules\")\n\t}\n\n\t// Rules 1 and 4 should remain\n\tif !wrapper.ruleExists(filter1, rule1) || !wrapper.ruleExists(filter3, rule1) || !wrapper.ruleExists(filter4, rule1) {\n\t\tt.Errorf(\"expected rule 1 to exist but it does not\")\n\t}\n\tif !wrapper.ruleExists(filter3, rule4) {\n\t\tt.Errorf(\"expected rule 4 to exist but it does not\")\n\t}\n\n\t// Delete rules 1 and 4 and verify\n\twrapper.deleteRule(filter1, rule1)\n\twrapper.deleteRule(filter3, rule1)\n\twrapper.deleteRule(filter4, rule1)\n\twrapper.deleteRule(filter3, rule4)\n\tif wrapper.ruleExists(filter1, rule1) || wrapper.ruleExists(filter3, rule1) || wrapper.ruleExists(filter4, rule1) || wrapper.ruleExists(filter3, rule4) {\n\t\tt.Errorf(\"deleting rules 1 and 4 did not work\")\n\t}\n\n\t// Rules should be gone\n\tfor filterName, ruleCriterias := range wrapper.rules {\n\t\tfor ruleCriteria := range 
ruleCriterias {\n\t\t\tt.Errorf(\"deleted rules but this one for filter %s remains: %s\", filterName, ruleCriteria)\n\t\t}\n\t}\n\n\t// Now we still have some ipsets not deleted, so their maps may not be empty. Explicitly delete these rules.\n\twrapper.deleteRule(filter2, rule2) // in ipset1 map\n\twrapper.deleteRule(filter1, rule6) // in ipset5 map\n\twrapper.deleteRule(filter2, rule6) // in ipset5 map\n\n\t// Verify nothing left in cleaner\n\tremainingRules := make([]*filterRulePair, 0)\n\tremainingRules = append(remainingRules,\n\t\tappend(wrapper.cleaner.getRulesForIpset(ipset1),\n\t\t\tappend(wrapper.cleaner.getRulesForIpset(ipset2),\n\t\t\t\tappend(wrapper.cleaner.getRulesForIpset(ipset3),\n\t\t\t\t\tappend(wrapper.cleaner.getRulesForIpset(ipset4),\n\t\t\t\t\t\tappend(wrapper.cleaner.getRulesForIpset(ipset5),\n\t\t\t\t\t\t\tappend(wrapper.cleaner.getRulesForIpset(ipset6a),\n\t\t\t\t\t\t\t\twrapper.cleaner.getRulesForIpset(ipset6b)...)...)...)...)...)...)...)\n\tfor _, frpair := range remainingRules {\n\t\tt.Errorf(\"deleted rules but cleaner still has this one for filter %s: %s\", frpair.filterName, frpair.ruleCriteria)\n\t}\n}\n\n// WrapDriver implementation for the test only needs to implement the functions called by the rule cleaner\n\nfunc (w *wrapperForTest) ListIpsets() ([]string, error) {\n\tipsetNames := make([]string, 0, len(w.ipsets))\n\tfor ipsetName := range w.ipsets {\n\t\tipsetNames = append(ipsetNames, ipsetName)\n\t}\n\treturn ipsetNames, nil\n}\n\nfunc (w *wrapperForTest) DeleteFilterCriteria(filterName, criteriaName string) error {\n\tdelete(w.rules[filterName], criteriaName)\n\treturn nil\n}\n\n// Dummy implementations below\n\nfunc (w *wrapperForTest) GetDestInfo(socket uintptr, destInfo *DestInfo) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) ApplyDestHandle(socket, destHandle uintptr) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) FreeDestHandle(destHandle uintptr) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) 
NewIpset(name, ipsetType string) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (w *wrapperForTest) GetIpset(name string) (uintptr, error) {\n\treturn 1, nil\n}\n\nfunc (w *wrapperForTest) DestroyAllIpsets(prefix string) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) ListIpsetsDetail(format int) (string, error) {\n\treturn \"\", nil\n}\n\nfunc (w *wrapperForTest) IpsetAdd(ipsetHandle uintptr, entry string, timeout int) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) IpsetAddOption(ipsetHandle uintptr, entry, option string, timeout int) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) IpsetDelete(ipsetHandle uintptr, entry string) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) IpsetDestroy(ipsetHandle uintptr, name string) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) IpsetFlush(ipsetHandle uintptr) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) IpsetTest(ipsetHandle uintptr, entry string) (bool, error) {\n\treturn false, nil\n}\n\nfunc (w *wrapperForTest) PacketFilterStart(firewallName string, receiveCallback, loggingCallback func(uintptr, uintptr) uintptr) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) PacketFilterClose() error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) PacketFilterForward(info *PacketInfo, packetBytes []byte) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) AppendFilter(outbound bool, filterName string, isGotoFilter bool) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) InsertFilter(outbound bool, priority int, filterName string, isGotoFilter bool) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) DestroyFilter(filterName string) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) EmptyFilter(filterName string) error {\n\treturn nil\n}\n\nfunc (w *wrapperForTest) GetFilterList(outbound bool) ([]string, error) {\n\treturn nil, nil\n}\n\nfunc (w *wrapperForTest) AppendFilterCriteria(filterName, criteriaName string, ruleSpec *RuleSpec, ipsetRuleSpecs []IpsetRuleSpec) error {\n\treturn 
nil\n}\n\nfunc (w *wrapperForTest) GetCriteriaList(format int) (string, error) {\n\treturn \"\", nil\n}\n"
  },
  {
    "path": "utils/frontman/utils_windows.go",
    "content": "// +build windows\n\npackage frontman\n\nimport (\n\t\"syscall\"\n\t\"unsafe\"\n)\n\n// WideCharPointerToString converts a pointer to a zero-terminated wide character string to a golang string\nfunc WideCharPointerToString(pszWide *uint16) string {\n\n\tptr := uintptr(unsafe.Pointer(pszWide)) // nolint:govet\n\tbuf := make([]uint16, 0, 256)\n\tfor {\n\t\tch := *((*uint16)(unsafe.Pointer(ptr))) // nolint:govet\n\t\tbuf = append(buf, ch)\n\t\tif ch == 0 {\n\t\t\tbreak\n\t\t}\n\n\t\tptr += 2\n\t}\n\treturn syscall.UTF16ToString(buf)\n}\n"
  },
  {
    "path": "utils/frontman/wrapper_windows.go",
    "content": "// +build windows\n\npackage frontman\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"syscall\"\n\t\"unsafe\"\n\n\t\"go.uber.org/zap\"\n\t\"golang.org/x/sys/windows\"\n)\n\n// WrapDriver represents convenience wrapper methods for calling our Windows Frontman DLL\ntype WrapDriver interface {\n\tGetDestInfo(socket uintptr, destInfo *DestInfo) error\n\tApplyDestHandle(socket, destHandle uintptr) error\n\tFreeDestHandle(destHandle uintptr) error\n\tNewIpset(name, ipsetType string) (uintptr, error)\n\tGetIpset(name string) (uintptr, error)\n\tDestroyAllIpsets(prefix string) error\n\tListIpsets() ([]string, error)\n\tListIpsetsDetail(format int) (string, error)\n\tIpsetAdd(ipsetHandle uintptr, entry string, timeout int) error\n\tIpsetAddOption(ipsetHandle uintptr, entry, option string, timeout int) error\n\tIpsetDelete(ipsetHandle uintptr, entry string) error\n\tIpsetDestroy(ipsetHandle uintptr, name string) error\n\tIpsetFlush(ipsetHandle uintptr) error\n\tIpsetTest(ipsetHandle uintptr, entry string) (bool, error)\n\tPacketFilterStart(firewallName string, receiveCallback, loggingCallback func(uintptr, uintptr) uintptr) error\n\tPacketFilterClose() error\n\tPacketFilterForward(info *PacketInfo, packetBytes []byte) error\n\tAppendFilter(outbound bool, filterName string, isGotoFilter bool) error\n\tInsertFilter(outbound bool, priority int, filterName string, isGotoFilter bool) error\n\tDestroyFilter(filterName string) error\n\tEmptyFilter(filterName string) error\n\tGetFilterList(outbound bool) ([]string, error)\n\tAppendFilterCriteria(filterName, criteriaName string, ruleSpec *RuleSpec, ipsetRuleSpecs []IpsetRuleSpec) error\n\tDeleteFilterCriteria(filterName, criteriaName string) error\n\tGetCriteriaList(format int) (string, error)\n}\n\ntype wrapper struct {\n\tdriverHandle uintptr\n\truleCleaner  ruleCleanup\n}\n\n// Wrapper is the driver/dll wrapper implementation\nvar Wrapper = WrapDriver(&wrapper{\n\tdriverHandle: 
uintptr(syscall.InvalidHandle),\n\truleCleaner:  newRuleCleanup(),\n})\n\nfunc (w *wrapper) initDriverHandle() {\n\tif w.driverHandle == 0 || syscall.Handle(w.driverHandle) == syscall.InvalidHandle {\n\t\tret, err := Driver.FrontmanOpenShared()\n\t\tif err != nil {\n\t\t\tzap.L().Fatal(\"Unable to initialize Frontman driver. Check Windows Event Viewer System logs for errors, and ensure that no other instances are currently running.\", zap.Error(err))\n\t\t}\n\t\tw.driverHandle = ret\n\t}\n}\n\nfunc (w *wrapper) GetDestInfo(socket uintptr, destInfo *DestInfo) error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.GetDestInfo(w.driverHandle, socket, uintptr(unsafe.Pointer(destInfo))); ret == 0 {\n\t\treturn fmt.Errorf(\"GetDestInfo failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) ApplyDestHandle(socket, destHandle uintptr) error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.ApplyDestHandle(socket, destHandle); ret == 0 {\n\t\treturn fmt.Errorf(\"ApplyDestHandle failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) FreeDestHandle(destHandle uintptr) error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.FreeDestHandle(destHandle); ret == 0 {\n\t\treturn fmt.Errorf(\"FreeDestHandle failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) NewIpset(name, ipsetType string) (uintptr, error) {\n\tw.initDriverHandle()\n\tvar ipsetHandle uintptr\n\tif ret, err := Driver.NewIpset(w.driverHandle, marshalString(name), marshalString(ipsetType), uintptr(unsafe.Pointer(&ipsetHandle))); ret == 0 {\n\t\treturn 0, fmt.Errorf(\"NewIpset failed: %v\", err)\n\t}\n\treturn ipsetHandle, nil\n}\n\nfunc (w *wrapper) GetIpset(name string) (uintptr, error) {\n\tw.initDriverHandle()\n\tvar ipsetHandle uintptr\n\tif ret, err := Driver.GetIpset(w.driverHandle, marshalString(name), uintptr(unsafe.Pointer(&ipsetHandle))); ret == 0 {\n\t\treturn 0, fmt.Errorf(\"GetIpset failed: %v\", err)\n\t}\n\treturn ipsetHandle, nil\n}\n\nfunc (w *wrapper) 
DestroyAllIpsets(prefix string) error {\n\tw.initDriverHandle()\n\t// order important: we must call cleaner routine before telling driver to delete the ipsets\n\terrCleaner := w.ruleCleaner.deleteRuleForIpsetByPrefix(w, prefix)\n\tif ret, err := Driver.DestroyAllIpsets(w.driverHandle, marshalString(prefix)); ret == 0 {\n\t\treturn fmt.Errorf(\"DestroyAllIpsets failed: %v\", err)\n\t}\n\treturn errCleaner\n}\n\nfunc (w *wrapper) ListIpsets() ([]string, error) {\n\tw.initDriverHandle()\n\t// first query for needed buffer size\n\tvar bytesNeeded, ignore uint32\n\tret, err := Driver.ListIpsets(w.driverHandle, 0, 0, uintptr(unsafe.Pointer(&bytesNeeded)))\n\tif ret != 0 && bytesNeeded == 0 {\n\t\treturn []string{}, nil\n\t}\n\tif err != windows.ERROR_INSUFFICIENT_BUFFER {\n\t\treturn nil, fmt.Errorf(\"ListIpsets failed: %v\", err)\n\t}\n\tif bytesNeeded%2 != 0 {\n\t\treturn nil, fmt.Errorf(\"ListIpsets failed: odd result (%d)\", bytesNeeded)\n\t}\n\t// then allocate buffer for wide string and call again\n\tbuf := make([]uint16, bytesNeeded/2)\n\tret, err = Driver.ListIpsets(w.driverHandle, uintptr(unsafe.Pointer(&buf[0])), uintptr(bytesNeeded), uintptr(unsafe.Pointer(&ignore)))\n\tif ret == 0 {\n\t\treturn nil, fmt.Errorf(\"ListIpsets failed: %v\", err)\n\t}\n\tstr := syscall.UTF16ToString(buf)\n\treturn strings.Split(str, \",\"), nil\n}\n\nfunc (w *wrapper) ListIpsetsDetail(format int) (string, error) {\n\tw.initDriverHandle()\n\t// first query for needed buffer size\n\tvar bytesNeeded, ignore uint32\n\temptyStr := \"\"\n\tret, err := Driver.ListIpsetsDetail(w.driverHandle, uintptr(format), 0, 0, uintptr(unsafe.Pointer(&bytesNeeded)))\n\tif ret != 0 && bytesNeeded == 0 {\n\t\treturn emptyStr, nil\n\t}\n\tif err != windows.ERROR_INSUFFICIENT_BUFFER {\n\t\treturn emptyStr, fmt.Errorf(\"ListIpsetsDetail failed: %v\", err)\n\t}\n\tif bytesNeeded%2 != 0 {\n\t\treturn emptyStr, fmt.Errorf(\"ListIpsetsDetail failed: odd result (%d)\", bytesNeeded)\n\t}\n\t// then allocate 
buffer for wide string and call again\n\tbuf := make([]uint16, bytesNeeded/2)\n\tret, err = Driver.ListIpsetsDetail(w.driverHandle, uintptr(format), uintptr(unsafe.Pointer(&buf[0])), uintptr(bytesNeeded), uintptr(unsafe.Pointer(&ignore)))\n\tif ret == 0 {\n\t\treturn emptyStr, fmt.Errorf(\"ListIpsetsDetail failed: %v\", err)\n\t}\n\tstr := syscall.UTF16ToString(buf)\n\treturn str, nil\n}\n\nfunc (w *wrapper) IpsetAdd(ipsetHandle uintptr, entry string, timeout int) error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.IpsetAdd(w.driverHandle, ipsetHandle, marshalString(entry), uintptr(timeout)); ret == 0 {\n\t\t// no error if already exists\n\t\tif err == windows.ERROR_ALREADY_EXISTS {\n\t\t\treturn nil\n\t\t}\n\t\treturn fmt.Errorf(\"IpsetAdd failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) IpsetAddOption(ipsetHandle uintptr, entry, option string, timeout int) error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.IpsetAddOption(w.driverHandle, ipsetHandle, marshalString(entry), marshalString(option), uintptr(timeout)); ret == 0 {\n\t\t// no error if already exists\n\t\tif err == windows.ERROR_ALREADY_EXISTS {\n\t\t\treturn nil\n\t\t}\n\t\treturn fmt.Errorf(\"IpsetAddOption failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) IpsetDelete(ipsetHandle uintptr, entry string) error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.IpsetDelete(w.driverHandle, ipsetHandle, marshalString(entry)); ret == 0 {\n\t\treturn fmt.Errorf(\"IpsetDelete failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) IpsetDestroy(ipsetHandle uintptr, name string) error {\n\tw.initDriverHandle()\n\terrCleaner := w.ruleCleaner.deleteRulesForIpset(w, name)\n\tif ret, err := Driver.IpsetDestroy(w.driverHandle, ipsetHandle); ret == 0 {\n\t\treturn fmt.Errorf(\"IpsetDestroy failed: %v\", err)\n\t}\n\treturn errCleaner\n}\n\nfunc (w *wrapper) IpsetFlush(ipsetHandle uintptr) error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.IpsetFlush(w.driverHandle, 
ipsetHandle); ret == 0 {\n\t\treturn fmt.Errorf(\"IpsetFlush failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) IpsetTest(ipsetHandle uintptr, entry string) (bool, error) {\n\tw.initDriverHandle()\n\tif ret, err := Driver.IpsetTest(w.driverHandle, ipsetHandle, marshalString(entry)); ret == 0 {\n\t\tif err == nil {\n\t\t\treturn false, nil\n\t\t}\n\t\treturn false, fmt.Errorf(\"IpsetTest failed: %v\", err)\n\t}\n\treturn true, nil\n}\n\nfunc (w *wrapper) PacketFilterStart(firewallName string, receiveCallback, loggingCallback func(uintptr, uintptr) uintptr) error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.PacketFilterStart(w.driverHandle, marshalString(firewallName), syscall.NewCallbackCDecl(receiveCallback), syscall.NewCallbackCDecl(loggingCallback)); ret == 0 {\n\t\treturn fmt.Errorf(\"PacketFilterStart failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) PacketFilterClose() error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.PacketFilterClose(); ret == 0 {\n\t\treturn fmt.Errorf(\"PacketFilterClose failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) PacketFilterForward(info *PacketInfo, packetBytes []byte) error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.PacketFilterForward(uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(&packetBytes[0]))); ret == 0 {\n\t\treturn fmt.Errorf(\"PacketFilterForward failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) AppendFilter(outbound bool, filterName string, isGotoFilter bool) error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.AppendFilter(w.driverHandle, marshalBool(outbound), marshalString(filterName), marshalBool(isGotoFilter)); ret == 0 {\n\t\t// no error if already exists\n\t\tif err == windows.ERROR_ALREADY_EXISTS {\n\t\t\treturn nil\n\t\t}\n\t\treturn fmt.Errorf(\"AppendFilter failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) InsertFilter(outbound bool, priority int, filterName string, isGotoFilter bool) error 
{\n\tw.initDriverHandle()\n\tif ret, err := Driver.InsertFilter(w.driverHandle, marshalBool(outbound), uintptr(priority), marshalString(filterName), marshalBool(isGotoFilter)); ret == 0 {\n\t\treturn fmt.Errorf(\"InsertFilter failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) DestroyFilter(filterName string) error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.DestroyFilter(w.driverHandle, marshalString(filterName)); ret == 0 {\n\t\treturn fmt.Errorf(\"DestroyFilter failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) EmptyFilter(filterName string) error {\n\tw.initDriverHandle()\n\tif ret, err := Driver.EmptyFilter(w.driverHandle, marshalString(filterName)); ret == 0 {\n\t\treturn fmt.Errorf(\"EmptyFilter failed: %v\", err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) GetFilterList(outbound bool) ([]string, error) {\n\tw.initDriverHandle()\n\t// first query for needed buffer size\n\tvar bytesNeeded, ignore uint32\n\tret, err := Driver.GetFilterList(w.driverHandle, marshalBool(outbound), 0, 0, uintptr(unsafe.Pointer(&bytesNeeded)))\n\tif ret != 0 && bytesNeeded == 0 {\n\t\treturn []string{}, nil\n\t}\n\tif err != windows.ERROR_INSUFFICIENT_BUFFER {\n\t\treturn nil, fmt.Errorf(\"GetFilterList failed: %v\", err)\n\t}\n\tif bytesNeeded%2 != 0 {\n\t\treturn nil, fmt.Errorf(\"GetFilterList failed: odd result (%d)\", bytesNeeded)\n\t}\n\t// then allocate buffer for wide string and call again\n\tbuf := make([]uint16, bytesNeeded/2)\n\n\t// Not sure how this happens, but sometimes on shutdown we get a panic because the len(buf) is zero\n\tif len(buf) <= 0 {\n\t\treturn nil, fmt.Errorf(\"GetFilterList returned unexpected bytes needed value: %d\", bytesNeeded)\n\t}\n\n\tret, err = Driver.GetFilterList(w.driverHandle, marshalBool(outbound), uintptr(unsafe.Pointer(&buf[0])), uintptr(bytesNeeded), uintptr(unsafe.Pointer(&ignore)))\n\tif ret == 0 {\n\t\treturn nil, fmt.Errorf(\"GetFilterList failed: %v\", err)\n\t}\n\tstr := 
syscall.UTF16ToString(buf)\n\treturn strings.Split(str, \",\"), nil\n}\n\nfunc (w *wrapper) AppendFilterCriteria(filterName, criteriaName string, ruleSpec *RuleSpec, ipsetRuleSpecs []IpsetRuleSpec) error {\n\tw.initDriverHandle()\n\tif len(ipsetRuleSpecs) > 0 {\n\t\tif ret, err := Driver.AppendFilterCriteria(w.driverHandle,\n\t\t\tmarshalString(filterName),\n\t\t\tmarshalString(criteriaName),\n\t\t\tuintptr(unsafe.Pointer(ruleSpec)),\n\t\t\tuintptr(unsafe.Pointer(&ipsetRuleSpecs[0])),\n\t\t\tuintptr(len(ipsetRuleSpecs))); ret == 0 {\n\t\t\treturn fmt.Errorf(\"AppendFilterCriteria failed: %v\", err)\n\t\t}\n\t\tfor _, ipsrs := range ipsetRuleSpecs {\n\t\t\tif ipsrs.IpsetName != 0 {\n\t\t\t\tipsetName := WideCharPointerToString((*uint16)(unsafe.Pointer(ipsrs.IpsetName))) // nolint:govet\n\t\t\t\tw.ruleCleaner.mapIpsetToRule(ipsetName, filterName, criteriaName)\n\t\t\t}\n\t\t}\n\t} else {\n\t\tif ret, err := Driver.AppendFilterCriteria(w.driverHandle,\n\t\t\tmarshalString(filterName),\n\t\t\tmarshalString(criteriaName),\n\t\t\tuintptr(unsafe.Pointer(ruleSpec)), 0, 0); ret == 0 {\n\t\t\treturn fmt.Errorf(\"AppendFilterCriteria failed: %v\", err)\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) DeleteFilterCriteria(filterName, criteriaName string) error {\n\tw.initDriverHandle()\n\tw.ruleCleaner.deleteRuleFromIpsetMap(filterName, criteriaName)\n\tif ret, err := Driver.DeleteFilterCriteria(w.driverHandle, marshalString(filterName), marshalString(criteriaName)); ret == 0 {\n\t\treturn fmt.Errorf(\"DeleteFilterCriteria failed - could not delete %s: %v\", criteriaName, err)\n\t}\n\treturn nil\n}\n\nfunc (w *wrapper) GetCriteriaList(format int) (string, error) {\n\tw.initDriverHandle()\n\t// first query for needed buffer size\n\tvar bytesNeeded, ignore uint32\n\temptyStr := \"\"\n\tret, err := Driver.GetCriteriaList(w.driverHandle, uintptr(format), 0, 0, uintptr(unsafe.Pointer(&bytesNeeded)))\n\tif ret != 0 && bytesNeeded == 0 {\n\t\treturn emptyStr, nil\n\t}\n\tif err != 
windows.ERROR_INSUFFICIENT_BUFFER {\n\t\treturn emptyStr, fmt.Errorf(\"GetCriteriaList failed: %v\", err)\n\t}\n\tif bytesNeeded%2 != 0 {\n\t\treturn emptyStr, fmt.Errorf(\"GetCriteriaList failed: odd result (%d)\", bytesNeeded)\n\t}\n\t// then allocate buffer for wide string and call again\n\tbuf := make([]uint16, bytesNeeded/2)\n\tret, err = Driver.GetCriteriaList(w.driverHandle, uintptr(format), uintptr(unsafe.Pointer(&buf[0])), uintptr(bytesNeeded), uintptr(unsafe.Pointer(&ignore)))\n\tif ret == 0 {\n\t\treturn emptyStr, fmt.Errorf(\"GetCriteriaList failed: %v\", err)\n\t}\n\tstr := syscall.UTF16ToString(buf)\n\treturn str, nil\n}\n\nfunc marshalString(str string) uintptr {\n\treturn uintptr(unsafe.Pointer(syscall.StringToUTF16Ptr(str))) // nolint\n}\n\nfunc marshalBool(b bool) uintptr {\n\tif b {\n\t\treturn 1\n\t}\n\treturn 0\n}\n"
  },
  {
    "path": "utils/ipprefix/ipprefix.go",
    "content": "package ipprefix\n\nimport (\n\t\"encoding/binary\"\n\t\"net\"\n\t\"sync\"\n)\n\n// FuncOnLpmIP is the type of func which will operate on the value associated with the lpm ip.\ntype FuncOnLpmIP func(val interface{}) bool\n\n// FuncOnVals is the type of the func which will operate on each value and will return a new value\n// for each associated value.\ntype FuncOnVals func(val interface{}) interface{}\n\n//IPcache is an interface which provides functionality to store ip's and do longest prefix match\ntype IPcache interface {\n\t// Put takes an argument an ip address, mask and value.\n\tPut(net.IP, int, interface{})\n\t// Get takes an argument the IP address and mask and returns the value that is stored for\n\t// that key.\n\tGet(net.IP, int) (interface{}, bool)\n\t// RunFuncOnLpmIP function takes as an argument an IP address and a function. It finds the\n\t// subnet to which this IP belongs with the longest prefix match. It then calls the\n\t// function supplied by the user on the value stored and if it succeeds then it returns.\n\tRunFuncOnLpmIP(net.IP, FuncOnLpmIP)\n\t// RunFuncOnVals takes an argument a function which is called on all the values stored in\n\t// the cache. This can be used to update the old values with the new values. 
If the new\n\t// value is nil, it will delete the key.\n\tRunFuncOnVals(FuncOnVals)\n}\n\nconst (\n\tipv4MaskSize = 32 + 1\n\tipv6MaskSize = 128 + 1\n)\n\ntype ipcacheV4 struct {\n\tipv4 []map[uint32]interface{}\n\tsync.RWMutex\n}\n\ntype ipcacheV6 struct {\n\tipv6 []map[[16]byte]interface{}\n\tsync.RWMutex\n}\n\ntype ipcache struct {\n\tipv4 *ipcacheV4\n\tipv6 *ipcacheV6\n}\n\nfunc (cache *ipcacheV4) Put(ip net.IP, mask int, val interface{}) {\n\tcache.Lock()\n\tdefer cache.Unlock()\n\n\tif cache.ipv4[mask] == nil {\n\t\tcache.ipv4[mask] = map[uint32]interface{}{}\n\t}\n\n\tm := cache.ipv4[mask]\n\t// the following expression is ANDing the ip with the mask\n\tkey := binary.BigEndian.Uint32(ip) & binary.BigEndian.Uint32(net.CIDRMask(mask, 32))\n\tif val == nil {\n\t\tdelete(m, key)\n\t} else {\n\t\tm[key] = val\n\t}\n}\n\nfunc (cache *ipcacheV4) Get(ip net.IP, mask int) (interface{}, bool) {\n\tcache.RLock()\n\tdefer cache.RUnlock()\n\n\tm := cache.ipv4[mask]\n\tif m != nil {\n\t\tval, ok := m[binary.BigEndian.Uint32(ip)&binary.BigEndian.Uint32(net.CIDRMask(mask, 32))]\n\t\tif ok {\n\t\t\treturn val, true\n\t\t}\n\t}\n\n\treturn nil, false\n}\n\nfunc (cache *ipcacheV4) RunFuncOnLpmIP(ip net.IP, f func(val interface{}) bool) {\n\tcache.Lock()\n\tdefer cache.Unlock()\n\n\tfor i := len(cache.ipv4) - 1; i >= 0; i-- {\n\t\tm := cache.ipv4[i]\n\t\tif m != nil {\n\t\t\tval, ok := m[binary.BigEndian.Uint32(ip)&binary.BigEndian.Uint32(net.CIDRMask(i, 32))]\n\t\t\tif ok && f(val) {\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}\n\n}\n\nfunc (cache *ipcacheV4) RunFuncOnVals(f func(val interface{}) interface{}) {\n\tcache.Lock()\n\tdefer cache.Unlock()\n\n\tfor mask, m := range cache.ipv4 {\n\t\tif m == nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tfor ip, val := range m {\n\t\t\tv := f(val)\n\t\t\tif v == nil {\n\t\t\t\tdelete(m, ip)\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tm[ip] = v\n\t\t}\n\n\t\tif len(m) == 0 {\n\t\t\tcache.ipv4[mask] = nil\n\t\t}\n\t}\n\n}\n\nfunc (cache *ipcacheV6) Put(ip 
net.IP, mask int, val interface{}) {\n\tcache.Lock()\n\tdefer cache.Unlock()\n\n\tif cache.ipv6[mask] == nil {\n\t\tcache.ipv6[mask] = map[[16]byte]interface{}{}\n\t}\n\n\tm := cache.ipv6[mask]\n\t// the following expression is ANDing the ip with the mask\n\tvar maskip [16]byte\n\tcopy(maskip[:], ip.Mask(net.CIDRMask(mask, 128)))\n\n\tif val == nil {\n\t\tdelete(m, maskip)\n\t} else {\n\t\tm[maskip] = val\n\t}\n}\n\nfunc (cache *ipcacheV6) Get(ip net.IP, mask int) (interface{}, bool) {\n\tcache.RLock()\n\tdefer cache.RUnlock()\n\n\tm := cache.ipv6[mask]\n\tif m != nil {\n\t\tvar maskip [16]byte\n\t\tcopy(maskip[:], ip.Mask(net.CIDRMask(mask, 128)))\n\t\tval, ok := m[maskip]\n\t\tif ok {\n\t\t\treturn val, true\n\t\t}\n\t}\n\n\treturn nil, false\n}\n\nfunc (cache *ipcacheV6) RunFuncOnLpmIP(ip net.IP, f func(val interface{}) bool) {\n\tcache.Lock()\n\tdefer cache.Unlock()\n\n\tfor i := len(cache.ipv6) - 1; i >= 0; i-- {\n\t\tm := cache.ipv6[i]\n\t\tif m != nil {\n\t\t\tvar maskip [16]byte\n\t\t\tcopy(maskip[:], ip.Mask(net.CIDRMask(i, 128)))\n\t\t\tval, ok := m[maskip]\n\t\t\tif ok && f(val) {\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc (cache *ipcacheV6) RunFuncOnVals(f func(val interface{}) interface{}) {\n\tcache.Lock()\n\tdefer cache.Unlock()\n\n\tfor mask, m := range cache.ipv6 {\n\t\tif m == nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tfor ip, val := range m {\n\t\t\tv := f(val)\n\t\t\tif v == nil {\n\t\t\t\tdelete(m, ip)\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tm[ip] = v\n\t\t}\n\n\t\tif len(m) == 0 {\n\t\t\tcache.ipv6[mask] = nil\n\t\t}\n\t}\n}\n\n//NewIPCache creates an object which is implementing the interface IPcache\nfunc NewIPCache() IPcache {\n\treturn &ipcache{\n\t\tipv4: &ipcacheV4{ipv4: make([]map[uint32]interface{}, ipv4MaskSize)},\n\t\tipv6: &ipcacheV6{ipv6: make([]map[[16]byte]interface{}, ipv6MaskSize)},\n\t}\n}\n\nfunc (cache *ipcache) Put(ip net.IP, mask int, val interface{}) {\n\tif ip.To4() != nil {\n\t\tcache.ipv4.Put(ip.To4(), mask, 
val)\n\t\treturn\n\t}\n\n\tcache.ipv6.Put(ip.To16(), mask, val)\n}\n\nfunc (cache *ipcache) Get(ip net.IP, mask int) (interface{}, bool) {\n\n\tif ip.To4() != nil {\n\t\treturn cache.ipv4.Get(ip.To4(), mask)\n\t}\n\n\treturn cache.ipv6.Get(ip.To16(), mask)\n}\n\nfunc (cache *ipcache) RunFuncOnLpmIP(ip net.IP, f FuncOnLpmIP) {\n\tif ip.To4() != nil {\n\t\tcache.ipv4.RunFuncOnLpmIP(ip.To4(), f)\n\t\treturn\n\t}\n\n\tcache.ipv6.RunFuncOnLpmIP(ip.To16(), f)\n}\n\nfunc (cache *ipcache) RunFuncOnVals(f FuncOnVals) {\n\n\tcache.ipv4.RunFuncOnVals(f)\n\tcache.ipv6.RunFuncOnVals(f)\n}\n"
  },
  {
    "path": "utils/ipprefix/ipprefix_test.go",
    "content": "// +build !windows\n\npackage ipprefix\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\t\"testing\"\n\n\t\"github.com/magiconair/properties/assert\"\n)\n\nconst (\n\tmask24  = \"24mask\"\n\tmask32  = \"32mask\"\n\tmask128 = \"128mask\"\n\tmask0   = \"mask0\"\n)\n\nfunc TestPutGetV4(t *testing.T) {\n\tipcache := NewIPCache()\n\n\tip := net.ParseIP(\"10.0.0.1\")\n\tipcache.Put(ip, 32, mask32)\n\tipcache.Put(ip, 24, mask24)\n\tipcache.Put(ip, 0, mask0)\n\n\tval, ok := ipcache.Get(ip, 32)\n\tassert.Equal(t, ok, true, \"Get should return Success\")\n\tassert.Equal(t, val.(string), mask32, fmt.Sprintf(\"Returned value should be %s\", mask32))\n\tval, ok = ipcache.Get(net.ParseIP(\"10.0.0.2\"), 24)\n\tassert.Equal(t, ok, true, \"Get should return Success\")\n\tassert.Equal(t, val.(string), mask24, fmt.Sprintf(\"Returned value should be %s\", mask24))\n\n\t_, ok = ipcache.Get(net.ParseIP(\"8.8.8.8\"), 0)\n\tassert.Equal(t, ok, true, \"should be found in cache\")\n\n\t_, ok = ipcache.Get(ip, 10)\n\tassert.Equal(t, ok, false, \"Get should return nil\")\n\n\tipcache.Put(ip, 32, nil)\n\t_, ok = ipcache.Get(ip, 32)\n\tassert.Equal(t, ok, false, \"Should not be found in cache\")\n}\n\nfunc TestPutGetV6(t *testing.T) {\n\tipcache := NewIPCache()\n\n\tip := net.ParseIP(\"8000::220\")\n\tipcache.Put(ip, 128, mask128)\n\tipcache.Put(ip, 24, mask24)\n\tipcache.Put(ip, 0, mask0)\n\n\tval, ok := ipcache.Get(ip, 128)\n\tassert.Equal(t, ok, true, \"Get should return success\")\n\tassert.Equal(t, val.(string), mask128, fmt.Sprintf(\"Returned value should be %s\", mask128))\n\tval, ok = ipcache.Get(ip, 24)\n\tassert.Equal(t, ok, true, \"Get should return success\")\n\tassert.Equal(t, val.(string), mask24, fmt.Sprintf(\"Returned value should be %s\", mask24))\n\n\t_, ok = ipcache.Get(net.ParseIP(\"abcd::200\"), 0)\n\tassert.Equal(t, ok, true, \"Get should return success\")\n\n\t_, ok = ipcache.Get(ip, 10)\n\tassert.Equal(t, ok, false, \"Get should return nil\")\n\n\tipcache.Put(ip, 128, 
nil)\n\t_, ok = ipcache.Get(ip, 128)\n\tassert.Equal(t, ok, false, \"Get should return nil\")\n}\n\nfunc TestRunIPV4(t *testing.T) {\n\tvar found bool\n\n\tipcache := NewIPCache()\n\n\tip := net.ParseIP(\"10.0.0.1\")\n\tipcache.Put(ip, 32, mask32)\n\tipcache.Put(ip, 24, mask24)\n\n\ttestRunIP := func(val interface{}) bool {\n\t\tfound = false\n\t\tif val != nil {\n\t\t\tstr := val.(string)\n\n\t\t\tif str == mask32 {\n\t\t\t\tfound = true\n\t\t\t}\n\t\t}\n\t\treturn true\n\t}\n\n\tipcache.RunFuncOnLpmIP(ip, testRunIP)\n\tassert.Equal(t, found, true, \"found should be true\")\n}\n\nfunc TestRunIPv6(t *testing.T) {\n\n\tvar found bool\n\n\tipcache := NewIPCache()\n\n\tip := net.ParseIP(\"8000::220\")\n\tipcache.Put(ip, 128, mask128)\n\tipcache.Put(ip, 24, mask24)\n\n\ttestRunIP := func(val interface{}) bool {\n\t\tfound = false\n\t\tif val != nil {\n\t\t\tstr := val.(string)\n\n\t\t\tif str == mask128 {\n\t\t\t\tfound = true\n\t\t\t}\n\t\t}\n\t\treturn true\n\t}\n\n\tipcache.RunFuncOnLpmIP(ip, testRunIP)\n\tassert.Equal(t, found, true, \"found should be true\")\n}\n\nfunc TestRunValIPv4(t *testing.T) {\n\n\tipcache := NewIPCache()\n\n\tip := net.ParseIP(\"10.0.0.1\")\n\tipcache.Put(ip, 32, mask32)\n\tipcache.Put(ip, 24, mask24)\n\n\tm := map[string]bool{}\n\tm[mask32] = true\n\tm[mask24] = true\n\n\ttestRunVal := func(val interface{}) interface{} {\n\t\tif val != nil {\n\t\t\ts := val.(string)\n\t\t\tif s == mask32 {\n\t\t\t\tdelete(m, s)\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\n\t\treturn val\n\t}\n\n\tipcache.RunFuncOnVals(testRunVal)\n\tassert.Equal(t, len(m), 1, \"map should be of length 0\")\n\t_, ok := ipcache.Get(ip, 32)\n\tassert.Equal(t, ok, false, \"Get should return nil\")\n\t_, ok = ipcache.Get(ip, 24)\n\tassert.Equal(t, ok, true, \"Get should return true\")\n\n}\n\nfunc TestRunValIPv6(t *testing.T) {\n\n\tipcache := NewIPCache()\n\n\tip := net.ParseIP(\"8000::220\")\n\tipcache.Put(ip, 128, mask128)\n\tipcache.Put(ip, 24, mask24)\n\n\tm := 
map[string]bool{}\n\tm[mask128] = true\n\tm[mask24] = true\n\n\ttestRunVal := func(val interface{}) interface{} {\n\t\tif val != nil {\n\t\t\ts := val.(string)\n\t\t\tif s == mask128 {\n\t\t\t\tdelete(m, s)\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\n\t\treturn val\n\t}\n\n\tipcache.RunFuncOnVals(testRunVal)\n\tassert.Equal(t, len(m), 1, \"map should be of length 0\")\n\t_, ok := ipcache.Get(ip, 128)\n\tassert.Equal(t, ok, false, \"Get should return nil\")\n\t_, ok = ipcache.Get(ip, 24)\n\tassert.Equal(t, ok, true, \"Get should return true\")\n}\n"
  },
  {
    "path": "utils/netinterfaces/netinterfaces.go",
    "content": "package netinterfaces\n\nimport (\n\t\"fmt\"\n\t\"net\"\n\n\t\"go.uber.org/zap\"\n)\n\n// NetworkInterface holds info of a network interface\ntype NetworkInterface struct {\n\tName   string\n\tIPs    []net.IP\n\tIPNets []*net.IPNet\n\tFlags  net.Flags\n}\n\n// GetInterfacesInfo returns interface info\nfunc GetInterfacesInfo() ([]NetworkInterface, error) {\n\n\tnetInterfaces := []NetworkInterface{}\n\n\t// List interfaces\n\tifaces, err := net.Interfaces()\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"unable to get interfaces: %v\", err)\n\t}\n\n\tfor _, intf := range ifaces {\n\t\tipList := []net.IP{}\n\t\tipNetList := []*net.IPNet{}\n\n\t\t// List interface addresses\n\t\taddrs, err := intf.Addrs()\n\t\tif err != nil {\n\t\t\tzap.L().Warn(\"unable to get interface addresses\",\n\t\t\t\tzap.String(\"interface\", intf.Name),\n\t\t\t\tzap.Error(err))\n\t\t\tcontinue\n\t\t}\n\n\t\tfor _, addr := range addrs {\n\t\t\tip, ipNet, err := net.ParseCIDR(addr.String())\n\t\t\tif err != nil {\n\t\t\t\tzap.L().Warn(\"unable to parse address\",\n\t\t\t\t\tzap.String(\"interface\", intf.Name),\n\t\t\t\t\tzap.String(\"addr\", addr.String()),\n\t\t\t\t\tzap.Error(err))\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tipList = append(ipList, ip)\n\t\t\tipNetList = append(ipNetList, ipNet)\n\t\t}\n\n\t\tnetInterface := NetworkInterface{\n\t\t\tName:   intf.Name,\n\t\t\tIPs:    ipList,\n\t\t\tIPNets: ipNetList,\n\t\t\tFlags:  intf.Flags,\n\t\t}\n\n\t\tnetInterfaces = append(netInterfaces, netInterface)\n\t}\n\n\treturn netInterfaces, nil\n}\n"
  },
  {
    "path": "utils/nfqparser/constants.go",
    "content": "package nfqparser\n\nconst (\n\tnfqFilePath = \"/proc/net/netfilter/nfnetlink_queue\"\n)\n"
  },
  {
    "path": "utils/nfqparser/nfqlayout.go",
    "content": "package nfqparser\n\nimport \"fmt\"\n\n// NFQLayout is the layout of /proc/net/netfilter/nfnetlink_queue\ntype NFQLayout struct {\n\tQueueNum string\n\t// process ID of software listening to the queue\n\tPeerPortID string\n\t// current number of packets waiting in the queue\n\tQueueTotal string\n\t// 0 and 1 only message only provide meta data. If 2, the message provides a part of packet of size copy range.\n\tCopyMode string\n\t// length of packet data to put in message\n\tCopyRange string\n\t// number of packets dropped because queue was full\n\tQueueDropped string\n\t// number of packets dropped because netlink message could not be sent to userspace.\n\t// If this counter is not zero, try to increase netlink buffer size. On the application side,\n\t// you will see gap in packet id if netlink message are lost.\n\tUserDropped string\n\t// packet id of last packet\n\tIDSequence string\n}\n\n//  String returns string representation of particular queue\nfunc (n *NFQLayout) String() string {\n\n\tif n == nil {\n\t\treturn \"\"\n\t}\n\n\treturn fmt.Sprintf(\"%v\", *n)\n}\n"
  },
  {
    "path": "utils/nfqparser/nfqparser.go",
    "content": "package nfqparser\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"io/ioutil\"\n\t\"strings\"\n\t\"sync\"\n)\n\n// NFQParser holds nfqparser fields\ntype NFQParser struct {\n\tnfqStr string\n\t// NOTE: For unit test\n\tfilePath string\n\tcontents map[string]NFQLayout\n\n\tsync.Mutex\n}\n\n// NewNFQParser returns nfqparser handler\nfunc NewNFQParser() *NFQParser {\n\n\treturn &NFQParser{\n\t\tcontents: make(map[string]NFQLayout),\n\t\tfilePath: nfqFilePath,\n\t}\n}\n\n// Synchronize reads from file and parses it\nfunc (n *NFQParser) Synchronize() error {\n\n\tn.Lock()\n\tdefer n.Unlock()\n\n\tdata, err := ioutil.ReadFile(n.filePath)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tn.nfqStr = string(data)\n\n\tscanner := bufio.NewScanner(bytes.NewReader(data))\n\tfor scanner.Scan() {\n\t\tline := scanner.Text()\n\t\tlineParts := strings.Fields(line)\n\t\tnewNFQ := makeNFQLayout(lineParts)\n\n\t\tn.contents[newNFQ.QueueNum] = newNFQ\n\t}\n\n\treturn nil\n}\n\n// RetrieveByQueue returns layout for a specific queue number\nfunc (n *NFQParser) RetrieveByQueue(queueNum string) *NFQLayout {\n\n\tn.Lock()\n\tdefer n.Unlock()\n\n\tcontent, ok := n.contents[queueNum]\n\tif ok {\n\t\treturn &content\n\t}\n\n\treturn nil\n}\n\n// RetrieveAll returns all layouts\nfunc (n *NFQParser) RetrieveAll() map[string]NFQLayout {\n\n\tn.Lock()\n\tdefer n.Unlock()\n\n\treturn n.contents\n}\n\n// String returns string renresentation of nfqueue data\nfunc (n *NFQParser) String() string {\n\n\tn.Lock()\n\tdefer n.Unlock()\n\n\treturn n.nfqStr\n}\n\nfunc makeNFQLayout(data []string) NFQLayout {\n\n\tnewNFQ := NFQLayout{}\n\tnewNFQ.QueueNum = data[0]\n\tnewNFQ.PeerPortID = data[1]\n\tnewNFQ.QueueTotal = data[2]\n\tnewNFQ.CopyMode = data[3]\n\tnewNFQ.CopyRange = data[4]\n\tnewNFQ.QueueDropped = data[5]\n\tnewNFQ.UserDropped = data[6]\n\tnewNFQ.IDSequence = data[7]\n\n\treturn newNFQ\n}\n"
  },
  {
    "path": "utils/nfqparser/nfqparser_test.go",
    "content": "// +build !windows\n\npackage nfqparser\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nconst (\n\ttestFilePath = \"/tmp/nfqdrops\"\n\ttestNFQData  = `      0  13206     0 2 65531     0     0        0  1\n    1 3333107750     0 2 65531     0     0        0  1\n    2 3881398569     0 2 65531     0     0        1  1\n    3 2633750685     0 2 65531     0     0        0  1\n    4 3605545056     0 2 65531     0     0        0  1\n    5 3473230188     0 2 65531     0     0        2  1\n    6 4025478776     0 2 65531     0     0        3  1\n    7 2806986372     0 2 65531     0     0        1  1`\n)\n\nfunc init() {\n\ttestCreateFile()\n}\n\nfunc testProperLayout() *NFQLayout {\n\n\treturn &NFQLayout{\n\t\tQueueNum:     \"6\",\n\t\tPeerPortID:   \"4025478776\",\n\t\tQueueTotal:   \"0\",\n\t\tCopyMode:     \"2\",\n\t\tCopyRange:    \"65531\",\n\t\tQueueDropped: \"0\",\n\t\tUserDropped:  \"0\",\n\t\tIDSequence:   \"3\",\n\t}\n}\n\nfunc testCreateFile() {\n\n\tif err := ioutil.WriteFile(testFilePath, []byte(testNFQData), 0777); err != nil {\n\t\tpanic(err)\n\t}\n}\n\nfunc TestNFQParserRetrieveByQueue(t *testing.T) {\n\n\tConvey(\"Given I create a new nfqparser instance\", t, func() {\n\t\tnfqParser := NewNFQParser()\n\t\tnfqParser.filePath = testFilePath\n\n\t\tConvey(\"Given I try to synchronize data\", func() {\n\t\t\terr := nfqParser.Synchronize()\n\n\t\t\tConvey(\"Then I should not get any errors\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\n\t\t\tConvey(\"Given I try to retrieve data for a queue\", func() {\n\t\t\t\tqueueData := nfqParser.RetrieveByQueue(\"6\")\n\n\t\t\t\tConvey(\"Then queue data should match\", func() {\n\t\t\t\t\tSo(queueData, ShouldResemble, testProperLayout())\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"Given I try to retrieve data for a queue and compare a specific field\", func() {\n\t\t\t\tqueueData := nfqParser.RetrieveByQueue(\"6\")\n\n\t\t\t\tConvey(\"Then 
queue data should match\", func() {\n\t\t\t\t\tSo(queueData.CopyRange, ShouldEqual, testProperLayout().CopyRange)\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"Given I try to retrieve data for a queue with different expected data\", func() {\n\t\t\t\tqueueData := nfqParser.RetrieveByQueue(\"1\")\n\n\t\t\t\tConvey(\"Then queue data should match\", func() {\n\t\t\t\t\tSo(queueData, ShouldNotResemble, testProperLayout())\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"Given I try to retrieve data for a invalid queue num\", func() {\n\t\t\t\tqueueData := nfqParser.RetrieveByQueue(\"9\")\n\n\t\t\t\tConvey(\"Then queue data should match\", func() {\n\t\t\t\t\tSo(queueData, ShouldBeNil)\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestNFQParserRetrieveAll(t *testing.T) {\n\n\tConvey(\"Given I create a new nfqparser instance\", t, func() {\n\t\tnfqParser := NewNFQParser()\n\t\tnfqParser.filePath = testFilePath\n\n\t\tConvey(\"Given I try to synchronize data\", func() {\n\t\t\terr := nfqParser.Synchronize()\n\n\t\t\tConvey(\"Then I should not get any errors\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\n\t\t\tConvey(\"Given I try to retrieve all\", func() {\n\t\t\t\tqueueData := nfqParser.RetrieveAll()\n\n\t\t\t\tConvey(\"Then length of queue data should be equal\", func() {\n\t\t\t\t\tSo(len(queueData), ShouldEqual, 8)\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestNFQParserString(t *testing.T) {\n\n\tConvey(\"Given I create a new nfqparser instance\", t, func() {\n\t\tnfqParser := NewNFQParser()\n\t\tnfqParser.filePath = testFilePath\n\n\t\tConvey(\"Given I try to synchronize data\", func() {\n\t\t\terr := nfqParser.Synchronize()\n\n\t\t\tConvey(\"Then I should not get any errors\", func() {\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\n\t\t\tConvey(\"Given I try to get string representation\", func() {\n\t\t\t\tstrData := nfqParser.String()\n\n\t\t\t\tConvey(\"Then queue data should match\", func() {\n\t\t\t\t\tSo(strData, ShouldEqual, 
testNFQData)\n\t\t\t\t})\n\t\t\t})\n\n\t\t\tConvey(\"Given I try to retrieve data for a queue with different expected data\", func() {\n\t\t\t\tqueueData := nfqParser.RetrieveByQueue(\"6\")\n\n\t\t\t\tConvey(\"Then queue data should match\", func() {\n\t\t\t\t\tSo(queueData, ShouldResemble, testProperLayout())\n\t\t\t\t})\n\n\t\t\t\tConvey(\"Given I try to get string representation of particular queue number\", func() {\n\t\t\t\t\tqueueData := nfqParser.RetrieveByQueue(\"6\")\n\n\t\t\t\t\tConvey(\"Then queue data should match\", func() {\n\t\t\t\t\t\tSo(queueData.String(), ShouldEqual, fmt.Sprintf(\"%v\", testProperLayout()))\n\t\t\t\t\t})\n\t\t\t\t})\n\n\t\t\t\tConvey(\"Given I try to get string representation of empty layout, it should not panic\", func() {\n\t\t\t\t\tnfqParser := NewNFQParser()\n\t\t\t\t\tstr := nfqParser.RetrieveByQueue(\"6\").String()\n\n\t\t\t\t\tConvey(\"Then queue data should match\", func() {\n\t\t\t\t\t\tSo(str, ShouldEqual, \"\")\n\t\t\t\t\t})\n\t\t\t\t})\n\t\t\t})\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "utils/portcache/portcache.go",
    "content": "package portcache\n\nimport (\n\t\"fmt\"\n\t\"sync\"\n\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/cache\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\n// PortCache is a generic cache of port pairs or exact ports. It can store\n// and do lookups of ports on exact matches or ranges. It returns the stored\n// values\ntype PortCache struct {\n\tports  cache.DataStore\n\tranges []*portspec.PortSpec\n\tsync.Mutex\n}\n\n// NewPortCache creates a new port cache\nfunc NewPortCache(name string) *PortCache {\n\treturn &PortCache{\n\t\tports:  cache.NewCache(name),\n\t\tranges: []*portspec.PortSpec{},\n\t}\n}\n\n// AddPortSpec adds a port spec into the cache\nfunc (p *PortCache) AddPortSpec(s *portspec.PortSpec) {\n\tif s.Min == s.Max {\n\t\tp.ports.AddOrUpdate(s.Min, s)\n\t} else {\n\t\t// Remove the range if it exists\n\t\tp.Remove(s) // nolint\n\t\t// Insert the portspec\n\t\tp.Lock()\n\t\tp.ranges = append([]*portspec.PortSpec{s}, p.ranges...)\n\t\tp.Unlock()\n\t}\n}\n\n// AddPortSpecToEnd adds a range at the end of the cache\nfunc (p *PortCache) AddPortSpecToEnd(s *portspec.PortSpec) {\n\n\t// Remove the range if it exists\n\tp.Remove(s) // nolint\n\n\tp.Lock()\n\tp.ranges = append(p.ranges, s)\n\tp.Unlock()\n\n}\n\n// AddUnique adds a port spec into the cache and makes sure its unique\nfunc (p *PortCache) AddUnique(s *portspec.PortSpec) error {\n\tp.Lock()\n\tdefer p.Unlock()\n\n\tif s.Min == s.Max {\n\t\tif err, _ := p.ports.Get(s.Min); err != nil {\n\t\t\treturn fmt.Errorf(\"Port already exists: %s\", err)\n\t\t}\n\t}\n\n\tfor _, r := range p.ranges {\n\t\tif r.Max <= s.Min || r.Min >= s.Max {\n\t\t\tcontinue\n\t\t}\n\t\treturn fmt.Errorf(\"Overlap detected: %d %d\", r.Max, r.Min)\n\t}\n\n\tif s.Min == s.Max {\n\t\treturn p.ports.Add(s.Min, s)\n\t}\n\n\tp.ranges = append(p.ranges, s)\n\treturn nil\n}\n\n// GetSpecValueFromPort searches the cache for a match based on a port\n// It will return the first match found on exact ports or 
on the ranges\n// of ports. If there are multiple intervals that match it will randomly\n// return one of them.\nfunc (p *PortCache) GetSpecValueFromPort(port uint16) (interface{}, error) {\n\tif spec, err := p.ports.Get(port); err == nil {\n\t\treturn spec.(*portspec.PortSpec).Value(), nil\n\t}\n\n\tp.Lock()\n\tdefer p.Unlock()\n\tfor _, s := range p.ranges {\n\t\tif s.Min <= port && port <= s.Max {\n\t\t\treturn s.Value(), nil\n\t\t}\n\t}\n\n\treturn nil, fmt.Errorf(\"No match for port %d\", port)\n}\n\n// GetAllSpecValueFromPort will return all the specs that potentially match. This\n// will allow for overlapping ranges\nfunc (p *PortCache) GetAllSpecValueFromPort(port uint16) ([]interface{}, error) {\n\tvar allMatches []interface{}\n\n\tif spec, err := p.ports.Get(port); err == nil {\n\t\tallMatches = append(allMatches, spec.(*portspec.PortSpec).Value())\n\t}\n\n\tp.Lock()\n\tdefer p.Unlock()\n\tfor _, s := range p.ranges {\n\t\tif s.Min <= port && port < s.Max {\n\t\t\tallMatches = append(allMatches, s.Value())\n\t\t}\n\t}\n\n\tif len(allMatches) == 0 {\n\t\treturn nil, fmt.Errorf(\"No match for port %d\", port)\n\t}\n\treturn allMatches, nil\n}\n\n// Remove will remove a port from the cache\nfunc (p *PortCache) Remove(s *portspec.PortSpec) error {\n\n\tif s.Min == s.Max {\n\t\treturn p.ports.Remove(s.Min)\n\t}\n\n\tp.Lock()\n\tdefer p.Unlock()\n\tfor i, r := range p.ranges {\n\t\tif r.Min == s.Min && r.Max == s.Max {\n\t\t\tleft := p.ranges[:i]\n\t\t\tright := p.ranges[i+1:]\n\t\t\tp.ranges = append(left, right...)\n\t\t\treturn nil\n\t\t}\n\t}\n\n\treturn fmt.Errorf(\"port not found\")\n}\n\n// RemoveStringPorts will remove a port from the cache\nfunc (p *PortCache) RemoveStringPorts(ports string) error {\n\n\ts, err := portspec.NewPortSpecFromString(ports, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\treturn p.Remove(s)\n}\n"
  },
  {
    "path": "utils/portcache/portcache_test.go",
    "content": "// +build !windows\n\npackage portcache\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n\t\"go.aporeto.io/enforcerd/trireme-lib/utils/portspec\"\n)\n\nfunc TestNewPortCache(t *testing.T) {\n\tConvey(\"When I creat a new port cache\", t, func() {\n\t\tp := NewPortCache(\"test\")\n\t\tConvey(\"The cache must be initilized\", func() {\n\t\t\tSo(p, ShouldNotBeNil)\n\t\t\tSo(p.ports, ShouldNotBeNil)\n\t\t\tSo(p.ranges, ShouldNotBeNil)\n\t\t})\n\t})\n}\n\nfunc TestAddPortSpec(t *testing.T) {\n\tConvey(\"Given an initialized cached\", t, func() {\n\t\tp := NewPortCache(\"test\")\n\t\tConvey(\"When I add a port spec with a single port, it must added to the map\", func() {\n\t\t\ts, err := portspec.NewPortSpec(10, 10, \"10\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tp.AddPortSpec(s)\n\t\t\tstored, err := p.ports.Get(uint16(10))\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(stored.(*portspec.PortSpec), ShouldNotBeNil)\n\t\t\tSo(stored.(*portspec.PortSpec).Max, ShouldEqual, 10)\n\t\t\tSo(stored.(*portspec.PortSpec).Min, ShouldEqual, 10)\n\t\t\tSo(stored.(*portspec.PortSpec).Value().(string), ShouldResemble, \"10\")\n\t\t})\n\n\t\tConvey(\"When I add a port spec with a range of ports, be added to the list\", func() {\n\t\t\ts, err := portspec.NewPortSpec(10, 20, \"range\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tp.AddPortSpec(s)\n\t\t\tSo(len(p.ranges), ShouldEqual, 1)\n\t\t\tSo(p.ranges[0].Min, ShouldEqual, 10)\n\t\t\tSo(p.ranges[0].Max, ShouldEqual, 20)\n\t\t\tSo(p.ranges[0].Value().(string), ShouldResemble, \"range\")\n\t\t})\n\t})\n}\n\nfunc TestSearch(t *testing.T) {\n\tConvey(\"Given an initialized cache\", t, func() {\n\t\tp := NewPortCache(\"test\")\n\t\tConvey(\"When I add both single ports and rages\", func() {\n\t\t\ts, err := portspec.NewPortSpec(10, 20, \"range1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tp.AddPortSpec(s)\n\n\t\t\ts, err = portspec.NewPortSpec(30, 40, \"range2\")\n\t\t\tSo(err, 
ShouldBeNil)\n\t\t\tp.AddPortSpec(s)\n\n\t\t\ts, err = portspec.NewPortSpec(50, 60, \"range3\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tp.AddPortSpec(s)\n\n\t\t\ts, err = portspec.NewPortSpec(15, 15, \"15\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tp.AddPortSpec(s)\n\n\t\t\ts, err = portspec.NewPortSpec(25, 25, \"25\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tp.AddPortSpec(s)\n\n\t\t\tConvey(\"If I match both exact and range, I should get the exact\", func() {\n\t\t\t\ts, err := p.GetSpecValueFromPort(15)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(s.(string), ShouldResemble, \"15\")\n\t\t\t\ts, err = p.GetSpecValueFromPort(25)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(s.(string), ShouldResemble, \"25\")\n\t\t\t})\n\n\t\t\tConvey(\"If I match the range beginning, I should get the result\", func() {\n\t\t\t\ts, err := p.GetSpecValueFromPort(10)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(s.(string), ShouldResemble, \"range1\")\n\t\t\t})\n\n\t\t\tConvey(\"If I match the range end , I should get the result\", func() {\n\t\t\t\ts, err := p.GetSpecValueFromPort(19)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(s.(string), ShouldResemble, \"range1\")\n\t\t\t\ts, err = p.GetSpecValueFromPort(39)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(s.(string), ShouldResemble, \"range2\")\n\t\t\t\ts, err = p.GetSpecValueFromPort(59)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(s.(string), ShouldResemble, \"range3\")\n\t\t\t})\n\n\t\t\tConvey(\"Last number is included \", func() {\n\t\t\t\t_, err := p.GetSpecValueFromPort(20)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestGetAll(t *testing.T) {\n\tConvey(\"Given an initialized cache\", t, func() {\n\t\tp := NewPortCache(\"test\")\n\t\tConvey(\"When I add both single ports and rages\", func() {\n\t\t\ts, err := portspec.NewPortSpec(10, 20, \"range1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tp.AddPortSpec(s)\n\n\t\t\ts, err = portspec.NewPortSpec(30, 40, \"range2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tp.AddPortSpec(s)\n\n\t\t\ts, err = 
portspec.NewPortSpec(50, 60, \"range3\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tp.AddPortSpec(s)\n\n\t\t\ts, err = portspec.NewPortSpec(15, 15, \"15\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tp.AddPortSpec(s)\n\n\t\t\ts, err = portspec.NewPortSpec(25, 25, \"25\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tp.AddPortSpec(s)\n\n\t\t\tConvey(\"If I match both exact and range, I should get the exact\", func() {\n\t\t\t\ts, err := p.GetAllSpecValueFromPort(15)\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(len(s), ShouldEqual, 2)\n\n\t\t\t\tSo(s[0].(string), ShouldResemble, \"15\")\n\t\t\t\tSo(s[1].(string), ShouldResemble, \"range1\")\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestAddUnique(t *testing.T) {\n\tConvey(\"Given an initialized cache\", t, func() {\n\t\tp := NewPortCache(\"test\")\n\t\tConvey(\"When I add unique entries, I should get no errors \", func() {\n\t\t\ts, err := portspec.NewPortSpec(10, 20, \"range1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldBeNil)\n\n\t\t\ts, err = portspec.NewPortSpec(30, 40, \"range2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldBeNil)\n\n\t\t\ts, err = portspec.NewPortSpec(50, 60, \"range3\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I add non-unique entries, I should get  errors \", func() {\n\t\t\ts, err := portspec.NewPortSpec(10, 20, \"range1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldBeNil)\n\n\t\t\ts, err = portspec.NewPortSpec(30, 40, \"range1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldBeNil)\n\n\t\t\ts, err = portspec.NewPortSpec(5, 15, \"range2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldNotBeNil)\n\n\t\t\ts, err = portspec.NewPortSpec(15, 25, \"range2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldNotBeNil)\n\n\t\t\ts, err = portspec.NewPortSpec(5, 25, \"range3\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldNotBeNil)\n\n\t\t\ts, err = portspec.NewPortSpec(5, 5, 
\"range3\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldBeNil)\n\n\t\t\ts, err = portspec.NewPortSpec(15, 15, \"range3\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldNotBeNil)\n\n\t\t\ts, err = portspec.NewPortSpec(25, 25, \"range3\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I match error'd unique entries and a valid range, I should get the valid range only\", func() {\n\t\t\ts, err := portspec.NewPortSpec(10, 20, \"range1\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldBeNil)\n\n\t\t\ts, err = portspec.NewPortSpec(5, 15, \"range2\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldNotBeNil)\n\n\t\t\ts, err = portspec.NewPortSpec(15, 15, \"15\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(p.AddUnique(s), ShouldNotBeNil)\n\n\t\t\ta, err := p.GetAllSpecValueFromPort(15)\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tSo(len(a), ShouldEqual, 1)\n\t\t\tSo(a[0].(string), ShouldResemble, \"range1\")\n\t\t})\n\t})\n}\n\nfunc TestRemoveStringPort(t *testing.T) {\n\tConvey(\"Given an initialized cache\", t, func() {\n\t\tp := NewPortCache(\"test\")\n\n\t\ts, err := portspec.NewPortSpec(10, 20, \"range1\")\n\t\tSo(err, ShouldBeNil)\n\t\tSo(p.AddUnique(s), ShouldBeNil)\n\n\t\ts, err = portspec.NewPortSpec(30, 40, \"range2\")\n\t\tSo(err, ShouldBeNil)\n\t\tSo(p.AddUnique(s), ShouldBeNil)\n\n\t\ts, err = portspec.NewPortSpec(100, 100, \"range3\")\n\t\tSo(err, ShouldBeNil)\n\t\tSo(p.AddUnique(s), ShouldBeNil)\n\n\t\tConvey(\"When I remove valid entries, there should be no errors\", func() {\n\t\t\tSo(p.RemoveStringPorts(\"10:20\"), ShouldBeNil)\n\t\t\tSo(p.RemoveStringPorts(\"30:40\"), ShouldBeNil)\n\t\t\tSo(p.RemoveStringPorts(\"100:100\"), ShouldBeNil)\n\t\t})\n\n\t\tConvey(\"When I remove invalid entries, I should get an error\", func() {\n\t\t\tSo(p.RemoveStringPorts(\"100:200\"), ShouldNotBeNil)\n\t\t\tSo(p.RemoveStringPorts(\"10:40\"), ShouldNotBeNil)\n\t\t})\n\t})\n}\n"
  },
  {
    "path": "utils/portspec/portspec.go",
    "content": "package portspec\n\n// This package manages all port spec functions and validations and can\n// be reused by all other packages.\n\nimport (\n\t\"errors\"\n\t\"strconv\"\n\t\"strings\"\n)\n\n// PortSpec is the specification of a port or port range\ntype PortSpec struct {\n\tMin   uint16 `json:\"Min,omitempty\"`\n\tMax   uint16 `json:\"Max,omitempty\"`\n\tvalue interface{}\n}\n\n// NewPortSpec creates a new port spec\nfunc NewPortSpec(min, max uint16, value interface{}) (*PortSpec, error) {\n\n\tif min > max {\n\t\treturn nil, errors.New(\"Min port greater than max\")\n\t}\n\n\treturn &PortSpec{\n\t\tMin:   min,\n\t\tMax:   max,\n\t\tvalue: value,\n\t}, nil\n}\n\n// NewPortSpecFromString creates a new port spec\nfunc NewPortSpecFromString(ports string, value interface{}) (*PortSpec, error) {\n\n\tvar min, max int\n\tvar err error\n\tif strings.Contains(ports, \":\") {\n\t\tportMinMax := strings.SplitN(ports, \":\", 2)\n\t\tif len(portMinMax) != 2 {\n\t\t\treturn nil, errors.New(\"Invalid port specification\")\n\t\t}\n\n\t\tmin, err = strconv.Atoi(portMinMax[0])\n\t\tif err != nil || min < 0 {\n\t\t\treturn nil, errors.New(\"Min is not a valid port\")\n\t\t}\n\n\t\tmax, err = strconv.Atoi(portMinMax[1])\n\t\tif err != nil || max >= 65536 {\n\t\t\treturn nil, errors.New(\"Max is not a valid port\")\n\t\t}\n\t} else {\n\t\tmin, err = strconv.Atoi(ports)\n\t\tif err != nil || min >= 65536 || min < 0 {\n\t\t\treturn nil, errors.New(\"Port is larger than 2^16 or invalid port\")\n\t\t}\n\t\tmax = min\n\t}\n\n\treturn NewPortSpec(uint16(min), uint16(max), value)\n}\n\n// IsMultiPort returns true if the spec is for multiple ports.\nfunc (s *PortSpec) IsMultiPort() bool {\n\treturn s.Min != s.Max\n}\n\n// SinglePort returns the port of a non multi-port spec\nfunc (s *PortSpec) SinglePort() (uint16, error) {\n\tif s.IsMultiPort() {\n\t\treturn 0, errors.New(\"Not a single port specification\")\n\t}\n\n\treturn s.Min, nil\n}\n\n// Range returns the range of a 
spec.\nfunc (s *PortSpec) Range() (uint16, uint16) {\n\treturn s.Min, s.Max\n}\n\n// MultiPort returns the multi-port range as a string.\nfunc (s *PortSpec) String() string {\n\tif s.IsMultiPort() {\n\t\treturn strconv.Itoa(int(s.Min)) + \":\" + strconv.Itoa(int(s.Max))\n\t}\n\n\treturn strconv.Itoa(int(s.Min))\n}\n\n// Value returns the value of the portspec if one is there\nfunc (s *PortSpec) Value() interface{} {\n\treturn s.value\n}\n\n// Overlaps returns true if the provided port spec overlaps with the given one.\nfunc (s *PortSpec) Overlaps(p *PortSpec) bool {\n\ta := p\n\tb := s\n\tif a.Min > b.Min {\n\t\ta = s\n\t\tb = p\n\t}\n\tif a.Max >= b.Min {\n\t\treturn true\n\t}\n\treturn false\n}\n\n// Intersects returns true if the provided port spec intersect with the given one.\nfunc (s *PortSpec) Intersects(p *PortSpec) bool {\n\tif p.Min == p.Max {\n\t\treturn s.IsIncluded(int(p.Min))\n\t}\n\treturn s.IsIncluded(int(p.Min)) && s.IsIncluded(int(p.Max))\n}\n\n// IsIncluded returns trues if a port is within the range of the portspec\nfunc (s *PortSpec) IsIncluded(port int) bool {\n\tp := uint16(port)\n\tif s.Min <= p && p <= s.Max {\n\t\treturn true\n\t}\n\treturn false\n}\n"
  },
  {
    "path": "utils/portspec/portspec_test.go",
    "content": "// +build !windows\n\npackage portspec\n\nimport (\n\t\"testing\"\n\n\t. \"github.com/smartystreets/goconvey/convey\"\n)\n\nfunc TestNewPortSpec(t *testing.T) {\n\tConvey(\"When I create a new port spec\", t, func() {\n\t\tp, err := NewPortSpec(0, 10, \"portspec\")\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"The correct values must be set\", func() {\n\t\t\tSo(p, ShouldNotBeNil)\n\t\t\tSo(p.Min, ShouldEqual, 0)\n\t\t\tSo(p.Max, ShouldEqual, 10)\n\t\t\tSo(p.value.(string), ShouldResemble, \"portspec\")\n\t\t})\n\t})\n}\nfunc TestNewPortSpecFromString(t *testing.T) {\n\tConvey(\"When I create a valid single port spec from string it should succeed\", t, func() {\n\t\tp, err := NewPortSpecFromString(\"10\", \"string\")\n\t\tSo(err, ShouldBeNil)\n\t\tSo(p.Min, ShouldEqual, uint16(10))\n\t\tSo(p.Max, ShouldEqual, uint16(10))\n\t\tSo(p.value.(string), ShouldResemble, \"string\")\n\t})\n\n\tConvey(\"When I create a valid a range  port spec from string it should succeed\", t, func() {\n\t\tp, err := NewPortSpecFromString(\"10:20\", \"string\")\n\t\tSo(err, ShouldBeNil)\n\t\tSo(p.Min, ShouldEqual, uint16(10))\n\t\tSo(p.Max, ShouldEqual, uint16(20))\n\t\tSo(p.value.(string), ShouldResemble, \"string\")\n\t})\n\n\tConvey(\"When I create singe port with value greater than 2^16 it shoud fail \", t, func() {\n\t\t_, err := NewPortSpecFromString(\"70000\", \"string\")\n\t\tSo(err, ShouldNotBeNil)\n\t})\n\n\tConvey(\"When I create singe port with a negative value it should fail\", t, func() {\n\t\t_, err := NewPortSpecFromString(\"-1\", \"string\")\n\t\tSo(err, ShouldNotBeNil)\n\t})\n\n\tConvey(\"When I create a range with min > max it shoud fail\", t, func() {\n\t\t_, err := NewPortSpecFromString(\"20:10\", \"string\")\n\t\tSo(err, ShouldNotBeNil)\n\t})\n\n\tConvey(\"When I create a range with negative min or max  it shoud fail\", t, func() {\n\t\t_, err := NewPortSpecFromString(\"-20:10\", \"string\")\n\t\tSo(err, ShouldNotBeNil)\n\t\t_, err = 
NewPortSpecFromString(\"-20:-110\", \"string\")\n\t\tSo(err, ShouldNotBeNil)\n\t})\n\n\tConvey(\"When I create a range with invalid characters it should fail\", t, func() {\n\t\t_, err := NewPortSpecFromString(\"10,20\", \"string\")\n\t\tSo(err, ShouldNotBeNil)\n\t})\n}\n\nfunc TestIsMultiPort(t *testing.T) {\n\tConvey(\"Given a portspec\", t, func() {\n\t\ts, err := NewPortSpecFromString(\"10:20\", \"string\")\n\t\tSo(err, ShouldBeNil)\n\t\tConvey(\"multiport should return true\", func() {\n\t\t\tSo(s.IsMultiPort(), ShouldBeTrue)\n\t\t})\n\t})\n}\n\nfunc TestRange(t *testing.T) {\n\tConvey(\"Given a portspec\", t, func() {\n\t\tConvey(\"If it is multiport\", func() {\n\t\t\ts, err := NewPortSpecFromString(\"10:20\", \"string\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tConvey(\"Range  should return the value ranges\", func() {\n\t\t\t\tmin, max := s.Range()\n\t\t\t\tSo(min, ShouldEqual, 10)\n\t\t\t\tSo(max, ShouldEqual, 20)\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"If it is not multiport, it should return the one port\", func() {\n\t\t\ts, err := NewPortSpecFromString(\"10\", \"string\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tConvey(\"Multiport should an error\", func() {\n\t\t\t\tmin, max := s.Range()\n\t\t\t\tSo(min, ShouldEqual, 10)\n\t\t\t\tSo(max, ShouldEqual, 10)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestSinglePort(t *testing.T) {\n\tConvey(\"Given a portspec\", t, func() {\n\t\tConvey(\"If it is singleport\", func() {\n\t\t\ts, err := NewPortSpecFromString(\"10\", \"string\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tConvey(\"Multiport should return the value ranges\", func() {\n\t\t\t\tm, err := s.SinglePort()\n\t\t\t\tSo(err, ShouldBeNil)\n\t\t\t\tSo(m, ShouldEqual, uint16(10))\n\t\t\t})\n\t\t})\n\n\t\tConvey(\"If it is not multiport\", func() {\n\t\t\ts, err := NewPortSpecFromString(\"10:20\", \"string\")\n\t\t\tSo(err, ShouldBeNil)\n\t\t\tConvey(\"Singleport  should an error\", func() {\n\t\t\t\t_, err := s.SinglePort()\n\t\t\t\tSo(err, 
ShouldNotBeNil)\n\t\t\t})\n\t\t})\n\t})\n}\n\nfunc TestValue(t *testing.T) {\n\tConvey(\"Given a portspec, value should return the correct value\", t, func() {\n\t\ts, err := NewPortSpecFromString(\"10:20\", \"value\")\n\t\tSo(err, ShouldBeNil)\n\t\tSo(s.value.(string), ShouldResemble, \"value\")\n\t})\n}\n\nfunc TestOVerlap(t *testing.T) {\n\tConvey(\"Given two portspecs that don't overlap\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"10:20\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"30:40\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get false\", func() {\n\t\t\tSo(a.Overlaps(b), ShouldBeFalse)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that overlap\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"10:50\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"30:40\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Overlaps(b), ShouldBeTrue)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that partially overlap\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"10:35\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"30:40\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Overlaps(b), ShouldBeTrue)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that partially overlap at the end\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"35:45\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"30:40\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Overlaps(b), ShouldBeTrue)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that partially overlap at a point\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"10:20\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"10\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get 
true\", func() {\n\t\t\tSo(a.Overlaps(b), ShouldBeTrue)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that partially overlap at the end point\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"10:20\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"20\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Overlaps(b), ShouldBeTrue)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that overlap\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"20\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"20\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Overlaps(b), ShouldBeTrue)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that do not overlap\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"80\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"443\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Overlaps(b), ShouldBeFalse)\n\t\t})\n\t})\n}\n\nfunc TestIntersect(t *testing.T) {\n\tConvey(\"Given two portspecs that don't intersect\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"10:20\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"30:40\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get false\", func() {\n\t\t\tSo(a.Intersects(b), ShouldBeFalse)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that intersect\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"10:50\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"30:40\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Intersects(b), ShouldBeTrue)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that intersect\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"10:50\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"45\", 
\"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Intersects(b), ShouldBeTrue)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that partially overlap\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"10:35\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"30:40\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Intersects(b), ShouldBeFalse)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that partially overlap at the end\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"35:45\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"30:40\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Intersects(b), ShouldBeFalse)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that partially overlap at a point\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"10:20\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"10\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Intersects(b), ShouldBeTrue)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that partially overlap at the end point\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"10:20\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"20\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Intersects(b), ShouldBeTrue)\n\t\t})\n\t})\n\n\tConvey(\"Given two portspecs that overlap\", t, func() {\n\t\ta, err1 := NewPortSpecFromString(\"20\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"20\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Intersects(b), ShouldBeTrue)\n\t\t})\n\t})\n\n\tConvey(\"Given two  portspecs that do not overlap\", t, func() {\n\t\ta, err1 := 
NewPortSpecFromString(\"80\", \"value\")\n\t\tb, err2 := NewPortSpecFromString(\"443\", \"value\")\n\t\tSo(err1, ShouldBeNil)\n\t\tSo(err2, ShouldBeNil)\n\t\tConvey(\"I should get true\", func() {\n\t\t\tSo(a.Intersects(b), ShouldBeFalse)\n\t\t})\n\t})\n}\n"
  }
]