[
  {
    "path": ".gitignore",
    "content": "# Compiled Object files, Static and Dynamic libs (Shared Objects)\n*.o\n*.a\n*.so\n\n# Folders\n_obj\n_test\n\n# Architecture specific extensions/prefixes\n*.[568vq]\n[568vq].out\n\n*.cgo1.go\n*.cgo2.c\n_cgo_defun.c\n_cgo_gotypes.go\n_cgo_export.*\n\n_testmain.go\n\n*.exe\n*.test\n*.prof\n\nbin/\ngopath/\n\ntests/distribution\ntests/rendered-test-*\ntests/rkt-uuid-test-*\ntests/*.docker\n\n*.swp\n*.aci\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "## v0.17.2\n\nThis is a bugfix release to ensure compatibility with newer go-1.10 toolchain.\n\n - lib/internal: fix tar header format ([#260](https://github.com/appc/docker2aci/pull/260)).\n - tests: update script to run within gopath ([#261](https://github.com/appc/docker2aci/pull/261)).\n\n## v0.17.1\n\nThis is a bugfix release that fixes pulling certain images from the Google Container Registry.\n\n - lib: add signed manifest media type ([#255](https://github.com/appc/docker2aci/pull/255)).\n\n## v0.17.0\n\nThis is mostly a bugfix release that fixes a couple of panics and supports additional docker image syntax.\n\n - Avoid panicking on scratch images ([248](https://github.com/appc/docker2aci/pull/248)).\n - Bugfix/panic on invalid env entry ([#249](https://github.com/appc/docker2aci/pull/249)).\n - lib/common: update `ParseDockerURL` ([#250](https://github.com/appc/docker2aci/pull/250)).\n\n## v0.16.0\n\nThis release adds a manifest hash annotation on converted images and introduces some API changes to allow for more granular control on registries and media types.\n\n - Annotate manifest hash ([#237](https://github.com/appc/docker2aci/pull/237)).\n - Allow selective disabling of registries and media types ([#239](https://github.com/appc/docker2aci/pull/239)).\n - Update appc/spec to 0.8.10 ([#242](https://github.com/appc/docker2aci/pull/242)).\n\n## v0.15.0\n\nThis release improves translation of arch labels and image name annotations. 
It also changes the default output image filename.\n\n - Translate \"os\" and \"arch\" labels of image manifest ([#234](https://github.com/appc/docker2aci/pull/234)).\n - Minor style changes ([#230](https://github.com/appc/docker2aci/pull/230)).\n - Bump appc/spec library version to 0.8.9 ([#233](https://github.com/appc/docker2aci/pull/233)).\n - Image from file improvements; guesses at \"originalname\" and fixes for \"--image\" ([#229](https://github.com/appc/docker2aci/pull/229)).\n\n## v0.14.0\n\nThis release adds compatibility for OCI v1.0.0-rc2 types, introduces support for converting image labels, and fixes some issues related to automatic fallback to registry API v1.\n\n - log: introduce Logger interface ([#218](https://github.com/appc/docker2aci/pull/218)).\n - lib/internal: set UserLabels to be Docker image labels ([#223](https://github.com/appc/docker2aci/pull/223)).\n - fetch: annotate originally requested name ([#224](https://github.com/appc/docker2aci/pull/224)).\n - types: update OCI image-spec to rc2 ([#226](https://github.com/appc/docker2aci/pull/226)).\n - lib/internal: fix v2 registry check URL ([#220](https://github.com/appc/docker2aci/pull/220)).\n - lib/internal: allow auto fallback from v2 API to v1 ([#222](https://github.com/appc/docker2aci/pull/222)).\n\n## v0.13.0\n\nThis release adds support for converting local OCI bundles and fixes two security issues (CVE-2016-7569 and CVE-2016-8579). It also includes fixes for several image fetching and conversion bugs.\n\n - docker2aci: add support for converting OCI tarfiles ([#200](https://github.com/appc/docker2aci/pull/200)).\n - docker2aci: additional validation on malformed images ([#204](https://github.com/appc/docker2aci/pull/204)).
Fixes CVE-2016-7569 and CVE-2016-8579.\n - lib: Use the new media types for oci ([#213](https://github.com/appc/docker2aci/pull/213)).\n - backend/repository: assume no v2 on unexpected status ([#214](https://github.com/appc/docker2aci/pull/214)).\n - lib/internal: do not compare tag when pulling by digest ([#207](https://github.com/appc/docker2aci/pull/207)).\n - lib/internal: re-use uid value when gid is missing ([#206](https://github.com/appc/docker2aci/pull/206)).\n - lib/internal: add entrypoint/cmd annotations to v21 images ([#199](https://github.com/appc/docker2aci/pull/199)).\n\n## v0.12.3\n\nThis is another bugfix release.\n\n- lib/repository2: get the correct layer index ([#188](https://github.com/appc/docker2aci/pull/188)). This fixes layer ordering for the Docker API v2.1.\n- lib/repository2: fix manifest v2.2 layer ordering ([#190](https://github.com/appc/docker2aci/pull/190)). This fixes layer ordering for the Docker API v2.2.\n\n## v0.12.2\n\nThis is a bugfix release.\n\n- lib/repository2: populate reverseLayers correctly ([#185](https://github.com/appc/docker2aci/pull/185)). The bug caused converted Image Manifests to have the wrong fields; a test was added to make sure this won't go unnoticed again.\n- tests: remove redundant code and simplify ([#186](https://github.com/appc/docker2aci/pull/186)).\n\n## v0.12.1\n\nThis release fixes a couple of bugs, adds image fetching tests, and replaces godep with glide for vendoring.\n\n- Replace Godeps with glide ([#174](https://github.com/appc/docker2aci/pull/174)).\n- Avoid O(N) and fix defer reader close ([#180](https://github.com/appc/docker2aci/pull/180)).\n- Add golang tests to lib/test to test image fetching ([#181](https://github.com/appc/docker2aci/pull/181)).\n\n## v0.12.0\n\nv0.12.0 introduces support for the Docker v2.2 image format and OCI image format.
It also fixes a bug that prevented pulling by digest from working.\n\n- backend/repository2: don't ignore when there's an image digest ([#171](https://github.com/appc/docker2aci/pull/171)).\n- lib/repository2: add support for docker v2.2 and OCI ([#176](https://github.com/appc/docker2aci/pull/176)).\n\n## v0.11.1\n\nv0.11.1 is a bugfix release.\n\n- Fix parallel pull synchronisation ([#167](https://github.com/appc/docker2aci/pull/167), [#168](https://github.com/appc/docker2aci/pull/168)).\n\n## v0.11.0\n\nThis release splits the `--insecure` flag in two: `--insecure-skip-verify` to skip TLS verification, and `--insecure-allow-http` to allow unencrypted connections when fetching images. It also includes a couple of bugfixes.\n\n- Add missing message to channel on successful layer download ([#161](https://github.com/appc/docker2aci/pull/161)).\n- Fix a panic when a layer being fetched encounters an error ([#162](https://github.com/appc/docker2aci/pull/162)).\n- Split `--insecure` flag in two ([#163](https://github.com/appc/docker2aci/pull/163)).\n\n## v0.10.0\n\nThis release includes two major performance optimizations: parallel layer pull and parallel ACI compression.\n\n- Pull layers in parallel ([#158](https://github.com/appc/docker2aci/pull/158)).\n- Use a parallel compression library ([#157](https://github.com/appc/docker2aci/pull/157)).\n- Fix auth token parsing to handle services with spaces in their names ([#150](https://github.com/appc/docker2aci/pull/150)).\n\n## v0.9.3\n\nv0.9.3 is a minor bug fix release.\n\n- Use the default transport when doing HTTP requests ([#147](https://github.com/appc/docker2aci/pull/147)).
We were using an empty transport that didn't pass on the proxy configuration.\n\n## v0.9.2\n\nv0.9.2 is a minor release with a bug fix and a cleanup over the previous one.\n\n- Use upstream docker functions to parse docker URLs and parse digest ([#140](https://github.com/appc/docker2aci/pull/140)).\n- Change docker entrypoint/cmd annotations to json ([#142](https://github.com/appc/docker2aci/pull/142)).\n\n## v0.9.1\n\nv0.9.1 is mainly a bugfix and cleanup release.\n\n- Remove redundant dependency fetching; we're vendoring dependencies now ([#134](https://github.com/appc/docker2aci/pull/134)).\n- Export ParseDockerURL, which is used by rkt ([#135](https://github.com/appc/docker2aci/pull/135)).\n- Export annotations so people can use them outside docker2aci ([#135](https://github.com/appc/docker2aci/pull/135)).\n- Refactor the library so internal functions are in the \"internal\" package ([#135](https://github.com/appc/docker2aci/pull/135)).\n- Document release process and add a bump-version script ([#137](https://github.com/appc/docker2aci/pull/137)).\n\n## v0.9.0\n\nv0.9.0 is the initial release of docker2aci.\n\ndocker2aci converts Docker images to ACI, either from a remote repository or from a local file generated with \"docker save\".\n\nIt supports v1 and v2 Docker registries, compression, and layer squashing.\n"
  },
  {
    "path": "Documentation/devel/release.md",
    "content": "# docker2aci release guide\n\nHow to perform a release of docker2aci.\nThis guide is probably unnecessarily verbose, so improvements welcomed.\nOnly parts of the procedure are automated; this is somewhat intentional (manual steps for sanity checking) but it can probably be further scripted, please help.\n\nThe following example assumes we're going from version 0.9.0 (`v0.9.0`) to 0.9.1 (`v0.9.1`).\n\nLet's get started:\n\n- Start at the relevant milestone on GitHub (e.g. https://github.com/appc/docker2aci/milestones/v0.9.1): ensure all referenced issues are closed (or moved elsewhere, if they're not done). Close the milestone.\n- Branch from the latest master, make sure your git status is clean\n- Ensure the build is clean!\n  - `git clean -ffdx && ./build.sh && ./tests/test.sh` should work\n  - Integration tests on CI should be green\n- Update the [release notes](https://github.com/appc/docker2aci/blob/master/CHANGELOG.md).\n  Try to capture most of the salient changes since the last release, but don't go into unnecessary detail (better to link/reference the documentation wherever possible).\n\nThe docker2aci version is [hardcoded in the repository](https://github.com/appc/docker2aci/blob/master/lib/version.go#L19), so the first thing to do is bump it:\n\n- Run `scripts/bump-release v0.9.1`.\n  This should generate two commits: a bump to the actual release (e.g. v0.9.1), and then a bump to the release+git (e.g. 
v0.9.1+git).\n  The actual release version should only exist in a single commit!\n- Sanity check what the script did with `git diff HEAD^^` or similar.\n- If the script didn't work, yell at the author and/or fix it.\n  It can almost certainly be improved.\n- File a PR and get a review from another [MAINTAINER](https://github.com/appc/docker2aci/blob/master/MAINTAINERS).\n  This is useful to a) sanity check the diff, and b) be very explicit/public that a release is happening.\n- Ensure the CI on the release PR is green!\n\nAfter merging and going back to the master branch, we check out the release version and tag it:\n\n- `git checkout HEAD^` should work; sanity check lib/version.go (the `Version` variable) after doing this.\n- Add a signed tag: `git tag -s v0.9.1`.\n- Build docker2aci:\n  - `sudo git clean -ffdx && ./build.sh`\n  - Sanity check `bin/docker2aci -version`\n- Push the tag to GitHub: `git push --tags`\n\nNow we switch to the GitHub web UI to conduct the release:\n\n- https://github.com/appc/docker2aci/releases/new\n- For now, check \"This is a pre-release\".\n- Tag \"v0.9.1\", release title \"v0.9.1\".\n- Copy-paste the release notes you added earlier in [CHANGELOG.md](https://github.com/appc/docker2aci/blob/master/CHANGELOG.md).\n- You can also add a little more detail and polish to the release notes here if you wish, as it is more targeted towards users (vs the changelog being more for developers); use your best judgement and see previous releases on GH for examples.\n- Attach the release.\n  This is a simple tarball:\n\n```\n\texport NAME=\"docker2aci-v0.9.1\"\n\tmkdir $NAME\n\tcp bin/docker2aci $NAME/\n\tsudo chown -R root:root $NAME/\n\ttar czvf $NAME.tar.gz --numeric-owner $NAME/\n```\n\n- Attach the release signature; your personal GPG is okay for now:\n\n```\n\tgpg --detach-sign $NAME.tar.gz\n```\n\n- Publish the release!\n\n- Clean your git tree: `sudo git clean -ffdx`.\n"
  },
  {
    "path": "LICENSE",
    "content": "Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived 
from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright {yyyy} {name of copyright owner}\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n\n"
  },
  {
    "path": "MAINTAINERS",
    "content": "Alban Crequy <alban@kinvolk.io> (@alban)\nIago López Galeiras <iago@kinvolk.io> (@iaguis)\nKrzesimir Nowak <krzesimir@kinvolk.io> (@krnowak)\n"
  },
  {
    "path": "README.md",
    "content": "# docker2aci - Convert docker images to ACI\n\n[![Build Status](https://semaphoreci.com/api/v1/projects/4472761c-2b88-41f2-b2de-bf0447a8a290/610597/badge.svg)](https://semaphoreci.com/appc/docker2aci)\n\ndocker2aci is a small library and CLI binary that converts Docker images to\n[ACI][aci]. It takes as input either a file generated by \"docker save\" or a\nDocker registry URL. It gets all the layers of a Docker image and squashes them\ninto an ACI image. Optionally, it can generate one ACI for each layer, setting\nthe correct dependencies.\n\nAll ACIs generated are compressed with gzip by default. Compression can be\ndisabled by specifying `--compression=none`.\n\n\n## Build\n\nRequirements: golang 1.6+\n\n\tgit clone git://github.com/appc/docker2aci\n\tcd docker2aci\n\t./build.sh\n\n## Volumes\n\nDocker Volumes get converted to mountPoints in the [Image Manifest\nSchema][imageschema]. Since mountPoints need a name and Docker Volumes don't,\ndocker2aci generates a name by appending the path to `volume-` replacing\nnon-alphanumeric characters with dashes. That is, if a Volume has `/var/tmp`\nas path, the resulting mountPoint name will be `volume-var-tmp`.\n\nWhen the docker2aci CLI binary converts a Docker Volume to a mountPoint it will\nprint its name, path and whether it is read-only or not.\n\n## Ports\n\nDocker Ports get converted to ports in the [Image Manifest\nSchema][imageschema]. The resulting port name will be the port number and the\nprotocol separated by a dash. 
For example: `6379-tcp`.\n\n## CLI examples\n\n```\n$ docker2aci docker://busybox\nDownloading sha256:55dc925c23d: [==============================] 674 KB/674 KB\nDownloading sha256:a3ed95caeb0: [==============================] 32 B/32 B\n\nGenerated ACI(s):\nlibrary-busybox-latest.aci\n$ actool --debug validate library-busybox-latest.aci\nlibrary-busybox-latest.aci: valid app container image\n```\n\n```\n$ docker2aci --nosquash docker://quay.io/coreos/etcd:latest\nDownloading sha256:f05e5379dcb: [==============================] 3.98 MB/3.98 MB\nDownloading sha256:af1897d2d32: [==============================] 3.5 MB/3.5 MB\nDownloading sha256:a3ed95caeb0: [==============================] 32 B/32 B\nDownloading sha256:a3ed95caeb0: [==============================] 32 B/32 B\n\nConverted ports:\n        name: \"2379-tcp\", protocol: \"tcp\", port: 2379, count: 1, socketActivated: false\n        name: \"2380-tcp\", protocol: \"tcp\", port: 2380, count: 1, socketActivated: false\n        name: \"4001-tcp\", protocol: \"tcp\", port: 4001, count: 1, socketActivated: false\n        name: \"7001-tcp\", protocol: \"tcp\", port: 7001, count: 1, socketActivated: false\n\nGenerated ACI(s):\ncoreos-etcd-d21dd9a5886270b7c2c379c02fc548e0696b139c43bb12fdb2d9b63409717485-latest-linux-amd64-3.aci\ncoreos-etcd-620329641f386e62c7b0e0fa60a9acef100e71058124ddc7f1969557c72b2458-latest-linux-amd64-2.aci\ncoreos-etcd-9cd3f08f7ccfaad24c73757a5b4f79601f2790726d6ccdd556a82e5c9c5ddbfa-latest-linux-amd64-1.aci\ncoreos-etcd-9cd3f08f7ccfaad24c73757a5b4f79601f2790726d6ccdd556a82e5c9c5ddbfa-latest-linux-amd64-0.aci\n```\n\n```\n$ docker save -o ubuntu.docker ubuntu\n$ docker2aci ubuntu.docker\nExtracting 706766fe1019\nExtracting a62a42e77c9c\nExtracting 2c014f14d3d9\nExtracting b7cf8f0d9e82\n\nGenerated ACI(s):\nubuntu-latest.aci\n$ actool --debug validate ubuntu-latest.aci\nubuntu-latest.aci: valid app container image\n```\n\n```\n$ docker2aci docker://redis\nDownloading sha256:c666c10c893:
[==============================] 37.2 MB/37.2 MB\nDownloading sha256:a3ed95caeb0: [==============================] 32 B/32 B\nDownloading sha256:d6f52360d0a: [==============================] 1.69 KB/1.69 KB\nDownloading sha256:8c3a687fd4c: [==============================] 5.93 MB/5.93 MB\nDownloading sha256:15554e0e598: [==============================] 109 KB/109 KB \nDownloading sha256:3286d490a29: [==============================] 611 KB/611 KB \nDownloading sha256:a3ed95caeb0: [==============================] 32 B/32 B\nDownloading sha256:a3ed95caeb0: [==============================] 32 B/32 B\nDownloading sha256:a3ed95caeb0: [==============================] 32 B/32 B\nDownloading sha256:a3d89b95a63: [==============================] 3.04 MB/3.04 MB\nDownloading sha256:1c4db557158: [==============================] 98 B/98 B\nDownloading sha256:a3ed95caeb0: [==============================] 32 B/32 B\nDownloading sha256:a3ed95caeb0: [==============================] 32 B/32 B\nDownloading sha256:a1a961e320b: [==============================] 196 B/196 B\nDownloading sha256:a3ed95caeb0: [==============================] 32 B/32 B\nDownloading sha256:a3ed95caeb0: [==============================] 32 B/32 B\nDownloading sha256:a3ed95caeb0: [==============================] 32 B/32 B\n\nConverted volumes:\n        name: \"volume-data\", path: \"/data\", readOnly: false\n\nConverted ports:\n        name: \"6379-tcp\", protocol: \"tcp\", port: 6379, count: 1, socketActivated: false\n\nGenerated ACI(s):\nlibrary-redis-latest.aci\n$ actool --debug validate library-redis-latest.aci\nlibrary-redis-latest.aci: valid app container image\n```\n\n[aci]: https://github.com/appc/spec/blob/master/SPEC.md#app-container-image\n[imageschema]: https://github.com/appc/spec/blob/master/spec/aci.md#image-manifest-schema\n"
  },
  {
    "path": "build.sh",
    "content": "#!/usr/bin/env bash\nset -e\n\n# Gets the directory that this script is stored in.\n# https://stackoverflow.com/questions/59895/can-a-bash-script-tell-what-directory-its-stored-in\nDIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" && pwd )\"\n\nORG_PATH=\"github.com/appc\"\nREPO_PATH=\"${ORG_PATH}/docker2aci\"\nVERSION=$(git describe --dirty --always)\nGLDFLAGS=\"-X ${REPO_PATH}/lib.Version=${VERSION}\"\n\nif [ ! -h ${DIR}/gopath/src/${REPO_PATH} ]; then\n  mkdir -p ${DIR}/gopath/src/${ORG_PATH}\n  cd ${DIR} && ln -s ../../../.. gopath/src/${REPO_PATH} || exit 255\nfi\n\nexport GO15VENDOREXPERIMENT=1\nexport GOBIN=${DIR}/bin\nexport GOPATH=${DIR}/gopath\nexport GOOS GOARCH\n\neval $(go env)\n\nif [ \"${GOOS}\" = \"freebsd\" ]; then\n    # /usr/bin/cc is clang on freebsd, but we need to tell it to go to\n    # make it generate proper flavour of code that doesn't emit\n    # warnings.\n    export CC=clang\nfi\n\necho \"Building docker2aci...\"\ngo build -o ${GOBIN}/docker2aci -ldflags \"${GLDFLAGS}\" ${REPO_PATH}\n"
  },
  {
    "path": "glide.yaml",
    "content": "package: github.com/appc/docker2aci\nimport:\n- package: github.com/appc/spec\n  version: 0.8.10\n  subpackages:\n  - aci\n  - pkg/acirenderer\n  - schema\n  - schema/types\n- package: github.com/coreos/ioprogress\n  version: 4637e494fd9b23c5565ee193e89f91fdc1639bc0\n- package: github.com/coreos/pkg\n  version: 2.0.0\n  subpackages:\n  - progressutil\n- package: github.com/docker/distribution\n  version: 5db89f0ca68677abc5eefce8f2a0a772c98ba52d\n  subpackages:\n  - reference\n- package: github.com/klauspost/pgzip\n  version: 1.0.0\n- package: github.com/opencontainers/image-spec\n  version: v1.0.0-rc2\n  subpackages:\n  - specs-go\n- package: github.com/opencontainers/go-digest\n  version: 21dfd564fd89c944783d00d069f33e3e7123c448\n"
  },
  {
    "path": "lib/common/common.go",
    "content": "// Copyright 2016 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package common provides misc types and variables.\npackage common\n\nimport (\n\t\"fmt\"\n\t\"regexp\"\n\n\t\"github.com/appc/docker2aci/lib/internal/docker\"\n\t\"github.com/docker/distribution/reference\"\n\n\tspec \"github.com/opencontainers/image-spec/specs-go/v1\"\n)\n\ntype Compression int\n\nconst (\n\tNoCompression = iota\n\tGzipCompression\n)\n\nvar (\n\tvalidId = regexp.MustCompile(`^(\\w+:)?([A-Fa-f0-9]+)$`)\n)\n\nconst (\n\t// AppcDockerOriginalName is the unmodified name this image was originally\n\t// referenced by for fetching, e.g. 
\"nginx:tag\" or\n\t// \"quay.io/user/image:latest\". This is identical in most cases to\n\t// 'registryurl/repository:tag' but may differ for the default Docker Hub\n\t// registry or if the tag was inferred as latest.\n\tAppcDockerOriginalName  = \"appc.io/docker/originalname\"\n\tAppcDockerRegistryURL   = \"appc.io/docker/registryurl\"\n\tAppcDockerRepository    = \"appc.io/docker/repository\"\n\tAppcDockerTag           = \"appc.io/docker/tag\"\n\tAppcDockerImageID       = \"appc.io/docker/imageid\"\n\tAppcDockerParentImageID = \"appc.io/docker/parentimageid\"\n\tAppcDockerEntrypoint    = \"appc.io/docker/entrypoint\"\n\tAppcDockerCmd           = \"appc.io/docker/cmd\"\n\tAppcDockerManifestHash  = \"appc.io/docker/manifesthash\"\n)\n\nconst defaultTag = \"latest\"\n\n// ParsedDockerURL represents a parsed Docker URL.\ntype ParsedDockerURL struct {\n\tOriginalName string\n\tIndexURL     string\n\tImageName    string\n\tTag          string\n\tDigest       string\n}\n\ntype ErrSeveralImages struct {\n\tMsg    string\n\tImages []string\n}\n\n// InsecureConfig represents the different insecure options available.\ntype InsecureConfig struct {\n\tSkipVerify bool\n\tAllowHTTP  bool\n}\n\nfunc (e *ErrSeveralImages) Error() string {\n\treturn e.Msg\n}\n\n// ParseDockerURL takes a Docker URL and returns a ParsedDockerURL with its\n// index URL, image name, and tag.\nfunc ParseDockerURL(arg string) (*ParsedDockerURL, error) {\n\tr, err := reference.ParseNormalizedNamed(arg)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar tag, digest string\n\tswitch x := r.(type) {\n\tcase reference.Canonical:\n\t\tdigest = x.Digest().String()\n\tcase reference.NamedTagged:\n\t\ttag = x.Tag()\n\tdefault:\n\t\ttag = defaultTag\n\t}\n\n\tindexURL, remoteName := docker.SplitReposName(reference.FamiliarName(r))\n\n\treturn &ParsedDockerURL{\n\t\tOriginalName: arg,\n\t\tIndexURL:     indexURL,\n\t\tImageName:    remoteName,\n\t\tTag:          tag,\n\t\tDigest:       
digest,\n\t}, nil\n}\n\n// ValidateLayerId validates a layer ID.\nfunc ValidateLayerId(id string) error {\n\tif ok := validId.MatchString(id); !ok {\n\t\treturn fmt.Errorf(\"invalid layer ID %q\", id)\n\t}\n\treturn nil\n}\n\n/*\n * Media Type Selectors Section\n */\n\nconst (\n\tMediaTypeDockerV21Manifest       = \"application/vnd.docker.distribution.manifest.v1+json\"\n\tMediaTypeDockerV21SignedManifest = \"application/vnd.docker.distribution.manifest.v1+prettyjws\"\n\tMediaTypeDockerV21ManifestLayer  = \"application/vnd.docker.container.image.rootfs.diff+x-gtar\"\n\n\tMediaTypeDockerV22Manifest     = \"application/vnd.docker.distribution.manifest.v2+json\"\n\tMediaTypeDockerV22ManifestList = \"application/vnd.docker.distribution.manifest.list.v2+json\"\n\tMediaTypeDockerV22Config       = \"application/vnd.docker.container.image.v1+json\"\n\tMediaTypeDockerV22RootFS       = \"application/vnd.docker.image.rootfs.diff.tar.gzip\"\n\n\tMediaTypeOCIV1Manifest     = spec.MediaTypeImageManifest\n\tMediaTypeOCIV1ManifestList = spec.MediaTypeImageManifestList\n\tMediaTypeOCIV1Config       = spec.MediaTypeImageConfig\n\tMediaTypeOCIV1Layer        = spec.MediaTypeImageLayer\n)\n\n// MediaTypeOption represents the media types for a given docker image (or oci)\n// spec.\ntype MediaTypeOption int\n\nconst (\n\tMediaTypeOptionDockerV21 = iota\n\tMediaTypeOptionDockerV22\n\tMediaTypeOptionOCIV1Pre\n)\n\n// MediaTypeSet represents a set of media types which docker2aci is to use when\n// fetching images. As an example, if a MediaTypeSet is equal to\n// {MediaTypeOptionDockerV22, MediaTypeOptionOCIV1Pre}, then when an image pull\n// is made V2.1 images will not be fetched. This doesn't apply to V1 pulls. As\n// an edge case, if a MediaTypeSet is nil or empty, that means that _every_ type\n// of media type is enabled. 
This type is intended to be a set, and putting\n// duplicates in this set is generally unadvised.\ntype MediaTypeSet []MediaTypeOption\n\nfunc (m MediaTypeSet) ManifestMediaTypes() []string {\n\tif len(m) == 0 {\n\t\treturn []string{\n\t\t\tMediaTypeDockerV21Manifest,\n\t\t\tMediaTypeDockerV21SignedManifest,\n\t\t\tMediaTypeDockerV22Manifest,\n\t\t\tMediaTypeOCIV1Manifest,\n\t\t}\n\t}\n\tret := []string{}\n\tfor _, option := range m {\n\t\tswitch option {\n\t\tcase MediaTypeOptionDockerV21:\n\t\t\tret = append(ret, MediaTypeDockerV21Manifest)\n\t\t\tret = append(ret, MediaTypeDockerV21SignedManifest)\n\t\tcase MediaTypeOptionDockerV22:\n\t\t\tret = append(ret, MediaTypeDockerV22Manifest)\n\t\tcase MediaTypeOptionOCIV1Pre:\n\t\t\tret = append(ret, MediaTypeOCIV1Manifest)\n\t\t}\n\t}\n\treturn ret\n}\n\nfunc (m MediaTypeSet) ConfigMediaTypes() []string {\n\tif len(m) == 0 {\n\t\treturn []string{\n\t\t\tMediaTypeDockerV22Config,\n\t\t\tMediaTypeOCIV1Config,\n\t\t}\n\t}\n\tret := []string{}\n\tfor _, option := range m {\n\t\tswitch option {\n\t\tcase MediaTypeOptionDockerV21:\n\t\tcase MediaTypeOptionDockerV22:\n\t\t\tret = append(ret, MediaTypeDockerV22Config)\n\t\tcase MediaTypeOptionOCIV1Pre:\n\t\t\tret = append(ret, MediaTypeOCIV1Config)\n\t\t}\n\t}\n\treturn ret\n}\n\nfunc (m MediaTypeSet) LayerMediaTypes() []string {\n\tif len(m) == 0 {\n\t\treturn []string{\n\t\t\tMediaTypeDockerV22RootFS,\n\t\t\tMediaTypeOCIV1Layer,\n\t\t}\n\t}\n\tret := []string{}\n\tfor _, option := range m {\n\t\tswitch option {\n\t\tcase MediaTypeOptionDockerV21:\n\t\tcase MediaTypeOptionDockerV22:\n\t\t\tret = append(ret, MediaTypeDockerV22RootFS)\n\t\tcase MediaTypeOptionOCIV1Pre:\n\t\t\tret = append(ret, MediaTypeOCIV1Layer)\n\t\t}\n\t}\n\treturn ret\n}\n\n// RegistryOption represents a type of a registry, based on the version of the\n// docker http API.\ntype RegistryOption int\n\nconst (\n\tRegistryOptionV1 = iota\n\tRegistryOptionV2\n)\n\n// RegistryOptionSet represents a set of 
registry types which docker2aci is to\n// use when fetching images. As an example if a RegistryOptionSet is equal to\n// {RegistryOptionV2}, then v1 pulls are disabled. As an edge case if a\n// RegistryOptionSet is nil or empty, that means that _every_ type of registry\n// is enabled. This type is intended to be a set, and putting duplicates in this\n// set is generally unadvised.\ntype RegistryOptionSet []RegistryOption\n\nfunc (r RegistryOptionSet) AllowsV1() bool {\n\tif len(r) == 0 {\n\t\treturn true\n\t}\n\tfor _, o := range r {\n\t\tif o == RegistryOptionV1 {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\nfunc (r RegistryOptionSet) AllowsV2() bool {\n\tif len(r) == 0 {\n\t\treturn true\n\t}\n\tfor _, o := range r {\n\t\tif o == RegistryOptionV2 {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n"
  },
  {
    "path": "lib/common/common_test.go",
    "content": "// Copyright 2017 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage common\n\nimport (\n\t_ \"crypto/sha256\"\n\t\"reflect\"\n\t\"testing\"\n)\n\nfunc TestMediaTypeSet(t *testing.T) {\n\ttests := []struct {\n\t\tms                    MediaTypeSet\n\t\texpectedManifestTypes []string\n\t\texpectedConfigTypes   []string\n\t\texpectedLayerTypes    []string\n\t}{\n\t\t{\n\t\t\tMediaTypeSet{MediaTypeOptionDockerV21},\n\t\t\t[]string{MediaTypeDockerV21Manifest, MediaTypeDockerV21SignedManifest},\n\t\t\t[]string{},\n\t\t\t[]string{},\n\t\t},\n\t\t{\n\t\t\tMediaTypeSet{MediaTypeOptionDockerV22},\n\t\t\t[]string{MediaTypeDockerV22Manifest},\n\t\t\t[]string{MediaTypeDockerV22Config},\n\t\t\t[]string{MediaTypeDockerV22RootFS},\n\t\t},\n\t\t{\n\t\t\tMediaTypeSet{MediaTypeOptionOCIV1Pre},\n\t\t\t[]string{MediaTypeOCIV1Manifest},\n\t\t\t[]string{MediaTypeOCIV1Config},\n\t\t\t[]string{MediaTypeOCIV1Layer},\n\t\t},\n\t\t{\n\t\t\tMediaTypeSet{},\n\t\t\t[]string{MediaTypeDockerV21Manifest, MediaTypeDockerV21SignedManifest, MediaTypeDockerV22Manifest, MediaTypeOCIV1Manifest},\n\t\t\t[]string{MediaTypeDockerV22Config, MediaTypeOCIV1Config},\n\t\t\t[]string{MediaTypeDockerV22RootFS, MediaTypeOCIV1Layer},\n\t\t},\n\t\t{\n\t\t\tMediaTypeSet{MediaTypeOptionDockerV21, MediaTypeOptionDockerV22, MediaTypeOptionOCIV1Pre},\n\t\t\t[]string{MediaTypeDockerV21Manifest, MediaTypeDockerV21SignedManifest, MediaTypeDockerV22Manifest, 
MediaTypeOCIV1Manifest},\n\t\t\t[]string{MediaTypeDockerV22Config, MediaTypeOCIV1Config},\n\t\t\t[]string{MediaTypeDockerV22RootFS, MediaTypeOCIV1Layer},\n\t\t},\n\t\t{\n\t\t\tMediaTypeSet{MediaTypeOptionDockerV21, MediaTypeOptionOCIV1Pre},\n\t\t\t[]string{MediaTypeDockerV21Manifest, MediaTypeDockerV21SignedManifest, MediaTypeOCIV1Manifest},\n\t\t\t[]string{MediaTypeOCIV1Config},\n\t\t\t[]string{MediaTypeOCIV1Layer},\n\t\t},\n\t}\n\n\tfor _, test := range tests {\n\t\tif !isEqual(test.expectedManifestTypes, test.ms.ManifestMediaTypes()) {\n\t\t\tt.Errorf(\"expected manifest media types didn't match what was returned:\\n%v\\n%v\", test.expectedManifestTypes, test.ms.ManifestMediaTypes())\n\t\t}\n\t\tif !isEqual(test.expectedConfigTypes, test.ms.ConfigMediaTypes()) {\n\t\t\tt.Errorf(\"expected config media types didn't match what was returned:\\n%v\\n%v\", test.expectedConfigTypes, test.ms.ConfigMediaTypes())\n\t\t}\n\t\tif !isEqual(test.expectedLayerTypes, test.ms.LayerMediaTypes()) {\n\t\t\tt.Errorf(\"expected layer media types didn't match what was returned:\\n%v\\n%v\", test.expectedLayerTypes, test.ms.LayerMediaTypes())\n\t\t}\n\t}\n}\n\nfunc TestRegistryOptionSet(t *testing.T) {\n\ttests := []struct {\n\t\trs       RegistryOptionSet\n\t\tallowsV1 bool\n\t\tallowsV2 bool\n\t}{\n\t\t{\n\t\t\tRegistryOptionSet{RegistryOptionV1}, true, false,\n\t\t},\n\t\t{\n\t\t\tRegistryOptionSet{RegistryOptionV2}, false, true,\n\t\t},\n\t\t{\n\t\t\tRegistryOptionSet{RegistryOptionV1, RegistryOptionV2}, true, true,\n\t\t},\n\t\t{\n\t\t\tRegistryOptionSet{}, true, true,\n\t\t},\n\t}\n\tfor _, test := range tests {\n\t\tif test.allowsV1 != test.rs.AllowsV1() {\n\t\t\tt.Errorf(\"AllowsV1() = %v, want %v\", test.rs.AllowsV1(), test.allowsV1)\n\t\t}\n\t\tif test.allowsV2 != test.rs.AllowsV2() {\n\t\t\tt.Errorf(\"AllowsV2() = %v, want %v\", test.rs.AllowsV2(), test.allowsV2)\n\t\t}\n\t}\n}\n\nfunc isEqual(val1, val2 []string) bool {\n\tif len(val1) != len(val2) {\n\t\treturn false\n\t}\nloop1:\n\tfor _, thing1 := range val1 {\n\t\tfor _, 
thing2 := range val2 {\n\t\t\tif thing1 == thing2 {\n\t\t\t\tcontinue loop1\n\t\t\t}\n\t\t}\n\t\treturn false\n\t}\n\treturn true\n}\n\nfunc TestParseDockerURL(t *testing.T) {\n\ttests := []struct {\n\t\tinput    string\n\t\texpected *ParsedDockerURL\n\t}{\n\t\t{\n\t\t\t\"busybox\",\n\t\t\t&ParsedDockerURL{\n\t\t\t\tOriginalName: \"busybox\",\n\t\t\t\tIndexURL:     \"registry-1.docker.io\",\n\t\t\t\tImageName:    \"library/busybox\",\n\t\t\t\tTag:          \"latest\",\n\t\t\t\tDigest:       \"\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\t\"library/busybox\",\n\t\t\t&ParsedDockerURL{\n\t\t\t\tOriginalName: \"library/busybox\",\n\t\t\t\tIndexURL:     \"registry-1.docker.io\",\n\t\t\t\tImageName:    \"library/busybox\",\n\t\t\t\tTag:          \"latest\",\n\t\t\t\tDigest:       \"\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\t\"docker.io/library/busybox:1\",\n\t\t\t&ParsedDockerURL{\n\t\t\t\tOriginalName: \"docker.io/library/busybox:1\",\n\t\t\t\tIndexURL:     \"registry-1.docker.io\",\n\t\t\t\tImageName:    \"library/busybox\",\n\t\t\t\tTag:          \"1\",\n\t\t\t\tDigest:       \"\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\t\"docker.io/library/busybox\",\n\t\t\t&ParsedDockerURL{\n\t\t\t\tOriginalName: \"docker.io/library/busybox\",\n\t\t\t\tIndexURL:     \"registry-1.docker.io\",\n\t\t\t\tImageName:    \"library/busybox\",\n\t\t\t\tTag:          \"latest\",\n\t\t\t\tDigest:       \"\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\t\"gcr.io/google-samples/node-hello:1.0\",\n\t\t\t&ParsedDockerURL{\n\t\t\t\tOriginalName: \"gcr.io/google-samples/node-hello:1.0\",\n\t\t\t\tIndexURL:     \"gcr.io\",\n\t\t\t\tImageName:    \"google-samples/node-hello\",\n\t\t\t\tTag:          \"1.0\",\n\t\t\t\tDigest:       \"\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\t\"alpine@sha256:ea0d1389812f43e474c50155ec4914e1b48792d420820c15cab28c0794034950\",\n\t\t\t&ParsedDockerURL{\n\t\t\t\tOriginalName: \"alpine@sha256:ea0d1389812f43e474c50155ec4914e1b48792d420820c15cab28c0794034950\",\n\t\t\t\tIndexURL:     
\"registry-1.docker.io\",\n\t\t\t\tImageName:    \"library/alpine\",\n\t\t\t\tTag:          \"\",\n\t\t\t\tDigest:       \"sha256:ea0d1389812f43e474c50155ec4914e1b48792d420820c15cab28c0794034950\",\n\t\t\t},\n\t\t},\n\t}\n\tfor _, test := range tests {\n\t\tparsed, err := ParseDockerURL(test.input)\n\t\tif err != nil && test.expected != nil {\n\t\t\tt.Errorf(\"error when parsing %q: %v\\nexpected: %+v\", test.input, err, test.expected)\n\t\t} else if err == nil && test.expected == nil {\n\t\t\tt.Errorf(\"expected %q to result in error\\n\", test.input)\n\t\t} else if !reflect.DeepEqual(test.expected, parsed) {\n\t\t\tt.Errorf(\"expected and parsed `&ParsedDockerURL{}` differ:\\nexpected: %+v\\nparsed:   %+v\\n\", test.expected, parsed)\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "lib/conversion_store.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage docker2aci\n\nimport (\n\t\"crypto/sha512\"\n\t\"fmt\"\n\t\"hash\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"os\"\n\n\t\"github.com/appc/spec/aci\"\n\t\"github.com/appc/spec/schema\"\n\t\"github.com/appc/spec/schema/types\"\n)\n\nconst (\n\thashPrefix = \"sha512-\"\n)\n\ntype aciInfo struct {\n\tpath          string\n\tkey           string\n\tImageManifest *schema.ImageManifest\n}\n\n// conversionStore is a simple implementation of the acirenderer.ACIRegistry\n// interface. 
It stores the Docker layers converted to ACI so we can take\n// advantage of acirenderer to generate a squashed ACI Image.\ntype conversionStore struct {\n\tacis map[string]*aciInfo\n}\n\nfunc newConversionStore() *conversionStore {\n\treturn &conversionStore{acis: make(map[string]*aciInfo)}\n}\n\nfunc (ms *conversionStore) WriteACI(path string) (string, error) {\n\tf, err := os.Open(path)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tdefer f.Close()\n\n\tcr, err := aci.NewCompressedReader(f)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tdefer cr.Close()\n\n\th := sha512.New()\n\tr := io.TeeReader(cr, h)\n\n\t// read the file so we can get the hash\n\tif _, err := io.Copy(ioutil.Discard, r); err != nil {\n\t\treturn \"\", fmt.Errorf(\"error reading ACI: %v\", err)\n\t}\n\n\tim, err := aci.ManifestFromImage(f)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tkey := ms.HashToKey(h)\n\tms.acis[key] = &aciInfo{path: path, key: key, ImageManifest: im}\n\treturn key, nil\n}\n\nfunc (ms *conversionStore) GetImageManifest(key string) (*schema.ImageManifest, error) {\n\taci, ok := ms.acis[key]\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"aci with key: %s not found\", key)\n\t}\n\treturn aci.ImageManifest, nil\n}\n\nfunc (ms *conversionStore) GetACI(name types.ACIdentifier, labels types.Labels) (string, error) {\n\tfor _, aci := range ms.acis {\n\t\t// we implement this function to comply with the interface so don't\n\t\t// bother implementing a proper label check\n\t\tif aci.ImageManifest.Name.String() == name.String() {\n\t\t\treturn aci.key, nil\n\t\t}\n\t}\n\treturn \"\", fmt.Errorf(\"aci not found\")\n}\n\nfunc (ms *conversionStore) ReadStream(key string) (io.ReadCloser, error) {\n\timg, ok := ms.acis[key]\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"stream for key: %s not found\", key)\n\t}\n\tf, err := os.Open(img.path)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error opening aci: %s\", img.path)\n\t}\n\n\ttr, err := aci.NewCompressedReader(f)\n\tif err != nil 
{\n\t\treturn nil, err\n\t}\n\n\treturn tr, nil\n}\n\nfunc (ms *conversionStore) ResolveKey(key string) (string, error) {\n\treturn key, nil\n}\n\nfunc (ms *conversionStore) HashToKey(h hash.Hash) string {\n\ts := h.Sum(nil)\n\treturn fmt.Sprintf(\"%s%x\", hashPrefix, s)\n}\n"
  },
  {
    "path": "lib/docker2aci.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package docker2aci implements a simple library for converting docker images to\n// App Container Images (ACIs).\npackage docker2aci\n\nimport (\n\t\"archive/tar\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/appc/docker2aci/lib/common\"\n\t\"github.com/appc/docker2aci/lib/internal\"\n\t\"github.com/appc/docker2aci/lib/internal/backend/file\"\n\t\"github.com/appc/docker2aci/lib/internal/backend/repository\"\n\t\"github.com/appc/docker2aci/lib/internal/docker\"\n\t\"github.com/appc/docker2aci/lib/internal/tarball\"\n\t\"github.com/appc/docker2aci/lib/internal/util\"\n\t\"github.com/appc/docker2aci/pkg/log\"\n\t\"github.com/appc/spec/pkg/acirenderer\"\n\t\"github.com/appc/spec/schema\"\n\tappctypes \"github.com/appc/spec/schema/types\"\n\tgzip \"github.com/klauspost/pgzip\"\n)\n\n// CommonConfig represents the shared configuration options for converting\n// Docker images.\ntype CommonConfig struct {\n\tSquash                bool               // squash the layers in one file\n\tOutputDir             string             // where to put the resulting ACI\n\tTmpDir                string             // directory to use for temporary files\n\tCompression           common.Compression // which compression to use for the resulting file(s)\n\tCurrentManifestHashes []string           // any manifest 
hashes the caller already has\n\n\tInfo  log.Logger\n\tDebug log.Logger\n}\n\nfunc (c *CommonConfig) initLogger() {\n\tif c.Info == nil {\n\t\tc.Info = log.NewStdLogger(os.Stderr)\n\t}\n\n\tif c.Debug == nil {\n\t\tc.Debug = log.NewNopLogger()\n\t}\n}\n\n// RemoteConfig represents the remote repository specific configuration for\n// converting Docker images.\ntype RemoteConfig struct {\n\tCommonConfig\n\tUsername        string                // username to use if the image to convert needs authentication\n\tPassword        string                // password to use if the image to convert needs authentication\n\tInsecure        common.InsecureConfig // Insecure options\n\tMediaTypes      common.MediaTypeSet\n\tRegistryOptions common.RegistryOptionSet\n}\n\n// FileConfig represents the saved file specific configuration for converting\n// Docker images.\ntype FileConfig struct {\n\tCommonConfig\n\tDockerURL string // select an image if there are several images/tags in the file, Syntax: \"{docker registry URL}/{image name}:{tag}\"\n}\n\n// ConvertRemoteRepo generates ACI images from docker registry URLs.  It takes\n// as input a dockerURL of the form:\n//\n//     {registry URL}/{repository}:{reference[tag|digest]}\n//\n// It then gets all the layers of the requested image and converts each of\n// them to ACI.\n// It returns the list of generated ACI paths.\nfunc ConvertRemoteRepo(dockerURL string, config RemoteConfig) ([]string, error) {\n\tconfig.initLogger()\n\n\treturn (&converter{\n\t\tbackend: repository.NewRepositoryBackend(\n\t\t\tconfig.Username,\n\t\t\tconfig.Password,\n\t\t\tconfig.Insecure,\n\t\t\tconfig.Debug,\n\t\t\tconfig.MediaTypes,\n\t\t\tconfig.RegistryOptions,\n\t\t),\n\t\tdockerURL: dockerURL,\n\t\tconfig:    config.CommonConfig,\n\t}).convert()\n}\n\n// ConvertSavedFile generates ACI images from a file generated with \"docker\n// save\".  
If there are several images/tags in the file, a particular image can\n// be chosen via FileConfig.DockerURL.\n//\n// It returns the list of generated ACI paths.\nfunc ConvertSavedFile(dockerSavedFile string, config FileConfig) ([]string, error) {\n\tconfig.initLogger()\n\n\tf, err := os.Open(dockerSavedFile)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error opening file: %v\", err)\n\t}\n\tdefer f.Close()\n\n\treturn (&converter{\n\t\tbackend:   file.NewFileBackend(f, config.Debug, config.Info),\n\t\tdockerURL: config.DockerURL,\n\t\tconfig:    config.CommonConfig,\n\t}).convert()\n}\n\n// GetIndexName returns the docker index server from a docker URL.\nfunc GetIndexName(dockerURL string) string {\n\tindex, _ := docker.SplitReposName(dockerURL)\n\treturn index\n}\n\n// GetDockercfgAuth reads a ~/.dockercfg file and returns the username and password\n// of the given docker index server.\nfunc GetDockercfgAuth(indexServer string) (string, string, error) {\n\treturn docker.GetAuthInfo(indexServer)\n}\n\ntype converter struct {\n\tbackend   internal.Docker2ACIBackend\n\tdockerURL string\n\tconfig    CommonConfig\n}\n\nfunc (c *converter) convert() ([]string, error) {\n\tc.config.Debug.Println(\"Getting image info...\")\n\tancestry, manhash, parsedDockerURL, err := c.backend.GetImageInfo(c.dockerURL)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif len(ancestry) == 0 {\n\t\treturn nil, fmt.Errorf(\"backend image had no useful layers: not creating ACI\")\n\t}\n\tfor _, h := range c.config.CurrentManifestHashes {\n\t\tif manhash == h {\n\t\t\treturn nil, nil\n\t\t}\n\t}\n\n\tlayersOutputDir := c.config.OutputDir\n\tif c.config.Squash {\n\t\tlayersOutputDir, err = ioutil.TempDir(c.config.TmpDir, \"docker2aci-\")\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"error creating dir: %v\", err)\n\t\t}\n\t\tdefer os.RemoveAll(layersOutputDir)\n\t}\n\n\tconversionStore := newConversionStore()\n\n\t// only compress individual layers if we're not 
squashing\n\tlayerCompression := c.config.Compression\n\tif c.config.Squash {\n\t\tlayerCompression = common.NoCompression\n\t}\n\n\taciLayerPaths, aciManifests, err := c.backend.BuildACI(ancestry, manhash, parsedDockerURL, layersOutputDir, c.config.TmpDir, layerCompression)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tvar images acirenderer.Images\n\tfor i, aciLayerPath := range aciLayerPaths {\n\t\tkey, err := conversionStore.WriteACI(aciLayerPath)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"error inserting in the conversion store: %v\", err)\n\t\t}\n\n\t\timages = append(images, acirenderer.Image{Im: aciManifests[i], Key: key, Level: uint16(len(aciLayerPaths) - 1 - i)})\n\t}\n\n\t// acirenderer expects images in order from upper to base layer\n\timages = util.ReverseImages(images)\n\tif c.config.Squash {\n\t\tsquashedImagePath, err := squashLayers(images, conversionStore, *parsedDockerURL, c.config.OutputDir, c.config.Compression, c.config.Debug)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"error squashing image: %v\", err)\n\t\t}\n\t\taciLayerPaths = []string{squashedImagePath}\n\t}\n\n\treturn aciLayerPaths, nil\n}\n\n// squashLayers receives a list of ACI layer file names ordered from base image\n// to application image and squashes them into one ACI\nfunc squashLayers(images []acirenderer.Image, aciRegistry acirenderer.ACIRegistry, parsedDockerURL common.ParsedDockerURL, outputDir string, compression common.Compression, debug log.Logger) (path string, err error) {\n\tdebug.Println(\"Squashing layers...\")\n\tdebug.Println(\"Rendering ACI...\")\n\trenderedACI, err := acirenderer.GetRenderedACIFromList(images, aciRegistry)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"error rendering squashed image: %v\", err)\n\t}\n\tmanifests, err := getManifests(renderedACI, aciRegistry)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"error getting manifests: %v\", err)\n\t}\n\n\tsquashedFilename := 
getSquashedFilename(parsedDockerURL)\n\tsquashedImagePath := filepath.Join(outputDir, squashedFilename)\n\n\tsquashedTempFile, err := ioutil.TempFile(outputDir, \"docker2aci-squashedFile-\")\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tdefer func() {\n\t\tif err == nil {\n\t\t\terr = squashedTempFile.Close()\n\t\t} else {\n\t\t\t// remove temp file on error\n\t\t\t// we ignore its error to not mask the real error\n\t\t\tos.Remove(squashedTempFile.Name())\n\t\t}\n\t}()\n\n\tdebug.Println(\"Writing squashed ACI...\")\n\tif err := writeSquashedImage(squashedTempFile, renderedACI, aciRegistry, manifests, compression); err != nil {\n\t\treturn \"\", fmt.Errorf(\"error writing squashed image: %v\", err)\n\t}\n\n\tdebug.Println(\"Validating squashed ACI...\")\n\tif err := internal.ValidateACI(squashedTempFile.Name()); err != nil {\n\t\treturn \"\", fmt.Errorf(\"error validating image: %v\", err)\n\t}\n\n\tif err := os.Rename(squashedTempFile.Name(), squashedImagePath); err != nil {\n\t\treturn \"\", err\n\t}\n\n\tdebug.Println(\"ACI squashed!\")\n\treturn squashedImagePath, nil\n}\n\nfunc getSquashedFilename(parsedDockerURL common.ParsedDockerURL) string {\n\tsquashedFilename := strings.Replace(parsedDockerURL.ImageName, \"/\", \"-\", -1)\n\tif parsedDockerURL.Tag != \"\" {\n\t\tsquashedFilename += \"-\" + parsedDockerURL.Tag\n\t}\n\tsquashedFilename += \".aci\"\n\n\treturn squashedFilename\n}\n\nfunc getManifests(renderedACI acirenderer.RenderedACI, aciRegistry acirenderer.ACIRegistry) ([]schema.ImageManifest, error) {\n\tvar manifests []schema.ImageManifest\n\n\tfor _, aci := range renderedACI {\n\t\tim, err := aciRegistry.GetImageManifest(aci.Key)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tmanifests = append(manifests, *im)\n\t}\n\n\treturn manifests, nil\n}\n\nfunc writeSquashedImage(outputFile *os.File, renderedACI acirenderer.RenderedACI, aciProvider acirenderer.ACIProvider, manifests []schema.ImageManifest, compression common.Compression) error 
{\n\tvar tarWriterTarget io.WriteCloser = outputFile\n\n\tswitch compression {\n\tcase common.NoCompression:\n\tcase common.GzipCompression:\n\t\ttarWriterTarget = gzip.NewWriter(outputFile)\n\t\tdefer tarWriterTarget.Close()\n\tdefault:\n\t\treturn fmt.Errorf(\"unexpected compression enum value: %d\", compression)\n\t}\n\n\toutputWriter := tar.NewWriter(tarWriterTarget)\n\tdefer outputWriter.Close()\n\n\tfinalManifest := mergeManifests(manifests)\n\n\tif err := internal.WriteManifest(outputWriter, finalManifest); err != nil {\n\t\treturn err\n\t}\n\n\tif err := internal.WriteRootfsDir(outputWriter); err != nil {\n\t\treturn err\n\t}\n\n\ttype hardLinkEntry struct {\n\t\tfirstLinkCleanName string\n\t\tfirstLinkHeader    tar.Header\n\t\tkeepOriginal       bool\n\t\twalked             bool\n\t}\n\t// map aciFileKey -> cleanTarget -> hardLinkEntry\n\thardLinks := make(map[string]map[string]hardLinkEntry)\n\n\t// first pass: read all the entries and build the hardLinks map in memory\n\t// but don't write on disk\n\tfor _, aciFile := range renderedACI {\n\t\trs, err := aciProvider.ReadStream(aciFile.Key)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdefer rs.Close()\n\n\t\thardLinks[aciFile.Key] = map[string]hardLinkEntry{}\n\n\t\tsquashWalker := func(t *tarball.TarFile) error {\n\t\t\tcleanName := filepath.Clean(t.Name())\n\t\t\t// the rootfs and the squashed manifest are added separately\n\t\t\tif cleanName == \"manifest\" || cleanName == \"rootfs\" {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\t_, keep := aciFile.FileMap[cleanName]\n\t\t\tif keep && t.Header.Typeflag == tar.TypeLink {\n\t\t\t\tcleanTarget := filepath.Clean(t.Linkname())\n\t\t\t\tif _, ok := hardLinks[aciFile.Key][cleanTarget]; !ok {\n\t\t\t\t\t_, keepOriginal := aciFile.FileMap[cleanTarget]\n\t\t\t\t\thardLinks[aciFile.Key][cleanTarget] = hardLinkEntry{cleanName, *t.Header, keepOriginal, false}\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn nil\n\t\t}\n\n\t\ttr := tar.NewReader(rs)\n\t\tif err := tarball.Walk(*tr, 
squashWalker); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// second pass: write on disk\n\tfor _, aciFile := range renderedACI {\n\t\trs, err := aciProvider.ReadStream(aciFile.Key)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdefer rs.Close()\n\n\t\tsquashWalker := func(t *tarball.TarFile) error {\n\t\t\tcleanName := filepath.Clean(t.Name())\n\t\t\t// the rootfs and the squashed manifest are added separately\n\t\t\tif cleanName == \"manifest\" || cleanName == \"rootfs\" {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\t_, keep := aciFile.FileMap[cleanName]\n\n\t\t\tif link, ok := hardLinks[aciFile.Key][cleanName]; ok {\n\t\t\t\tif keep != link.keepOriginal {\n\t\t\t\t\treturn fmt.Errorf(\"logic error: should we keep file %q?\", cleanName)\n\t\t\t\t}\n\t\t\t\tif keep {\n\t\t\t\t\tif err := outputWriter.WriteHeader(t.Header); err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"error writing header: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := io.Copy(outputWriter, t.TarStream); err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"error copying file into the tar out: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\t// The current file does not remain but there is a hard link pointing to\n\t\t\t\t\t// it. Write the current file but with the filename of the first hard link\n\t\t\t\t\t// pointing to it. 
That first hard link will not be written later, see\n\t\t\t\t\t// variable \"alreadyWritten\".\n\t\t\t\t\tlink.firstLinkHeader.Size = t.Header.Size\n\t\t\t\t\tlink.firstLinkHeader.Typeflag = t.Header.Typeflag\n\t\t\t\t\tlink.firstLinkHeader.Linkname = \"\"\n\n\t\t\t\t\tif err := outputWriter.WriteHeader(&link.firstLinkHeader); err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"error writing header: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := io.Copy(outputWriter, t.TarStream); err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"error copying file into the tar out: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else if keep {\n\t\t\t\talreadyWritten := false\n\t\t\t\tif t.Header.Typeflag == tar.TypeLink {\n\t\t\t\t\tcleanTarget := filepath.Clean(t.Linkname())\n\t\t\t\t\tif link, ok := hardLinks[aciFile.Key][cleanTarget]; ok {\n\t\t\t\t\t\tif !link.keepOriginal {\n\t\t\t\t\t\t\tif link.walked {\n\t\t\t\t\t\t\t\tt.Header.Linkname = link.firstLinkCleanName\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\talreadyWritten = true\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tlink.walked = true\n\t\t\t\t\t\thardLinks[aciFile.Key][cleanTarget] = link\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif !alreadyWritten {\n\t\t\t\t\tif err := outputWriter.WriteHeader(t.Header); err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"error writing header: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t\tif _, err := io.Copy(outputWriter, t.TarStream); err != nil {\n\t\t\t\t\t\treturn fmt.Errorf(\"error copying file into the tar out: %v\", err)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn nil\n\t\t}\n\n\t\ttr := tar.NewReader(rs)\n\t\tif err := tarball.Walk(*tr, squashWalker); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc mergeManifests(manifests []schema.ImageManifest) schema.ImageManifest {\n\t// FIXME(iaguis) we take app layer's manifest as the final manifest for now\n\tmanifest := manifests[0]\n\n\tmanifest.Dependencies = nil\n\n\tlayerIndex := -1\n\tfor i, l := range manifest.Labels {\n\t\tif 
l.Name.String() == \"layer\" {\n\t\t\tlayerIndex = i\n\t\t}\n\t}\n\n\tif layerIndex != -1 {\n\t\tmanifest.Labels = append(manifest.Labels[:layerIndex], manifest.Labels[layerIndex+1:]...)\n\t}\n\n\tnameWithoutLayerID := appctypes.MustACIdentifier(stripLayerID(manifest.Name.String()))\n\n\tmanifest.Name = *nameWithoutLayerID\n\n\t// once the image is squashed, we don't need a pathWhitelist\n\tmanifest.PathWhitelist = nil\n\n\treturn manifest\n}\n\n// stripLayerID strips the layer ID from an app name:\n//\n// myregistry.com/organization/app-name-85738f8f9a7f1b04b5329c590ebcb9e425925c6d0984089c43a022de4f19c281\n// myregistry.com/organization/app-name\nfunc stripLayerID(layerName string) string {\n\tn := strings.LastIndex(layerName, \"-\")\n\tif n == -1 {\n\t\t// no layer ID suffix to strip\n\t\treturn layerName\n\t}\n\treturn layerName[:n]\n}\n"
  },
  {
    "path": "lib/internal/backend/file/file.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package file is an implementation of Docker2ACIBackend for files saved via\n// \"docker save\".\n//\n// Note: this package is an implementation detail and shouldn't be used outside\n// of docker2aci.\npackage file\n\nimport (\n\t\"archive/tar\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/appc/docker2aci/lib/common\"\n\t\"github.com/appc/docker2aci/lib/internal\"\n\t\"github.com/appc/docker2aci/lib/internal/tarball\"\n\t\"github.com/appc/docker2aci/lib/internal/types\"\n\t\"github.com/appc/docker2aci/lib/internal/typesV2\"\n\t\"github.com/appc/docker2aci/pkg/log\"\n\t\"github.com/appc/spec/schema\"\n\tspec \"github.com/opencontainers/image-spec/specs-go/v1\"\n)\n\ntype FileBackend struct {\n\tfile        *os.File\n\tdebug, info log.Logger\n}\n\nfunc NewFileBackend(file *os.File, debug, info log.Logger) *FileBackend {\n\treturn &FileBackend{\n\t\tfile:  file,\n\t\tdebug: debug,\n\t\tinfo:  info,\n\t}\n}\n\n// GetImageInfo, given the url for a docker image, will return the\n// following:\n// - []string: an ordered list of all layer hashes\n// - string: a unique identifier for this image, like a hash of the manifest\n// - *common.ParsedDockerURL: a parsed docker URL\n// - error: an error if one occurred\nfunc (lb *FileBackend) GetImageInfo(dockerURL string) 
([]string, string, *common.ParsedDockerURL, error) {\n\t// a missing Docker URL could mean that the file only contains one\n\t// image so it's okay for dockerURL to be blank\n\tvar parsedDockerURL *common.ParsedDockerURL\n\tif dockerURL != \"\" {\n\t\tvar err error\n\t\tparsedDockerURL, err = common.ParseDockerURL(dockerURL)\n\t\tif err != nil {\n\t\t\treturn nil, \"\", nil, fmt.Errorf(\"image provided could not be parsed: %v\", err)\n\t\t}\n\t}\n\n\tvar ancestry []string\n\t// default file name is the tar name stripped\n\tname := strings.Split(filepath.Base(lb.file.Name()), \".\")[0]\n\tappImageID, ancestry, parsedDockerURL, err := getImageID(lb.file, parsedDockerURL, name, lb.debug)\n\tif err != nil {\n\t\treturn nil, \"\", nil, err\n\t}\n\n\tif len(ancestry) == 0 {\n\t\tancestry, err = getAncestry(lb.file, appImageID, lb.debug)\n\t\tif err != nil {\n\t\t\treturn nil, \"\", nil, fmt.Errorf(\"error getting ancestry: %v\", err)\n\t\t}\n\t} else {\n\t\t// for oci the first image is the config\n\t\tancestry = append([]string{appImageID}, ancestry...)\n\t}\n\n\treturn ancestry, appImageID, parsedDockerURL, nil\n}\n\nfunc (lb *FileBackend) BuildACI(layerIDs []string, manhash string, dockerURL *common.ParsedDockerURL, outputDir string, tmpBaseDir string, compression common.Compression) ([]string, []*schema.ImageManifest, error) {\n\tif strings.Contains(layerIDs[0], \":\") {\n\t\treturn lb.BuildACIV22(layerIDs, manhash, dockerURL, outputDir, tmpBaseDir, compression)\n\t}\n\tvar aciLayerPaths []string\n\tvar aciManifests []*schema.ImageManifest\n\tvar curPwl []string\n\n\ttmpDir, err := ioutil.TempDir(tmpBaseDir, \"docker2aci-\")\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"error creating dir: %v\", err)\n\t}\n\tdefer os.RemoveAll(tmpDir)\n\n\tfor i := len(layerIDs) - 1; i >= 0; i-- {\n\t\tif err := common.ValidateLayerId(layerIDs[i]); err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t\tj, err := getJson(lb.file, layerIDs[i])\n\t\tif err != nil {\n\t\t\treturn 
nil, nil, fmt.Errorf(\"error getting layer json: %v\", err)\n\t\t}\n\n\t\tlayerData := types.DockerImageData{}\n\t\tif err := json.Unmarshal(j, &layerData); err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"error unmarshaling layer data: %v\", err)\n\t\t}\n\n\t\ttmpLayerPath := path.Join(tmpDir, layerIDs[i])\n\t\ttmpLayerPath += \".tar\"\n\n\t\tlayerTarPath := path.Join(layerIDs[i], \"layer.tar\")\n\t\tlayerFile, err := extractEmbeddedLayer(lb.file, layerTarPath, tmpLayerPath, lb.info)\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"error getting layer from file: %v\", err)\n\t\t}\n\t\tdefer layerFile.Close()\n\n\t\tlb.debug.Println(\"Generating layer ACI...\")\n\t\taciPath, manifest, err := internal.GenerateACI(i, manhash, layerData, dockerURL, outputDir, layerFile, curPwl, compression, lb.debug)\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"error generating ACI: %v\", err)\n\t\t}\n\n\t\taciLayerPaths = append(aciLayerPaths, aciPath)\n\t\taciManifests = append(aciManifests, manifest)\n\t\tcurPwl = manifest.PathWhitelist\n\t}\n\n\treturn aciLayerPaths, aciManifests, nil\n}\n\nfunc (lb *FileBackend) BuildACIV22(layerIDs []string, manhash string, dockerURL *common.ParsedDockerURL, outputDir string, tmpBaseDir string, compression common.Compression) ([]string, []*schema.ImageManifest, error) {\n\tif len(layerIDs) < 2 {\n\t\treturn nil, nil, fmt.Errorf(\"insufficient layers for oci image\")\n\t}\n\tvar aciLayerPaths []string\n\tvar aciManifests []*schema.ImageManifest\n\tvar curPwl []string\n\n\timageID := layerIDs[0]\n\tlayerIDs = layerIDs[1:]\n\n\tj, err := getJsonV22(lb.file, imageID)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"error getting layer from file: %v\", err)\n\t}\n\timageConfig := typesV2.ImageConfig{}\n\tif err := json.Unmarshal(j, &imageConfig); err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"error unmarshaling image data: %v\", err)\n\t}\n\n\ttmpDir, err := ioutil.TempDir(tmpBaseDir, \"docker2aci-\")\n\tif err != nil 
{\n\t\treturn nil, nil, fmt.Errorf(\"error creating dir: %v\", err)\n\t}\n\tdefer os.RemoveAll(tmpDir)\n\tfor i := len(layerIDs) - 1; i >= 0; i-- {\n\t\tparts := strings.Split(layerIDs[i], \":\")\n\t\ttmpLayerPath := path.Join(tmpDir, parts[1])\n\t\ttmpLayerPath += \".tar\"\n\t\tlayerTarPath := path.Join(append([]string{\"blobs\"}, parts...)...)\n\t\tlayerFile, err := extractEmbeddedLayer(lb.file, layerTarPath, tmpLayerPath, lb.info)\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"error getting layer from file: %v\", err)\n\t\t}\n\t\tdefer layerFile.Close()\n\t\tlb.debug.Println(\"Generating layer ACI...\")\n\t\tvar aciPath string\n\t\tvar manifest *schema.ImageManifest\n\t\tif i != 0 {\n\t\t\taciPath, manifest, err = internal.GenerateACI22LowerLayer(dockerURL, parts[1], outputDir, layerFile, curPwl, compression)\n\t\t} else {\n\t\t\taciPath, manifest, err = internal.GenerateACI22TopLayer(dockerURL, manhash, &imageConfig, parts[1], outputDir, layerFile, curPwl, compression, aciManifests, lb.debug)\n\t\t}\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"error generating ACI: %v\", err)\n\t\t}\n\n\t\taciLayerPaths = append(aciLayerPaths, aciPath)\n\t\taciManifests = append(aciManifests, manifest)\n\t\tcurPwl = manifest.PathWhitelist\n\t}\n\n\treturn aciLayerPaths, aciManifests, nil\n}\n\nfunc getImageID(file *os.File, dockerURL *common.ParsedDockerURL, name string, debug log.Logger) (string, []string, *common.ParsedDockerURL, error) {\n\tdebug.Println(\"getting image id...\")\n\ttype tags map[string]string\n\ttype apps map[string]tags\n\n\t_, err := file.Seek(0, 0)\n\tif err != nil {\n\t\treturn \"\", nil, nil, fmt.Errorf(\"error seeking file: %v\", err)\n\t}\n\n\ttag := \"latest\"\n\tif dockerURL != nil {\n\t\ttag = dockerURL.Tag\n\t}\n\n\tvar imageID string\n\tvar ancestry []string\n\tvar appName string\n\treposWalker := func(t *tarball.TarFile) error {\n\t\tclean := filepath.Clean(t.Name())\n\t\tif clean == \"repositories\" {\n\t\t\trepob, err 
:= ioutil.ReadAll(t.TarStream)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"error reading repositories file: %v\", err)\n\t\t\t}\n\n\t\t\tvar unparsedRepositories apps\n\t\t\tif err := json.Unmarshal(repob, &unparsedRepositories); err != nil {\n\t\t\t\treturn fmt.Errorf(\"error unmarshaling repositories file\")\n\t\t\t}\n\n\t\t\trepositories := make(apps, 0)\n\t\t\t// Normalize repository keys since the image potentially passed in is\n\t\t\t// normalized\n\t\t\tfor key, val := range unparsedRepositories {\n\t\t\t\tparsed, err := common.ParseDockerURL(key)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn fmt.Errorf(\"error parsing key %q in repositories: %v\", key, err)\n\t\t\t\t}\n\t\t\t\trepositories[parsed.ImageName] = val\n\t\t\t}\n\n\t\t\tif dockerURL == nil {\n\t\t\t\tn := len(repositories)\n\t\t\t\tswitch {\n\t\t\t\tcase n == 1:\n\t\t\t\t\tfor key, _ := range repositories {\n\t\t\t\t\t\tappName = key\n\t\t\t\t\t}\n\t\t\t\tcase n > 1:\n\t\t\t\t\tvar appNames []string\n\t\t\t\t\tfor key, _ := range repositories {\n\t\t\t\t\t\tappNames = append(appNames, key)\n\t\t\t\t\t}\n\t\t\t\t\treturn &common.ErrSeveralImages{\n\t\t\t\t\t\tMsg:    \"several images found\",\n\t\t\t\t\t\tImages: appNames,\n\t\t\t\t\t}\n\t\t\t\tdefault:\n\t\t\t\t\treturn fmt.Errorf(\"no images found\")\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tappName = dockerURL.ImageName\n\t\t\t}\n\n\t\t\tapp, ok := repositories[appName]\n\t\t\tif !ok {\n\t\t\t\treturn fmt.Errorf(\"app %q not found\", appName)\n\t\t\t}\n\n\t\t\t_, ok = app[tag]\n\t\t\tif !ok {\n\t\t\t\tif len(app) == 1 {\n\t\t\t\t\tfor key, _ := range app {\n\t\t\t\t\t\ttag = key\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\treturn fmt.Errorf(\"tag %q not found\", tag)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif dockerURL == nil {\n\t\t\t\tdockerURL = &common.ParsedDockerURL{\n\t\t\t\t\tOriginalName: \"\",\n\t\t\t\t\tIndexURL:     \"\",\n\t\t\t\t\tTag:          tag,\n\t\t\t\t\tImageName:    appName,\n\t\t\t\t}\n\t\t\t}\n\n\t\t\timageID = 
string(app[tag])\n\t\t}\n\n\t\tif clean == \"refs/\"+tag {\n\t\t\trefb, err := ioutil.ReadAll(t.TarStream)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"error reading ref descriptor for tag %s: %v\", tag, err)\n\t\t\t}\n\n\t\t\tif dockerURL == nil {\n\t\t\t\tdockerURL = &common.ParsedDockerURL{\n\t\t\t\t\tIndexURL:  \"\",\n\t\t\t\t\tTag:       tag,\n\t\t\t\t\tImageName: name,\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tvar ref spec.Descriptor\n\t\t\tif err := json.Unmarshal(refb, &ref); err != nil {\n\t\t\t\treturn fmt.Errorf(\"error unmarshaling ref descriptor for tag %s\", tag)\n\t\t\t}\n\t\t\timageID, ancestry, err = getDataFromManifest(file, ref.Digest)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\treturn io.EOF\n\t\t}\n\t\treturn nil\n\t}\n\n\ttr := tar.NewReader(file)\n\tif err := tarball.Walk(*tr, reposWalker); err != nil && err != io.EOF {\n\t\treturn \"\", nil, nil, err\n\t}\n\n\tif imageID == \"\" {\n\t\treturn \"\", nil, nil, fmt.Errorf(\"Could not find image\")\n\t}\n\n\treturn imageID, ancestry, dockerURL, nil\n}\n\nfunc getDataFromManifest(file *os.File, manifestID string) (string, []string, error) {\n\t_, err := file.Seek(0, 0)\n\tif err != nil {\n\t\treturn \"\", nil, fmt.Errorf(\"error seeking file: %v\", err)\n\t}\n\n\tparts := append([]string{\"blobs\"}, strings.Split(manifestID, \":\")...)\n\tjsonPath := path.Join(parts...)\n\n\tvar imageID string\n\tvar ancestry []string\n\treposWalker := func(t *tarball.TarFile) error {\n\t\tclean := filepath.Clean(t.Name())\n\t\tif clean == jsonPath {\n\n\t\t\tmanb, err := ioutil.ReadAll(t.TarStream)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"error reading image manifest: %v\", err)\n\t\t\t}\n\n\t\t\tvar manifest typesV2.ImageManifest\n\t\t\tif err := json.Unmarshal(manb, &manifest); err != nil {\n\t\t\t\treturn fmt.Errorf(\"error unmarshaling image manifest\")\n\t\t\t}\n\t\t\tif manifest.Config == nil {\n\t\t\t\treturn fmt.Errorf(\"manifest does not contain a config\")\n\t\t\t}\n\t\t\timageID = 
manifest.Config.Digest\n\t\t\t// put them in reverse order\n\t\t\tfor i := len(manifest.Layers) - 1; i >= 0; i-- {\n\t\t\t\tancestry = append(ancestry, manifest.Layers[i].Digest)\n\t\t\t}\n\t\t}\n\t\treturn nil\n\t}\n\n\ttr := tar.NewReader(file)\n\tif err := tarball.Walk(*tr, reposWalker); err != nil {\n\t\treturn \"\", nil, err\n\t}\n\n\treturn imageID, ancestry, nil\n}\n\nfunc getJson(file *os.File, layerID string) ([]byte, error) {\n\tjsonPath := path.Join(layerID, \"json\")\n\treturn getTarFileBytes(file, jsonPath)\n}\n\nfunc getJsonV22(file *os.File, layerID string) ([]byte, error) {\n\tparts := append([]string{\"blobs\"}, strings.Split(layerID, \":\")...)\n\tjsonPath := path.Join(parts...)\n\treturn getTarFileBytes(file, jsonPath)\n}\n\nfunc getTarFileBytes(file *os.File, path string) ([]byte, error) {\n\t_, err := file.Seek(0, 0)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error seeking file: %v\", err)\n\t}\n\n\tvar fileBytes []byte\n\tfileWalker := func(t *tarball.TarFile) error {\n\t\tif filepath.Clean(t.Name()) == path {\n\t\t\tfileBytes, err = ioutil.ReadAll(t.TarStream)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\n\t\treturn nil\n\t}\n\n\ttr := tar.NewReader(file)\n\tif err := tarball.Walk(*tr, fileWalker); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif fileBytes == nil {\n\t\treturn nil, fmt.Errorf(\"file %q not found\", path)\n\t}\n\n\treturn fileBytes, nil\n}\n\nfunc extractEmbeddedLayer(file *os.File, layerTarPath string, outputPath string, info log.Logger) (*os.File, error) {\n\tinfo.Println(\"Extracting \", layerTarPath)\n\t_, err := file.Seek(0, 0)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error seeking file: %v\", err)\n\t}\n\n\tvar layerFile *os.File\n\tfileWalker := func(t *tarball.TarFile) error {\n\t\tif filepath.Clean(t.Name()) == layerTarPath {\n\t\t\tlayerFile, err = os.Create(outputPath)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"error creating layer: %v\", err)\n\t\t\t}\n\n\t\t\t_, err = 
io.Copy(layerFile, t.TarStream)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"error getting layer: %v\", err)\n\t\t\t}\n\t\t}\n\n\t\treturn nil\n\t}\n\n\ttr := tar.NewReader(file)\n\tif err := tarball.Walk(*tr, fileWalker); err != nil {\n\t\treturn nil, err\n\t}\n\n\tif layerFile == nil {\n\t\treturn nil, fmt.Errorf(\"file %q not found\", layerTarPath)\n\t}\n\n\treturn layerFile, nil\n}\n\n// getAncestry computes an image ancestry, returning an ordered list\n// of dependencies starting from the topmost image to the base.\n// It checks for dependency loops via duplicate detection in the image\n// chain and errors out in such cases.\nfunc getAncestry(file *os.File, imgID string, debug log.Logger) ([]string, error) {\n\tvar ancestry []string\n\tdeps := make(map[string]bool)\n\n\tcurImgID := imgID\n\n\tvar err error\n\tfor curImgID != \"\" {\n\t\tif deps[curImgID] {\n\t\t\treturn nil, fmt.Errorf(\"dependency loop detected at image %q\", curImgID)\n\t\t}\n\t\tdeps[curImgID] = true\n\t\tancestry = append(ancestry, curImgID)\n\t\tdebug.Printf(\"Getting ancestry for layer %q\", curImgID)\n\t\tcurImgID, err = getParent(file, curImgID, debug)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn ancestry, nil\n}\n\nfunc getParent(file *os.File, imgID string, debug log.Logger) (string, error) {\n\tvar parent string\n\n\t_, err := file.Seek(0, 0)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"error seeking file: %v\", err)\n\t}\n\n\tjsonPath := filepath.Join(imgID, \"json\")\n\tparentWalker := func(t *tarball.TarFile) error {\n\t\tif filepath.Clean(t.Name()) == jsonPath {\n\t\t\tjsonb, err := ioutil.ReadAll(t.TarStream)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"error reading layer json: %v\", err)\n\t\t\t}\n\n\t\t\tvar dockerData types.DockerImageData\n\t\t\tif err := json.Unmarshal(jsonb, &dockerData); err != nil {\n\t\t\t\treturn fmt.Errorf(\"error unmarshaling layer data: %v\", err)\n\t\t\t}\n\n\t\t\tparent = 
dockerData.Parent\n\t\t}\n\n\t\treturn nil\n\t}\n\n\ttr := tar.NewReader(file)\n\tif err := tarball.Walk(*tr, parentWalker); err != nil {\n\t\treturn \"\", err\n\t}\n\n\tdebug.Printf(\"Layer %q depends on layer %q\", imgID, parent)\n\treturn parent, nil\n}\n"
  },
  {
    "path": "lib/internal/backend/repository/repository.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package repository is an implementation of Docker2ACIBackend for Docker\n// remote registries.\n//\n// Note: this package is an implementation detail and shouldn't be used outside\n// of docker2aci.\npackage repository\n\nimport (\n\t\"fmt\"\n\t\"net/http\"\n\t\"net/url\"\n\n\t\"github.com/appc/docker2aci/lib/common\"\n\t\"github.com/appc/docker2aci/lib/internal/typesV2\"\n\t\"github.com/appc/docker2aci/lib/internal/util\"\n\t\"github.com/appc/docker2aci/pkg/log\"\n\t\"github.com/appc/spec/schema\"\n)\n\ntype registryVersion int\n\nconst (\n\tregistryV1 registryVersion = iota\n\tregistryV2\n)\n\ntype httpStatusErr struct {\n\tStatusCode int\n\tURL        *url.URL\n}\n\nfunc (e httpStatusErr) Error() string {\n\treturn fmt.Sprintf(\"Unexpected HTTP code: %d, URL: %s\", e.StatusCode, e.URL.String())\n}\n\nfunc isErrHTTP404(err error) bool {\n\tif httperr, ok := err.(*httpStatusErr); ok && httperr.StatusCode == http.StatusNotFound {\n\t\treturn true\n\t}\n\treturn false\n}\n\ntype RepositoryBackend struct {\n\trepoData          *RepoData\n\tusername          string\n\tpassword          string\n\tinsecure          common.InsecureConfig\n\thostsV1fallback   bool\n\thostsV2Support    map[string]bool\n\thostsV2AuthTokens map[string]map[string]string\n\tschema            string\n\timageManifests    
map[common.ParsedDockerURL]v2Manifest\n\timageV2Manifests  map[common.ParsedDockerURL]*typesV2.ImageManifest\n\timageConfigs      map[common.ParsedDockerURL]*typesV2.ImageConfig\n\tlayersIndex       map[string]int\n\tmediaTypes        common.MediaTypeSet\n\tregistryOptions   common.RegistryOptionSet\n\n\tdebug log.Logger\n}\n\nfunc NewRepositoryBackend(username, password string, insecure common.InsecureConfig, debug log.Logger, mediaTypes common.MediaTypeSet, registryOptions common.RegistryOptionSet) *RepositoryBackend {\n\treturn &RepositoryBackend{\n\t\tusername:          username,\n\t\tpassword:          password,\n\t\tinsecure:          insecure,\n\t\thostsV1fallback:   false,\n\t\thostsV2Support:    make(map[string]bool),\n\t\thostsV2AuthTokens: make(map[string]map[string]string),\n\t\timageManifests:    make(map[common.ParsedDockerURL]v2Manifest),\n\t\timageV2Manifests:  make(map[common.ParsedDockerURL]*typesV2.ImageManifest),\n\t\timageConfigs:      make(map[common.ParsedDockerURL]*typesV2.ImageConfig),\n\t\tlayersIndex:       make(map[string]int),\n\t\tmediaTypes:        mediaTypes,\n\t\tregistryOptions:   registryOptions,\n\t\tdebug:             debug,\n\t}\n}\n\n// GetImageInfo, given the url for a docker image, will return the\n// following:\n// - []string: an ordered list of all layer hashes\n// - string: a unique identifier for this image, like a hash of the manifest\n// - *common.ParsedDockerURL: a parsed docker URL\n// - error: an error if one occurred\nfunc (rb *RepositoryBackend) GetImageInfo(url string) ([]string, string, *common.ParsedDockerURL, error) {\n\tdockerURL, err := common.ParseDockerURL(url)\n\tif err != nil {\n\t\treturn nil, \"\", nil, err\n\t}\n\n\tvar supportsV2, supportsV1, ok bool\n\tvar URLSchema string\n\n\tif supportsV2, ok = rb.hostsV2Support[dockerURL.IndexURL]; !ok {\n\t\tvar err error\n\t\tURLSchema, supportsV2, err = rb.supportsRegistry(dockerURL.IndexURL, registryV2)\n\t\tif err != nil {\n\t\t\treturn nil, \"\", nil, 
err\n\t\t}\n\t\trb.schema = URLSchema + \"://\"\n\t\trb.hostsV2Support[dockerURL.IndexURL] = supportsV2\n\t}\n\n\t// try v2\n\tif supportsV2 && rb.registryOptions.AllowsV2() {\n\t\tlayers, manhash, dockerURL, err := rb.getImageInfoV2(dockerURL)\n\t\tif !isErrHTTP404(err) {\n\t\t\treturn layers, manhash, dockerURL, err\n\t\t}\n\t\t// fallback on 404 failure\n\t\trb.hostsV1fallback = true\n\t\t// unless we can't fallback\n\t\tif !rb.registryOptions.AllowsV1() {\n\t\t\treturn nil, \"\", nil, err\n\t\t}\n\t}\n\n\tif !rb.registryOptions.AllowsV1() {\n\t\treturn nil, \"\", nil, fmt.Errorf(\"no remaining enabled registry options\")\n\t}\n\n\tURLSchema, supportsV1, err = rb.supportsRegistry(dockerURL.IndexURL, registryV1)\n\tif err != nil {\n\t\treturn nil, \"\", nil, err\n\t}\n\tif !supportsV1 && rb.hostsV1fallback {\n\t\treturn nil, \"\", nil, fmt.Errorf(\"attempted fallback to API v1 but not supported\")\n\t}\n\tif !supportsV1 && !supportsV2 {\n\t\treturn nil, \"\", nil, fmt.Errorf(\"registry doesn't support API v2 nor v1\")\n\t}\n\trb.schema = URLSchema + \"://\"\n\t// try v1, hard fail on failure\n\treturn rb.getImageInfoV1(dockerURL)\n}\n\nfunc (rb *RepositoryBackend) BuildACI(layerIDs []string, manhash string, dockerURL *common.ParsedDockerURL, outputDir string, tmpBaseDir string, compression common.Compression) ([]string, []*schema.ImageManifest, error) {\n\tif rb.hostsV1fallback || !rb.hostsV2Support[dockerURL.IndexURL] {\n\t\treturn rb.buildACIV1(layerIDs, manhash, dockerURL, outputDir, tmpBaseDir, compression)\n\t} else {\n\t\treturn rb.buildACIV2(layerIDs, manhash, dockerURL, outputDir, tmpBaseDir, compression)\n\t}\n}\n\n// checkRegistryStatus determines registry API version compatibility according to spec:\n// https://docs.docker.com/registry/spec/api/#/api-version-check\nfunc checkRegistryStatus(statusCode int, hdr http.Header, version registryVersion) (bool, error) {\n\tswitch statusCode {\n\tcase http.StatusOK, http.StatusUnauthorized:\n\t\tok := 
true\n\t\tif version == registryV2 {\n\t\t\t// According to v2 spec, registries SHOULD set this header value\n\t\t\t// and clients MAY fallback to v1 if missing, as done here.\n\t\t\tok = hdr.Get(\"Docker-Distribution-API-Version\") == \"registry/2.0\"\n\t\t}\n\t\treturn ok, nil\n\t}\n\treturn false, nil\n}\n\nfunc (rb *RepositoryBackend) supportsRegistry(indexURL string, version registryVersion) (schema string, ok bool, err error) {\n\tvar URLPath string\n\tswitch version {\n\tcase registryV1:\n\t\tURLPath = \"v1/_ping\"\n\tcase registryV2:\n\t\tURLPath = \"v2/\"\n\t}\n\n\tfetch := func(schema string) (res *http.Response, err error) {\n\t\tu := url.URL{Scheme: schema, Host: indexURL, Path: URLPath}\n\t\treq, err := http.NewRequest(\"GET\", u.String(), nil)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\trb.setBasicAuth(req)\n\n\t\tclient := util.GetTLSClient(rb.insecure.SkipVerify)\n\t\tres, err = client.Do(req)\n\t\treturn\n\t}\n\n\tschema = \"https\"\n\tres, err := fetch(schema)\n\tif err == nil {\n\t\tok, err = checkRegistryStatus(res.StatusCode, res.Header, version)\n\t\tdefer res.Body.Close()\n\t}\n\tif err != nil || !ok {\n\t\tif rb.insecure.AllowHTTP {\n\t\t\tschema = \"http\"\n\t\t\tres, err = fetch(schema)\n\t\t\tif err == nil {\n\t\t\t\tok, err = checkRegistryStatus(res.StatusCode, res.Header, version)\n\t\t\t\tdefer res.Body.Close()\n\t\t\t}\n\t\t}\n\t\treturn schema, ok, err\n\t}\n\n\treturn schema, ok, err\n}\n"
  },
  {
    "path": "lib/internal/backend/repository/repository1.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage repository\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"net/http\"\n\t\"os\"\n\t\"path\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/appc/docker2aci/lib/common\"\n\t\"github.com/appc/docker2aci/lib/internal\"\n\t\"github.com/appc/docker2aci/lib/internal/types\"\n\t\"github.com/appc/docker2aci/lib/internal/util\"\n\t\"github.com/appc/spec/schema\"\n\t\"github.com/coreos/ioprogress\"\n)\n\ntype RepoData struct {\n\tTokens    []string\n\tEndpoints []string\n\tCookie    []string\n}\n\nfunc (rb *RepositoryBackend) getImageInfoV1(dockerURL *common.ParsedDockerURL) ([]string, string, *common.ParsedDockerURL, error) {\n\trepoData, err := rb.getRepoDataV1(dockerURL.IndexURL, dockerURL.ImageName)\n\tif err != nil {\n\t\treturn nil, \"\", nil, fmt.Errorf(\"error getting repository data: %v\", err)\n\t}\n\n\t// TODO(iaguis) check more endpoints\n\tappImageID, err := rb.getImageIDFromTagV1(repoData.Endpoints[0], dockerURL.ImageName, dockerURL.Tag, repoData)\n\tif err != nil {\n\t\treturn nil, \"\", nil, fmt.Errorf(\"error getting ImageID from tag %s: %v\", dockerURL.Tag, err)\n\t}\n\n\tancestry, err := rb.getAncestryV1(appImageID, repoData.Endpoints[0], repoData)\n\tif err != nil {\n\t\treturn nil, \"\", nil, err\n\t}\n\n\trb.repoData = repoData\n\n\treturn ancestry, appImageID, dockerURL, nil\n}\n\nfunc (rb 
*RepositoryBackend) buildACIV1(layerIDs []string, manhash string, dockerURL *common.ParsedDockerURL, outputDir string, tmpBaseDir string, compression common.Compression) ([]string, []*schema.ImageManifest, error) {\n\tlayerFiles := make([]*os.File, len(layerIDs))\n\tlayerDatas := make([]types.DockerImageData, len(layerIDs))\n\n\ttmpParentDir, err := ioutil.TempDir(tmpBaseDir, \"docker2aci-\")\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tdefer os.RemoveAll(tmpParentDir)\n\n\tvar doneChannels []chan error\n\tfor i, layerID := range layerIDs {\n\t\tif err := common.ValidateLayerId(layerID); err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t\tdoneChan := make(chan error)\n\t\tdoneChannels = append(doneChannels, doneChan)\n\t\t// https://github.com/golang/go/wiki/CommonMistakes\n\t\ti := i // golang--\n\t\tlayerID := layerID\n\t\tgo func() {\n\t\t\ttmpDir, err := ioutil.TempDir(tmpParentDir, \"\")\n\t\t\tif err != nil {\n\t\t\t\tdoneChan <- fmt.Errorf(\"error creating dir: %v\", err)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tj, size, err := rb.getJsonV1(layerID, rb.repoData.Endpoints[0], rb.repoData)\n\t\t\tif err != nil {\n\t\t\t\tdoneChan <- fmt.Errorf(\"error getting image json: %v\", err)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tlayerDatas[i] = types.DockerImageData{}\n\t\t\tif err := json.Unmarshal(j, &layerDatas[i]); err != nil {\n\t\t\t\tdoneChan <- fmt.Errorf(\"error unmarshaling layer data: %v\", err)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tlayerFiles[i], err = rb.getLayerV1(layerID, rb.repoData.Endpoints[0], rb.repoData, size, tmpDir)\n\t\t\tif err != nil {\n\t\t\t\tdoneChan <- fmt.Errorf(\"error getting the remote layer: %v\", err)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tdoneChan <- nil\n\t\t}()\n\t}\n\tfor _, doneChan := range doneChannels {\n\t\terr := <-doneChan\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t}\n\tvar aciLayerPaths []string\n\tvar aciManifests []*schema.ImageManifest\n\tvar curPwl []string\n\n\tfor i := len(layerIDs) - 1; i >= 0; i-- 
{\n\t\trb.debug.Println(\"Generating layer ACI...\")\n\t\taciPath, manifest, err := internal.GenerateACI(i, manhash, layerDatas[i], dockerURL, outputDir, layerFiles[i], curPwl, compression, rb.debug)\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"error generating ACI: %v\", err)\n\t\t}\n\t\taciLayerPaths = append(aciLayerPaths, aciPath)\n\t\taciManifests = append(aciManifests, manifest)\n\t\tcurPwl = manifest.PathWhitelist\n\n\t\tlayerFiles[i].Close()\n\t}\n\n\treturn aciLayerPaths, aciManifests, nil\n}\n\nfunc (rb *RepositoryBackend) getRepoDataV1(indexURL string, remote string) (*RepoData, error) {\n\tclient := util.GetTLSClient(rb.insecure.SkipVerify)\n\trepositoryURL := rb.schema + path.Join(indexURL, \"v1\", \"repositories\", remote, \"images\")\n\n\treq, err := http.NewRequest(\"GET\", repositoryURL, nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif rb.username != \"\" && rb.password != \"\" {\n\t\treq.SetBasicAuth(rb.username, rb.password)\n\t}\n\n\treq.Header.Set(\"X-Docker-Token\", \"true\")\n\n\tres, err := client.Do(req)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer res.Body.Close()\n\n\tif res.StatusCode != 200 {\n\t\treturn nil, &httpStatusErr{res.StatusCode, req.URL}\n\t}\n\n\tvar tokens []string\n\tif res.Header.Get(\"X-Docker-Token\") != \"\" {\n\t\ttokens = res.Header[\"X-Docker-Token\"]\n\t}\n\n\tvar cookies []string\n\tif res.Header.Get(\"Set-Cookie\") != \"\" {\n\t\tcookies = res.Header[\"Set-Cookie\"]\n\t}\n\n\tvar endpoints []string\n\tif res.Header.Get(\"X-Docker-Endpoints\") != \"\" {\n\t\tendpoints = makeEndpointsListV1(res.Header[\"X-Docker-Endpoints\"])\n\t} else {\n\t\t// Assume same endpoint\n\t\tendpoints = append(endpoints, indexURL)\n\t}\n\n\treturn &RepoData{\n\t\tEndpoints: endpoints,\n\t\tTokens:    tokens,\n\t\tCookie:    cookies,\n\t}, nil\n}\n\nfunc (rb *RepositoryBackend) getImageIDFromTagV1(registry string, appName string, tag string, repoData *RepoData) (string, error) {\n\tclient := 
util.GetTLSClient(rb.insecure.SkipVerify)\n\t// we get all the tags instead of directly getting the imageID of the\n\t// requested one (.../tags/TAG) because even though it's specified in the\n\t// Docker API, some registries (e.g. Google Container Registry) don't\n\t// implement it.\n\treq, err := http.NewRequest(\"GET\", rb.schema+path.Join(registry, \"repositories\", appName, \"tags\"), nil)\n\tif err != nil {\n\t\t// req is nil when NewRequest fails, so don't dereference req.URL here\n\t\treturn \"\", fmt.Errorf(\"failed to get Image ID: %v\", err)\n\t}\n\n\tsetAuthTokenV1(req, repoData.Tokens)\n\tsetCookieV1(req, repoData.Cookie)\n\tres, err := client.Do(req)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"failed to get Image ID: %s, URL: %s\", err, req.URL)\n\t}\n\tdefer res.Body.Close()\n\n\tif res.StatusCode != 200 {\n\t\treturn \"\", &httpStatusErr{res.StatusCode, req.URL}\n\t}\n\n\tj, err := ioutil.ReadAll(res.Body)\n\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\n\tvar tags map[string]string\n\n\tif err := json.Unmarshal(j, &tags); err != nil {\n\t\treturn \"\", fmt.Errorf(\"error unmarshaling: %v\", err)\n\t}\n\n\timageID, ok := tags[tag]\n\tif !ok {\n\t\treturn \"\", fmt.Errorf(\"tag %s not found\", tag)\n\t}\n\n\treturn imageID, nil\n}\n\nfunc (rb *RepositoryBackend) getAncestryV1(imgID, registry string, repoData *RepoData) ([]string, error) {\n\tclient := util.GetTLSClient(rb.insecure.SkipVerify)\n\treq, err := http.NewRequest(\"GET\", rb.schema+path.Join(registry, \"images\", imgID, \"ancestry\"), nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tsetAuthTokenV1(req, repoData.Tokens)\n\tsetCookieV1(req, repoData.Cookie)\n\tres, err := client.Do(req)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer res.Body.Close()\n\n\tif res.StatusCode != 200 {\n\t\treturn nil, &httpStatusErr{res.StatusCode, req.URL}\n\t}\n\n\tvar ancestry []string\n\n\tj, err := ioutil.ReadAll(res.Body)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"Failed to read downloaded json: %s (%s)\", err, j)\n\t}\n\n\tif err := 
json.Unmarshal(j, &ancestry); err != nil {\n\t\treturn nil, fmt.Errorf(\"error unmarshaling: %v\", err)\n\t}\n\n\treturn ancestry, nil\n}\n\nfunc (rb *RepositoryBackend) getJsonV1(imgID, registry string, repoData *RepoData) ([]byte, int64, error) {\n\tclient := util.GetTLSClient(rb.insecure.SkipVerify)\n\treq, err := http.NewRequest(\"GET\", rb.schema+path.Join(registry, \"images\", imgID, \"json\"), nil)\n\tif err != nil {\n\t\treturn nil, -1, err\n\t}\n\tsetAuthTokenV1(req, repoData.Tokens)\n\tsetCookieV1(req, repoData.Cookie)\n\tres, err := client.Do(req)\n\tif err != nil {\n\t\treturn nil, -1, err\n\t}\n\tdefer res.Body.Close()\n\n\tif res.StatusCode != 200 {\n\t\treturn nil, -1, &httpStatusErr{res.StatusCode, req.URL}\n\t}\n\n\timageSize := int64(-1)\n\n\tif hdr := res.Header.Get(\"X-Docker-Size\"); hdr != \"\" {\n\t\timageSize, err = strconv.ParseInt(hdr, 10, 64)\n\t\tif err != nil {\n\t\t\treturn nil, -1, err\n\t\t}\n\t}\n\n\tb, err := ioutil.ReadAll(res.Body)\n\tif err != nil {\n\t\treturn nil, -1, fmt.Errorf(\"failed to read downloaded json: %v (%s)\", err, b)\n\t}\n\n\treturn b, imageSize, nil\n}\n\nfunc (rb *RepositoryBackend) getLayerV1(imgID, registry string, repoData *RepoData, imgSize int64, tmpDir string) (*os.File, error) {\n\tclient := util.GetTLSClient(rb.insecure.SkipVerify)\n\treq, err := http.NewRequest(\"GET\", rb.schema+path.Join(registry, \"images\", imgID, \"layer\"), nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tsetAuthTokenV1(req, repoData.Tokens)\n\tsetCookieV1(req, repoData.Cookie)\n\n\tres, err := client.Do(req)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer res.Body.Close()\n\n\tif res.StatusCode != 200 {\n\t\tres.Body.Close()\n\t\treturn nil, &httpStatusErr{res.StatusCode, req.URL}\n\t}\n\n\t// if we didn't receive the size via X-Docker-Size when we retrieved the\n\t// layer's json, try Content-Length\n\tif imgSize == -1 {\n\t\tif hdr := res.Header.Get(\"Content-Length\"); hdr != \"\" {\n\t\t\timgSize, err = 
strconv.ParseInt(hdr, 10, 64)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t}\n\n\tprefix := \"Downloading \" + imgID[:12]\n\tfmtBytesSize := 18\n\tbarSize := int64(80 - len(prefix) - fmtBytesSize)\n\tbar := ioprogress.DrawTextFormatBarForW(barSize, os.Stderr)\n\tfmtfunc := func(progress, total int64) string {\n\t\treturn fmt.Sprintf(\n\t\t\t\"%s: %s %s\",\n\t\t\tprefix,\n\t\t\tbar(progress, total),\n\t\t\tioprogress.DrawTextFormatBytes(progress, total),\n\t\t)\n\t}\n\n\tprogressReader := &ioprogress.Reader{\n\t\tReader:       res.Body,\n\t\tSize:         imgSize,\n\t\tDrawFunc:     ioprogress.DrawTerminalf(os.Stderr, fmtfunc),\n\t\tDrawInterval: 500 * time.Millisecond,\n\t}\n\n\tlayerFile, err := ioutil.TempFile(tmpDir, \"dockerlayer-\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t_, err = io.Copy(layerFile, progressReader)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif err := layerFile.Sync(); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn layerFile, nil\n}\n\nfunc setAuthTokenV1(req *http.Request, token []string) {\n\tif req.Header.Get(\"Authorization\") == \"\" {\n\t\treq.Header.Set(\"Authorization\", \"Token \"+strings.Join(token, \",\"))\n\t}\n}\n\nfunc setCookieV1(req *http.Request, cookie []string) {\n\tif req.Header.Get(\"Cookie\") == \"\" {\n\t\treq.Header.Set(\"Cookie\", strings.Join(cookie, \"\"))\n\t}\n}\n\nfunc makeEndpointsListV1(headers []string) []string {\n\tvar endpoints []string\n\n\tfor _, ep := range headers {\n\t\tendpointsList := strings.Split(ep, \",\")\n\t\tfor _, endpointEl := range endpointsList {\n\t\t\tendpoints = append(\n\t\t\t\tendpoints,\n\t\t\t\tpath.Join(strings.TrimSpace(endpointEl), \"v1\"))\n\t\t}\n\t}\n\n\treturn endpoints\n}\n"
  },
  {
    "path": "lib/internal/backend/repository/repository2.go",
    "content": "// Copyright 2016 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage repository\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"net/http\"\n\t\"os\"\n\t\"path\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/appc/docker2aci/lib/common\"\n\t\"github.com/appc/docker2aci/lib/internal\"\n\t\"github.com/appc/docker2aci/lib/internal/types\"\n\t\"github.com/appc/docker2aci/lib/internal/typesV2\"\n\t\"github.com/appc/docker2aci/lib/internal/util\"\n\t\"github.com/appc/spec/schema\"\n\t\"github.com/coreos/pkg/progressutil\"\n\tgodigest \"github.com/opencontainers/go-digest\"\n)\n\nconst (\n\tdefaultIndexURL = \"registry-1.docker.io\"\n)\n\n// A manifest conforming to the docker v2.1 spec\ntype v2Manifest struct {\n\tName     string `json:\"name\"`\n\tTag      string `json:\"tag\"`\n\tFSLayers []struct {\n\t\tBlobSum string `json:\"blobSum\"`\n\t} `json:\"fsLayers\"`\n\tHistory []struct {\n\t\tV1Compatibility string `json:\"v1Compatibility\"`\n\t} `json:\"history\"`\n\tSignature []byte `json:\"signature\"`\n}\n\nfunc (rb *RepositoryBackend) getImageInfoV2(dockerURL *common.ParsedDockerURL) ([]string, string, *common.ParsedDockerURL, error) {\n\tlayers, manhash, err := rb.getManifestV2(dockerURL)\n\tif err != nil {\n\t\treturn nil, \"\", nil, err\n\t}\n\n\treturn layers, manhash, dockerURL, nil\n}\n\nfunc (rb *RepositoryBackend) buildACIV2(layerIDs 
[]string, manhash string, dockerURL *common.ParsedDockerURL, outputDir string, tmpBaseDir string, compression common.Compression) ([]string, []*schema.ImageManifest, error) {\n\t_, isVersion22 := rb.imageV2Manifests[*dockerURL]\n\tif isVersion22 {\n\t\treturn rb.buildACIV22(layerIDs, manhash, dockerURL, outputDir, tmpBaseDir, compression)\n\t}\n\treturn rb.buildACIV21(layerIDs, manhash, dockerURL, outputDir, tmpBaseDir, compression)\n}\n\nfunc (rb *RepositoryBackend) buildACIV21(layerIDs []string, manhash string, dockerURL *common.ParsedDockerURL, outputDir string, tmpBaseDir string, compression common.Compression) ([]string, []*schema.ImageManifest, error) {\n\tlayerFiles := make([]*os.File, len(layerIDs))\n\tlayerDatas := make([]types.DockerImageData, len(layerIDs))\n\n\ttmpParentDir, err := ioutil.TempDir(tmpBaseDir, \"docker2aci-\")\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tdefer os.RemoveAll(tmpParentDir)\n\n\tcopier := progressutil.NewCopyProgressPrinter()\n\n\tvar errChannels []chan error\n\tclosers := make([]io.ReadCloser, len(layerIDs))\n\tvar wg sync.WaitGroup\n\tfor i, layerID := range layerIDs {\n\t\tif err := common.ValidateLayerId(layerID); err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t\twg.Add(1)\n\t\terrChan := make(chan error, 1)\n\t\terrChannels = append(errChannels, errChan)\n\t\t// https://github.com/golang/go/wiki/CommonMistakes\n\t\ti := i // golang--\n\t\tlayerID := layerID\n\t\tgo func() {\n\t\t\tdefer wg.Done()\n\n\t\t\tmanifest := rb.imageManifests[*dockerURL]\n\n\t\t\tlayerIndex, ok := rb.layersIndex[layerID]\n\t\t\tif !ok {\n\t\t\t\terrChan <- fmt.Errorf(\"layer not found in manifest: %s\", layerID)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tif len(manifest.History) <= layerIndex {\n\t\t\t\terrChan <- fmt.Errorf(\"history not found for layer %s\", layerID)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tlayerDatas[i] = types.DockerImageData{}\n\t\t\tif err := json.Unmarshal([]byte(manifest.History[layerIndex].V1Compatibility), &layerDatas[i]); 
err != nil {\n\t\t\t\terrChan <- fmt.Errorf(\"error unmarshaling layer data: %v\", err)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\ttmpDir, err := ioutil.TempDir(tmpParentDir, \"\")\n\t\t\tif err != nil {\n\t\t\t\terrChan <- fmt.Errorf(\"error creating dir: %v\", err)\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tlayerFiles[i], closers[i], err = rb.getLayerV2(layerID, dockerURL, tmpDir, copier)\n\t\t\tif err != nil {\n\t\t\t\terrChan <- fmt.Errorf(\"error getting the remote layer: %v\", err)\n\t\t\t\treturn\n\t\t\t}\n\t\t\terrChan <- nil\n\t\t}()\n\t}\n\t// Need to wait for all of the readers to be added to the copier (which happens during rb.getLayerV2)\n\twg.Wait()\n\terr = copier.PrintAndWait(os.Stderr, 500*time.Millisecond, nil)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tfor _, closer := range closers {\n\t\tif closer != nil {\n\t\t\tcloser.Close()\n\t\t}\n\t}\n\tfor _, errChan := range errChannels {\n\t\terr := <-errChan\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t}\n\tfor _, layerFile := range layerFiles {\n\t\terr := layerFile.Sync()\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t}\n\tvar aciLayerPaths []string\n\tvar aciManifests []*schema.ImageManifest\n\tvar curPwl []string\n\tfor i := len(layerIDs) - 1; i >= 0; i-- {\n\t\trb.debug.Println(\"Generating layer ACI...\")\n\t\taciPath, aciManifest, err := internal.GenerateACI(i, manhash, layerDatas[i], dockerURL, outputDir, layerFiles[i], curPwl, compression, rb.debug)\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"error generating ACI: %v\", err)\n\t\t}\n\t\taciLayerPaths = append(aciLayerPaths, aciPath)\n\t\taciManifests = append(aciManifests, aciManifest)\n\t\tcurPwl = aciManifest.PathWhitelist\n\n\t\tlayerFiles[i].Close()\n\t}\n\n\treturn aciLayerPaths, aciManifests, nil\n}\n\ntype layer struct {\n\tindex  int\n\tfile   *os.File\n\tcloser io.Closer\n\terr    error\n}\n\nfunc (rb *RepositoryBackend) buildACIV22(layerIDs []string, manhash string, dockerURL *common.ParsedDockerURL, 
outputDir string, tmpBaseDir string, compression common.Compression) ([]string, []*schema.ImageManifest, error) {\n\tlayerFiles := make([]*os.File, len(layerIDs))\n\n\ttmpParentDir, err := ioutil.TempDir(tmpBaseDir, \"docker2aci-\")\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tdefer os.RemoveAll(tmpParentDir)\n\n\tcopier := progressutil.NewCopyProgressPrinter()\n\n\tresultChan := make(chan layer, len(layerIDs))\n\tfor i, layerID := range layerIDs {\n\t\tif err := common.ValidateLayerId(layerID); err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t\t// https://github.com/golang/go/wiki/CommonMistakes\n\t\ti := i // golang--\n\t\tlayerID := layerID\n\t\tgo func() {\n\t\t\ttmpDir, err := ioutil.TempDir(tmpParentDir, \"\")\n\t\t\tif err != nil {\n\t\t\t\tresultChan <- layer{\n\t\t\t\t\tindex: i,\n\t\t\t\t\terr:   fmt.Errorf(\"error creating dir: %v\", err),\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tlayerFile, closer, err := rb.getLayerV2(layerID, dockerURL, tmpDir, copier)\n\t\t\tif err != nil {\n\t\t\t\tresultChan <- layer{\n\t\t\t\t\tindex: i,\n\t\t\t\t\terr:   fmt.Errorf(\"error getting the remote layer: %v\", err),\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t\tresultChan <- layer{\n\t\t\t\tindex:  i,\n\t\t\t\tfile:   layerFile,\n\t\t\t\tcloser: closer,\n\t\t\t\terr:    nil,\n\t\t\t}\n\t\t}()\n\t}\n\tvar errs []error\n\tfor i := 0; i < len(layerIDs); i++ {\n\t\tres := <-resultChan\n\t\tif res.closer != nil {\n\t\t\tdefer res.closer.Close()\n\t\t}\n\t\tif res.file != nil {\n\t\t\tdefer res.file.Close()\n\t\t}\n\t\tif res.err != nil {\n\t\t\terrs = append(errs, res.err)\n\t\t}\n\t\tlayerFiles[res.index] = res.file\n\t}\n\tif len(errs) > 0 {\n\t\treturn nil, nil, errs[0]\n\t}\n\terr = copier.PrintAndWait(os.Stderr, 500*time.Millisecond, nil)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\tfor _, layerFile := range layerFiles {\n\t\terr := layerFile.Sync()\n\t\tif err != nil {\n\t\t\treturn nil, nil, err\n\t\t}\n\t}\n\tvar aciLayerPaths []string\n\tvar 
aciManifests []*schema.ImageManifest\n\tvar curPwl []string\n\tvar i int\n\tfor i = 0; i < len(layerIDs)-1; i++ {\n\t\trb.debug.Println(\"Generating layer ACI...\")\n\t\taciPath, aciManifest, err := internal.GenerateACI22LowerLayer(dockerURL, layerIDs[i], outputDir, layerFiles[i], curPwl, compression)\n\t\tif err != nil {\n\t\t\treturn nil, nil, fmt.Errorf(\"error generating ACI: %v\", err)\n\t\t}\n\t\taciLayerPaths = append(aciLayerPaths, aciPath)\n\t\taciManifests = append(aciManifests, aciManifest)\n\t\tcurPwl = aciManifest.PathWhitelist\n\t}\n\trb.debug.Println(\"Generating layer ACI...\")\n\taciPath, aciManifest, err := internal.GenerateACI22TopLayer(dockerURL, manhash, rb.imageConfigs[*dockerURL], layerIDs[i], outputDir, layerFiles[i], curPwl, compression, aciManifests, rb.debug)\n\tif err != nil {\n\t\treturn nil, nil, fmt.Errorf(\"error generating ACI: %v\", err)\n\t}\n\taciLayerPaths = append(aciLayerPaths, aciPath)\n\taciManifests = append(aciManifests, aciManifest)\n\n\treturn aciLayerPaths, aciManifests, nil\n}\n\nfunc (rb *RepositoryBackend) getManifestV2(dockerURL *common.ParsedDockerURL) ([]string, string, error) {\n\tvar reference string\n\tif dockerURL.Digest != \"\" {\n\t\treference = dockerURL.Digest\n\t} else {\n\t\treference = dockerURL.Tag\n\t}\n\turl := rb.schema + path.Join(dockerURL.IndexURL, \"v2\", dockerURL.ImageName, \"manifests\", reference)\n\n\treq, err := http.NewRequest(\"GET\", url, nil)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\n\trb.setBasicAuth(req)\n\n\tres, err := rb.makeRequest(req, dockerURL.ImageName, rb.mediaTypes.ManifestMediaTypes())\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\tdefer res.Body.Close()\n\n\tif res.StatusCode != http.StatusOK {\n\t\treturn nil, \"\", &httpStatusErr{res.StatusCode, req.URL}\n\t}\n\n\tswitch res.Header.Get(\"content-type\") {\n\tcase common.MediaTypeDockerV22Manifest, common.MediaTypeOCIV1Manifest:\n\t\treturn rb.getManifestV22(dockerURL, res)\n\tcase 
common.MediaTypeDockerV21Manifest:\n\t\treturn rb.getManifestV21(dockerURL, res)\n\t}\n\treturn rb.getManifestV21(dockerURL, res)\n}\n\nfunc (rb *RepositoryBackend) getManifestV21(dockerURL *common.ParsedDockerURL, res *http.Response) ([]string, string, error) {\n\tmanblob, err := ioutil.ReadAll(res.Body)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\n\tmanifest := &v2Manifest{}\n\n\terr = json.Unmarshal(manblob, manifest)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\n\tmanhash := godigest.FromBytes(manblob)\n\n\tif manifest.Name != dockerURL.ImageName {\n\t\treturn nil, \"\", fmt.Errorf(\"name doesn't match what was requested, expected: %s, downloaded: %s\", dockerURL.ImageName, manifest.Name)\n\t}\n\n\tif dockerURL.Tag != \"\" && manifest.Tag != dockerURL.Tag {\n\t\treturn nil, \"\", fmt.Errorf(\"tag doesn't match what was requested, expected: %s, downloaded: %s\", dockerURL.Tag, manifest.Tag)\n\t}\n\n\tif err := fixManifestLayers(manifest); err != nil {\n\t\treturn nil, \"\", err\n\t}\n\n\t//TODO: verify signature here\n\n\tlayers := make([]string, len(manifest.FSLayers))\n\n\tfor i, layer := range manifest.FSLayers {\n\t\tif _, ok := rb.layersIndex[layer.BlobSum]; !ok {\n\t\t\trb.layersIndex[layer.BlobSum] = i\n\t\t}\n\t\tlayers[i] = layer.BlobSum\n\t}\n\n\trb.imageManifests[*dockerURL] = *manifest\n\n\treturn layers, string(manhash), nil\n}\n\nfunc (rb *RepositoryBackend) getManifestV22(dockerURL *common.ParsedDockerURL, res *http.Response) ([]string, string, error) {\n\tmanblob, err := ioutil.ReadAll(res.Body)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\n\tmanifest := &typesV2.ImageManifest{}\n\n\terr = json.Unmarshal(manblob, manifest)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\n\tmanhash := godigest.FromBytes(manblob)\n\n\t//TODO: verify signature here\n\n\tlayers := make([]string, len(manifest.Layers))\n\n\tfor i, layer := range manifest.Layers {\n\t\tlayers[i] = layer.Digest\n\t}\n\n\terr = rb.getConfigV22(dockerURL, 
manifest.Config.Digest)\n\tif err != nil {\n\t\treturn nil, \"\", err\n\t}\n\n\trb.imageV2Manifests[*dockerURL] = manifest\n\n\treturn layers, string(manhash), nil\n}\n\nfunc (rb *RepositoryBackend) getConfigV22(dockerURL *common.ParsedDockerURL, configDigest string) error {\n\turl := rb.schema + path.Join(dockerURL.IndexURL, \"v2\", dockerURL.ImageName, \"blobs\", configDigest)\n\treq, err := http.NewRequest(\"GET\", url, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\trb.setBasicAuth(req)\n\n\tres, err := rb.makeRequest(req, dockerURL.ImageName, rb.mediaTypes.ConfigMediaTypes())\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer res.Body.Close()\n\n\tconfblob, err := ioutil.ReadAll(res.Body)\n\tif err != nil {\n\t\treturn err\n\t}\n\tconfig := &typesV2.ImageConfig{}\n\terr = json.Unmarshal(confblob, config)\n\tif err != nil {\n\t\treturn err\n\t}\n\trb.imageConfigs[*dockerURL] = config\n\treturn nil\n}\n\nfunc fixManifestLayers(manifest *v2Manifest) error {\n\ttype imageV1 struct {\n\t\tID     string\n\t\tParent string\n\t}\n\timgs := make([]*imageV1, len(manifest.FSLayers))\n\tfor i := range manifest.FSLayers {\n\t\timg := &imageV1{}\n\n\t\tif err := json.Unmarshal([]byte(manifest.History[i].V1Compatibility), img); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\timgs[i] = img\n\t\tif err := common.ValidateLayerId(img.ID); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\tif imgs[len(imgs)-1].Parent != \"\" {\n\t\treturn errors.New(\"Invalid parent ID in the base layer of the image.\")\n\t}\n\n\t// check general duplicates to error instead of a deadlock\n\tidmap := make(map[string]struct{})\n\n\tvar lastID string\n\tfor _, img := range imgs {\n\t\t// skip IDs that appear after each other, we handle those later\n\t\tif _, exists := idmap[img.ID]; img.ID != lastID && exists {\n\t\t\treturn fmt.Errorf(\"ID %+v appears multiple times in manifest\", img.ID)\n\t\t}\n\t\tlastID = img.ID\n\t\tidmap[lastID] = struct{}{}\n\t}\n\n\t// backwards loop so that we keep the remaining 
indexes after removing items\n\tfor i := len(imgs) - 2; i >= 0; i-- {\n\t\tif imgs[i].ID == imgs[i+1].ID { // repeated ID. remove and continue\n\t\t\tmanifest.FSLayers = append(manifest.FSLayers[:i], manifest.FSLayers[i+1:]...)\n\t\t\tmanifest.History = append(manifest.History[:i], manifest.History[i+1:]...)\n\t\t} else if imgs[i].Parent != imgs[i+1].ID {\n\t\t\treturn fmt.Errorf(\"Invalid parent ID. Expected %v, got %v.\", imgs[i+1].ID, imgs[i].Parent)\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (rb *RepositoryBackend) getLayerV2(layerID string, dockerURL *common.ParsedDockerURL, tmpDir string, copier *progressutil.CopyProgressPrinter) (*os.File, io.ReadCloser, error) {\n\tvar (\n\t\terr error\n\t\tres *http.Response\n\t\turl = rb.schema + path.Join(dockerURL.IndexURL, \"v2\", dockerURL.ImageName, \"blobs\", layerID)\n\t)\n\treq, err := http.NewRequest(\"GET\", url, nil)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\trb.setBasicAuth(req)\n\n\tres, err = rb.makeRequest(req, dockerURL.ImageName, rb.mediaTypes.LayerMediaTypes())\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\tdefer func() {\n\t\tif err != nil && res != nil {\n\t\t\tres.Body.Close()\n\t\t}\n\t}()\n\n\tif res.StatusCode == http.StatusTemporaryRedirect || res.StatusCode == http.StatusFound {\n\t\tlocation := res.Header.Get(\"Location\")\n\t\tif location != \"\" {\n\t\t\treq, err = http.NewRequest(\"GET\", location, nil)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, nil, err\n\t\t\t}\n\t\t\tres.Body.Close()\n\t\t\tres = nil\n\t\t\tres, err = rb.makeRequest(req, dockerURL.ImageName, rb.mediaTypes.LayerMediaTypes())\n\t\t\tif err != nil {\n\t\t\t\treturn nil, nil, err\n\t\t\t}\n\t\t}\n\t}\n\n\tif res.StatusCode != http.StatusOK {\n\t\treturn nil, nil, &httpStatusErr{res.StatusCode, req.URL}\n\t}\n\n\tvar in io.Reader\n\tin = res.Body\n\n\tvar size int64\n\n\tif hdr := res.Header.Get(\"Content-Length\"); hdr != \"\" {\n\t\tsize, err = strconv.ParseInt(hdr, 10, 64)\n\t\tif err != nil {\n\t\t\treturn 
nil, nil, err\n\t\t}\n\t}\n\n\tname := \"Downloading \" + layerID[:18]\n\n\tlayerFile, err := ioutil.TempFile(tmpDir, \"dockerlayer-\")\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\terr = copier.AddCopy(in, name, size, layerFile)\n\tif err != nil {\n\t\treturn nil, nil, err\n\t}\n\n\treturn layerFile, res.Body, nil\n}\n\nfunc (rb *RepositoryBackend) makeRequest(req *http.Request, repo string, acceptHeaders []string) (*http.Response, error) {\n\tsetBearerHeader := false\n\thostAuthTokens, ok := rb.hostsV2AuthTokens[req.URL.Host]\n\tif ok {\n\t\tauthToken, ok := hostAuthTokens[repo]\n\t\tif ok {\n\t\t\treq.Header.Set(\"Authorization\", \"Bearer \"+authToken)\n\t\t\tsetBearerHeader = true\n\t\t}\n\t}\n\n\tfor _, acceptHeader := range acceptHeaders {\n\t\treq.Header.Add(\"Accept\", acceptHeader)\n\t}\n\n\tclient := util.GetTLSClient(rb.insecure.SkipVerify)\n\tres, err := client.Do(req)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tif res.StatusCode == http.StatusUnauthorized && setBearerHeader {\n\t\treturn res, err\n\t}\n\n\thdr := res.Header.Get(\"www-authenticate\")\n\tif res.StatusCode != http.StatusUnauthorized || hdr == \"\" {\n\t\treturn res, err\n\t}\n\n\ttokens := strings.Split(hdr, \",\")\n\tif len(tokens) != 3 ||\n\t\t!strings.HasPrefix(strings.ToLower(tokens[0]), \"bearer realm\") {\n\t\treturn res, err\n\t}\n\tres.Body.Close()\n\n\tvar realm, service, scope string\n\tfor _, token := range tokens {\n\t\tif strings.HasPrefix(strings.ToLower(token), \"bearer realm\") {\n\t\t\trealm = strings.Trim(token[len(\"bearer realm=\"):], \"\\\"\")\n\t\t}\n\t\tif strings.HasPrefix(token, \"service\") {\n\t\t\tservice = strings.Trim(token[len(\"service=\"):], \"\\\"\")\n\t\t}\n\t\tif strings.HasPrefix(token, \"scope\") {\n\t\t\tscope = strings.Trim(token[len(\"scope=\"):], \"\\\"\")\n\t\t}\n\t}\n\n\tif realm == \"\" {\n\t\treturn nil, fmt.Errorf(\"missing realm in bearer auth challenge\")\n\t}\n\tif service == \"\" {\n\t\treturn nil, fmt.Errorf(\"missing 
service in bearer auth challenge\")\n\t}\n\t// The scope can be empty if we're not getting a token for a specific repo\n\tif scope == \"\" && repo != \"\" {\n\t\t// If the scope is empty and it shouldn't be, we can infer it based on the repo\n\t\tscope = fmt.Sprintf(\"repository:%s:pull\", repo)\n\t}\n\n\tauthReq, err := http.NewRequest(\"GET\", realm, nil)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tgetParams := authReq.URL.Query()\n\tgetParams.Add(\"service\", service)\n\tif scope != \"\" {\n\t\tgetParams.Add(\"scope\", scope)\n\t}\n\tauthReq.URL.RawQuery = getParams.Encode()\n\n\trb.setBasicAuth(authReq)\n\n\tres, err = client.Do(authReq)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer res.Body.Close()\n\n\tswitch res.StatusCode {\n\tcase http.StatusUnauthorized:\n\t\treturn nil, fmt.Errorf(\"unable to retrieve auth token: 401 unauthorized\")\n\tcase http.StatusOK:\n\t\tbreak\n\tdefault:\n\t\treturn nil, &httpStatusErr{res.StatusCode, authReq.URL}\n\t}\n\n\ttokenBlob, err := ioutil.ReadAll(res.Body)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\ttokenStruct := struct {\n\t\tToken string `json:\"token\"`\n\t}{}\n\n\terr = json.Unmarshal(tokenBlob, &tokenStruct)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\thostAuthTokens, ok = rb.hostsV2AuthTokens[req.URL.Host]\n\tif !ok {\n\t\thostAuthTokens = make(map[string]string)\n\t\trb.hostsV2AuthTokens[req.URL.Host] = hostAuthTokens\n\t}\n\n\thostAuthTokens[repo] = tokenStruct.Token\n\n\treturn rb.makeRequest(req, repo, acceptHeaders)\n}\n\nfunc (rb *RepositoryBackend) setBasicAuth(req *http.Request) {\n\tif rb.username != \"\" && rb.password != \"\" {\n\t\treq.SetBasicAuth(rb.username, rb.password)\n\t}\n}\n"
  },
  {
    "path": "lib/internal/docker/docker.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage docker\n\nimport (\n\t\"encoding/base64\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path\"\n\t\"runtime\"\n\t\"strings\"\n\n\t\"github.com/appc/docker2aci/lib/internal/types\"\n)\n\nconst (\n\tdockercfgFileName    = \"config.json\"\n\tdockercfgFileNameOld = \".dockercfg\"\n\tdefaultIndexURL      = \"registry-1.docker.io\"\n\tdefaultIndexURLAuth  = \"https://index.docker.io/v1/\"\n\tdefaultRepoPrefix    = \"library/\"\n)\n\n// SplitReposName breaks a repo name into an index name and remote name.\nfunc SplitReposName(name string) (indexName, remoteName string) {\n\ti := strings.IndexRune(name, '/')\n\tif i == -1 || (!strings.ContainsAny(name[:i], \".:\") && name[:i] != \"localhost\") {\n\t\tindexName, remoteName = defaultIndexURL, name\n\t} else {\n\t\tindexName, remoteName = name[:i], name[i+1:]\n\t}\n\tif indexName == defaultIndexURL && !strings.ContainsRune(remoteName, '/') {\n\t\tremoteName = defaultRepoPrefix + remoteName\n\t}\n\treturn\n}\n\n// Get a repos name and returns the right reposName + tag\n// The tag can be confusing because of a port in a repository name.\n//     Ex: localhost.localdomain:5000/samalba/hipache:latest\nfunc parseRepositoryTag(repos string) (string, string) {\n\tn := strings.LastIndex(repos, \":\")\n\tif n < 0 {\n\t\treturn repos, \"\"\n\t}\n\tif tag := repos[n+1:]; !strings.Contains(tag, 
\"/\") {\n\t\treturn repos[:n], tag\n\t}\n\treturn repos, \"\"\n}\n\nfunc decodeDockerAuth(s string) (string, string, error) {\n\tdecoded, err := base64.StdEncoding.DecodeString(s)\n\tif err != nil {\n\t\treturn \"\", \"\", err\n\t}\n\tparts := strings.SplitN(string(decoded), \":\", 2)\n\tif len(parts) != 2 {\n\t\treturn \"\", \"\", fmt.Errorf(\"invalid auth configuration file\")\n\t}\n\tuser := parts[0]\n\tpassword := strings.Trim(parts[1], \"\\x00\")\n\treturn user, password, nil\n}\n\nfunc getHomeDir() string {\n\tif runtime.GOOS == \"windows\" {\n\t\treturn os.Getenv(\"USERPROFILE\")\n\t}\n\treturn os.Getenv(\"HOME\")\n}\n\n// GetDockercfgAuth reads a ~/.dockercfg file and returns the username and password\n// of the given docker index server.\nfunc GetAuthInfo(indexServer string) (string, string, error) {\n\t// official docker registry\n\tif indexServer == defaultIndexURL {\n\t\tindexServer = defaultIndexURLAuth\n\t}\n\tdockerCfgPath := path.Join(getHomeDir(), \".docker\", dockercfgFileName)\n\tif _, err := os.Stat(dockerCfgPath); err == nil {\n\t\tj, err := ioutil.ReadFile(dockerCfgPath)\n\t\tif err != nil {\n\t\t\treturn \"\", \"\", err\n\t\t}\n\t\tvar dockerAuth types.DockerConfigFile\n\t\tif err := json.Unmarshal(j, &dockerAuth); err != nil {\n\t\t\treturn \"\", \"\", err\n\t\t}\n\t\t// try the normal case\n\t\tif c, ok := dockerAuth.AuthConfigs[indexServer]; ok {\n\t\t\treturn decodeDockerAuth(c.Auth)\n\t\t}\n\t} else if os.IsNotExist(err) {\n\t\toldDockerCfgPath := path.Join(getHomeDir(), dockercfgFileNameOld)\n\t\tif _, err := os.Stat(oldDockerCfgPath); err != nil {\n\t\t\treturn \"\", \"\", nil //missing file is not an error\n\t\t}\n\t\tj, err := ioutil.ReadFile(oldDockerCfgPath)\n\t\tif err != nil {\n\t\t\treturn \"\", \"\", err\n\t\t}\n\t\tvar dockerAuthOld map[string]types.DockerAuthConfigOld\n\t\tif err := json.Unmarshal(j, &dockerAuthOld); err != nil {\n\t\t\treturn \"\", \"\", err\n\t\t}\n\t\tif c, ok := dockerAuthOld[indexServer]; ok 
{\n\t\t\treturn decodeDockerAuth(c.Auth)\n\t\t}\n\t} else {\n\t\t// if file is there but we can't stat it for any reason other\n\t\t// than it doesn't exist then stop\n\t\treturn \"\", \"\", fmt.Errorf(\"%s - %v\", dockerCfgPath, err)\n\t}\n\treturn \"\", \"\", nil\n}\n"
  },
  {
    "path": "lib/internal/internal.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package internal provides functions shared by different parts of docker2aci.\n//\n// Note: this package is an implementation detail and shouldn't be used outside\n// of docker2aci.\npackage internal\n\nimport (\n\t\"archive/tar\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/appc/docker2aci/lib/common\"\n\t\"github.com/appc/docker2aci/lib/internal/tarball\"\n\t\"github.com/appc/docker2aci/lib/internal/types\"\n\t\"github.com/appc/docker2aci/lib/internal/typesV2\"\n\t\"github.com/appc/docker2aci/lib/internal/util\"\n\t\"github.com/appc/docker2aci/pkg/log\"\n\t\"github.com/appc/spec/aci\"\n\t\"github.com/appc/spec/schema\"\n\tappctypes \"github.com/appc/spec/schema/types\"\n\tgzip \"github.com/klauspost/pgzip\"\n)\n\n// Docker2ACIBackend is the interface that abstracts converting Docker layers\n// to ACI from where they're stored (remote or file).\n//\n// GetImageInfo takes a Docker URL and returns a list of layers and the parsed\n// Docker URL.\n//\n// BuildACI takes a Docker layer, converts it to ACI and returns its output\n// path and its converted ImageManifest.\ntype Docker2ACIBackend interface {\n\t// GetImageInfo, given the url for a docker image, will return the\n\t// following:\n\t// - []string: an ordered list of 
all layer hashes\n\t// - string: a unique identifier for this image, like a hash of the manifest\n\t// - *common.ParsedDockerURL: a parsed docker URL\n\t// - error: an error if one occurred\n\tGetImageInfo(dockerUrl string) ([]string, string, *common.ParsedDockerURL, error)\n\tBuildACI(layerIDs []string, manhash string, dockerURL *common.ParsedDockerURL, outputDir string, tmpBaseDir string, compression common.Compression) ([]string, []*schema.ImageManifest, error)\n}\n\n// GenerateACI takes a Docker layer and generates an ACI from it.\nfunc GenerateACI(layerNumber int, manhash string, layerData types.DockerImageData, dockerURL *common.ParsedDockerURL, outputDir string, layerFile *os.File, curPwl []string, compression common.Compression, debug log.Logger) (string, *schema.ImageManifest, error) {\n\tmanifest, err := GenerateManifest(layerData, manhash, dockerURL, debug)\n\tif err != nil {\n\t\treturn \"\", nil, fmt.Errorf(\"error generating the manifest: %v\", err)\n\t}\n\n\timageName := strings.Replace(dockerURL.ImageName, \"/\", \"-\", -1)\n\taciPath := generateACIPath(outputDir, imageName, layerData.ID, dockerURL.Tag, layerData.OS, layerData.Architecture, layerNumber)\n\n\tmanifest, err = writeACI(layerFile, *manifest, curPwl, aciPath, compression)\n\tif err != nil {\n\t\treturn \"\", nil, fmt.Errorf(\"error writing ACI: %v\", err)\n\t}\n\n\tif err := ValidateACI(aciPath); err != nil {\n\t\treturn \"\", nil, fmt.Errorf(\"invalid ACI generated: %v\", err)\n\t}\n\n\treturn aciPath, manifest, nil\n}\n\nfunc GenerateACI22LowerLayer(dockerURL *common.ParsedDockerURL, layerDigest string, outputDir string, layerFile *os.File, curPwl []string, compression common.Compression) (string, *schema.ImageManifest, error) {\n\tformattedDigest := strings.Replace(layerDigest, \":\", \"-\", -1)\n\taciName := fmt.Sprintf(\"%s/%s-%s\", dockerURL.IndexURL, dockerURL.ImageName, formattedDigest)\n\tsanitizedAciName, err := appctypes.SanitizeACIdentifier(aciName)\n\tif err != nil 
{\n\t\treturn \"\", nil, err\n\t}\n\tmanifest, err := GenerateEmptyManifest(sanitizedAciName)\n\tif err != nil {\n\t\treturn \"\", nil, err\n\t}\n\n\taciPath := generateACIPath(outputDir, aciName, layerDigest, dockerURL.Tag, runtime.GOOS, runtime.GOARCH, -1)\n\tmanifest, err = writeACI(layerFile, *manifest, curPwl, aciPath, compression)\n\tif err != nil {\n\t\treturn \"\", nil, err\n\t}\n\n\terr = ValidateACI(aciPath)\n\tif err != nil {\n\t\treturn \"\", nil, fmt.Errorf(\"invalid ACI generated: %v\", err)\n\t}\n\treturn aciPath, manifest, nil\n}\n\nfunc GenerateACI22TopLayer(dockerURL *common.ParsedDockerURL, manhash string, imageConfig *typesV2.ImageConfig, layerDigest string, outputDir string, layerFile *os.File, curPwl []string, compression common.Compression, lowerLayers []*schema.ImageManifest, debug log.Logger) (string, *schema.ImageManifest, error) {\n\taciName := fmt.Sprintf(\"%s/%s-%s\", dockerURL.IndexURL, dockerURL.ImageName, layerDigest)\n\tsanitizedAciName, err := appctypes.SanitizeACIdentifier(aciName)\n\tif err != nil {\n\t\treturn \"\", nil, err\n\t}\n\tmanifest, err := GenerateManifestV22(sanitizedAciName, manhash, layerDigest, dockerURL, imageConfig, lowerLayers, debug)\n\tif err != nil {\n\t\treturn \"\", nil, err\n\t}\n\n\taciPath := generateACIPath(outputDir, aciName, layerDigest, dockerURL.Tag, runtime.GOOS, runtime.GOARCH, -1)\n\tmanifest, err = writeACI(layerFile, *manifest, curPwl, aciPath, compression)\n\tif err != nil {\n\t\treturn \"\", nil, err\n\t}\n\n\terr = ValidateACI(aciPath)\n\tif err != nil {\n\t\treturn \"\", nil, fmt.Errorf(\"invalid ACI generated: %v\", err)\n\t}\n\treturn aciPath, manifest, nil\n}\n\nfunc generateACIPath(outputDir, imageName, digest, tag, osString, arch string, layerNum int) string {\n\taciPath := imageName\n\tif tag != \"\" {\n\t\taciPath += \"-\" + tag\n\t}\n\tif osString != \"\" {\n\t\taciPath += \"-\" + osString\n\t\tif arch != \"\" {\n\t\t\taciPath += \"-\" + arch\n\t\t}\n\t}\n\tif layerNum != -1 
{\n\t\taciPath += \"-\" + strconv.Itoa(layerNum)\n\t}\n\taciPath += \".aci\"\n\treturn path.Join(outputDir, aciPath)\n}\n\nfunc generateEPCmdAnnotation(dockerEP, dockerCmd []string) (string, string, error) {\n\tvar entrypointAnnotation, cmdAnnotation string\n\tif len(dockerEP) > 0 {\n\t\tentry, err := json.Marshal(dockerEP)\n\t\tif err != nil {\n\t\t\treturn \"\", \"\", err\n\t\t}\n\t\tentrypointAnnotation = string(entry)\n\t}\n\tif len(dockerCmd) > 0 {\n\t\tcmd, err := json.Marshal(dockerCmd)\n\t\tif err != nil {\n\t\t\treturn \"\", \"\", err\n\t\t}\n\t\tcmdAnnotation = string(cmd)\n\t}\n\n\treturn entrypointAnnotation, cmdAnnotation, nil\n}\n\n// setLabel sets the label entries associated with non-empty key\n// to the single non-empty value. It replaces any existing values\n// associated with key.\nfunc setLabel(labels map[appctypes.ACIdentifier]string, key, val string) {\n\tif key != \"\" && val != \"\" {\n\t\tlabels[*appctypes.MustACIdentifier(key)] = val\n\t}\n}\n\n// setOSArch translates the given OS and architecture strings into\n// the compatible with application container specification and sets\n// the respective label entries.\n//\n// Returns an error if label translation fails.\nfunc setOSArch(labels map[appctypes.ACIdentifier]string, os, arch string) error {\n\t// Translate arch tuple into the appc arch tuple.\n\tappcOS, appcArch, err := appctypes.ToAppcOSArch(os, arch, \"\")\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Set translated labels.\n\tsetLabel(labels, \"os\", appcOS)\n\tsetLabel(labels, \"arch\", appcArch)\n\treturn nil\n}\n\n// setAnnotation sets the annotation entries associated with non-empty\n// key to the single non-empty value. 
It replaces any existing values\n// associated with key.\nfunc setAnnotation(annotations *appctypes.Annotations, key, val string) {\n\tif key != \"\" && val != \"\" {\n\t\tannotations.Set(*appctypes.MustACIdentifier(key), val)\n\t}\n}\n\n// GenerateManifest converts the docker manifest format to an appc\n// ImageManifest.\nfunc GenerateManifest(layerData types.DockerImageData, manhash string, dockerURL *common.ParsedDockerURL, debug log.Logger) (*schema.ImageManifest, error) {\n\tdockerConfig := layerData.Config\n\tgenManifest := &schema.ImageManifest{}\n\n\tappURL := dockerURL.IndexURL + \"/\" + dockerURL.ImageName + \"-\" + layerData.ID\n\tappURL, err := appctypes.SanitizeACIdentifier(appURL)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tname := appctypes.MustACIdentifier(appURL)\n\tgenManifest.Name = *name\n\n\tacVersion, err := appctypes.NewSemVer(schema.AppContainerVersion.String())\n\tif err != nil {\n\t\tpanic(\"invalid appc spec version\")\n\t}\n\tgenManifest.ACVersion = *acVersion\n\n\tgenManifest.ACKind = appctypes.ACKind(schema.ImageManifestKind)\n\n\tvar annotations appctypes.Annotations\n\n\tlabels := make(map[appctypes.ACIdentifier]string)\n\tparentLabels := make(map[appctypes.ACIdentifier]string)\n\n\tsetLabel(labels, \"layer\", layerData.ID)\n\tsetLabel(labels, \"version\", dockerURL.Tag)\n\n\t// Translation errors are deliberately ignored here: legacy layers may\n\t// carry an empty or unknown os/arch, in which case the labels are\n\t// simply left unset.\n\tsetOSArch(labels, layerData.OS, layerData.Architecture)\n\tsetOSArch(parentLabels, layerData.OS, layerData.Architecture)\n\n\tsetAnnotation(&annotations, \"authors\", layerData.Author)\n\tepoch := time.Unix(0, 0)\n\tif !layerData.Created.Equal(epoch) {\n\t\tsetAnnotation(&annotations, \"created\", layerData.Created.Format(time.RFC3339))\n\t}\n\tsetAnnotation(&annotations, \"docker-comment\", layerData.Comment)\n\tsetAnnotation(&annotations, common.AppcDockerOriginalName, dockerURL.OriginalName)\n\tsetAnnotation(&annotations, common.AppcDockerRegistryURL, dockerURL.IndexURL)\n\tsetAnnotation(&annotations, 
common.AppcDockerRepository, dockerURL.ImageName)\n\tsetAnnotation(&annotations, common.AppcDockerImageID, layerData.ID)\n\tsetAnnotation(&annotations, common.AppcDockerParentImageID, layerData.Parent)\n\tsetAnnotation(&annotations, common.AppcDockerManifestHash, manhash)\n\n\tif dockerConfig != nil {\n\t\texec := getExecCommand(dockerConfig.Entrypoint, dockerConfig.Cmd)\n\t\tuser, group := parseDockerUser(dockerConfig.User)\n\t\tvar env appctypes.Environment\n\t\tfor _, v := range dockerConfig.Env {\n\t\t\tparts := strings.SplitN(v, \"=\", 2)\n\t\t\tif len(parts) == 2 {\n\t\t\t\tenv.Set(parts[0], parts[1])\n\t\t\t}\n\t\t}\n\t\tapp := &appctypes.App{\n\t\t\tExec:             exec,\n\t\t\tUser:             user,\n\t\t\tGroup:            group,\n\t\t\tEnvironment:      env,\n\t\t\tWorkingDirectory: dockerConfig.WorkingDir,\n\t\t}\n\n\t\tapp.UserLabels = dockerConfig.Labels\n\n\t\tapp.MountPoints, err = convertVolumesToMPs(dockerConfig.Volumes)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tapp.Ports, err = convertPorts(dockerConfig.ExposedPorts, dockerConfig.PortSpecs, debug)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tep, cmd, err := generateEPCmdAnnotation(dockerConfig.Entrypoint, dockerConfig.Cmd)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif len(ep) > 0 {\n\t\t\tsetAnnotation(&annotations, common.AppcDockerEntrypoint, ep)\n\t\t}\n\t\tif len(cmd) > 0 {\n\t\t\tsetAnnotation(&annotations, common.AppcDockerCmd, cmd)\n\t\t}\n\n\t\tgenManifest.App = app\n\t}\n\n\tif layerData.Parent != \"\" {\n\t\t// prefix the parent image name with the index URL\n\t\tindexPrefix := dockerURL.IndexURL + \"/\"\n\t\tparentImageNameString := indexPrefix + dockerURL.ImageName + \"-\" + layerData.Parent\n\t\tparentImageNameString, err := appctypes.SanitizeACIdentifier(parentImageNameString)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tparentImageName := 
appctypes.MustACIdentifier(parentImageNameString)\n\n\t\tplbl, err := appctypes.LabelsFromMap(labels)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tgenManifest.Dependencies = append(genManifest.Dependencies, appctypes.Dependency{ImageName: *parentImageName, Labels: plbl})\n\n\t\tsetAnnotation(&annotations, common.AppcDockerTag, dockerURL.Tag)\n\t}\n\n\tgenManifest.Labels, err = appctypes.LabelsFromMap(labels)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tgenManifest.Annotations = annotations\n\n\treturn genManifest, nil\n}\n\n// GenerateEmptyManifest generates a minimal manifest with the given name, the\n// current OS/arch labels, and no app section.\nfunc GenerateEmptyManifest(name string) (*schema.ImageManifest, error) {\n\tacid, err := appctypes.NewACIdentifier(name)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tlabelsMap := make(map[appctypes.ACIdentifier]string)\n\terr = setOSArch(labelsMap, runtime.GOOS, runtime.GOARCH)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tlabels, err := appctypes.LabelsFromMap(labelsMap)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &schema.ImageManifest{\n\t\tACKind:    schema.ImageManifestKind,\n\t\tACVersion: schema.AppContainerVersion,\n\t\tName:      *acid,\n\t\tLabels:    labels,\n\t}, nil\n}\n\n// GenerateManifestV22 produces an image manifest compliant with the Docker\n// V2.2 image spec from the inputs documented in the parameter list below.\nfunc GenerateManifestV22(\n\tname string, // The name of this image\n\tmanhash string, // The hash of this image's manifest\n\timageDigest string, // The digest of the image\n\tdockerURL *common.ParsedDockerURL, // The parsed docker URL\n\tconfig *typesV2.ImageConfig, // The image config\n\tlowerLayers []*schema.ImageManifest, // A list of manifests for the lower layers\n\tdebug log.Logger, // The debug logger, for logging debug information\n) (*schema.ImageManifest, error) {\n\tmanifest, err := GenerateEmptyManifest(name)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tlabels := manifest.Labels.ToMap()\n\tannotations := 
manifest.Annotations\n\n\tsetLabel(labels, \"version\", dockerURL.Tag)\n\tsetOSArch(labels, config.OS, config.Architecture)\n\n\tsetAnnotation(&annotations, \"author\", config.Author)\n\tsetAnnotation(&annotations, \"created\", config.Created)\n\n\tsetAnnotation(&annotations, common.AppcDockerOriginalName, dockerURL.OriginalName)\n\tsetAnnotation(&annotations, common.AppcDockerRegistryURL, dockerURL.IndexURL)\n\tsetAnnotation(&annotations, common.AppcDockerRepository, dockerURL.ImageName)\n\tsetAnnotation(&annotations, common.AppcDockerImageID, imageDigest)\n\tsetAnnotation(&annotations, common.AppcDockerManifestHash, manhash)\n\n\tif config.Config != nil {\n\t\tinnerCfg := config.Config\n\t\texec := getExecCommand(innerCfg.Entrypoint, innerCfg.Cmd)\n\t\tuser, group := parseDockerUser(innerCfg.User)\n\t\tvar env appctypes.Environment\n\t\tfor _, v := range innerCfg.Env {\n\t\t\tparts := strings.SplitN(v, \"=\", 2)\n\t\t\tif len(parts) == 2 {\n\t\t\t\tenv.Set(parts[0], parts[1])\n\t\t\t}\n\t\t}\n\t\tmanifest.App = &appctypes.App{\n\t\t\tExec:             exec,\n\t\t\tUser:             user,\n\t\t\tGroup:            group,\n\t\t\tEnvironment:      env,\n\t\t\tWorkingDirectory: innerCfg.WorkingDir,\n\t\t}\n\t\tmanifest.App.MountPoints, err = convertVolumesToMPs(innerCfg.Volumes)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tmanifest.App.Ports, err = convertPorts(innerCfg.ExposedPorts, nil, debug)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\tep, cmd, err := generateEPCmdAnnotation(innerCfg.Entrypoint, innerCfg.Cmd)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif len(ep) > 0 {\n\t\t\tsetAnnotation(&annotations, common.AppcDockerEntrypoint, ep)\n\t\t}\n\t\tif len(cmd) > 0 {\n\t\t\tsetAnnotation(&annotations, common.AppcDockerCmd, cmd)\n\t\t}\n\t}\n\n\tfor _, lowerLayer := range lowerLayers {\n\t\tmanifest.Dependencies = 
append(manifest.Dependencies, appctypes.Dependency{\n\t\t\tImageName: lowerLayer.Name,\n\t\t\tLabels:    lowerLayer.Labels,\n\t\t})\n\t}\n\n\tmanifest.Labels, err = appctypes.LabelsFromMap(labels)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tmanifest.Annotations = annotations\n\treturn manifest, nil\n}\n\n// ValidateACI checks whether the ACI in aciPath is valid.\nfunc ValidateACI(aciPath string) error {\n\taciFile, err := os.Open(aciPath)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer aciFile.Close()\n\n\ttr, err := aci.NewCompressedTarReader(aciFile)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer tr.Close()\n\n\tif err := aci.ValidateArchive(tr.Reader); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\ntype appcPortSorter []appctypes.Port\n\nfunc (s appcPortSorter) Len() int {\n\treturn len(s)\n}\n\nfunc (s appcPortSorter) Swap(i, j int) {\n\ts[i], s[j] = s[j], s[i]\n}\n\nfunc (s appcPortSorter) Less(i, j int) bool {\n\treturn s[i].Name.String() < s[j].Name.String()\n}\n\nfunc convertPorts(dockerExposedPorts map[string]struct{}, dockerPortSpecs []string, debug log.Logger) ([]appctypes.Port, error) {\n\tports := []appctypes.Port{}\n\n\tfor ep := range dockerExposedPorts {\n\t\tappcPort, err := parseDockerPort(ep)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tports = append(ports, *appcPort)\n\t}\n\n\tif dockerExposedPorts == nil && dockerPortSpecs != nil {\n\t\tdebug.Println(\"warning: docker image uses deprecated PortSpecs field\")\n\t\tfor _, ep := range dockerPortSpecs {\n\t\t\tappcPort, err := parseDockerPort(ep)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tports = append(ports, *appcPort)\n\t\t}\n\t}\n\n\tsort.Sort(appcPortSorter(ports))\n\n\treturn ports, nil\n}\n\nfunc parseDockerPort(dockerPort string) (*appctypes.Port, error) {\n\tvar portString string\n\tproto := \"tcp\"\n\tsp := strings.Split(dockerPort, \"/\")\n\tif len(sp) < 2 {\n\t\tportString = dockerPort\n\t} else {\n\t\tproto = sp[1]\n\t\tportString = 
sp[0]\n\t}\n\n\tport, err := strconv.ParseUint(portString, 10, 0)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error parsing port %q: %v\", portString, err)\n\t}\n\n\tsn, err := appctypes.SanitizeACName(dockerPort)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tappcPort := &appctypes.Port{\n\t\tName:     *appctypes.MustACName(sn),\n\t\tProtocol: proto,\n\t\tPort:     uint(port),\n\t}\n\n\treturn appcPort, nil\n}\n\ntype appcVolSorter []appctypes.MountPoint\n\nfunc (s appcVolSorter) Len() int {\n\treturn len(s)\n}\n\nfunc (s appcVolSorter) Swap(i, j int) {\n\ts[i], s[j] = s[j], s[i]\n}\n\nfunc (s appcVolSorter) Less(i, j int) bool {\n\treturn s[i].Name.String() < s[j].Name.String()\n}\n\nfunc convertVolumesToMPs(dockerVolumes map[string]struct{}) ([]appctypes.MountPoint, error) {\n\tmps := []appctypes.MountPoint{}\n\tdup := make(map[string]int)\n\n\tfor p := range dockerVolumes {\n\t\tn := filepath.Join(\"volume\", p)\n\t\tsn, err := appctypes.SanitizeACName(n)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\n\t\t// check for duplicate names\n\t\tif i, ok := dup[sn]; ok {\n\t\t\tdup[sn] = i + 1\n\t\t\tsn = fmt.Sprintf(\"%s-%d\", sn, i)\n\t\t} else {\n\t\t\tdup[sn] = 1\n\t\t}\n\n\t\tmp := appctypes.MountPoint{\n\t\t\tName: *appctypes.MustACName(sn),\n\t\t\tPath: p,\n\t\t}\n\n\t\tmps = append(mps, mp)\n\t}\n\n\tsort.Sort(appcVolSorter(mps))\n\n\treturn mps, nil\n}\n\nfunc writeACI(layer io.ReadSeeker, manifest schema.ImageManifest, curPwl []string, output string, compression common.Compression) (*schema.ImageManifest, error) {\n\tdir, _ := path.Split(output)\n\tif dir != \"\" {\n\t\terr := os.MkdirAll(dir, 0755)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"error creating ACI parent dir: %v\", err)\n\t\t}\n\t}\n\taciFile, err := os.Create(output)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error creating ACI file: %v\", err)\n\t}\n\tdefer aciFile.Close()\n\n\tvar w io.WriteCloser = aciFile\n\tif compression == common.GzipCompression {\n\t\tw = 
gzip.NewWriter(aciFile)\n\t\tdefer w.Close()\n\t}\n\ttrw := tar.NewWriter(w)\n\tdefer trw.Close()\n\n\tif err := WriteRootfsDir(trw); err != nil {\n\t\treturn nil, fmt.Errorf(\"error writing rootfs entry: %v\", err)\n\t}\n\n\tfileMap := make(map[string]struct{})\n\tvar whiteouts []string\n\tconvWalker := func(t *tarball.TarFile) error {\n\t\tname := t.Name()\n\t\tif name == \"./\" {\n\t\t\treturn nil\n\t\t}\n\t\tnewName := path.Join(\"rootfs\", name)\n\t\tabsolutePath := strings.TrimPrefix(newName, \"rootfs\")\n\n\t\tif filepath.Clean(absolutePath) == \"/dev\" && t.Header.Typeflag != tar.TypeDir {\n\t\t\treturn fmt.Errorf(`invalid layer: \"/dev\" is not a directory`)\n\t\t}\n\n\t\tfileMap[absolutePath] = struct{}{}\n\t\tif strings.Contains(newName, \"/.wh.\") {\n\t\t\twhiteouts = append(whiteouts, strings.Replace(absolutePath, \".wh.\", \"\", 1))\n\t\t\treturn nil\n\t\t}\n\n\t\tnewHeader := &tar.Header{\n\t\t\tTypeflag:   t.Header.Typeflag,\n\t\t\tName:       newName,\n\t\t\tLinkname:   t.Header.Linkname,\n\t\t\tSize:       t.Header.Size,\n\t\t\tMode:       t.Header.Mode,\n\t\t\tUid:        t.Header.Uid,\n\t\t\tGid:        t.Header.Gid,\n\t\t\tUname:      t.Header.Uname,\n\t\t\tGname:      t.Header.Gname,\n\t\t\tModTime:    t.Header.ModTime,\n\t\t\tAccessTime: t.Header.AccessTime,\n\t\t\tChangeTime: t.Header.ChangeTime,\n\t\t\tDevmajor:   t.Header.Devmajor,\n\t\t\tDevminor:   t.Header.Devminor,\n\t\t\tXattrs:     t.Header.Xattrs,\n\t\t}\n\t\tif t.Header.Typeflag == tar.TypeLink {\n\t\t\tnewHeader.Linkname = path.Join(\"rootfs\", t.Linkname())\n\t\t}\n\t\tif err := trw.WriteHeader(newHeader); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif _, err := io.Copy(trw, t.TarStream); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\tif !util.In(curPwl, absolutePath) {\n\t\t\tcurPwl = append(curPwl, absolutePath)\n\t\t}\n\n\t\treturn nil\n\t}\n\ttr, err := aci.NewCompressedTarReader(layer)\n\tif err == nil {\n\t\tdefer tr.Close()\n\t\t// write files in rootfs/\n\t\tif err := 
tarball.Walk(*tr.Reader, convWalker); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t} else {\n\t\t// ignore errors: empty layers in tars generated by docker save are not\n\t\t// valid tar files so we ignore errors trying to open them. Converted\n\t\t// ACIs will have the manifest and an empty rootfs directory in any\n\t\t// case.\n\t}\n\tnewPwl := subtractWhiteouts(curPwl, whiteouts)\n\n\tnewPwl, err = writeStdioSymlinks(trw, fileMap, newPwl)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\t// Let's copy the newly generated PathWhitelist to avoid unintended\n\t// side-effects\n\tmanifest.PathWhitelist = make([]string, len(newPwl))\n\tcopy(manifest.PathWhitelist, newPwl)\n\n\tif err := WriteManifest(trw, manifest); err != nil {\n\t\treturn nil, fmt.Errorf(\"error writing manifest: %v\", err)\n\t}\n\n\treturn &manifest, nil\n}\n\nfunc getExecCommand(entrypoint []string, cmd []string) appctypes.Exec {\n\treturn append(entrypoint, cmd...)\n}\n\nfunc parseDockerUser(dockerUser string) (string, string) {\n\t// if the docker user is empty assume root user and group\n\tif dockerUser == \"\" {\n\t\treturn \"0\", \"0\"\n\t}\n\n\tdockerUserParts := strings.Split(dockerUser, \":\")\n\n\t// when only the user is given, the docker spec says that the default and\n\t// supplementary groups of the user in /etc/passwd should be applied.\n\t// To avoid inspecting image content, we set gid to the same value as uid.\n\tif len(dockerUserParts) < 2 {\n\t\treturn dockerUserParts[0], dockerUserParts[0]\n\t}\n\n\treturn dockerUserParts[0], dockerUserParts[1]\n}\n\nfunc subtractWhiteouts(pathWhitelist []string, whiteouts []string) []string {\n\tmatchPaths := []string{}\n\tfor _, path := range pathWhitelist {\n\t\t// If one of the parent dirs of the current path matches the\n\t\t// whiteout then also this path should be removed\n\t\tcurPath := path\n\t\tfor curPath != \"/\" {\n\t\t\tfor _, whiteout := range whiteouts {\n\t\t\t\tif curPath == whiteout {\n\t\t\t\t\tmatchPaths = 
append(matchPaths, path)\n\t\t\t\t}\n\t\t\t}\n\t\t\tcurPath = filepath.Dir(curPath)\n\t\t}\n\t}\n\tfor _, matchPath := range matchPaths {\n\t\tidx := util.IndexOf(pathWhitelist, matchPath)\n\t\tif idx != -1 {\n\t\t\tpathWhitelist = append(pathWhitelist[:idx], pathWhitelist[idx+1:]...)\n\t\t}\n\t}\n\n\tsort.Sort(sort.StringSlice(pathWhitelist))\n\n\treturn pathWhitelist\n}\n\n// WriteManifest writes a schema.ImageManifest entry on a tar.Writer.\nfunc WriteManifest(outputWriter *tar.Writer, manifest schema.ImageManifest) error {\n\tb, err := json.Marshal(manifest)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\thdr := getGenericTarHeader()\n\thdr.Name = \"manifest\"\n\thdr.Mode = 0644\n\thdr.Size = int64(len(b))\n\thdr.Typeflag = tar.TypeReg\n\n\tif err := outputWriter.WriteHeader(hdr); err != nil {\n\t\treturn err\n\t}\n\tif _, err := outputWriter.Write(b); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\n// WriteRootfsDir writes a \"rootfs\" dir entry on a tar.Writer.\nfunc WriteRootfsDir(tarWriter *tar.Writer) error {\n\thdr := getGenericTarHeader()\n\thdr.Name = \"rootfs\"\n\thdr.Mode = 0755\n\thdr.Size = int64(0)\n\thdr.Typeflag = tar.TypeDir\n\n\treturn tarWriter.WriteHeader(hdr)\n}\n\ntype symlink struct {\n\tlinkname string\n\ttarget   string\n}\n\n// writeStdioSymlinks adds the /dev/stdin, /dev/stdout, /dev/stderr, and\n// /dev/fd symlinks expected by Docker to the converted ACIs so apps can find\n// them as expected\nfunc writeStdioSymlinks(tarWriter *tar.Writer, fileMap map[string]struct{}, pwl []string) ([]string, error) {\n\tstdioSymlinks := []symlink{\n\t\t{\"/dev/stdin\", \"/proc/self/fd/0\"},\n\t\t// Docker makes /dev/{stdout,stderr} point to /proc/self/fd/{1,2} but\n\t\t// we point to /dev/console instead in order to support the case when\n\t\t// stdout/stderr is a Unix socket (e.g. 
for the journal).\n\t\t{\"/dev/stdout\", \"/dev/console\"},\n\t\t{\"/dev/stderr\", \"/dev/console\"},\n\t\t{\"/dev/fd\", \"/proc/self/fd\"},\n\t}\n\n\tfor _, s := range stdioSymlinks {\n\t\tname := s.linkname\n\t\ttarget := s.target\n\t\tif _, exists := fileMap[name]; exists {\n\t\t\tcontinue\n\t\t}\n\t\thdr := &tar.Header{\n\t\t\tName:     filepath.Join(\"rootfs\", name),\n\t\t\tMode:     0777,\n\t\t\tTypeflag: tar.TypeSymlink,\n\t\t\tLinkname: target,\n\t\t}\n\t\tif err := tarWriter.WriteHeader(hdr); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif !util.In(pwl, name) {\n\t\t\tpwl = append(pwl, name)\n\t\t}\n\t}\n\n\treturn pwl, nil\n}\n\nfunc getGenericTarHeader() *tar.Header {\n\t// FIXME(iaguis) Use docker image time instead of the Unix Epoch?\n\thdr := &tar.Header{\n\t\tUid:        0,\n\t\tGid:        0,\n\t\tModTime:    time.Unix(0, 0),\n\t\tUname:      \"0\",\n\t\tGname:      \"0\",\n\t\tChangeTime: time.Unix(0, 0),\n\t}\n\n\treturn hdr\n}\n"
  },
  {
    "path": "lib/internal/internal_test.go",
    "content": "// Copyright 2017 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage internal\n\nimport (\n\t\"testing\"\n\n\t\"github.com/appc/spec/schema/types\"\n)\n\nfunc TestSetLabel(t *testing.T) {\n\tlabels := make(map[types.ACIdentifier]string)\n\n\ttests := []struct {\n\t\tkey, value string\n\t\tok         bool\n\t}{\n\t\t{\"\", \"amd64\", false},\n\t\t{\"freebsd\", \"\", false},\n\t\t{\"\", \"\", false},\n\t\t{\"version\", \"1.2.3\", true},\n\t\t{\"os\", \"linux\", true},\n\t\t{\"arch\", \"aarch64\", true},\n\t\t{\"arch\", \"amd64\", true},\n\t}\n\n\tfor i, tt := range tests {\n\t\tsetLabel(labels, tt.key, tt.value)\n\n\t\tvalue, ok := labels[types.ACIdentifier(tt.key)]\n\t\tif ok != tt.ok {\n\t\t\tconst text = \"#%d failed on label existence validation: %v != %v\"\n\t\t\tt.Errorf(text, i, ok, tt.ok)\n\t\t}\n\n\t\tif tt.ok && value != tt.value {\n\t\t\tconst text = \"#%d wrong label for %s key: %v != %v\"\n\t\t\tt.Errorf(text, i, tt.key, value, tt.value)\n\t\t}\n\t}\n}\n\nfunc TestSetAnnotation(t *testing.T) {\n\tvar annotations types.Annotations\n\n\ttests := []struct {\n\t\tkey, value string\n\t\tok         bool\n\t}{\n\t\t{\"\", \"\", false},\n\t\t{\"\", \"name\", false},\n\t\t{\"gentoo\", \"\", false},\n\t\t{\"entrypoint\", \"/bin/bash\", true},\n\t\t{\"entrypoint\", \"/bin/sh\", true},\n\t\t{\"cmd\", \"-c\", true},\n\t}\n\n\tfor i, tt := range tests {\n\t\tsetAnnotation(&annotations, tt.key, 
tt.value)\n\n\t\tvalue, ok := annotations.Get(tt.key)\n\t\tif ok != tt.ok {\n\t\t\tconst text = \"#%d failed on annotation existence validation: %v != %v\"\n\t\t\tt.Errorf(text, i, ok, tt.ok)\n\t\t}\n\n\t\tif tt.ok && value != tt.value {\n\t\t\tconst text = \"#%d wrong annotation for %s key: %v != %v\"\n\t\t\tt.Errorf(text, i, tt.key, value, tt.value)\n\t\t}\n\t}\n}\n\nfunc TestOSArch(t *testing.T) {\n\ttests := []struct {\n\t\tsrcOS, srcArch string\n\t\tdstOS, dstArch string\n\t\tok             bool // whether the conversion is expected to succeed\n\t}{\n\t\t{\"\", \"\", \"\", \"\", false},\n\t\t{\"TempleOS\", \"ia64\", \"\", \"\", false},\n\t\t{\"linux\", \"amd64\", \"linux\", \"amd64\", true},\n\t\t{\"linux\", \"arm64\", \"linux\", \"aarch64\", true},\n\t\t{\"freebsd\", \"386\", \"freebsd\", \"i386\", true},\n\t}\n\n\tfor i, tt := range tests {\n\t\tlabels := make(map[types.ACIdentifier]string)\n\t\terr := setOSArch(labels, tt.srcOS, tt.srcArch)\n\n\t\tif tt.ok != (err == nil) {\n\t\t\tconst text = \"#%d unexpected result of os/arch conversion: %v\"\n\t\t\tt.Errorf(text, i, err)\n\t\t}\n\n\t\tif labels[\"os\"] != tt.dstOS {\n\t\t\tconst text = \"#%d expected %v os, got %v instead\"\n\t\t\tt.Errorf(text, i, tt.dstOS, labels[\"os\"])\n\t\t}\n\n\t\tif labels[\"arch\"] != tt.dstArch {\n\t\t\tconst text = \"#%d expected %v arch, got %v instead\"\n\t\t\tt.Errorf(text, i, tt.dstArch, labels[\"arch\"])\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "lib/internal/tarball/tarfile.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package tarball provides functions to manipulate tar files.\n//\n// Note: this package is an implementation detail and shouldn't be used outside\n// of docker2aci.\npackage tarball\n\nimport (\n\t\"archive/tar\"\n\t\"io\"\n)\n\n// TarFile is a representation of a file in a tarball. It consists of two parts,\n// the Header and the Stream. The Header is a regular tar header, the Stream\n// is a byte stream that can be used to read the file's contents.\ntype TarFile struct {\n\tHeader    *tar.Header\n\tTarStream io.Reader\n}\n\n// Name returns the name of the file as reported by the header.\nfunc (t *TarFile) Name() string {\n\treturn t.Header.Name\n}\n\n// Linkname returns the Linkname of the file as reported by the header.\nfunc (t *TarFile) Linkname() string {\n\treturn t.Header.Linkname\n}\n"
  },
  {
    "path": "lib/internal/tarball/walk.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage tarball\n\nimport (\n\t\"archive/tar\"\n\t\"fmt\"\n\t\"io\"\n)\n\n// WalkFunc is a func for handling each file (header and byte stream) in a tarball.\ntype WalkFunc func(t *TarFile) error\n\n// Walk walks through the files in the tarball read from tarReader and\n// passes each of them to the given WalkFunc.\nfunc Walk(tarReader tar.Reader, walkFunc WalkFunc) error {\n\tfor {\n\t\thdr, err := tarReader.Next()\n\t\tif err == io.EOF {\n\t\t\t// end of tar archive\n\t\t\tbreak\n\t\t}\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"error reading tar entry: %v\", err)\n\t\t}\n\t\tif err := walkFunc(&TarFile{Header: hdr, TarStream: &tarReader}); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "lib/internal/types/docker_types.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport \"time\"\n\n// DockerImageData stores the JSON structure of a Docker image.\n// Taken and adapted from upstream Docker.\ntype DockerImageData struct {\n\tID              string             `json:\"id\"`\n\tParent          string             `json:\"parent,omitempty\"`\n\tComment         string             `json:\"comment,omitempty\"`\n\tCreated         time.Time          `json:\"created\"`\n\tContainer       string             `json:\"container,omitempty\"`\n\tContainerConfig DockerImageConfig  `json:\"container_config,omitempty\"`\n\tDockerVersion   string             `json:\"docker_version,omitempty\"`\n\tAuthor          string             `json:\"author,omitempty\"`\n\tConfig          *DockerImageConfig `json:\"config,omitempty\"`\n\tArchitecture    string             `json:\"architecture,omitempty\"`\n\tOS              string             `json:\"os,omitempty\"`\n\tChecksum        string             `json:\"checksum\"`\n}\n\n// Note: the Config structure should hold only portable information about the container.\n// Here, \"portable\" means \"independent from the host we are running on\".\n// Non-portable information *should* appear in HostConfig.\n// Taken and adapted from upstream Docker.\ntype DockerImageConfig struct {\n\tHostname        string\n\tDomainname      string\n\tUser            string\n\tMemory          
int64  // Memory limit (in bytes)\n\tMemorySwap      int64  // Total memory usage (memory + swap); set `-1' to disable swap\n\tCpuShares       int64  // CPU shares (relative weight vs. other containers)\n\tCpuset          string // Cpuset 0-2, 0,1\n\tAttachStdin     bool\n\tAttachStdout    bool\n\tAttachStderr    bool\n\tPortSpecs       []string // Deprecated - Can be in the format of 8080/tcp\n\tExposedPorts    map[string]struct{}\n\tTty             bool // Attach standard streams to a tty, including stdin if it is not closed.\n\tOpenStdin       bool // Open stdin\n\tStdinOnce       bool // If true, close stdin after the 1 attached client disconnects.\n\tEnv             []string\n\tCmd             []string\n\tImage           string // Name of the image as it was passed by the operator (eg. could be symbolic)\n\tVolumes         map[string]struct{}\n\tWorkingDir      string\n\tEntrypoint      []string\n\tNetworkDisabled bool\n\tMacAddress      string\n\tOnBuild         []string\n\tLabels          map[string]string\n}\n\n// DockerAuthConfigOld represents the deprecated ~/.dockercfg auth\n// configuration.\n// Taken from upstream Docker.\ntype DockerAuthConfigOld struct {\n\tUsername      string `json:\"username,omitempty\"`\n\tPassword      string `json:\"password,omitempty\"`\n\tAuth          string `json:\"auth\"`\n\tEmail         string `json:\"email\"`\n\tServerAddress string `json:\"serveraddress,omitempty\"`\n}\n\n// DockerAuthConfig represents a config.json auth entry.\n// Taken from upstream Docker.\ntype DockerAuthConfig struct {\n\tUsername      string `json:\"username,omitempty\"`\n\tPassword      string `json:\"password,omitempty\"`\n\tAuth          string `json:\"auth,omitempty\"`\n\tServerAddress string `json:\"serveraddress,omitempty\"`\n\tRegistryToken string `json:\"registrytoken,omitempty\"`\n}\n\n// DockerConfigFile represents a config.json auth file.\n// Taken from upstream docker.\ntype DockerConfigFile struct {\n\tAuthConfigs 
map[string]DockerAuthConfig `json:\"auths\"`\n}\n"
  },
  {
    "path": "lib/internal/typesV2/docker_types.go",
    "content": "// Copyright 2016 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage typesV2\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\n\t\"github.com/appc/docker2aci/lib/common\"\n)\n\nvar (\n\tErrIncorrectMediaType = errors.New(\"incorrect mediaType\")\n\tErrMissingConfig      = errors.New(\"the config field is empty\")\n\tErrMissingLayers      = errors.New(\"the layers field is empty\")\n)\n\ntype ImageManifest struct {\n\tSchemaVersion int                    `json:\"schemaVersion\"`\n\tMediaType     string                 `json:\"mediaType\"`\n\tConfig        *ImageManifestDigest   `json:\"config\"`\n\tLayers        []*ImageManifestDigest `json:\"layers\"`\n\tAnnotations   map[string]string      `json:\"annotations\"`\n}\n\ntype ImageManifestDigest struct {\n\tMediaType string `json:\"mediaType\"`\n\tSize      int    `json:\"size\"`\n\tDigest    string `json:\"digest\"`\n}\n\nfunc (im *ImageManifest) String() string {\n\tmanblob, err := json.Marshal(im)\n\tif err != nil {\n\t\treturn err.Error()\n\t}\n\treturn string(manblob)\n}\n\nfunc (im *ImageManifest) PrettyString() string {\n\tmanblob, err := json.MarshalIndent(im, \"\", \"    \")\n\tif err != nil {\n\t\treturn err.Error()\n\t}\n\treturn string(manblob)\n}\n\nfunc (im *ImageManifest) Validate() error {\n\tif im.MediaType != common.MediaTypeDockerV22Manifest && im.MediaType != common.MediaTypeOCIV1Manifest {\n\t\treturn ErrIncorrectMediaType\n\t}\n\tif 
im.Config == nil {\n\t\treturn ErrMissingConfig\n\t}\n\tif len(im.Layers) == 0 {\n\t\treturn ErrMissingLayers\n\t}\n\treturn nil\n}\n\ntype ImageConfig struct {\n\tCreated      string                `json:\"created\"`\n\tAuthor       string                `json:\"author\"`\n\tArchitecture string                `json:\"architecture\"`\n\tOS           string                `json:\"os\"`\n\tConfig       *ImageConfigConfig    `json:\"config\"`\n\tRootFS       *ImageConfigRootFS    `json:\"rootfs\"`\n\tHistory      []*ImageConfigHistory `json:\"history\"`\n}\n\ntype ImageConfigConfig struct {\n\tUser         string              `json:\"User\"`\n\tMemory       int                 `json:\"Memory\"`\n\tMemorySwap   int                 `json:\"MemorySwap\"`\n\tCpuShares    int                 `json:\"CpuShares\"`\n\tExposedPorts map[string]struct{} `json:\"ExposedPorts\"`\n\tEnv          []string            `json:\"Env\"`\n\tEntrypoint   []string            `json:\"Entrypoint\"`\n\tCmd          []string            `json:\"Cmd\"`\n\tVolumes      map[string]struct{} `json:\"Volumes\"`\n\tWorkingDir   string              `json:\"WorkingDir\"`\n}\n\ntype ImageConfigRootFS struct {\n\tDiffIDs []string `json:\"diff_ids\"`\n\tType    string   `json:\"type\"`\n}\n\ntype ImageConfigHistory struct {\n\tCreated    string `json:\"created,omitempty\"`\n\tAuthor     string `json:\"author,omitempty\"`\n\tCreatedBy  string `json:\"created_by,omitempty\"`\n\tComment    string `json:\"comment,omitempty\"`\n\tEmptyLayer bool   `json:\"empty_layer,omitempty\"`\n}\n\nfunc (ic *ImageConfig) String() string {\n\tmanblob, err := json.Marshal(ic)\n\tif err != nil {\n\t\treturn err.Error()\n\t}\n\treturn string(manblob)\n}\n\nfunc (ic *ImageConfig) PrettyString() string {\n\tmanblob, err := json.MarshalIndent(ic, \"\", \"    \")\n\tif err != nil {\n\t\treturn err.Error()\n\t}\n\treturn string(manblob)\n}\n"
  },
  {
    "path": "lib/internal/util/util.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package util defines convenience functions for handling slices and debugging.\n//\n// Note: this package is an implementation detail and shouldn't be used outside\n// of docker2aci.\npackage util\n\nimport (\n\t\"crypto/tls\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/http\"\n\t\"time\"\n\n\t\"github.com/appc/spec/pkg/acirenderer\"\n)\n\nvar (\n\tsecureClient   = newClient(false)\n\tinsecureClient = newClient(true)\n)\n\n// Quote takes a slice of strings and returns another slice with them quoted.\nfunc Quote(l []string) []string {\n\tvar quoted []string\n\n\tfor _, s := range l {\n\t\tquoted = append(quoted, fmt.Sprintf(\"%q\", s))\n\t}\n\n\treturn quoted\n}\n\n// ReverseImages takes an acirenderer.Images and reverses it.\nfunc ReverseImages(s acirenderer.Images) acirenderer.Images {\n\tvar o acirenderer.Images\n\tfor i := len(s) - 1; i >= 0; i-- {\n\t\to = append(o, s[i])\n\t}\n\n\treturn o\n}\n\n// In checks whether el is in list.\nfunc In(list []string, el string) bool {\n\treturn IndexOf(list, el) != -1\n}\n\n// IndexOf returns the index of el in list, or -1 if it's not found.\nfunc IndexOf(list []string, el string) int {\n\tfor i, x := range list {\n\t\tif el == x {\n\t\t\treturn i\n\t\t}\n\t}\n\treturn -1\n}\n\n// GetTLSClient gets an HTTP client that behaves like the default HTTP\n// client, but optionally skips the TLS certificate 
verification.\nfunc GetTLSClient(skipTLSCheck bool) *http.Client {\n\tif skipTLSCheck {\n\t\treturn insecureClient\n\t}\n\n\treturn secureClient\n}\n\nfunc newClient(skipTLSCheck bool) *http.Client {\n\tdialer := &net.Dialer{\n\t\tTimeout:   30 * time.Second,\n\t\tKeepAlive: 30 * time.Second,\n\t} // values taken from stdlib v1.5.3\n\n\ttr := &http.Transport{\n\t\tProxy:               http.ProxyFromEnvironment,\n\t\tDial:                dialer.Dial,\n\t\tTLSHandshakeTimeout: 10 * time.Second,\n\t} // values taken from stdlib v1.5.3\n\n\tif skipTLSCheck {\n\t\ttr.TLSClientConfig = &tls.Config{\n\t\t\tInsecureSkipVerify: true,\n\t\t}\n\t}\n\n\treturn &http.Client{\n\t\tTransport: tr,\n\t}\n}\n"
  },
  {
    "path": "lib/tests/common.go",
    "content": "package test\n\nimport (\n\t\"archive/tar\"\n\t\"bytes\"\n\t\"crypto/sha256\"\n\t\"encoding/hex\"\n\t\"encoding/json\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path\"\n\n\t\"github.com/appc/docker2aci/lib/common\"\n\t\"github.com/appc/docker2aci/lib/internal/typesV2\"\n)\n\ntype Layer map[*tar.Header][]byte\n\ntype Docker22Image struct {\n\tRepoTags []string\n\tLayers   []Layer\n\tConfig   typesV2.ImageConfig\n}\n\nfunc GenerateDocker22(destPath string, img Docker22Image) error {\n\tlayerHashes, err := GenLayers(destPath, img.Layers)\n\tif err != nil {\n\t\treturn err\n\t}\n\tconfigHash, err := GenDocker22Config(destPath, img.Config, layerHashes)\n\tif err != nil {\n\t\treturn err\n\t}\n\terr = GenDocker22Manifest(destPath, configHash, layerHashes)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\nfunc GenLayers(destPath string, layers []Layer) ([]string, error) {\n\tvar layerHashes []string\n\tfor _, l := range layers {\n\t\tlayerBuffer := &bytes.Buffer{}\n\t\ttw := tar.NewWriter(layerBuffer)\n\t\tfor hdr, contents := range l {\n\t\t\thdr.Size = int64(len(contents))\n\t\t\terr := tw.WriteHeader(hdr)\n\t\t\tif err != nil {\n\t\t\t\ttw.Close()\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\t_, err = tw.Write(contents)\n\t\t\tif err != nil {\n\t\t\t\ttw.Close()\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\t\ttw.Close()\n\t\tlayerTarBlob := layerBuffer.Bytes()\n\t\th := sha256.New()\n\t\th.Write(layerTarBlob)\n\t\thashStr := hex.EncodeToString(h.Sum(nil))\n\t\tlayerHashes = append(layerHashes, hashStr)\n\t\terr := ioutil.WriteFile(path.Join(destPath, hashStr), layerTarBlob, 0644)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn layerHashes, nil\n}\n\nfunc GenDocker22Config(destPath string, conf typesV2.ImageConfig, layerHashes []string) (string, error) {\n\tconf.RootFS = &typesV2.ImageConfigRootFS{}\n\tconf.RootFS.Type = \"layers\"\n\tfor _, h := range layerHashes {\n\t\tconf.RootFS.DiffIDs = append(conf.RootFS.DiffIDs, 
\"sha256:\"+h)\n\t}\n\tconfblob, err := json.Marshal(conf)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\th := sha256.New()\n\th.Write(confblob)\n\thashStr := hex.EncodeToString(h.Sum(nil))\n\terr = ioutil.WriteFile(path.Join(destPath, hashStr), confblob, 0644)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn hashStr, nil\n}\n\nfunc GenDocker22Manifest(destPath, configHash string, layerHashes []string) error {\n\tgetDigestSize := func(digest string) (int64, error) {\n\t\tfi, err := os.Stat(path.Join(destPath, digest))\n\t\tif err != nil {\n\t\t\treturn 0, err\n\t\t}\n\t\treturn fi.Size(), nil\n\t}\n\n\tconfigSize, err := getDigestSize(configHash)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tmanifest := &typesV2.ImageManifest{\n\t\tSchemaVersion: 2,\n\t\tMediaType:     common.MediaTypeDockerV22Manifest,\n\t\tConfig: &typesV2.ImageManifestDigest{\n\t\t\tMediaType: common.MediaTypeDockerV22Config,\n\t\t\tSize:      int(configSize),\n\t\t\tDigest:    \"sha256:\" + configHash,\n\t\t},\n\t}\n\tfor _, h := range layerHashes {\n\t\tlayerSize, err := getDigestSize(h)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmanifest.Layers = append(manifest.Layers,\n\t\t\t&typesV2.ImageManifestDigest{\n\t\t\t\tMediaType: common.MediaTypeDockerV22RootFS,\n\t\t\t\tSize:      int(layerSize),\n\t\t\t\tDigest:    \"sha256:\" + h,\n\t\t\t})\n\t}\n\n\tmanblob, err := json.Marshal(manifest)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\terr = ioutil.WriteFile(path.Join(destPath, \"manifest.json\"), manblob, 0644)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "lib/tests/server.go",
    "content": "package test\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"net/http/httptest\"\n\t\"os\"\n\t\"path\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc RunDockerRegistry(t *testing.T, imgPath, imgName, imgRef, manifestMediaType string) *httptest.Server {\n\thandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\t\tt.Logf(\"path requested: %s\", r.URL.Path)\n\t\tif r.URL.Path == \"/v2/\" {\n\t\t\tw.Header().Add(\"Docker-Distribution-API-Version\", \"registry/2.0\")\n\t\t\tw.WriteHeader(http.StatusOK)\n\t\t\treturn\n\t\t}\n\t\tif strings.Contains(r.URL.Path, \"manifests\") {\n\t\t\tGetManifest(t, w, r, imgPath, imgName, imgRef, manifestMediaType)\n\t\t\treturn\n\t\t}\n\t\tif strings.Contains(r.URL.Path, \"blobs\") {\n\t\t\tGetBlob(t, w, r, imgPath, imgName, imgRef)\n\t\t\treturn\n\t\t}\n\t\tt.Errorf(\"invalid path: %s\", r.URL.Path)\n\t})\n\tserver := httptest.NewServer(handler)\n\treturn server\n}\n\nfunc GetManifest(t *testing.T, w http.ResponseWriter, r *http.Request, imgPath, imgName, imgRef, manifestMediaType string) {\n\tparsedImgName, parsedRef, err := parseURL(\"manifests\", r.URL.Path)\n\tif err != nil {\n\t\tw.WriteHeader(http.StatusNotFound)\n\t\tt.Errorf(\"get manifest: error parsing path: %v\", err)\n\t\treturn\n\t}\n\tif parsedImgName != imgName {\n\t\tw.WriteHeader(http.StatusNotFound)\n\t\tt.Errorf(\"get manifest: invalid image name requested: %q\", parsedImgName)\n\t\treturn\n\t}\n\tif parsedRef != imgRef {\n\t\tw.WriteHeader(http.StatusNotFound)\n\t\tt.Errorf(\"get manifest: invalid image ref requested: %q\", parsedRef)\n\t\treturn\n\t}\n\tmanFile, err := os.Open(path.Join(imgPath, \"manifest.json\"))\n\tif err != nil {\n\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\tt.Errorf(\"get manifest: couldn't open manifest: %v\", err)\n\t\treturn\n\t}\n\tdefer manFile.Close()\n\tw.Header().Add(\"content-type\", manifestMediaType)\n\t_, err = io.Copy(w, manFile)\n\tif err != nil 
{\n\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\tt.Errorf(\"get manifest: couldn't copy manifest: %v\", err)\n\t\treturn\n\t}\n}\n\nfunc GetBlob(t *testing.T, w http.ResponseWriter, r *http.Request, imgPath, imgName, imgRef string) {\n\tparsedImgName, digest, err := parseURL(\"blobs\", r.URL.Path)\n\tif err != nil {\n\t\tw.WriteHeader(http.StatusNotFound)\n\t\tt.Errorf(\"get blob: %v\", err)\n\t\treturn\n\t}\n\tdigest = strings.TrimPrefix(digest, \"sha256:\")\n\tif parsedImgName != imgName {\n\t\tw.WriteHeader(http.StatusNotFound)\n\t\tt.Errorf(\"get blob: invalid image name requested: %s\", parsedImgName)\n\t\treturn\n\t}\n\tblobFile, err := os.Open(path.Join(imgPath, digest))\n\tif err != nil {\n\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\tt.Errorf(\"get blob: couldn't open blob: %v\", err)\n\t\treturn\n\t}\n\tdefer blobFile.Close()\n\t_, err = io.Copy(w, blobFile)\n\tif err != nil {\n\t\tw.WriteHeader(http.StatusInternalServerError)\n\t\tt.Errorf(\"get blob: couldn't copy blob: %v\", err)\n\t\treturn\n\t}\n}\n\nfunc parseURL(resource, input string) (string, string, error) {\n\ttokens := strings.Split(input, \"/\")\n\ttokLen := len(tokens)\n\tif tokLen < 5 {\n\t\treturn \"\", \"\", fmt.Errorf(\"invalid number of tokens in path: %d\", len(tokens))\n\t}\n\tif tokens[0] != \"\" {\n\t\treturn \"\", \"\", fmt.Errorf(\"path parse error: tok0 = %s\", tokens[0])\n\t}\n\tif tokens[1] != \"v2\" {\n\t\treturn \"\", \"\", fmt.Errorf(\"path parse error: tok1 = %s\", tokens[1])\n\t}\n\tif tokens[tokLen-2] != resource {\n\t\treturn \"\", \"\", fmt.Errorf(\"path parse error: tok-2 = %s\", tokens[tokLen-2])\n\t}\n\treturn path.Join(tokens[2 : tokLen-2]...), tokens[tokLen-1], nil\n}\n"
  },
  {
    "path": "lib/tests/v22_test.go",
    "content": "package test\n\nimport (\n\t\"testing\"\n\n\t\"archive/tar\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path\"\n\t\"reflect\"\n\t\"strings\"\n\t\"time\"\n\n\tdocker2aci \"github.com/appc/docker2aci/lib\"\n\td2acommon \"github.com/appc/docker2aci/lib/common\"\n\t\"github.com/appc/docker2aci/lib/internal/typesV2\"\n\t\"github.com/appc/spec/aci\"\n\t\"github.com/appc/spec/schema\"\n\t\"github.com/appc/spec/schema/types\"\n)\n\nconst variableTestValue = \"variant\"\n\n// osArchTuple is a placeholder for operating system name and respective\n// supported architecture.\ntype osArchTuple struct {\n\tOs   string\n\tArch string\n}\n\n// osArchTuples defines the list of Go os/arch pairs used to test the\n// conversion of Docker images to ACIs.\nvar osArchTuples = []osArchTuple{\n\t{\"linux\", \"amd64\"},\n\t{\"linux\", \"386\"},\n\t{\"linux\", \"arm64\"},\n\t{\"linux\", \"arm\"},\n\t{\"linux\", \"ppc64\"},\n\t{\"linux\", \"ppc64le\"},\n\t{\"linux\", \"s390x\"},\n\n\t{\"freebsd\", \"amd64\"},\n\t{\"freebsd\", \"386\"},\n\t{\"freebsd\", \"arm\"},\n\n\t{\"darwin\", \"amd64\"},\n\t{\"darwin\", \"386\"},\n}\n\n// dockerImageConfig defines the common image configuration.\nvar dockerImageConfig = typesV2.ImageConfigConfig{\n\tUser:       \"\",\n\tMemory:     12345,\n\tMemorySwap: 0,\n\tCpuShares:  9001,\n\tExposedPorts: map[string]struct{}{\n\t\t\"80\": struct{}{},\n\t},\n\tEnv: []string{\n\t\t\"FOO=1\",\n\t},\n\tEntrypoint: []string{\n\t\t\"/bin/sh\",\n\t\t\"-c\",\n\t\t\"echo\",\n\t},\n\tCmd: []string{\n\t\t\"foo\",\n\t},\n\tVolumes:    nil,\n\tWorkingDir: \"/\",\n}\n\n// testDocker22Images generates the Docker images v22 for all supported\n// os/arch pairs and calls the passed testing function.\nfunc testDocker22Images(layers []Layer, fn func(Docker22Image)) {\n\tfor _, tuple := range osArchTuples {\n\t\tconfig := typesV2.ImageConfig{\n\t\t\tCreated:      \"2016-06-02T21:43:31.291506236Z\",\n\t\t\tAuthor:       \"rkt developer 
<rkt-dev@googlegroups.com>\",\n\t\t\tArchitecture: tuple.Arch,\n\t\t\tOS:           tuple.Os,\n\t\t\tConfig:       &dockerImageConfig,\n\t\t}\n\n\t\t// Create a new Docker image configuration and pass it to\n\t\t// the testing function.\n\t\tfn(Docker22Image{\n\t\t\tRepoTags: []string{\"testimage:latest\"},\n\t\t\tLayers:   layers,\n\t\t\tConfig:   config,\n\t\t})\n\t}\n}\n\nfunc expectedManifest(registryUrl, imageName, imageOs, imageArch string) schema.ImageManifest {\n\treturn schema.ImageManifest{\n\t\tACKind:    types.ACKind(\"ImageManifest\"),\n\t\tACVersion: schema.AppContainerVersion,\n\t\tName:      *types.MustACIdentifier(\"variant\"),\n\t\tLabels: []types.Label{\n\t\t\ttypes.Label{\n\t\t\t\tName:  *types.MustACIdentifier(\"arch\"),\n\t\t\t\tValue: imageArch,\n\t\t\t},\n\t\t\ttypes.Label{\n\t\t\t\tName:  *types.MustACIdentifier(\"os\"),\n\t\t\t\tValue: imageOs,\n\t\t\t},\n\t\t\ttypes.Label{\n\t\t\t\tName:  *types.MustACIdentifier(\"version\"),\n\t\t\t\tValue: \"v0.1.0\",\n\t\t\t},\n\t\t},\n\t\tApp: &types.App{\n\t\t\tExec: []string{\n\t\t\t\t\"/bin/sh\",\n\t\t\t\t\"-c\",\n\t\t\t\t\"echo\",\n\t\t\t\t\"foo\",\n\t\t\t},\n\t\t\tUser:  \"0\",\n\t\t\tGroup: \"0\",\n\t\t\tEnvironment: []types.EnvironmentVariable{\n\t\t\t\t{\n\t\t\t\t\tName:  \"FOO\",\n\t\t\t\t\tValue: \"1\",\n\t\t\t\t},\n\t\t\t},\n\t\t\tWorkingDirectory: \"/\",\n\t\t\tPorts: []types.Port{\n\t\t\t\t{\n\t\t\t\t\tName:            \"80\",\n\t\t\t\t\tProtocol:        \"tcp\",\n\t\t\t\t\tPort:            80,\n\t\t\t\t\tCount:           1,\n\t\t\t\t\tSocketActivated: false,\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\tAnnotations: []types.Annotation{\n\t\t\t{\n\t\t\t\tName:  *types.MustACIdentifier(\"author\"),\n\t\t\t\tValue: \"rkt developer <rkt-dev@googlegroups.com>\",\n\t\t\t},\n\t\t\t{\n\t\t\t\tName:  *types.MustACIdentifier(\"created\"),\n\t\t\t\tValue: \"2016-06-02T21:43:31.291506236Z\",\n\t\t\t},\n\t\t\t{\n\t\t\t\tName:  *types.MustACIdentifier(\"appc.io/docker/registryurl\"),\n\t\t\t\tValue: 
registryUrl,\n\t\t\t},\n\t\t\t{\n\t\t\t\tName:  *types.MustACIdentifier(\"appc.io/docker/repository\"),\n\t\t\t\tValue: \"docker2aci/dockerv22test\",\n\t\t\t},\n\t\t\t{\n\t\t\t\tName:  *types.MustACIdentifier(\"appc.io/docker/imageid\"),\n\t\t\t\tValue: variableTestValue,\n\t\t\t\t// Different each testrun for unknown reasons\n\t\t\t},\n\t\t\t{\n\t\t\t\tName:  *types.MustACIdentifier(\"appc.io/docker/manifesthash\"),\n\t\t\t\tValue: variableTestValue,\n\t\t\t},\n\t\t\t{\n\t\t\t\tName:  *types.MustACIdentifier(\"appc.io/docker/originalname\"),\n\t\t\t\tValue: imageName,\n\t\t\t},\n\t\t\t{\n\t\t\t\tName:  *types.MustACIdentifier(\"appc.io/docker/entrypoint\"),\n\t\t\t\tValue: \"[\\\"/bin/sh\\\",\\\"-c\\\",\\\"echo\\\"]\",\n\t\t\t},\n\t\t\t{\n\t\t\t\tName:  *types.MustACIdentifier(\"appc.io/docker/cmd\"),\n\t\t\t\tValue: \"[\\\"foo\\\"]\",\n\t\t\t},\n\t\t},\n\t}\n}\n\nfunc fetchImage(imgName, outputDir string, squash bool) ([]string, error) {\n\tconversionTmpDir, err := ioutil.TempDir(\"\", \"docker2aci-test-\")\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer os.RemoveAll(conversionTmpDir)\n\n\tconf := docker2aci.RemoteConfig{\n\t\tCommonConfig: docker2aci.CommonConfig{\n\t\t\tSquash:      squash,\n\t\t\tOutputDir:   outputDir,\n\t\t\tTmpDir:      conversionTmpDir,\n\t\t\tCompression: d2acommon.GzipCompression,\n\t\t},\n\t\tUsername: \"\",\n\t\tPassword: \"\",\n\t\tInsecure: d2acommon.InsecureConfig{\n\t\t\tSkipVerify: true,\n\t\t\tAllowHTTP:  true,\n\t\t},\n\t}\n\n\treturn docker2aci.ConvertRemoteRepo(imgName, conf)\n}\n\nfunc TestFetchingByTagV22(t *testing.T) {\n\tlayers := []Layer{\n\t\tLayer{\n\t\t\t&tar.Header{\n\t\t\t\tName:    \"thisisafile\",\n\t\t\t\tMode:    0644,\n\t\t\t\tModTime: time.Now(),\n\t\t\t}: []byte(\"these are its contents\"),\n\t\t},\n\t}\n\n\ttestDocker22Images(layers, func(img Docker22Image) {\n\t\ttmpDir, err := ioutil.TempDir(\"\", \"docker2aci-test-\")\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t\tdefer 
os.RemoveAll(tmpDir)\n\n\t\terr = GenerateDocker22(tmpDir, img)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t\timgName := \"docker2aci/dockerv22test\"\n\t\timgRef := \"v0.1.0\"\n\t\tserver := RunDockerRegistry(t, tmpDir, imgName, imgRef, d2acommon.MediaTypeDockerV22Manifest)\n\t\tdefer server.Close()\n\n\t\tbareServerURL := strings.TrimPrefix(server.URL, \"http://\")\n\t\tlocalUrl := path.Join(bareServerURL, imgName) + \":\" + imgRef\n\n\t\t// Convert the Docker image os/arch pair into values compatible\n\t\t// with application container image specification.\n\t\timgOs, imgArch := img.Config.OS, img.Config.Architecture\n\t\timgOs, imgArch, err = types.ToAppcOSArch(imgOs, imgArch, \"\")\n\t\tif err != nil {\n\t\t\tt.Errorf(\"unexpected error: %v\", err)\n\t\t}\n\n\t\texpectedImageManifest := expectedManifest(bareServerURL, localUrl, imgOs, imgArch)\n\n\t\toutputDir, err := ioutil.TempDir(\"\", \"docker2aci-test-\")\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t\tdefer os.RemoveAll(outputDir)\n\n\t\tacis, err := fetchImage(localUrl, outputDir, true)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\n\t\tconverted := acis[0]\n\n\t\tf, err := os.Open(converted)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t\tdefer f.Close()\n\n\t\tmanifest, err := aci.ManifestFromImage(f)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\n\t\tif err := manifestEqual(manifest, &expectedImageManifest); err != nil {\n\t\t\tt.Errorf(\"manifest doesn't match expected manifest: %v\", err)\n\t\t}\n\t})\n}\n\nfunc manifestEqual(manifest, expected *schema.ImageManifest) error {\n\tif manifest.ACKind != expected.ACKind {\n\t\treturn fmt.Errorf(\"expected ACKind %q, got %q\", expected.ACKind, manifest.ACKind)\n\t}\n\n\tif manifest.ACVersion != expected.ACVersion {\n\t\treturn fmt.Errorf(\"expected ACVersion %q, got %q\", expected.ACVersion, manifest.ACVersion)\n\t}\n\n\tif !reflect.DeepEqual(*manifest.App, *expected.App) {\n\t\treturn 
fmt.Errorf(\"expected App %v, got %v\", *expected.App, *manifest.App)\n\t}\n\n\tif len(manifest.Labels) != len(expected.Labels) {\n\t\treturn fmt.Errorf(\"Labels not equal: %v != %v\", manifest.Labels, expected.Labels)\n\t}\n\n\tfor _, label := range manifest.Labels {\n\t\tel, ok := expected.Labels.Get(label.Name.String())\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"expected label %v to exist, did not\", label.Name)\n\t\t}\n\t\tif label.Value != el {\n\t\t\treturn fmt.Errorf(\"expected label %v values to match, but %v != %v\", label.Name, el, label.Value)\n\t\t}\n\t}\n\n\tif len(manifest.Annotations) != len(expected.Annotations) {\n\t\treturn fmt.Errorf(\"annotations not equal: %v != %v\", manifest.Annotations, expected.Annotations)\n\t}\n\tfor _, ann := range manifest.Annotations {\n\t\tea, ok := expected.Annotations.Get(ann.Name.String())\n\t\tif ea == variableTestValue {\n\t\t\t// marker to let us know we don't have to assert on this value; skip it\n\t\t\tcontinue\n\t\t}\n\t\tif !ok {\n\t\t\treturn fmt.Errorf(\"expected annotation %v to exist, did not\", ann.Name)\n\t\t}\n\t\tif ea != ann.Value {\n\t\t\treturn fmt.Errorf(\"expected annotation %v values to match, but %v != %v\", ann.Name, ea, ann.Value)\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc TestFetchingByDigestV22(t *testing.T) {\n\tlayers := []Layer{\n\t\tLayer{\n\t\t\t&tar.Header{\n\t\t\t\tName:    \"thisisafile\",\n\t\t\t\tMode:    0644,\n\t\t\t\tModTime: time.Now(),\n\t\t\t}: []byte(\"these are its contents\"),\n\t\t},\n\t}\n\n\ttestDocker22Images(layers, func(img Docker22Image) {\n\t\ttmpDir, err := ioutil.TempDir(\"\", \"docker2aci-test-\")\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t\tdefer os.RemoveAll(tmpDir)\n\n\t\terr = GenerateDocker22(tmpDir, img)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t\timgName := \"docker2aci/dockerv22test\"\n\t\timgRef := \"sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2\"\n\t\tserver := RunDockerRegistry(t, tmpDir, 
imgName, imgRef, d2acommon.MediaTypeDockerV22Manifest)\n\t\tdefer server.Close()\n\n\t\tlocalUrl := path.Join(strings.TrimPrefix(server.URL, \"http://\"), imgName) + \"@\" + imgRef\n\n\t\toutputDir, err := ioutil.TempDir(\"\", \"docker2aci-test-\")\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t\tdefer os.RemoveAll(outputDir)\n\n\t\t_, err = fetchImage(localUrl, outputDir, true)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t})\n}\n\nfunc TestFetchingMultipleLayersV22(t *testing.T) {\n\tlayers := []Layer{\n\t\tLayer{\n\t\t\t&tar.Header{\n\t\t\t\tName:    \"thisisafile\",\n\t\t\t\tMode:    0644,\n\t\t\t\tModTime: time.Now(),\n\t\t\t}: []byte(\"these are its contents\"),\n\t\t},\n\t\tLayer{\n\t\t\t&tar.Header{\n\t\t\t\tName:    \"thisisadifferentfile\",\n\t\t\t\tMode:    0644,\n\t\t\t\tModTime: time.Now(),\n\t\t\t}: []byte(\"the contents of this file are different from the last!\"),\n\t\t},\n\t}\n\n\ttestDocker22Images(layers, func(img Docker22Image) {\n\t\ttmpDir, err := ioutil.TempDir(\"\", \"docker2aci-test-\")\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t\tdefer os.RemoveAll(tmpDir)\n\n\t\terr = GenerateDocker22(tmpDir, img)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t\timgName := \"docker2aci/dockerv22test\"\n\t\timgRef := \"v0.1.0\"\n\t\tserver := RunDockerRegistry(t, tmpDir, imgName, imgRef, d2acommon.MediaTypeDockerV22Manifest)\n\t\tdefer server.Close()\n\n\t\tlocalUrl := path.Join(strings.TrimPrefix(server.URL, \"http://\"), imgName) + \":\" + imgRef\n\n\t\toutputDir, err := ioutil.TempDir(\"\", \"docker2aci-test-\")\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t\tdefer os.RemoveAll(outputDir)\n\n\t\t_, err = fetchImage(localUrl, outputDir, true)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"%v\", err)\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "lib/version.go",
    "content": "// Copyright 2016 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage docker2aci\n\nimport \"github.com/appc/spec/schema\"\n\nvar Version = \"0.17.2+git\"\nvar AppcVersion = schema.AppContainerVersion\n"
  },
  {
    "path": "main.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage main\n\nimport (\n\t\"flag\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"os\"\n\t\"strings\"\n\n\t\"github.com/appc/docker2aci/lib\"\n\t\"github.com/appc/docker2aci/lib/common\"\n\t\"github.com/appc/docker2aci/pkg/log\"\n\n\t\"github.com/appc/spec/aci\"\n\t\"github.com/appc/spec/schema\"\n)\n\nvar (\n\tflagNoSquash           bool\n\tflagImage              string\n\tflagDebug              bool\n\tflagInsecureSkipVerify bool\n\tflagInsecureAllowHTTP  bool\n\tflagCompression        string\n\tflagVersion            bool\n)\n\nfunc init() {\n\tflag.BoolVar(&flagNoSquash, \"nosquash\", false, \"Don't squash layers and output every layer as ACI\")\n\tflag.StringVar(&flagImage, \"image\", \"\", \"When converting a local file, it selects a particular image to convert. 
Format: IMAGE_NAME[:TAG]\")\n\tflag.BoolVar(&flagDebug, \"debug\", false, \"Enables debug messages\")\n\tflag.BoolVar(&flagInsecureSkipVerify, \"insecure-skip-verify\", false, \"Don't verify certificates when fetching images\")\n\tflag.BoolVar(&flagInsecureAllowHTTP, \"insecure-allow-http\", false, \"Uses unencrypted connections when fetching images\")\n\tflag.StringVar(&flagCompression, \"compression\", \"gzip\", \"Type of compression to use; allowed values: gzip, none\")\n\tflag.BoolVar(&flagVersion, \"version\", false, \"Print version\")\n}\n\nfunc printVersion() {\n\tfmt.Println(\"docker2aci version\", docker2aci.Version)\n\tfmt.Println(\"appc version\", docker2aci.AppcVersion)\n}\n\nfunc runDocker2ACI(arg string) error {\n\tdebug := log.NewNopLogger()\n\tinfo := log.NewStdLogger(os.Stderr)\n\n\tif flagDebug {\n\t\tdebug = log.NewStdLogger(os.Stderr)\n\t}\n\n\tsquash := !flagNoSquash\n\n\tvar aciLayerPaths []string\n\t// try to convert a local file\n\tu, err := url.Parse(arg)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"error parsing argument: %v\", err)\n\t}\n\n\tvar compression common.Compression\n\n\tswitch flagCompression {\n\tcase \"none\":\n\t\tcompression = common.NoCompression\n\tcase \"gzip\":\n\t\tcompression = common.GzipCompression\n\tdefault:\n\t\treturn fmt.Errorf(\"unknown compression method: %s\", flagCompression)\n\t}\n\n\tcfg := docker2aci.CommonConfig{\n\t\tSquash:      squash,\n\t\tOutputDir:   \".\",\n\t\tTmpDir:      os.TempDir(),\n\t\tCompression: compression,\n\t\tDebug:       debug,\n\t\tInfo:        info,\n\t}\n\tif u.Scheme == \"docker\" {\n\t\tif flagImage != \"\" {\n\t\t\treturn fmt.Errorf(\"flag --image works only with files.\")\n\t\t}\n\t\tdockerURL := strings.TrimPrefix(arg, \"docker://\")\n\n\t\tindexServer := docker2aci.GetIndexName(dockerURL)\n\n\t\tvar username, password string\n\t\tusername, password, err = docker2aci.GetDockercfgAuth(indexServer)\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"error reading .dockercfg file: 
%v\", err)\n\t\t}\n\t\tremoteConfig := docker2aci.RemoteConfig{\n\t\t\tCommonConfig: cfg,\n\t\t\tUsername:     username,\n\t\t\tPassword:     password,\n\t\t\tInsecure: common.InsecureConfig{\n\t\t\t\tSkipVerify: flagInsecureSkipVerify,\n\t\t\t\tAllowHTTP:  flagInsecureAllowHTTP,\n\t\t\t},\n\t\t}\n\n\t\taciLayerPaths, err = docker2aci.ConvertRemoteRepo(dockerURL, remoteConfig)\n\t} else {\n\t\tfileConfig := docker2aci.FileConfig{\n\t\t\tCommonConfig: cfg,\n\t\t\tDockerURL:    flagImage,\n\t\t}\n\t\taciLayerPaths, err = docker2aci.ConvertSavedFile(arg, fileConfig)\n\t\tif serr, ok := err.(*common.ErrSeveralImages); ok {\n\t\t\terr = fmt.Errorf(\"%s, use option --image with one of:\\n\\n%s\", serr, strings.Join(serr.Images, \"\\n\"))\n\t\t}\n\t}\n\tif err != nil {\n\t\treturn fmt.Errorf(\"conversion error: %v\", err)\n\t}\n\n\t// we get last layer's manifest, this will include all the elements in the\n\t// previous layers. If we're squashing, the last element of aciLayerPaths\n\t// will be the squashed image.\n\tmanifest, err := getManifest(aciLayerPaths[len(aciLayerPaths)-1])\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tprintConvertedVolumes(*manifest)\n\tprintConvertedPorts(*manifest)\n\n\tfmt.Printf(\"\\nGenerated ACI(s):\\n\")\n\tfor _, aciFile := range aciLayerPaths {\n\t\tfmt.Println(aciFile)\n\t}\n\n\treturn nil\n}\n\nfunc printConvertedVolumes(manifest schema.ImageManifest) {\n\tif manifest.App == nil {\n\t\treturn\n\t}\n\tif mps := manifest.App.MountPoints; len(mps) > 0 {\n\t\tfmt.Printf(\"\\nConverted volumes:\\n\")\n\t\tfor _, mp := range mps {\n\t\t\tfmt.Printf(\"\\tname: %q, path: %q, readOnly: %v\\n\", mp.Name, mp.Path, mp.ReadOnly)\n\t\t}\n\t}\n}\n\nfunc printConvertedPorts(manifest schema.ImageManifest) {\n\tif manifest.App == nil {\n\t\treturn\n\t}\n\tif ports := manifest.App.Ports; len(ports) > 0 {\n\t\tfmt.Printf(\"\\nConverted ports:\\n\")\n\t\tfor _, port := range ports {\n\t\t\tfmt.Printf(\"\\tname: %q, protocol: %q, port: %v, count: %v, 
socketActivated: %v\\n\",\n\t\t\t\tport.Name, port.Protocol, port.Port, port.Count, port.SocketActivated)\n\t\t}\n\t}\n}\n\nfunc getManifest(aciPath string) (*schema.ImageManifest, error) {\n\tf, err := os.Open(aciPath)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error opening converted image: %v\", err)\n\t}\n\tdefer f.Close()\n\n\tmanifest, err := aci.ManifestFromImage(f)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error reading manifest from converted image: %v\", err)\n\t}\n\n\treturn manifest, nil\n}\n\nfunc usage() {\n\tfmt.Fprintf(os.Stderr, \"Usage of %s:\\n\", os.Args[0])\n\tfmt.Fprintf(os.Stderr, \"docker2aci [-debug] [-nosquash] [-compression=(gzip|none)] IMAGE\\n\")\n\tfmt.Fprintf(os.Stderr, \"  Where IMAGE is\\n\")\n\tfmt.Fprintf(os.Stderr, \"    [-image=IMAGE_NAME[:TAG]] FILEPATH\\n\")\n\tfmt.Fprintf(os.Stderr, \"  or\\n\")\n\tfmt.Fprintf(os.Stderr, \"    docker://[REGISTRYURL/]IMAGE_NAME[:TAG]\\n\")\n\tfmt.Fprintf(os.Stderr, \"Flags:\\n\")\n\tflag.PrintDefaults()\n}\n\nfunc main() {\n\tflag.Usage = usage\n\tflag.Parse()\n\targs := flag.Args()\n\n\tif flagVersion {\n\t\tprintVersion()\n\t\treturn\n\t}\n\n\tif len(args) != 1 {\n\t\tusage()\n\t\tos.Exit(2)\n\t}\n\n\tif err := runDocker2ACI(args[0]); err != nil {\n\t\tfmt.Fprintf(os.Stderr, \"Error: %v\\n\", err)\n\t\tos.Exit(1)\n\t}\n}\n"
  },
  {
    "path": "pkg/log/log.go",
    "content": "// Copyright 2016 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage log\n\nimport (\n\t\"io\"\n\tstdlog \"log\"\n)\n\n// Logger is the interface that enables logging.\n// It is compatible with the stdlib \"log\" methods.\n// It is also compatible with https://godoc.org/github.com/Sirupsen/logrus#StdLogger.\ntype Logger interface {\n\tPrint(...interface{})\n\tPrintf(string, ...interface{})\n\tPrintln(...interface{})\n}\n\nfunc NewStdLogger(out io.Writer) Logger {\n\treturn stdlog.New(out, \"\", 0)\n}\n\ntype nopLogger struct{}\n\nfunc NewNopLogger() Logger {\n\treturn &nopLogger{}\n}\n\nfunc (l *nopLogger) Print(...interface{}) {\n\t// nop\n}\n\nfunc (l *nopLogger) Printf(string, ...interface{}) {\n\t// nop\n}\n\nfunc (l *nopLogger) Println(...interface{}) {\n\t// nop\n}\n"
  },
  {
    "path": "scripts/bump-release",
    "content": "#!/bin/bash -e\n#\n# Attempt to bump the docker2aci release to the specified version by replacing\n# all occurrences of the current/previous version.\n#\n# Generates two commits: the release itself and the bump to the next +git\n# version\n#\n# YMMV, no disclaimer or warranty, etc.\n\n# make sure we are running in a toplevel directory\nif ! [[ \"$0\" =~ \"scripts/bump-release\" ]]; then\n\techo \"This script must be run in a toplevel docker2aci directory\"\n\texit 255\nfi\n\nif ! [[ \"$1\" =~ ^v[[:digit:]]+\\.[[:digit:]]+\\.[[:digit:]]+$ ]]; then\n\techo \"Usage: scripts/bump-release <VERSION>\"\n\techo \"   where VERSION must be vX.Y.Z\"\n\texit 255\nfi\n\nfunction replace_stuff() {\n\tlocal FROM\n\tlocal TO\n\tlocal REPLACE\n\n\tFROM=$1\n\tTO=$2\n\t# escape special characters\n\tREPLACE=$(sed -e 's/[]\\/$*.^|[]/\\\\&/g'<<< $FROM)\n\tshift 2\n\techo $* | xargs sed -i --follow-symlinks -e \"s/$REPLACE/$TO/g\"\n}\n\nfunction replace_version() {\n\treplace_stuff $1 $2 lib/version.go\n}\n\nNEXT=${1:1} \t        # 0.2.3\nNEXTGIT=\"${NEXT}+git\"   # 0.2.3+git\n\nPREVGIT=$(grep -Po 'var Version = \"\\K[^\"]*(?=\")' lib/version.go) # 0.1.2+git\nPREV=${PREVGIT::-4}     # 0.1.2\n\nreplace_version $PREVGIT $NEXT\ngit commit -am \"version: bump to v${NEXT}\"\n\nreplace_version $NEXT $NEXTGIT\ngit commit -am \"version: bump to v${NEXTGIT}\"\n"
  },
  {
    "path": "scripts/glide-update",
    "content": "#!/usr/bin/env bash\nset -e\n\nif ! [[ \"$0\" =~ \"scripts/glide-update\"  ]]; then\n  echo \"must be run from repository root\"\n  exit 255\nfi\n\nif [ ! $(command -v glide)  ]; then\n  echo \"glide: command not found\"\n  exit 255\nfi\n\nif [ ! $(command -v glide-vc)  ]; then\n  echo \"glide-vc: command not found\"\n  exit 255\nfi\n\nglide update --strip-vendor\nglide-vc --only-code --no-tests --no-test-imports --use-lock-file\n"
  },
  {
    "path": "tests/README.md",
    "content": "# docker2aci tests\n\n## Semaphore\n\nThe tests run on the [Semaphore](https://semaphoreci.com/) CI system.\n\nThe tests are executed on Semaphore at each Pull Request (PR).\nEach GitHub PR page should have a link to the [test results on Semaphore](https://semaphoreci.com/appc/docker2aci).\n\n### Build settings\n\nThe tests will run on two VMs.\nThe \"Setup\" and \"Post thread\" sections will be executed on both VMs.\nThe \"Thread 1\" and \"Thread 2\" will be executed in parallel in separate VMs.\n\n#### Setup\n\n```\n./build.sh\n```\n\n#### Thread 1\n\n```\n./tests/test.sh\n```\n\n### Platform\n\nSelect `Ubuntu 14.04 LTS v1503 (beta with Docker support)`.\nThe platform with *Docker support* means the tests will run in a VM.\n\n"
  },
  {
    "path": "tests/fixture-test-depsloop/check.sh",
    "content": "#!/bin/sh\n\nDOCKER2ACI=../bin/docker2aci\nTESTDIR=$1\nTESTNAME=$2\n\ntimeout 10s ${DOCKER2ACI} \"${TESTDIR}/${TESTNAME}/${TESTNAME}.docker\"\nif [ $? -eq 1 ]; then\n\techo \"### Test case ${TESTNAME}: SUCCESS\"\n\texit 0\nelse\n\techo \"### Test case ${TESTNAME}: FAIL\"\n\texit 1\nfi\n\n"
  },
  {
    "path": "tests/fixture-test-invalidlayerid/check.sh",
    "content": "#!/bin/sh\n\nDOCKER2ACI=../bin/docker2aci\nTESTDIR=$1\nTESTNAME=$2\n\nsudo ${DOCKER2ACI} \"${TESTDIR}/${TESTNAME}/${TESTNAME}.docker\"\nif [ $? -eq 1 ]; then\n\techo \"### Test case ${TESTNAME}: SUCCESS\"\n\texit 0\nelse\n\techo \"### Test case ${TESTNAME}: FAIL\"\n\texit 1\nfi\n\n"
  },
  {
    "path": "tests/rkt-v1.1.0.md5sum",
    "content": "MD5 (rkt-v1.1.0.tar.gz) = d3d9d62429e53d8f631dbec93e4e719f\n"
  },
  {
    "path": "tests/test-basic/Dockerfile",
    "content": "FROM busybox\nCOPY check.sh /\nRUN echo file1 > file1 ; ln file1 file2\nRUN echo file3 > file3\nRUN echo file4 > file4\nCMD /check.sh 2>&1\n"
  },
  {
    "path": "tests/test-basic/check.sh",
    "content": "#!/bin/sh\nset -e\nset -x\n\ngrep -q file1 file1\ngrep -q file1 file2\ngrep -q file3 file3\ngrep -q file4 file4\nif [ \"$CHECK\" != \"rkt-rendered\" ] ; then\n\t# Skip this test because of:\n\t# https://github.com/coreos/rkt/issues/1774\n\ttest $(ls -i file1 |awk '{print $1}') -eq $(ls -i file2 |awk '{print $1}')\n\ttest $(ls -i file3 |awk '{print $1}') -ne $(ls -i file4 |awk '{print $1}')\nfi\necho \"SUCCESS\"\n"
  },
  {
    "path": "tests/test-pwl/Dockerfile",
    "content": "FROM gcr.io/google_containers/nginx:1.7.9\nCOPY check.sh /\nENTRYPOINT /check.sh 2>&1\n"
  },
  {
    "path": "tests/test-pwl/check.sh",
    "content": "#!/bin/sh\nset -e\nset -x\n\nls -l /var/run\necho \"SUCCESS\"\n"
  },
  {
    "path": "tests/test-whiteouts/Dockerfile",
    "content": "FROM busybox\nCOPY check.sh /\n\nRUN echo yes > layer0-file1 ; ln layer0-file1 layer0-file2 ; ln layer0-file1 layer0-file3\nRUN echo yes > layer1-file1 ; ln layer1-file1 layer1-file2 ; ln layer1-file1 layer1-file3\nRUN echo yes > layer2-file1 ; ln layer2-file1 layer2-file2 ; ln layer2-file1 layer2-file3\nRUN echo yes > layer3-file1 ; ln layer3-file1 layer3-file2 ; ln layer3-file1 layer3-file3\nRUN rm -f layer1-file1 layer2-file2 layer3-file3\n\nRUN echo yes > layer4-file1 ; ln layer4-file1 layer4-file2 ; ln layer4-file1 layer4-file3\nRUN echo yes > layer5-file1 ; ln layer5-file1 layer5-file2 ; ln layer5-file1 layer5-file3\nRUN echo yes > layer6-file1 ; ln layer6-file1 layer6-file2 ; ln layer6-file1 layer6-file3\nRUN rm -f layer4-file2 layer5-file1 layer6-file1 layer4-file3 layer5-file3 layer6-file2\n\nRUN echo OLD > layer10-file1 ; ln layer10-file1 layer10-file2 ; ln layer10-file1 layer10-file3\nRUN echo NEW > layer10-file1 ; ln -f layer10-file1 layer10-file2 ; ln -f layer10-file1 layer10-file3 ; echo foo > foo\n\nRUN echo line1 >  layer11-file1 ; ln layer11-file1 layer11-file2 ; ln layer11-file1 layer11-file3\nRUN echo line2 >> layer11-file1\n\nCMD /check.sh 2>&1\n"
  },
  {
    "path": "tests/test-whiteouts/check.sh",
    "content": "#!/bin/sh\nset -e\nset -x\n\ngrep -q yes layer0-file1\ngrep -q yes layer0-file2\ngrep -q yes layer0-file3\n\ntest ! -e layer1-file1\ntest   -e layer1-file2\ntest   -e layer1-file3\n\ntest   -e layer2-file1\ntest ! -e layer2-file2\ntest   -e layer2-file3\n\ntest   -e layer3-file1\ntest   -e layer3-file2\ntest ! -e layer3-file3\n\ngrep -q yes layer1-file2\ngrep -q yes layer1-file3\n\ngrep -q yes layer2-file1\ngrep -q yes layer2-file3\n\ngrep -q yes layer3-file1\ngrep -q yes layer3-file2\n\n\ntest   -e layer4-file1\ntest ! -e layer4-file2\ntest ! -e layer4-file3\n\ntest ! -e layer5-file1\ntest   -e layer5-file2\ntest ! -e layer5-file3\n\ntest ! -e layer6-file1\ntest ! -e layer6-file2\ntest   -e layer6-file3\n\ngrep -q yes layer4-file1\ngrep -q yes layer5-file2\ngrep -q yes layer6-file3\n\n\ngrep -q NEW layer10-file1\ngrep -q NEW layer10-file2\ngrep -q NEW layer10-file3\n\ngrep -q line1 layer11-file1\ngrep -q line1 layer11-file2\ngrep -q line1 layer11-file3\n\n# # Docker with AUFS or overlay storage backend does not handle this test\n# # correctly and Semaphore uses AUFS\nif [ \"$DOCKER_STORAGE_BACKEND\" == devicemapper ] ; then\n\tgrep -q line2 layer11-file1\n\tgrep -q line2 layer11-file2\n\tgrep -q line2 layer11-file3\n\tcmp layer11-file1 layer11-file2\n\tcmp layer11-file1 layer11-file3\nfi\n\necho \"SUCCESS\"\n"
  },
  {
    "path": "tests/test.sh",
    "content": "#!/bin/bash\n\nset -e\n\n# Gets the parent of the directory that this script is stored in.\n# https://stackoverflow.com/questions/59895/can-a-bash-script-tell-what-directory-its-stored-in\nDIR=\"$( cd \"$( dirname $( dirname \"${BASH_SOURCE[0]}\" ) )\" && pwd )\"\n\nORG_PATH=\"github.com/appc\"\nREPO_PATH=\"${ORG_PATH}/docker2aci\"\n\nif [ ! -h ${DIR}/gopath/src/${REPO_PATH} ]; then\n  mkdir -p ${DIR}/gopath/src/${ORG_PATH}\n  cd ${DIR} && ln -s ../../../.. gopath/src/${REPO_PATH} || exit 255\nfi\n\nexport GO15VENDOREXPERIMENT=1\nexport GOPATH=${DIR}/gopath\nREPO_GOPATH=\"${GOPATH}/src/${REPO_PATH}\"\n\ncd \"${REPO_GOPATH}\"\n\ngo vet ./pkg/...\ngo vet ./lib/...\ngo test -v ${REPO_PATH}/lib/tests\ngo test -v ${REPO_PATH}/lib/internal\ngo test -v ${REPO_PATH}/lib/common\n\nDOCKER2ACI=../bin/docker2aci\nPREFIX=docker2aci-tests\nTESTDIR=\"${REPO_GOPATH}/tests/\"\nRKTVERSION=v1.1.0\n\ncd $TESTDIR\n\n# install rkt in Semaphore\nif ! which rkt > /dev/null ; then\n\tif [ \"$SEMAPHORE\" != \"true\" ] ; then\n\t\techo \"Please install rkt\"\n\t\texit 1\n\tfi\n\tpushd $SEMAPHORE_CACHE_DIR\n\tif ! md5sum -c $TESTDIR/rkt-$RKTVERSION.md5sum; then\n\t\twget https://github.com/coreos/rkt/releases/download/$RKTVERSION/rkt-$RKTVERSION.tar.gz\n\tfi\n\tmd5sum -c $TESTDIR/rkt-$RKTVERSION.md5sum\n\ttar xf rkt-$RKTVERSION.tar.gz\n\texport PATH=$PATH:$PWD/rkt-$RKTVERSION/\n\tpopd\nfi\nRKT=$(which rkt)\n\nDOCKER_STORAGE_BACKEND=$(sudo docker info|grep '^Storage Driver:'|sed 's/Storage Driver: //')\n\nfor i in $(find . -maxdepth 1 -type d -name 'fixture-test*') ; do\n\t  TESTNAME=$(basename $i)\n\t  echo \"### Test case ${TESTNAME}...\"\n\t  $TESTDIR/${TESTNAME}/check.sh \"${TESTDIR}\" \"${TESTNAME}\"\ndone\n\nfor i in $(find . 
-maxdepth 1 -type d -name 'test-*') ; do\n\tTESTNAME=$(basename $i)\n\techo \"### Test case ${TESTNAME}: build...\"\n\tsudo docker build --tag=$PREFIX/${TESTNAME} --no-cache=true ${TESTNAME}\n\n\techo \"### Test case ${TESTNAME}: test in Docker...\"\n\tsudo docker run --rm \\\n\t                --env=CHECK=docker-run \\\n\t                --env=DOCKER_STORAGE_BACKEND=$DOCKER_STORAGE_BACKEND \\\n\t                $PREFIX/${TESTNAME}\n\n\techo \"### Test case ${TESTNAME}: converting to ACI...\"\n\tsudo docker save -o ${TESTNAME}.docker $PREFIX/${TESTNAME}\n\t# Docker now writes files as root, so make them readable\n\tsudo chmod o+rx ${TESTNAME}.docker\n\t$DOCKER2ACI ${TESTNAME}.docker\n\n\techo \"### Test case ${TESTNAME}: test in rkt...\"\n\tsudo $RKT prepare --insecure-options=image \\\n\t                  --set-env=CHECK=rkt-run \\\n\t                  --set-env=DOCKER_STORAGE_BACKEND=$DOCKER_STORAGE_BACKEND \\\n\t                  ./${PREFIX}-${TESTNAME}-latest.aci \\\n\t                  > rkt-uuid-${TESTNAME}\n\tsudo $RKT run-prepared $(cat rkt-uuid-${TESTNAME})\n\tsudo $RKT status $(cat rkt-uuid-${TESTNAME}) | grep app-${TESTNAME}=0\n\tsudo $RKT rm $(cat rkt-uuid-${TESTNAME})\n\n\techo \"### Test case ${TESTNAME}: test with 'rkt image render'...\"\n\tsudo $RKT image render --overwrite ${PREFIX}/${TESTNAME} ./rendered-${TESTNAME}\n\tpushd rendered-${TESTNAME}/rootfs\n\tCHECK=rkt-rendered DOCKER_STORAGE_BACKEND=$DOCKER_STORAGE_BACKEND $TESTDIR/${TESTNAME}/check.sh\n\tpopd\n\techo \"### Test case ${TESTNAME}: SUCCESS\"\n\n\tsudo docker rmi $PREFIX/${TESTNAME}\ndone\n"
  },
  {
    "path": "vendor/github.com/appc/spec/LICENSE",
    "content": "Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived 
from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright {yyyy} {name of copyright owner}\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n\n"
  },
  {
    "path": "vendor/github.com/appc/spec/aci/build.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage aci\n\nimport (\n\t\"archive/tar\"\n\t\"io\"\n\t\"os\"\n\t\"path/filepath\"\n\n\t\"github.com/appc/spec/pkg/tarheader\"\n)\n\n// TarHeaderWalkFunc is the type of the function which allows setting tar\n// headers or filtering out tar entries when building an ACI. It will be\n// applied to every entry in the tar file.\n//\n// If true is returned, the entry will be included in the final ACI; if false,\n// the entry will not be included.\ntype TarHeaderWalkFunc func(hdr *tar.Header) bool\n\n// BuildWalker creates a filepath.WalkFunc that walks over the given root\n// (which should represent an ACI layout on disk) and adds the files in the\n// rootfs/ subdirectory to the given ArchiveWriter\nfunc BuildWalker(root string, aw ArchiveWriter, cb TarHeaderWalkFunc) filepath.WalkFunc {\n\t// cache of inode -> filepath, used to leverage hard links in the archive\n\tinos := map[uint64]string{}\n\treturn func(path string, info os.FileInfo, err error) error {\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\trelpath, err := filepath.Rel(root, path)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif relpath == \".\" {\n\t\t\treturn nil\n\t\t}\n\t\tif relpath == ManifestFile {\n\t\t\t// ignore; this will be written by the archive writer\n\t\t\t// TODO(jonboulle): does this make sense? 
maybe just remove from archivewriter?\n\t\t\treturn nil\n\t\t}\n\n\t\tlink := \"\"\n\t\tvar r io.Reader\n\t\tswitch info.Mode() & os.ModeType {\n\t\tcase os.ModeSocket:\n\t\t\treturn nil\n\t\tcase os.ModeNamedPipe:\n\t\tcase os.ModeCharDevice:\n\t\tcase os.ModeDevice:\n\t\tcase os.ModeDir:\n\t\tcase os.ModeSymlink:\n\t\t\ttarget, err := os.Readlink(path)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tlink = target\n\t\tdefault:\n\t\t\tfile, err := os.Open(path)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tdefer file.Close()\n\t\t\tr = file\n\t\t}\n\n\t\thdr, err := tar.FileInfoHeader(info, link)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\t// Because os.FileInfo's Name method returns only the base\n\t\t// name of the file it describes, it may be necessary to\n\t\t// modify the Name field of the returned header to provide the\n\t\t// full path name of the file.\n\t\thdr.Name = relpath\n\t\ttarheader.Populate(hdr, info, inos)\n\t\t// If the file is a hard link to a file we've already seen, we\n\t\t// don't need the contents\n\t\tif hdr.Typeflag == tar.TypeLink {\n\t\t\thdr.Size = 0\n\t\t\tr = nil\n\t\t}\n\n\t\tif cb != nil {\n\t\t\tif !cb(hdr) {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\n\t\tif err := aw.AddFile(hdr, r); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\treturn nil\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/aci/doc.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package aci contains various functions for working with App Container Images.\npackage aci\n"
  },
  {
    "path": "vendor/github.com/appc/spec/aci/file.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage aci\n\nimport (\n\t\"archive/tar\"\n\t\"bytes\"\n\t\"compress/bzip2\"\n\t\"compress/gzip\"\n\t\"encoding/hex\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"log\"\n\t\"net/http\"\n\t\"os/exec\"\n\t\"path/filepath\"\n\n\t\"github.com/appc/spec/schema\"\n)\n\ntype FileType string\n\nconst (\n\tTypeGzip    = FileType(\"gz\")\n\tTypeBzip2   = FileType(\"bz2\")\n\tTypeXz      = FileType(\"xz\")\n\tTypeTar     = FileType(\"tar\")\n\tTypeText    = FileType(\"text\")\n\tTypeUnknown = FileType(\"unknown\")\n\n\treadLen = 512 // max bytes to sniff\n\n\thexHdrGzip  = \"1f8b\"\n\thexHdrBzip2 = \"425a68\"\n\thexHdrXz    = \"fd377a585a00\"\n\thexSigTar   = \"7573746172\"\n\n\ttarOffset = 257\n\n\ttextMime = \"text/plain; charset=utf-8\"\n)\n\nvar (\n\thdrGzip  []byte\n\thdrBzip2 []byte\n\thdrXz    []byte\n\tsigTar   []byte\n\ttarEnd   int\n)\n\nfunc mustDecodeHex(s string) []byte {\n\tb, err := hex.DecodeString(s)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn b\n}\n\nfunc init() {\n\thdrGzip = mustDecodeHex(hexHdrGzip)\n\thdrBzip2 = mustDecodeHex(hexHdrBzip2)\n\thdrXz = mustDecodeHex(hexHdrXz)\n\tsigTar = mustDecodeHex(hexSigTar)\n\ttarEnd = tarOffset + len(sigTar)\n}\n\n// DetectFileType attempts to detect the type of file that the given reader\n// represents by comparing it against known file signatures (magic numbers)\nfunc 
DetectFileType(r io.Reader) (FileType, error) {\n\tvar b bytes.Buffer\n\tn, err := io.CopyN(&b, r, readLen)\n\tif err != nil && err != io.EOF {\n\t\treturn TypeUnknown, err\n\t}\n\tbs := b.Bytes()\n\tswitch {\n\tcase bytes.HasPrefix(bs, hdrGzip):\n\t\treturn TypeGzip, nil\n\tcase bytes.HasPrefix(bs, hdrBzip2):\n\t\treturn TypeBzip2, nil\n\tcase bytes.HasPrefix(bs, hdrXz):\n\t\treturn TypeXz, nil\n\tcase n > int64(tarEnd) && bytes.Equal(bs[tarOffset:tarEnd], sigTar):\n\t\treturn TypeTar, nil\n\tcase http.DetectContentType(bs) == textMime:\n\t\treturn TypeText, nil\n\tdefault:\n\t\treturn TypeUnknown, nil\n\t}\n}\n\n// XzReader is an io.ReadCloser which decompresses xz compressed data.\ntype XzReader struct {\n\tio.ReadCloser\n\tcmd     *exec.Cmd\n\tclosech chan error\n}\n\n// NewXzReader shells out to a command line xz executable (if\n// available) to decompress the given io.Reader using the xz\n// compression format and returns an *XzReader.\n// It is the caller's responsibility to call Close on the XzReader when done.\nfunc NewXzReader(r io.Reader) (*XzReader, error) {\n\trpipe, wpipe := io.Pipe()\n\tex, err := exec.LookPath(\"xz\")\n\tif err != nil {\n\t\tlog.Fatalf(\"couldn't find xz executable: %v\", err)\n\t}\n\tcmd := exec.Command(ex, \"--decompress\", \"--stdout\")\n\n\tclosech := make(chan error)\n\n\tcmd.Stdin = r\n\tcmd.Stdout = wpipe\n\n\tgo func() {\n\t\terr := cmd.Run()\n\t\twpipe.CloseWithError(err)\n\t\tclosech <- err\n\t}()\n\n\treturn &XzReader{rpipe, cmd, closech}, nil\n}\n\nfunc (r *XzReader) Close() error {\n\tr.ReadCloser.Close()\n\tr.cmd.Process.Kill()\n\treturn <-r.closech\n}\n\n// ManifestFromImage extracts a new schema.ImageManifest from the given ACI image.\nfunc ManifestFromImage(rs io.ReadSeeker) (*schema.ImageManifest, error) {\n\tvar im schema.ImageManifest\n\n\ttr, err := NewCompressedTarReader(rs)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer tr.Close()\n\n\tfor {\n\t\thdr, err := tr.Next()\n\t\tswitch err {\n\t\tcase 
io.EOF:\n\t\t\treturn nil, errors.New(\"missing manifest\")\n\t\tcase nil:\n\t\t\tif filepath.Clean(hdr.Name) == ManifestFile {\n\t\t\t\tdata, err := ioutil.ReadAll(tr)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\tif err := im.UnmarshalJSON(data); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\treturn &im, nil\n\t\t\t}\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"error extracting tarball: %v\", err)\n\t\t}\n\t}\n}\n\n// TarReadCloser embeds a *tar.Reader and the related io.Closer\n// It is the caller's responsibility to call Close on TarReadCloser when\n// done.\ntype TarReadCloser struct {\n\t*tar.Reader\n\tio.Closer\n}\n\nfunc (r *TarReadCloser) Close() error {\n\treturn r.Closer.Close()\n}\n\n// NewCompressedTarReader creates a new TarReadCloser reading from the\n// given ACI image.\n// It is the caller's responsibility to call Close on the TarReadCloser\n// when done.\nfunc NewCompressedTarReader(rs io.ReadSeeker) (*TarReadCloser, error) {\n\tcr, err := NewCompressedReader(rs)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn &TarReadCloser{tar.NewReader(cr), cr}, nil\n}\n\n// NewCompressedReader creates a new io.ReaderCloser from the given ACI image.\n// It is the caller's responsibility to call Close on the Reader when done.\nfunc NewCompressedReader(rs io.ReadSeeker) (io.ReadCloser, error) {\n\n\tvar (\n\t\tdr  io.ReadCloser\n\t\terr error\n\t)\n\n\t_, err = rs.Seek(0, 0)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tftype, err := DetectFileType(rs)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t_, err = rs.Seek(0, 0)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tswitch ftype {\n\tcase TypeGzip:\n\t\tdr, err = gzip.NewReader(rs)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\tcase TypeBzip2:\n\t\tdr = ioutil.NopCloser(bzip2.NewReader(rs))\n\tcase TypeXz:\n\t\tdr, err = NewXzReader(rs)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\tcase TypeTar:\n\t\tdr = ioutil.NopCloser(rs)\n\tcase 
TypeUnknown:\n\t\treturn nil, errors.New(\"error: unknown image filetype\")\n\tdefault:\n\t\treturn nil, errors.New(\"no type returned from DetectFileType?\")\n\t}\n\treturn dr, nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/aci/layout.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage aci\n\n/*\n\nImage Layout\n\nThe on-disk layout of an app container is straightforward.\nIt includes a rootfs with all of the files that will exist in the root of the app and a manifest describing the image.\nThe layout MUST contain an image manifest.\n\n/manifest\n/rootfs/\n/rootfs/usr/bin/mysql\n\n*/\n\nimport (\n\t\"archive/tar\"\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/appc/spec/schema\"\n\t\"github.com/appc/spec/schema/types\"\n)\n\nconst (\n\t// Path to manifest file inside the layout\n\tManifestFile = \"manifest\"\n\t// Path to rootfs directory inside the layout\n\tRootfsDir = \"rootfs\"\n)\n\ntype ErrOldVersion struct {\n\tversion types.SemVer\n}\n\nfunc (e ErrOldVersion) Error() string {\n\treturn fmt.Sprintf(\"ACVersion too old. Found major version %v, expected %v\", e.version.Major, schema.AppContainerVersion.Major)\n}\n\nvar (\n\tErrNoRootFS   = errors.New(\"no rootfs found in layout\")\n\tErrNoManifest = errors.New(\"no image manifest found in layout\")\n)\n\n// ValidateLayout takes a directory and validates that the layout of the directory\n// matches that expected by the Application Container Image format.\n// If any errors are encountered during the validation, it will abort and\n// return the first one.\nfunc ValidateLayout(dir string) error {\n\tfi, err := os.Stat(dir)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"error accessing layout: %v\", err)\n\t}\n\tif !fi.IsDir() {\n\t\treturn fmt.Errorf(\"given path %q is not a directory\", dir)\n\t}\n\tvar flist []string\n\tvar imOK, rfsOK bool\n\tvar im io.Reader\n\twalkLayout := func(fpath string, fi os.FileInfo, err error) error {\n\t\trpath, err := filepath.Rel(dir, fpath)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tswitch rpath {\n\t\tcase \".\":\n\t\tcase ManifestFile:\n\t\t\tim, err = os.Open(fpath)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\timOK = true\n\t\tcase RootfsDir:\n\t\t\tif !fi.IsDir() {\n\t\t\t\treturn errors.New(\"rootfs is not a directory\")\n\t\t\t}\n\t\t\trfsOK = true\n\t\tdefault:\n\t\t\tflist = append(flist, rpath)\n\t\t}\n\t\treturn nil\n\t}\n\tif err := filepath.Walk(dir, walkLayout); err != nil {\n\t\treturn err\n\t}\n\treturn validate(imOK, im, rfsOK, flist)\n}\n\n// ValidateArchive takes a *tar.Reader and validates that the layout of the\n// filesystem the reader encapsulates matches that expected by the\n// Application Container Image format.  If any errors are encountered during\n// the validation, it will abort and return the first one.\nfunc ValidateArchive(tr *tar.Reader) error {\n\tvar fseen map[string]bool = make(map[string]bool)\n\tvar imOK, rfsOK bool\n\tvar im bytes.Buffer\nTar:\n\tfor {\n\t\thdr, err := tr.Next()\n\t\tswitch {\n\t\tcase err == nil:\n\t\tcase err == io.EOF:\n\t\t\tbreak Tar\n\t\tdefault:\n\t\t\treturn err\n\t\t}\n\t\tname := filepath.Clean(hdr.Name)\n\t\tswitch name {\n\t\tcase \".\":\n\t\tcase ManifestFile:\n\t\t\t_, err := io.Copy(&im, tr)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\timOK = true\n\t\tcase RootfsDir:\n\t\t\tif !hdr.FileInfo().IsDir() {\n\t\t\t\treturn fmt.Errorf(\"rootfs is not a directory\")\n\t\t\t}\n\t\t\trfsOK = true\n\t\tdefault:\n\t\t\tif _, seen := fseen[name]; seen {\n\t\t\t\treturn fmt.Errorf(\"duplicate file entry in archive: %s\", name)\n\t\t\t}\n\t\t\tfseen[name] = true\n\t\t}\n\t}\n\tvar flist []string\n\tfor key := range fseen {\n\t\tflist = append(flist, key)\n\t}\n\treturn validate(imOK, &im, rfsOK, flist)\n}\n\nfunc validate(imOK bool, im io.Reader, rfsOK bool, files []string) error {\n\tdefer func() {\n\t\tif rc, ok := im.(io.Closer); ok {\n\t\t\trc.Close()\n\t\t}\n\t}()\n\tif !imOK {\n\t\treturn ErrNoManifest\n\t}\n\tif !rfsOK {\n\t\treturn ErrNoRootFS\n\t}\n\tb, err := ioutil.ReadAll(im)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"error reading image manifest: %v\", err)\n\t}\n\tvar a schema.ImageManifest\n\tif err := a.UnmarshalJSON(b); err != nil {\n\t\treturn fmt.Errorf(\"image manifest validation failed: %v\", err)\n\t}\n\tif a.ACVersion.LessThanMajor(schema.AppContainerVersion) {\n\t\treturn ErrOldVersion{\n\t\t\tversion: a.ACVersion,\n\t\t}\n\t}\n\tfor _, f := range files {\n\t\tif !strings.HasPrefix(f, \"rootfs\") {\n\t\t\treturn fmt.Errorf(\"unrecognized file path in layout: %q\", f)\n\t\t}\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/aci/writer.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage aci\n\nimport (\n\t\"archive/tar\"\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"io\"\n\t\"time\"\n\n\t\"github.com/appc/spec/schema\"\n)\n\n// ArchiveWriter writes App Container Images. Users wanting to create an ACI or\n// should create an ArchiveWriter and add files to it; the ACI will be written\n// to the underlying tar.Writer\ntype ArchiveWriter interface {\n\tAddFile(hdr *tar.Header, r io.Reader) error\n\tClose() error\n}\n\ntype imageArchiveWriter struct {\n\t*tar.Writer\n\tam *schema.ImageManifest\n}\n\n// NewImageWriter creates a new ArchiveWriter which will generate an App\n// Container Image based on the given manifest and write it to the given\n// tar.Writer\nfunc NewImageWriter(am schema.ImageManifest, w *tar.Writer) ArchiveWriter {\n\taw := &imageArchiveWriter{\n\t\tw,\n\t\t&am,\n\t}\n\treturn aw\n}\n\nfunc (aw *imageArchiveWriter) AddFile(hdr *tar.Header, r io.Reader) error {\n\terr := aw.Writer.WriteHeader(hdr)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tif r != nil {\n\t\t_, err := io.Copy(aw.Writer, r)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (aw *imageArchiveWriter) addFileNow(path string, contents []byte) error {\n\tbuf := bytes.NewBuffer(contents)\n\tnow := time.Now()\n\thdr := tar.Header{\n\t\tName:       path,\n\t\tMode:       0644,\n\t\tUid:        0,\n\t\tGid:        0,\n\t\tSize:       int64(buf.Len()),\n\t\tModTime:    now,\n\t\tTypeflag:   tar.TypeReg,\n\t\tUname:      \"root\",\n\t\tGname:      \"root\",\n\t\tChangeTime: now,\n\t}\n\treturn aw.AddFile(&hdr, buf)\n}\n\nfunc (aw *imageArchiveWriter) addManifest(name string, m json.Marshaler) error {\n\tout, err := m.MarshalJSON()\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn aw.addFileNow(name, out)\n}\n\nfunc (aw *imageArchiveWriter) Close() error {\n\tif err := aw.addManifest(ManifestFile, aw.am); err != nil {\n\t\treturn err\n\t}\n\treturn aw.Writer.Close()\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/pkg/acirenderer/acirenderer.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage acirenderer\n\nimport (\n\t\"archive/tar\"\n\t\"crypto/sha512\"\n\t\"fmt\"\n\t\"hash\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/appc/spec/schema\"\n\t\"github.com/appc/spec/schema/types\"\n)\n\n// An ACIRegistry provides all functions of an ACIProvider plus functions to\n// search for an aci and get its contents\ntype ACIRegistry interface {\n\tACIProvider\n\tGetImageManifest(key string) (*schema.ImageManifest, error)\n\tGetACI(name types.ACIdentifier, labels types.Labels) (string, error)\n}\n\n// An ACIProvider provides functions to get an ACI contents, to convert an\n// ACI hash to the key under which the ACI is known to the provider and to resolve an\n// image ID to the key under which it's known to the provider.\ntype ACIProvider interface {\n\t// Read the ACI contents stream given the key. Use ResolveKey to\n\t// convert an image ID to the relative provider's key.\n\tReadStream(key string) (io.ReadCloser, error)\n\t// Converts an image ID to the, if existent, key under which the\n\t// ACI is known to the provider\n\tResolveKey(key string) (string, error)\n\t// Converts a Hash to the provider's key\n\tHashToKey(h hash.Hash) string\n}\n\n// An Image contains the ImageManifest, the ACIProvider's key and its Level in\n// the dependency tree.\ntype Image struct {\n\tIm    *schema.ImageManifest\n\tKey   string\n\tLevel uint16\n}\n\n// Images encapsulates an ordered slice of Image structs. It represents a flat\n// dependency tree.\n// The upper Image should be the first in the slice with a level of 0.\n// For example if A is the upper image and has two deps (in order B and C). And C has one dep (D),\n// the slice (reporting the app name and excluding im and Hash) should be:\n// [{A, Level: 0}, {C, Level:1}, {D, Level: 2}, {B, Level: 1}]\ntype Images []Image\n\n// ACIFiles represents which files to extract for every ACI\ntype ACIFiles struct {\n\tKey     string\n\tFileMap map[string]struct{}\n}\n\n// RenderedACI is an (ordered) slice of ACIFiles\ntype RenderedACI []*ACIFiles\n\n// GetRenderedACIWithImageID, given an imageID, starts with the matching image\n// available in the store, creates the dependencies list and returns the\n// RenderedACI list.\nfunc GetRenderedACIWithImageID(imageID types.Hash, ap ACIRegistry) (RenderedACI, error) {\n\timgs, err := CreateDepListFromImageID(imageID, ap)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn GetRenderedACIFromList(imgs, ap)\n}\n\n// GetRenderedACI, given an image app name and optional labels, starts with the\n// best matching image available in the store, creates the dependencies list\n// and returns the RenderedACI list.\nfunc GetRenderedACI(name types.ACIdentifier, labels types.Labels, ap ACIRegistry) (RenderedACI, error) {\n\timgs, err := CreateDepListFromNameLabels(name, labels, ap)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn GetRenderedACIFromList(imgs, ap)\n}\n\n// GetRenderedACIFromList returns the RenderedACI list. All file outside rootfs\n// are excluded (at the moment only \"manifest\").\nfunc GetRenderedACIFromList(imgs Images, ap ACIProvider) (RenderedACI, error) {\n\tif len(imgs) == 0 {\n\t\treturn nil, fmt.Errorf(\"image list empty\")\n\t}\n\n\tallFiles := make(map[string]byte)\n\trenderedACI := RenderedACI{}\n\n\tfirst := true\n\tfor i, img := range imgs {\n\t\tpwlm := getUpperPWLM(imgs, i)\n\t\tra, err := getACIFiles(img, ap, allFiles, pwlm)\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\t// Use the manifest from the upper ACI\n\t\tif first {\n\t\t\tra.FileMap[\"manifest\"] = struct{}{}\n\t\t\tfirst = false\n\t\t}\n\t\trenderedACI = append(renderedACI, ra)\n\t}\n\n\treturn renderedACI, nil\n}\n\n// getUpperPWLM returns the pwl at the lower level for the branch where\n// img[pos] lives.\nfunc getUpperPWLM(imgs Images, pos int) map[string]struct{} {\n\tvar pwlm map[string]struct{}\n\tcurlevel := imgs[pos].Level\n\t// Start from our position and go back ignoring the other leafs.\n\tfor i := pos; i >= 0; i-- {\n\t\timg := imgs[i]\n\t\tif img.Level < curlevel && len(img.Im.PathWhitelist) > 0 {\n\t\t\tpwlm = pwlToMap(img.Im.PathWhitelist)\n\t\t}\n\t\tcurlevel = img.Level\n\t}\n\treturn pwlm\n}\n\n// getACIFiles returns the ACIFiles struct for the given image. All files\n// outside rootfs are excluded (at the moment only \"manifest\").\nfunc getACIFiles(img Image, ap ACIProvider, allFiles map[string]byte, pwlm map[string]struct{}) (*ACIFiles, error) {\n\trs, err := ap.ReadStream(img.Key)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer rs.Close()\n\n\thash := sha512.New()\n\tr := io.TeeReader(rs, hash)\n\n\tthispwlm := pwlToMap(img.Im.PathWhitelist)\n\tra := &ACIFiles{FileMap: make(map[string]struct{})}\n\tif err = Walk(tar.NewReader(r), func(hdr *tar.Header) error {\n\t\tname := hdr.Name\n\t\tcleanName := filepath.Clean(name)\n\n\t\t// Add the rootfs directory.\n\t\tif cleanName == \"rootfs\" && hdr.Typeflag == tar.TypeDir {\n\t\t\tra.FileMap[cleanName] = struct{}{}\n\t\t\tallFiles[cleanName] = hdr.Typeflag\n\t\t\treturn nil\n\t\t}\n\n\t\t// Ignore files outside /rootfs/ (at the moment only \"manifest\").\n\t\tif !strings.HasPrefix(cleanName, \"rootfs/\") {\n\t\t\treturn nil\n\t\t}\n\n\t\t// Is the file in our PathWhiteList?\n\t\t// If the file is a directory continue also if not in PathWhiteList\n\t\tif hdr.Typeflag != tar.TypeDir {\n\t\t\tif len(img.Im.PathWhitelist) > 0 {\n\t\t\t\tif _, ok := thispwlm[cleanName]; !ok {\n\t\t\t\t\treturn nil\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t// Is the file in the lower level PathWhiteList of this img branch?\n\t\tif pwlm != nil {\n\t\t\tif _, ok := pwlm[cleanName]; !ok {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\t\t// Is the file already provided by a previous image?\n\t\tif _, ok := allFiles[cleanName]; ok {\n\t\t\treturn nil\n\t\t}\n\t\t// Check that the parent dirs are also of type dir in the upper\n\t\t// images\n\t\tparentDir := filepath.Dir(cleanName)\n\t\tfor parentDir != \".\" && parentDir != \"/\" {\n\t\t\tif ft, ok := allFiles[parentDir]; ok && ft != tar.TypeDir {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tparentDir = filepath.Dir(parentDir)\n\t\t}\n\t\tra.FileMap[cleanName] = struct{}{}\n\t\tallFiles[cleanName] = hdr.Typeflag\n\t\treturn nil\n\t}); err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Tar does not necessarily read the complete file, so ensure we read the entirety into the hash\n\tif _, err := io.Copy(ioutil.Discard, r); err != nil {\n\t\treturn nil, fmt.Errorf(\"error reading ACI: %v\", err)\n\t}\n\n\tif g := ap.HashToKey(hash); g != img.Key {\n\t\treturn nil, fmt.Errorf(\"image hash does not match expected (%s != %s)\", g, img.Key)\n\t}\n\n\tra.Key = img.Key\n\treturn ra, nil\n}\n\n// pwlToMap converts a pathWhiteList slice to a map for faster search\n// It will also prepend \"rootfs/\" to the provided paths and they will be\n// relative to \"/\" so they can be easily compared with the tar.Header.Name\n// If pwl length is 0, a nil map is returned\nfunc pwlToMap(pwl []string) map[string]struct{} {\n\tif len(pwl) == 0 {\n\t\treturn nil\n\t}\n\tm := make(map[string]struct{}, len(pwl))\n\tfor _, name := range pwl {\n\t\trelpath := filepath.Join(\"rootfs\", name)\n\t\tm[relpath] = struct{}{}\n\t}\n\treturn m\n}\n\nfunc Walk(tarReader *tar.Reader, walkFunc func(hdr *tar.Header) error) error {\n\tfor {\n\t\thdr, err := tarReader.Next()\n\t\tif err == io.EOF {\n\t\t\t// end of tar archive\n\t\t\tbreak\n\t\t}\n\t\tif err != nil {\n\t\t\treturn fmt.Errorf(\"Error reading tar entry: %v\", err)\n\t\t}\n\t\tif err := walkFunc(hdr); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/pkg/acirenderer/resolve.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage acirenderer\n\nimport (\n\t\"container/list\"\n\n\t\"github.com/appc/spec/schema/types\"\n)\n\n// CreateDepListFromImageID returns the flat dependency tree of the image with\n// the provided imageID\nfunc CreateDepListFromImageID(imageID types.Hash, ap ACIRegistry) (Images, error) {\n\tkey, err := ap.ResolveKey(imageID.String())\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn createDepList(key, ap)\n}\n\n// CreateDepListFromNameLabels returns the flat dependency tree of the image\n// with the provided app name and optional labels.\nfunc CreateDepListFromNameLabels(name types.ACIdentifier, labels types.Labels, ap ACIRegistry) (Images, error) {\n\tkey, err := ap.GetACI(name, labels)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn createDepList(key, ap)\n}\n\n// createDepList returns the flat dependency tree as a list of Image type\nfunc createDepList(key string, ap ACIRegistry) (Images, error) {\n\timgsl := list.New()\n\tim, err := ap.GetImageManifest(key)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\timg := Image{Im: im, Key: key, Level: 0}\n\timgsl.PushFront(img)\n\n\t// Create a flat dependency tree. Use a LinkedList to be able to\n\t// insert elements in the list while working on it.\n\tfor el := imgsl.Front(); el != nil; el = el.Next() {\n\t\timg := el.Value.(Image)\n\t\tdependencies := img.Im.Dependencies\n\t\tfor _, d := range dependencies {\n\t\t\tvar depimg Image\n\t\t\tvar depKey string\n\t\t\tif d.ImageID != nil && !d.ImageID.Empty() {\n\t\t\t\tdepKey, err = ap.ResolveKey(d.ImageID.String())\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tvar err error\n\t\t\t\tdepKey, err = ap.GetACI(d.ImageName, d.Labels)\n\t\t\t\tif err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t}\n\t\t\tim, err := ap.GetImageManifest(depKey)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tdepimg = Image{Im: im, Key: depKey, Level: img.Level + 1}\n\t\t\timgsl.InsertAfter(depimg, el)\n\t\t}\n\t}\n\n\timgs := Images{}\n\tfor el := imgsl.Front(); el != nil; el = el.Next() {\n\t\timgs = append(imgs, el.Value.(Image))\n\t}\n\treturn imgs, nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/pkg/device/device_linux.go",
    "content": "// Copyright 2016 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// +build linux\n\npackage device\n\n// with glibc/sysdeps/unix/sysv/linux/sys/sysmacros.h as reference\n\nfunc Major(rdev uint64) uint {\n\treturn uint((rdev>>8)&0xfff) | (uint(rdev>>32) & ^uint(0xfff))\n}\n\nfunc Minor(rdev uint64) uint {\n\treturn uint(rdev&0xff) | uint(uint32(rdev>>12) & ^uint32(0xff))\n}\n\nfunc Makedev(maj uint, min uint) uint64 {\n\treturn uint64(min&0xff) | (uint64(maj&0xfff) << 8) |\n\t\t((uint64(min) & ^uint64(0xff)) << 12) |\n\t\t((uint64(maj) & ^uint64(0xfff)) << 32)\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/pkg/device/device_posix.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// +build freebsd netbsd openbsd darwin\n\npackage device\n\n/*\n#define _BSD_SOURCE\n#define _DEFAULT_SOURCE\n#include <sys/types.h>\n\nunsigned int\nmy_major(dev_t dev)\n{\n  return major(dev);\n}\n\nunsigned int\nmy_minor(dev_t dev)\n{\n  return minor(dev);\n}\n\ndev_t\nmy_makedev(unsigned int maj, unsigned int min)\n{\n       return makedev(maj, min);\n}\n*/\nimport \"C\"\n\nfunc Major(rdev uint64) uint {\n\tmajor := C.my_major(C.dev_t(rdev))\n\treturn uint(major)\n}\n\nfunc Minor(rdev uint64) uint {\n\tminor := C.my_minor(C.dev_t(rdev))\n\treturn uint(minor)\n}\n\nfunc Makedev(maj uint, min uint) uint64 {\n\tdev := C.my_makedev(C.uint(maj), C.uint(min))\n\treturn uint64(dev)\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/pkg/tarheader/doc.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package tarheader contains a simple abstraction to accurately create\n// tar.Headers on different operating systems.\npackage tarheader\n"
  },
  {
    "path": "vendor/github.com/appc/spec/pkg/tarheader/pop_darwin.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n//+build darwin\n\npackage tarheader\n\nimport (\n\t\"archive/tar\"\n\t\"os\"\n\t\"syscall\"\n\t\"time\"\n)\n\nfunc init() {\n\tpopulateHeaderStat = append(populateHeaderStat, populateHeaderCtime)\n}\n\nfunc populateHeaderCtime(h *tar.Header, fi os.FileInfo, _ map[uint64]string) {\n\tst, ok := fi.Sys().(*syscall.Stat_t)\n\tif !ok {\n\t\treturn\n\t}\n\n\tsec, nsec := st.Ctimespec.Unix()\n\tctime := time.Unix(sec, nsec)\n\th.ChangeTime = ctime\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/pkg/tarheader/pop_linux.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// +build linux\n\npackage tarheader\n\nimport (\n\t\"archive/tar\"\n\t\"os\"\n\t\"syscall\"\n\t\"time\"\n)\n\nfunc init() {\n\tpopulateHeaderStat = append(populateHeaderStat, populateHeaderCtime)\n}\n\nfunc populateHeaderCtime(h *tar.Header, fi os.FileInfo, _ map[uint64]string) {\n\tst, ok := fi.Sys().(*syscall.Stat_t)\n\tif !ok {\n\t\treturn\n\t}\n\n\tsec, nsec := st.Ctim.Unix()\n\tctime := time.Unix(sec, nsec)\n\th.ChangeTime = ctime\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/pkg/tarheader/pop_posix.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// +build linux freebsd netbsd openbsd\n\npackage tarheader\n\nimport (\n\t\"archive/tar\"\n\t\"os\"\n\t\"syscall\"\n\n\t\"github.com/appc/spec/pkg/device\"\n)\n\nfunc init() {\n\tpopulateHeaderStat = append(populateHeaderStat, populateHeaderUnix)\n}\n\nfunc populateHeaderUnix(h *tar.Header, fi os.FileInfo, seen map[uint64]string) {\n\tst, ok := fi.Sys().(*syscall.Stat_t)\n\tif !ok {\n\t\treturn\n\t}\n\th.Uid = int(st.Uid)\n\th.Gid = int(st.Gid)\n\tif st.Mode&syscall.S_IFMT == syscall.S_IFBLK || st.Mode&syscall.S_IFMT == syscall.S_IFCHR {\n\t\th.Devminor = int64(device.Minor(uint64(st.Rdev)))\n\t\th.Devmajor = int64(device.Major(uint64(st.Rdev)))\n\t}\n\t// If we have already seen this inode, generate a hardlink\n\tp, ok := seen[uint64(st.Ino)]\n\tif ok {\n\t\th.Linkname = p\n\t\th.Typeflag = tar.TypeLink\n\t} else {\n\t\tseen[uint64(st.Ino)] = h.Name\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/pkg/tarheader/tarheader.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage tarheader\n\nimport (\n\t\"archive/tar\"\n\t\"os\"\n)\n\nvar populateHeaderStat []func(h *tar.Header, fi os.FileInfo, seen map[uint64]string)\n\nfunc Populate(h *tar.Header, fi os.FileInfo, seen map[uint64]string) {\n\tfor _, pop := range populateHeaderStat {\n\t\tpop(h, fi, seen)\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/common/common.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage common\n\nimport (\n\t\"fmt\"\n\t\"net/url\"\n\t\"strings\"\n)\n\n// MakeQueryString takes a comma-separated LABEL=VALUE string and returns an\n// \"&\"-separated string with URL escaped values.\n//\n// Examples:\n// \tversion=1.0.0,label=v1+v2 -> version=1.0.0&label=v1%2Bv2\n// \tname=db,source=/tmp$1 -> name=db&source=%2Ftmp%241\nfunc MakeQueryString(app string) (string, error) {\n\tparts := strings.Split(app, \",\")\n\tescapedParts := make([]string, len(parts))\n\tfor i, s := range parts {\n\t\tp := strings.SplitN(s, \"=\", 2)\n\t\tif len(p) != 2 {\n\t\t\treturn \"\", fmt.Errorf(\"malformed string %q - has a label without a value: %s\", app, p[0])\n\t\t}\n\t\tescapedParts[i] = fmt.Sprintf(\"%s=%s\", p[0], url.QueryEscape(p[1]))\n\t}\n\treturn strings.Join(escapedParts, \"&\"), nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/doc.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package schema provides definitions for the JSON schema of the different\n// manifests in the App Container Specification. The manifests are canonically\n// represented in their respective structs:\n//   - `ImageManifest`\n//   - `PodManifest`\n//\n// Validation is performed through serialization: if a blob of JSON data will\n// unmarshal to one of the *Manifests, it is considered a valid implementation\n// of the standard. Similarly, if a constructed *Manifest struct marshals\n// successfully to JSON, it must be valid.\npackage schema\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/image.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage schema\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\n\t\"github.com/appc/spec/schema/types\"\n\n\t\"go4.org/errorutil\"\n)\n\nconst (\n\tACIExtension      = \".aci\"\n\tImageManifestKind = types.ACKind(\"ImageManifest\")\n)\n\ntype ImageManifest struct {\n\tACKind        types.ACKind       `json:\"acKind\"`\n\tACVersion     types.SemVer       `json:\"acVersion\"`\n\tName          types.ACIdentifier `json:\"name\"`\n\tLabels        types.Labels       `json:\"labels,omitempty\"`\n\tApp           *types.App         `json:\"app,omitempty\"`\n\tAnnotations   types.Annotations  `json:\"annotations,omitempty\"`\n\tDependencies  types.Dependencies `json:\"dependencies,omitempty\"`\n\tPathWhitelist []string           `json:\"pathWhitelist,omitempty\"`\n}\n\n// imageManifest is a model to facilitate extra validation during the\n// unmarshalling of the ImageManifest\ntype imageManifest ImageManifest\n\nfunc BlankImageManifest() *ImageManifest {\n\treturn &ImageManifest{ACKind: ImageManifestKind, ACVersion: AppContainerVersion}\n}\n\nfunc (im *ImageManifest) UnmarshalJSON(data []byte) error {\n\ta := imageManifest(*im)\n\terr := json.Unmarshal(data, &a)\n\tif err != nil {\n\t\tif serr, ok := err.(*json.SyntaxError); ok {\n\t\t\tline, col, highlight := errorutil.HighlightBytePosition(bytes.NewReader(data), serr.Offset)\n\t\t\treturn fmt.Errorf(\"\\nError at line %d, column %d\\n%s%v\", line, col, highlight, err)\n\t\t}\n\t\treturn err\n\t}\n\tnim := ImageManifest(a)\n\tif err := nim.assertValid(); err != nil {\n\t\treturn err\n\t}\n\t*im = nim\n\treturn nil\n}\n\nfunc (im ImageManifest) MarshalJSON() ([]byte, error) {\n\tif err := im.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(imageManifest(im))\n}\n\nvar imKindError = types.InvalidACKindError(ImageManifestKind)\n\n// assertValid performs extra assertions on an ImageManifest to ensure that\n// fields are set appropriately, etc. It is used exclusively when marshalling\n// and unmarshalling an ImageManifest. Most field-specific validation is\n// performed through the individual types being marshalled; assertValid()\n// should only deal with higher-level validation.\nfunc (im *ImageManifest) assertValid() error {\n\tif im.ACKind != ImageManifestKind {\n\t\treturn imKindError\n\t}\n\tif im.ACVersion.Empty() {\n\t\treturn errors.New(`acVersion must be set`)\n\t}\n\tif im.Name.Empty() {\n\t\treturn errors.New(`name must be set`)\n\t}\n\treturn nil\n}\n\nfunc (im *ImageManifest) GetLabel(name string) (val string, ok bool) {\n\treturn im.Labels.Get(name)\n}\n\nfunc (im *ImageManifest) GetAnnotation(name string) (val string, ok bool) {\n\treturn im.Annotations.Get(name)\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/kind.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage schema\n\nimport (\n\t\"encoding/json\"\n\n\t\"github.com/appc/spec/schema/types\"\n)\n\ntype Kind struct {\n\tACVersion types.SemVer `json:\"acVersion\"`\n\tACKind    types.ACKind `json:\"acKind\"`\n}\n\ntype kind Kind\n\nfunc (k *Kind) UnmarshalJSON(data []byte) error {\n\tnk := kind{}\n\terr := json.Unmarshal(data, &nk)\n\tif err != nil {\n\t\treturn err\n\t}\n\t*k = Kind(nk)\n\treturn nil\n}\n\nfunc (k Kind) MarshalJSON() ([]byte, error) {\n\treturn json.Marshal(kind(k))\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/pod.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage schema\n\nimport (\n\t\"bytes\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\n\t\"github.com/appc/spec/schema/types\"\n\n\t\"go4.org/errorutil\"\n)\n\nconst PodManifestKind = types.ACKind(\"PodManifest\")\n\ntype PodManifest struct {\n\tACVersion       types.SemVer          `json:\"acVersion\"`\n\tACKind          types.ACKind          `json:\"acKind\"`\n\tApps            AppList               `json:\"apps\"`\n\tVolumes         []types.Volume        `json:\"volumes\"`\n\tIsolators       []types.Isolator      `json:\"isolators\"`\n\tAnnotations     types.Annotations     `json:\"annotations\"`\n\tPorts           []types.ExposedPort   `json:\"ports\"`\n\tUserAnnotations types.UserAnnotations `json:\"userAnnotations,omitempty\"`\n\tUserLabels      types.UserLabels      `json:\"userLabels,omitempty\"`\n}\n\n// podManifest is a model to facilitate extra validation during the\n// unmarshalling of the PodManifest\ntype podManifest PodManifest\n\nfunc BlankPodManifest() *PodManifest {\n\treturn &PodManifest{ACKind: PodManifestKind, ACVersion: AppContainerVersion}\n}\n\nfunc (pm *PodManifest) UnmarshalJSON(data []byte) error {\n\tp := podManifest(*pm)\n\terr := json.Unmarshal(data, &p)\n\tif err != nil {\n\t\tif serr, ok := err.(*json.SyntaxError); ok {\n\t\t\tline, col, highlight := errorutil.HighlightBytePosition(bytes.NewReader(data), 
serr.Offset)\n\t\t\treturn fmt.Errorf(\"\\nError at line %d, column %d\\n%s%v\", line, col, highlight, err)\n\t\t}\n\t\treturn err\n\t}\n\tnpm := PodManifest(p)\n\tif err := npm.assertValid(); err != nil {\n\t\treturn err\n\t}\n\t*pm = npm\n\treturn nil\n}\n\nfunc (pm PodManifest) MarshalJSON() ([]byte, error) {\n\tif err := pm.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(podManifest(pm))\n}\n\nvar pmKindError = types.InvalidACKindError(PodManifestKind)\n\n// assertValid performs extra assertions on a PodManifest to\n// ensure that fields are set appropriately, etc. It is used exclusively when\n// marshalling and unmarshalling a PodManifest. Most\n// field-specific validation is performed through the individual types being\n// marshalled; assertValid() should only deal with higher-level validation.\nfunc (pm *PodManifest) assertValid() error {\n\tif pm.ACKind != PodManifestKind {\n\t\treturn pmKindError\n\t}\n\treturn nil\n}\n\ntype AppList []RuntimeApp\n\ntype appList AppList\n\nfunc (al *AppList) UnmarshalJSON(data []byte) error {\n\ta := appList{}\n\terr := json.Unmarshal(data, &a)\n\tif err != nil {\n\t\treturn err\n\t}\n\tnal := AppList(a)\n\tif err := nal.assertValid(); err != nil {\n\t\treturn err\n\t}\n\t*al = nal\n\treturn nil\n}\n\nfunc (al AppList) MarshalJSON() ([]byte, error) {\n\tif err := al.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(appList(al))\n}\n\nfunc (al AppList) assertValid() error {\n\tseen := map[types.ACName]bool{}\n\tfor _, a := range al {\n\t\tif _, ok := seen[a.Name]; ok {\n\t\t\treturn fmt.Errorf(`duplicate apps of name %q`, a.Name)\n\t\t}\n\t\tseen[a.Name] = true\n\t}\n\treturn nil\n}\n\n// Get retrieves an app by the specified name from the AppList; if there is\n// no such app, nil is returned. 
The returned *RuntimeApp MUST be considered\n// read-only.\nfunc (al AppList) Get(name types.ACName) *RuntimeApp {\n\tfor _, a := range al {\n\t\tif name.Equals(a.Name) {\n\t\t\taa := a\n\t\t\treturn &aa\n\t\t}\n\t}\n\treturn nil\n}\n\n// Mount describes the mapping between a volume and the path it is mounted\n// inside of an app's filesystem.\n// The AppVolume is optional. If missing, the pod-level Volume of the\n// same name shall be used.\ntype Mount struct {\n\tVolume    types.ACName  `json:\"volume\"`\n\tPath      string        `json:\"path\"`\n\tAppVolume *types.Volume `json:\"appVolume,omitempty\"`\n}\n\nfunc (r Mount) assertValid() error {\n\tif r.Volume.Empty() {\n\t\treturn errors.New(\"volume must be set\")\n\t}\n\tif r.Path == \"\" {\n\t\treturn errors.New(\"path must be set\")\n\t}\n\treturn nil\n}\n\n// RuntimeApp describes an application referenced in a PodManifest\ntype RuntimeApp struct {\n\tName           types.ACName      `json:\"name\"`\n\tImage          RuntimeImage      `json:\"image\"`\n\tApp            *types.App        `json:\"app,omitempty\"`\n\tReadOnlyRootFS bool              `json:\"readOnlyRootFS,omitempty\"`\n\tMounts         []Mount           `json:\"mounts,omitempty\"`\n\tAnnotations    types.Annotations `json:\"annotations,omitempty\"`\n}\n\n// RuntimeImage describes an image referenced in a RuntimeApp\ntype RuntimeImage struct {\n\tName   *types.ACIdentifier `json:\"name,omitempty\"`\n\tID     types.Hash          `json:\"id\"`\n\tLabels types.Labels        `json:\"labels,omitempty\"`\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/acidentifier.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"regexp\"\n\t\"strings\"\n)\n\nvar (\n\t// ValidACIdentifier is a regular expression that defines a valid ACIdentifier\n\tValidACIdentifier = regexp.MustCompile(\"^[a-z0-9]+([-._~/][a-z0-9]+)*$\")\n\n\tinvalidACIdentifierChars = regexp.MustCompile(\"[^a-z0-9-._~/]\")\n\tinvalidACIdentifierEdges = regexp.MustCompile(\"(^[-._~/]+)|([-._~/]+$)\")\n\n\tErrEmptyACIdentifier         = ACIdentifierError(\"ACIdentifier cannot be empty\")\n\tErrInvalidEdgeInACIdentifier = ACIdentifierError(\"ACIdentifier must start and end with only lower case \" +\n\t\t\"alphanumeric characters\")\n\tErrInvalidCharInACIdentifier = ACIdentifierError(\"ACIdentifier must contain only lower case \" +\n\t\t`alphanumeric characters plus \"-._~/\"`)\n)\n\n// ACIdentifier (an App-Container Identifier) is a format used by keys in image names\n// and image labels of the App Container Standard. An ACIdentifier is restricted to numeric\n// and lowercase URI unreserved characters defined in URI RFC[1]; all alphabetical characters\n// must be lowercase only. Furthermore, the first and last character (\"edges\") must be\n// alphanumeric, and an ACIdentifier cannot be empty. 
Programmatically, an ACIdentifier must\n// conform to the regular expression ValidACIdentifier.\n//\n// [1] http://tools.ietf.org/html/rfc3986#section-2.3\ntype ACIdentifier string\n\nfunc (n ACIdentifier) String() string {\n\treturn string(n)\n}\n\n// Set sets the ACIdentifier to the given value, if it is valid; if not,\n// an error is returned.\nfunc (n *ACIdentifier) Set(s string) error {\n\tnn, err := NewACIdentifier(s)\n\tif err == nil {\n\t\t*n = *nn\n\t}\n\treturn err\n}\n\n// Equals checks whether a given ACIdentifier is equal to this one.\nfunc (n ACIdentifier) Equals(o ACIdentifier) bool {\n\treturn strings.ToLower(string(n)) == strings.ToLower(string(o))\n}\n\n// Empty returns a boolean indicating whether this ACIdentifier is empty.\nfunc (n ACIdentifier) Empty() bool {\n\treturn n.String() == \"\"\n}\n\n// NewACIdentifier generates a new ACIdentifier from a string. If the given string is\n// not a valid ACIdentifier, nil and an error are returned.\nfunc NewACIdentifier(s string) (*ACIdentifier, error) {\n\tn := ACIdentifier(s)\n\tif err := n.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &n, nil\n}\n\n// MustACIdentifier generates a new ACIdentifier from a string. If the given string is\n// not a valid ACIdentifier, it panics.\nfunc MustACIdentifier(s string) *ACIdentifier {\n\tn, err := NewACIdentifier(s)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn n\n}\n\nfunc (n ACIdentifier) assertValid() error {\n\ts := string(n)\n\tif len(s) == 0 {\n\t\treturn ErrEmptyACIdentifier\n\t}\n\tif invalidACIdentifierChars.MatchString(s) {\n\t\treturn ErrInvalidCharInACIdentifier\n\t}\n\tif invalidACIdentifierEdges.MatchString(s) {\n\t\treturn ErrInvalidEdgeInACIdentifier\n\t}\n\treturn nil\n}\n\n// UnmarshalJSON implements the json.Unmarshaler interface\nfunc (n *ACIdentifier) UnmarshalJSON(data []byte) error {\n\tvar s string\n\tif err := json.Unmarshal(data, &s); err != nil {\n\t\treturn err\n\t}\n\tnn, err := NewACIdentifier(s)\n\tif err != nil 
{\n\t\treturn err\n\t}\n\t*n = *nn\n\treturn nil\n}\n\n// MarshalJSON implements the json.Marshaler interface\nfunc (n ACIdentifier) MarshalJSON() ([]byte, error) {\n\tif err := n.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(n.String())\n}\n\n// SanitizeACIdentifier replaces every invalid ACIdentifier character in s with an underscore\n// making it a legal ACIdentifier string. If the character is an upper case letter it\n// replaces it with its lower case. It also removes illegal edge characters\n// (hyphens, periods, underscores, tildes, and slashes).\n//\n// This is a helper function and its algorithm is not part of the spec. It\n// should not be called without the user explicitly asking for a suggestion.\nfunc SanitizeACIdentifier(s string) (string, error) {\n\ts = strings.ToLower(s)\n\ts = invalidACIdentifierChars.ReplaceAllString(s, \"_\")\n\ts = invalidACIdentifierEdges.ReplaceAllString(s, \"\")\n\n\tif s == \"\" {\n\t\treturn \"\", errors.New(\"must contain at least one valid character\")\n\t}\n\n\treturn s, nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/ackind.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n)\n\nvar (\n\tErrNoACKind = ACKindError(\"ACKind must be set\")\n)\n\n// ACKind wraps a string to define a field which must be set with one of\n// several ACKind values. If it is unset, or has an invalid value, the field\n// will refuse to marshal/unmarshal.\ntype ACKind string\n\nfunc (a ACKind) String() string {\n\treturn string(a)\n}\n\nfunc (a ACKind) assertValid() error {\n\ts := a.String()\n\tswitch s {\n\tcase \"ImageManifest\", \"PodManifest\":\n\t\treturn nil\n\tcase \"\":\n\t\treturn ErrNoACKind\n\tdefault:\n\t\tmsg := fmt.Sprintf(\"bad ACKind: %s\", s)\n\t\treturn ACKindError(msg)\n\t}\n}\n\nfunc (a ACKind) MarshalJSON() ([]byte, error) {\n\tif err := a.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(a.String())\n}\n\nfunc (a *ACKind) UnmarshalJSON(data []byte) error {\n\tvar s string\n\terr := json.Unmarshal(data, &s)\n\tif err != nil {\n\t\treturn err\n\t}\n\tna := ACKind(s)\n\tif err := na.assertValid(); err != nil {\n\t\treturn err\n\t}\n\t*a = na\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/acname.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"regexp\"\n\t\"strings\"\n)\n\nvar (\n\t// ValidACName is a regular expression that defines a valid ACName\n\tValidACName = regexp.MustCompile(\"^[a-z0-9]+([-][a-z0-9]+)*$\")\n\n\tinvalidACNameChars = regexp.MustCompile(\"[^a-z0-9-]\")\n\tinvalidACNameEdges = regexp.MustCompile(\"(^[-]+)|([-]+$)\")\n\n\tErrEmptyACName         = ACNameError(\"ACName cannot be empty\")\n\tErrInvalidEdgeInACName = ACNameError(\"ACName must start and end with only lower case \" +\n\t\t\"alphanumeric characters\")\n\tErrInvalidCharInACName = ACNameError(\"ACName must contain only lower case \" +\n\t\t`alphanumeric characters plus \"-\"`)\n)\n\n// ACName (an App-Container Name) is a format used by keys in different formats\n// of the App Container Standard. An ACName is restricted to numeric and lowercase\n// characters accepted by the DNS RFC[1] plus \"-\"; all alphabetical characters must\n// be lowercase only. Furthermore, the first and last character (\"edges\") must be\n// alphanumeric, and an ACName cannot be empty. 
Programmatically, an ACName must\n// conform to the regular expression ValidACName.\n//\n// [1] http://tools.ietf.org/html/rfc1123#page-13\ntype ACName string\n\nfunc (n ACName) String() string {\n\treturn string(n)\n}\n\n// Set sets the ACName to the given value, if it is valid; if not,\n// an error is returned.\nfunc (n *ACName) Set(s string) error {\n\tnn, err := NewACName(s)\n\tif err == nil {\n\t\t*n = *nn\n\t}\n\treturn err\n}\n\n// Equals checks whether a given ACName is equal to this one.\nfunc (n ACName) Equals(o ACName) bool {\n\treturn strings.ToLower(string(n)) == strings.ToLower(string(o))\n}\n\n// Empty returns a boolean indicating whether this ACName is empty.\nfunc (n ACName) Empty() bool {\n\treturn n.String() == \"\"\n}\n\n// NewACName generates a new ACName from a string. If the given string is\n// not a valid ACName, nil and an error are returned.\nfunc NewACName(s string) (*ACName, error) {\n\tn := ACName(s)\n\tif err := n.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &n, nil\n}\n\n// MustACName generates a new ACName from a string. If the given string is\n// not a valid ACName, it panics.\nfunc MustACName(s string) *ACName {\n\tn, err := NewACName(s)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn n\n}\n\nfunc (n ACName) assertValid() error {\n\ts := string(n)\n\tif len(s) == 0 {\n\t\treturn ErrEmptyACName\n\t}\n\tif invalidACNameChars.MatchString(s) {\n\t\treturn ErrInvalidCharInACName\n\t}\n\tif invalidACNameEdges.MatchString(s) {\n\t\treturn ErrInvalidEdgeInACName\n\t}\n\treturn nil\n}\n\n// UnmarshalJSON implements the json.Unmarshaler interface\nfunc (n *ACName) UnmarshalJSON(data []byte) error {\n\tvar s string\n\tif err := json.Unmarshal(data, &s); err != nil {\n\t\treturn err\n\t}\n\tnn, err := NewACName(s)\n\tif err != nil {\n\t\treturn err\n\t}\n\t*n = *nn\n\treturn nil\n}\n\n// MarshalJSON implements the json.Marshaler interface\nfunc (n ACName) MarshalJSON() ([]byte, error) {\n\tif err := n.assertValid(); err != 
nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(n.String())\n}\n\n// SanitizeACName replaces every invalid ACName character in s with a dash\n// making it a legal ACName string. If the character is an upper case letter it\n// replaces it with its lower case. It also removes illegal edge characters\n// (hyphens).\n//\n// This is a helper function and its algorithm is not part of the spec. It\n// should not be called without the user explicitly asking for a suggestion.\nfunc SanitizeACName(s string) (string, error) {\n\ts = strings.ToLower(s)\n\ts = invalidACNameChars.ReplaceAllString(s, \"-\")\n\ts = invalidACNameEdges.ReplaceAllString(s, \"\")\n\n\tif s == \"\" {\n\t\treturn \"\", errors.New(\"must contain at least one valid character\")\n\t}\n\n\treturn s, nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/annotations.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n)\n\ntype Annotations []Annotation\n\ntype annotations Annotations\n\ntype Annotation struct {\n\tName  ACIdentifier `json:\"name\"`\n\tValue string       `json:\"value\"`\n}\n\nfunc (a Annotations) assertValid() error {\n\tseen := map[ACIdentifier]string{}\n\tfor _, anno := range a {\n\t\t_, ok := seen[anno.Name]\n\t\tif ok {\n\t\t\treturn fmt.Errorf(`duplicate annotations of name %q`, anno.Name)\n\t\t}\n\t\tseen[anno.Name] = anno.Value\n\t}\n\tif c, ok := seen[\"created\"]; ok {\n\t\tif _, err := NewDate(c); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tif h, ok := seen[\"homepage\"]; ok {\n\t\tif _, err := NewURL(h); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tif d, ok := seen[\"documentation\"]; ok {\n\t\tif _, err := NewURL(d); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\nfunc (a Annotations) MarshalJSON() ([]byte, error) {\n\tif err := a.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(annotations(a))\n}\n\nfunc (a *Annotations) UnmarshalJSON(data []byte) error {\n\tvar ja annotations\n\tif err := json.Unmarshal(data, &ja); err != nil {\n\t\treturn err\n\t}\n\tna := Annotations(ja)\n\tif err := na.assertValid(); err != nil {\n\t\treturn err\n\t}\n\t*a = na\n\treturn nil\n}\n\n// Retrieve the value of an annotation by the given name from 
Annotations, if\n// it exists.\nfunc (a Annotations) Get(name string) (val string, ok bool) {\n\tfor _, anno := range a {\n\t\tif anno.Name.String() == name {\n\t\t\treturn anno.Value, true\n\t\t}\n\t}\n\treturn \"\", false\n}\n\n// Set sets the value of an annotation by the given name, overwriting if one already exists.\nfunc (a *Annotations) Set(name ACIdentifier, value string) {\n\tfor i, anno := range *a {\n\t\tif anno.Name.Equals(name) {\n\t\t\t(*a)[i] = Annotation{\n\t\t\t\tName:  name,\n\t\t\t\tValue: value,\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t}\n\tanno := Annotation{\n\t\tName:  name,\n\t\tValue: value,\n\t}\n\t*a = append(*a, anno)\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/app.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"path\"\n)\n\ntype App struct {\n\tExec              Exec            `json:\"exec\"`\n\tEventHandlers     []EventHandler  `json:\"eventHandlers,omitempty\"`\n\tUser              string          `json:\"user\"`\n\tGroup             string          `json:\"group\"`\n\tSupplementaryGIDs []int           `json:\"supplementaryGIDs,omitempty\"`\n\tWorkingDirectory  string          `json:\"workingDirectory,omitempty\"`\n\tEnvironment       Environment     `json:\"environment,omitempty\"`\n\tMountPoints       []MountPoint    `json:\"mountPoints,omitempty\"`\n\tPorts             []Port          `json:\"ports,omitempty\"`\n\tIsolators         Isolators       `json:\"isolators,omitempty\"`\n\tUserAnnotations   UserAnnotations `json:\"userAnnotations,omitempty\"`\n\tUserLabels        UserLabels      `json:\"userLabels,omitempty\"`\n}\n\n// app is a model to facilitate extra validation during the\n// unmarshalling of the App\ntype app App\n\nfunc (a *App) UnmarshalJSON(data []byte) error {\n\tja := app(*a)\n\terr := json.Unmarshal(data, &ja)\n\tif err != nil {\n\t\treturn err\n\t}\n\tna := App(ja)\n\tif err := na.assertValid(); err != nil {\n\t\treturn err\n\t}\n\tif na.Environment == nil {\n\t\tna.Environment = make(Environment, 0)\n\t}\n\t*a = na\n\treturn nil\n}\n\nfunc (a App) 
MarshalJSON() ([]byte, error) {\n\tif err := a.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(app(a))\n}\n\nfunc (a *App) assertValid() error {\n\tif err := a.Exec.assertValid(); err != nil {\n\t\treturn err\n\t}\n\tif a.User == \"\" {\n\t\treturn errors.New(`user is required`)\n\t}\n\tif a.Group == \"\" {\n\t\treturn errors.New(`group is required`)\n\t}\n\tif !path.IsAbs(a.WorkingDirectory) && a.WorkingDirectory != \"\" {\n\t\treturn errors.New(\"workingDirectory must be an absolute path\")\n\t}\n\teh := make(map[string]bool)\n\tfor _, e := range a.EventHandlers {\n\t\tname := e.Name\n\t\tif eh[name] {\n\t\t\treturn fmt.Errorf(\"Only one eventHandler of name %q allowed\", name)\n\t\t}\n\t\teh[name] = true\n\t}\n\tif err := a.Environment.assertValid(); err != nil {\n\t\treturn err\n\t}\n\tif err := a.Isolators.assertValid(); err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/date.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"time\"\n)\n\n// Date wraps time.Time to marshal/unmarshal to/from JSON strings in strict\n// accordance with RFC3339\n// TODO(jonboulle): golang's implementation seems slightly buggy here;\n// according to http://tools.ietf.org/html/rfc3339#section-5.6 , applications\n// may choose to separate the date and time with a space instead of a T\n// character (for example, `date --rfc-3339` on GNU coreutils) - but this is\n// considered an error by go's parser. File a bug?\ntype Date time.Time\n\nfunc NewDate(s string) (*Date, error) {\n\tt, err := time.Parse(time.RFC3339, s)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"bad Date: %v\", err)\n\t}\n\td := Date(t)\n\treturn &d, nil\n}\n\nfunc (d Date) String() string {\n\treturn time.Time(d).Format(time.RFC3339)\n}\n\nfunc (d *Date) UnmarshalJSON(data []byte) error {\n\tvar s string\n\tif err := json.Unmarshal(data, &s); err != nil {\n\t\treturn err\n\t}\n\tnd, err := NewDate(s)\n\tif err != nil {\n\t\treturn err\n\t}\n\t*d = *nd\n\treturn nil\n}\n\nfunc (d Date) MarshalJSON() ([]byte, error) {\n\treturn json.Marshal(d.String())\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/dependencies.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n)\n\ntype Dependencies []Dependency\n\ntype Dependency struct {\n\tImageName ACIdentifier `json:\"imageName\"`\n\tImageID   *Hash        `json:\"imageID,omitempty\"`\n\tLabels    Labels       `json:\"labels,omitempty\"`\n\tSize      uint         `json:\"size,omitempty\"`\n}\n\ntype dependency Dependency\n\nfunc (d Dependency) assertValid() error {\n\tif len(d.ImageName) < 1 {\n\t\treturn errors.New(`imageName cannot be empty`)\n\t}\n\treturn nil\n}\n\nfunc (d Dependency) MarshalJSON() ([]byte, error) {\n\tif err := d.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(dependency(d))\n}\n\nfunc (d *Dependency) UnmarshalJSON(data []byte) error {\n\tvar jd dependency\n\tif err := json.Unmarshal(data, &jd); err != nil {\n\t\treturn err\n\t}\n\tnd := Dependency(jd)\n\tif err := nd.assertValid(); err != nil {\n\t\treturn err\n\t}\n\t*d = nd\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/doc.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Package types contains structs representing the various types in the app\n// container specification. It is used by the [schema manifest types](../)\n// to enforce validation.\npackage types\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/environment.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"regexp\"\n)\n\nvar (\n\tenvPattern = regexp.MustCompile(\"^[A-Za-z_][A-Za-z_0-9]*$\")\n)\n\ntype Environment []EnvironmentVariable\n\ntype environment Environment\n\ntype EnvironmentVariable struct {\n\tName  string `json:\"name\"`\n\tValue string `json:\"value\"`\n}\n\nfunc (ev EnvironmentVariable) assertValid() error {\n\tif len(ev.Name) == 0 {\n\t\treturn fmt.Errorf(`environment variable name must not be empty`)\n\t}\n\tif !envPattern.MatchString(ev.Name) {\n\t\treturn fmt.Errorf(`environment variable does not have valid identifier %q`, ev.Name)\n\t}\n\treturn nil\n}\n\nfunc (e Environment) assertValid() error {\n\tseen := map[string]bool{}\n\tfor _, env := range e {\n\t\tif err := env.assertValid(); err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, ok := seen[env.Name]\n\t\tif ok {\n\t\t\treturn fmt.Errorf(`duplicate environment variable of name %q`, env.Name)\n\t\t}\n\t\tseen[env.Name] = true\n\t}\n\n\treturn nil\n}\n\nfunc (e Environment) MarshalJSON() ([]byte, error) {\n\tif err := e.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(environment(e))\n}\n\nfunc (e *Environment) UnmarshalJSON(data []byte) error {\n\tvar je environment\n\tif err := json.Unmarshal(data, &je); err != nil {\n\t\treturn err\n\t}\n\tne := Environment(je)\n\tif err := 
ne.assertValid(); err != nil {\n\t\treturn err\n\t}\n\t*e = ne\n\treturn nil\n}\n\n// Retrieve the value of an environment variable by the given name from\n// Environment, if it exists.\nfunc (e Environment) Get(name string) (value string, ok bool) {\n\tfor _, env := range e {\n\t\tif env.Name == name {\n\t\t\treturn env.Value, true\n\t\t}\n\t}\n\treturn \"\", false\n}\n\n// Set sets the value of an environment variable by the given name,\n// overwriting if one already exists.\nfunc (e *Environment) Set(name string, value string) {\n\tfor i, env := range *e {\n\t\tif env.Name == name {\n\t\t\t(*e)[i] = EnvironmentVariable{\n\t\t\t\tName:  name,\n\t\t\t\tValue: value,\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t}\n\tenv := EnvironmentVariable{\n\t\tName:  name,\n\t\tValue: value,\n\t}\n\t*e = append(*e, env)\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/errors.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport \"fmt\"\n\n// An ACKindError is returned when the wrong ACKind is set in a manifest\ntype ACKindError string\n\nfunc (e ACKindError) Error() string {\n\treturn string(e)\n}\n\nfunc InvalidACKindError(kind ACKind) ACKindError {\n\treturn ACKindError(fmt.Sprintf(\"missing or bad ACKind (must be %#v)\", kind))\n}\n\n// An ACVersionError is returned when a bad ACVersion is set in a manifest\ntype ACVersionError string\n\nfunc (e ACVersionError) Error() string {\n\treturn string(e)\n}\n\n// An ACIdentifierError is returned when a bad value is used for an ACIdentifier\ntype ACIdentifierError string\n\nfunc (e ACIdentifierError) Error() string {\n\treturn string(e)\n}\n\n// An ACNameError is returned when a bad value is used for an ACName\ntype ACNameError string\n\nfunc (e ACNameError) Error() string {\n\treturn string(e)\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/event_handler.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n)\n\ntype EventHandler struct {\n\tName string `json:\"name\"`\n\tExec Exec   `json:\"exec\"`\n}\n\ntype eventHandler EventHandler\n\nfunc (e EventHandler) assertValid() error {\n\ts := e.Name\n\tswitch s {\n\tcase \"pre-start\", \"post-stop\":\n\t\treturn nil\n\tcase \"\":\n\t\treturn errors.New(`eventHandler \"name\" cannot be empty`)\n\tdefault:\n\t\treturn fmt.Errorf(`bad eventHandler \"name\": %q`, s)\n\t}\n}\n\nfunc (e EventHandler) MarshalJSON() ([]byte, error) {\n\tif err := e.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(eventHandler(e))\n}\n\nfunc (e *EventHandler) UnmarshalJSON(data []byte) error {\n\tvar je eventHandler\n\terr := json.Unmarshal(data, &je)\n\tif err != nil {\n\t\treturn err\n\t}\n\tne := EventHandler(je)\n\tif err := ne.assertValid(); err != nil {\n\t\treturn err\n\t}\n\t*e = ne\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/exec.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport \"encoding/json\"\n\ntype Exec []string\n\ntype exec Exec\n\nfunc (e Exec) assertValid() error {\n\treturn nil\n}\n\nfunc (e Exec) MarshalJSON() ([]byte, error) {\n\tif err := e.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(exec(e))\n}\n\nfunc (e *Exec) UnmarshalJSON(data []byte) error {\n\tvar je exec\n\terr := json.Unmarshal(data, &je)\n\tif err != nil {\n\t\treturn err\n\t}\n\tne := Exec(je)\n\tif err := ne.assertValid(); err != nil {\n\t\treturn err\n\t}\n\t*e = ne\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/hash.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"crypto/sha512\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"strings\"\n)\n\nconst (\n\tmaxHashSize = (sha512.Size / 2) + len(\"sha512-\")\n)\n\n// Hash encodes a hash specified in a string of the form:\n//    \"<type>-<value>\"\n// for example\n//    \"sha512-06c733b1838136838e6d2d3e8fa5aea4c7905e92[...]\"\n// Valid types are currently:\n//  * sha512\ntype Hash struct {\n\ttyp string\n\tVal string\n}\n\nfunc NewHash(s string) (*Hash, error) {\n\telems := strings.Split(s, \"-\")\n\tif len(elems) != 2 {\n\t\treturn nil, errors.New(\"badly formatted hash string\")\n\t}\n\tnh := Hash{\n\t\ttyp: elems[0],\n\t\tVal: elems[1],\n\t}\n\tif err := nh.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &nh, nil\n}\n\nfunc (h Hash) String() string {\n\treturn fmt.Sprintf(\"%s-%s\", h.typ, h.Val)\n}\n\nfunc (h *Hash) Set(s string) error {\n\tnh, err := NewHash(s)\n\tif err == nil {\n\t\t*h = *nh\n\t}\n\treturn err\n}\n\nfunc (h Hash) Empty() bool {\n\treturn reflect.DeepEqual(h, Hash{})\n}\n\nfunc (h Hash) assertValid() error {\n\tswitch h.typ {\n\tcase \"sha512\":\n\tcase \"\":\n\t\treturn fmt.Errorf(\"unexpected empty hash type\")\n\tdefault:\n\t\treturn fmt.Errorf(\"unrecognized hash type: %v\", h.typ)\n\t}\n\tif h.Val == \"\" {\n\t\treturn fmt.Errorf(\"unexpected empty hash 
value\")\n\t}\n\treturn nil\n}\n\nfunc (h *Hash) UnmarshalJSON(data []byte) error {\n\tvar s string\n\tif err := json.Unmarshal(data, &s); err != nil {\n\t\treturn err\n\t}\n\tnh, err := NewHash(s)\n\tif err != nil {\n\t\treturn err\n\t}\n\t*h = *nh\n\treturn nil\n}\n\nfunc (h Hash) MarshalJSON() ([]byte, error) {\n\tif err := h.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(h.String())\n}\n\nfunc NewHashSHA512(b []byte) *Hash {\n\th := sha512.New()\n\th.Write(b)\n\tnh, _ := NewHash(fmt.Sprintf(\"sha512-%x\", h.Sum(nil)))\n\treturn nh\n}\n\nfunc ShortHash(hash string) string {\n\tif len(hash) > maxHashSize {\n\t\treturn hash[:maxHashSize]\n\t}\n\treturn hash\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/isolator.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n)\n\nvar (\n\tisolatorMap map[ACIdentifier]IsolatorValueConstructor\n\n\t// ErrIncompatibleIsolator is returned whenever an Isolators set contains\n\t// conflicting IsolatorValue instances\n\tErrIncompatibleIsolator = errors.New(\"isolators set contains incompatible types\")\n\t// ErrInvalidIsolator is returned upon validation failures due to improper\n\t// or partially constructed Isolator instances (eg. 
from incomplete direct construction)\n\tErrInvalidIsolator = errors.New(\"invalid isolator\")\n)\n\nfunc init() {\n\tisolatorMap = make(map[ACIdentifier]IsolatorValueConstructor)\n}\n\ntype IsolatorValueConstructor func() IsolatorValue\n\nfunc AddIsolatorValueConstructor(n ACIdentifier, i IsolatorValueConstructor) {\n\tisolatorMap[n] = i\n}\n\nfunc AddIsolatorName(n ACIdentifier, ns map[ACIdentifier]struct{}) {\n\tns[n] = struct{}{}\n}\n\n// Isolators encapsulates a list of individual Isolators for the ImageManifest\n// and PodManifest schemas.\ntype Isolators []Isolator\n\n// assertValid checks that every single isolator is valid and that\n// the whole set is well built\nfunc (isolators Isolators) assertValid() error {\n\ttypesMap := make(map[ACIdentifier]bool)\n\tfor _, i := range isolators {\n\t\tv := i.Value()\n\t\tif v == nil {\n\t\t\treturn ErrInvalidIsolator\n\t\t}\n\t\tif err := v.AssertValid(); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif _, ok := typesMap[i.Name]; ok {\n\t\t\tif !v.multipleAllowed() {\n\t\t\t\treturn fmt.Errorf(`isolators set contains too many instances of type %s\"`, i.Name)\n\t\t\t}\n\t\t}\n\t\tfor _, c := range v.Conflicts() {\n\t\t\tif _, found := typesMap[c]; found {\n\t\t\t\treturn ErrIncompatibleIsolator\n\t\t\t}\n\t\t}\n\t\ttypesMap[i.Name] = true\n\t}\n\treturn nil\n}\n\n// GetByName returns the last isolator in the list by the given name.\nfunc (is *Isolators) GetByName(name ACIdentifier) *Isolator {\n\tvar i Isolator\n\tfor j := len(*is) - 1; j >= 0; j-- {\n\t\ti = []Isolator(*is)[j]\n\t\tif i.Name == name {\n\t\t\treturn &i\n\t\t}\n\t}\n\treturn nil\n}\n\n// ReplaceIsolatorsByName overrides matching isolator types with a new\n// isolator, deleting them all and appending the new one instead\nfunc (is *Isolators) ReplaceIsolatorsByName(newIs Isolator, oldNames []ACIdentifier) {\n\tvar i Isolator\n\tfor j := len(*is) - 1; j >= 0; j-- {\n\t\ti = []Isolator(*is)[j]\n\t\tfor _, name := range oldNames {\n\t\t\tif i.Name == name 
{\n\t\t\t\t*is = append((*is)[:j], (*is)[j+1:]...)\n\t\t\t}\n\t\t}\n\t}\n\t*is = append((*is)[:], newIs)\n\treturn\n}\n\n// Unrecognized returns a set of isolators that are not recognized.\n// An isolator is not recognized if it has not had an associated\n// constructor registered with AddIsolatorValueConstructor.\nfunc (is *Isolators) Unrecognized() Isolators {\n\tu := Isolators{}\n\tfor _, i := range *is {\n\t\tif i.value == nil {\n\t\t\tu = append(u, i)\n\t\t}\n\t}\n\treturn u\n}\n\n// IsolatorValue encapsulates the actual value of an Isolator which may be\n// serialized as any arbitrary JSON blob. Specific Isolator types should\n// implement this interface to facilitate unmarshalling and validation.\ntype IsolatorValue interface {\n\t// UnmarshalJSON unserialize a JSON-encoded isolator\n\tUnmarshalJSON(b []byte) error\n\t// AssertValid returns a non-nil error value if an IsolatorValue is not valid\n\t// according to appc spec\n\tAssertValid() error\n\t// Conflicts returns a list of conflicting isolators types, which cannot co-exist\n\t// together with this IsolatorValue\n\tConflicts() []ACIdentifier\n\t// multipleAllowed specifies whether multiple isolator instances are allowed\n\t// for this isolator type\n\tmultipleAllowed() bool\n}\n\n// Isolator is a model for unmarshalling isolator types from their JSON-encoded\n// representation.\ntype Isolator struct {\n\t// Name is the name of the Isolator type as defined in the specification.\n\tName ACIdentifier `json:\"name\"`\n\t// ValueRaw captures the raw JSON value of an Isolator that was\n\t// unmarshalled. This field is used for unmarshalling only. It MUST NOT\n\t// be referenced by external users of the Isolator struct. 
It is\n\t// exported only to satisfy Go's unfortunate requirement that fields\n\t// must be capitalized to be unmarshalled successfully.\n\tValueRaw *json.RawMessage `json:\"value\"`\n\t// value captures the \"true\" value of the isolator.\n\tvalue IsolatorValue\n}\n\n// isolator is a shadow type used for unmarshalling.\ntype isolator Isolator\n\n// Value returns the raw Value of this Isolator. Users should perform a type\n// switch/assertion on this value to extract the underlying isolator type.\nfunc (i *Isolator) Value() IsolatorValue {\n\treturn i.value\n}\n\n// UnmarshalJSON populates this Isolator from a JSON-encoded representation. To\n// unmarshal the Value of the Isolator, it will use the appropriate constructor\n// as registered by AddIsolatorValueConstructor.\nfunc (i *Isolator) UnmarshalJSON(b []byte) error {\n\tvar ii isolator\n\terr := json.Unmarshal(b, &ii)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tvar dst IsolatorValue\n\tcon, ok := isolatorMap[ii.Name]\n\tif ok {\n\t\tdst = con()\n\t\terr = dst.UnmarshalJSON(*ii.ValueRaw)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\terr = dst.AssertValid()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\ti.value = dst\n\ti.ValueRaw = ii.ValueRaw\n\ti.Name = ii.Name\n\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/isolator_linux_specific.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"strings\"\n\t\"unicode\"\n)\n\nconst (\n\tLinuxCapabilitiesRetainSetName = \"os/linux/capabilities-retain-set\"\n\tLinuxCapabilitiesRevokeSetName = \"os/linux/capabilities-remove-set\"\n\tLinuxNoNewPrivilegesName       = \"os/linux/no-new-privileges\"\n\tLinuxSeccompRemoveSetName      = \"os/linux/seccomp-remove-set\"\n\tLinuxSeccompRetainSetName      = \"os/linux/seccomp-retain-set\"\n\tLinuxOOMScoreAdjName           = \"os/linux/oom-score-adj\"\n\tLinuxCPUSharesName             = \"os/linux/cpu-shares\"\n\tLinuxSELinuxContextName        = \"os/linux/selinux-context\"\n)\n\nvar LinuxIsolatorNames = make(map[ACIdentifier]struct{})\n\nfunc init() {\n\tfor name, con := range map[ACIdentifier]IsolatorValueConstructor{\n\t\tLinuxCapabilitiesRevokeSetName: func() IsolatorValue { return &LinuxCapabilitiesRevokeSet{} },\n\t\tLinuxCapabilitiesRetainSetName: func() IsolatorValue { return &LinuxCapabilitiesRetainSet{} },\n\t\tLinuxNoNewPrivilegesName:       func() IsolatorValue { v := LinuxNoNewPrivileges(false); return &v },\n\t\tLinuxOOMScoreAdjName:           func() IsolatorValue { v := LinuxOOMScoreAdj(0); return &v },\n\t\tLinuxCPUSharesName:             func() IsolatorValue { v := LinuxCPUShares(1024); return &v },\n\t\tLinuxSeccompRemoveSetName:      func() IsolatorValue { 
return &LinuxSeccompRemoveSet{} },\n\t\tLinuxSeccompRetainSetName:      func() IsolatorValue { return &LinuxSeccompRetainSet{} },\n\t\tLinuxSELinuxContextName:        func() IsolatorValue { return &LinuxSELinuxContext{} },\n\t} {\n\t\tAddIsolatorName(name, LinuxIsolatorNames)\n\t\tAddIsolatorValueConstructor(name, con)\n\t}\n}\n\ntype LinuxNoNewPrivileges bool\n\nfunc (l LinuxNoNewPrivileges) AssertValid() error {\n\treturn nil\n}\n\n// TODO(lucab): both need to be clarified in spec,\n// see https://github.com/appc/spec/issues/625\nfunc (l LinuxNoNewPrivileges) multipleAllowed() bool {\n\treturn true\n}\nfunc (l LinuxNoNewPrivileges) Conflicts() []ACIdentifier {\n\treturn nil\n}\n\nfunc (l *LinuxNoNewPrivileges) UnmarshalJSON(b []byte) error {\n\tvar v bool\n\terr := json.Unmarshal(b, &v)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t*l = LinuxNoNewPrivileges(v)\n\n\treturn nil\n}\n\ntype AsIsolator interface {\n\tAsIsolator() (*Isolator, error)\n}\n\ntype LinuxCapabilitiesSet interface {\n\tSet() []LinuxCapability\n\tAssertValid() error\n}\n\ntype LinuxCapability string\n\ntype linuxCapabilitiesSetValue struct {\n\tSet []LinuxCapability `json:\"set\"`\n}\n\ntype linuxCapabilitiesSetBase struct {\n\tval linuxCapabilitiesSetValue\n}\n\nfunc (l linuxCapabilitiesSetBase) AssertValid() error {\n\tif len(l.val.Set) == 0 {\n\t\treturn errors.New(\"set must be non-empty\")\n\t}\n\treturn nil\n}\n\n// TODO(lucab): both need to be clarified in spec,\n// see https://github.com/appc/spec/issues/625\nfunc (l linuxCapabilitiesSetBase) multipleAllowed() bool {\n\treturn true\n}\nfunc (l linuxCapabilitiesSetBase) Conflicts() []ACIdentifier {\n\treturn nil\n}\n\nfunc (l *linuxCapabilitiesSetBase) UnmarshalJSON(b []byte) error {\n\tvar v linuxCapabilitiesSetValue\n\terr := json.Unmarshal(b, &v)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tl.val = v\n\n\treturn err\n}\n\nfunc (l linuxCapabilitiesSetBase) Set() []LinuxCapability {\n\treturn l.val.Set\n}\n\ntype 
LinuxCapabilitiesRetainSet struct {\n\tlinuxCapabilitiesSetBase\n}\n\nfunc NewLinuxCapabilitiesRetainSet(caps ...string) (*LinuxCapabilitiesRetainSet, error) {\n\tl := LinuxCapabilitiesRetainSet{\n\t\tlinuxCapabilitiesSetBase{\n\t\t\tlinuxCapabilitiesSetValue{\n\t\t\t\tmake([]LinuxCapability, len(caps)),\n\t\t\t},\n\t\t},\n\t}\n\tfor i, c := range caps {\n\t\tl.linuxCapabilitiesSetBase.val.Set[i] = LinuxCapability(c)\n\t}\n\tif err := l.AssertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &l, nil\n}\n\nfunc (l LinuxCapabilitiesRetainSet) AsIsolator() (*Isolator, error) {\n\tb, err := json.Marshal(l.linuxCapabilitiesSetBase.val)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\trm := json.RawMessage(b)\n\treturn &Isolator{\n\t\tName:     LinuxCapabilitiesRetainSetName,\n\t\tValueRaw: &rm,\n\t\tvalue:    &l,\n\t}, nil\n}\n\ntype LinuxCapabilitiesRevokeSet struct {\n\tlinuxCapabilitiesSetBase\n}\n\nfunc NewLinuxCapabilitiesRevokeSet(caps ...string) (*LinuxCapabilitiesRevokeSet, error) {\n\tl := LinuxCapabilitiesRevokeSet{\n\t\tlinuxCapabilitiesSetBase{\n\t\t\tlinuxCapabilitiesSetValue{\n\t\t\t\tmake([]LinuxCapability, len(caps)),\n\t\t\t},\n\t\t},\n\t}\n\tfor i, c := range caps {\n\t\tl.linuxCapabilitiesSetBase.val.Set[i] = LinuxCapability(c)\n\t}\n\tif err := l.AssertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &l, nil\n}\n\nfunc (l LinuxCapabilitiesRevokeSet) AsIsolator() (*Isolator, error) {\n\tb, err := json.Marshal(l.linuxCapabilitiesSetBase.val)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\trm := json.RawMessage(b)\n\treturn &Isolator{\n\t\tName:     LinuxCapabilitiesRevokeSetName,\n\t\tValueRaw: &rm,\n\t\tvalue:    &l,\n\t}, nil\n}\n\ntype LinuxSeccompSet interface {\n\tSet() []LinuxSeccompEntry\n\tErrno() LinuxSeccompErrno\n\tAssertValid() error\n}\n\ntype LinuxSeccompEntry string\ntype LinuxSeccompErrno string\n\ntype linuxSeccompValue struct {\n\tSet   []LinuxSeccompEntry `json:\"set\"`\n\tErrno LinuxSeccompErrno   
`json:\"errno\"`\n}\n\ntype linuxSeccompBase struct {\n\tval linuxSeccompValue\n}\n\nfunc (l linuxSeccompBase) multipleAllowed() bool {\n\treturn false\n}\n\nfunc (l linuxSeccompBase) AssertValid() error {\n\tif len(l.val.Set) == 0 {\n\t\treturn errors.New(\"set must be non-empty\")\n\t}\n\tif l.val.Errno == \"\" {\n\t\treturn nil\n\t}\n\tfor _, c := range l.val.Errno {\n\t\tif !unicode.IsUpper(c) {\n\t\t\treturn errors.New(\"errno must be an upper case string\")\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (l *linuxSeccompBase) UnmarshalJSON(b []byte) error {\n\tvar v linuxSeccompValue\n\terr := json.Unmarshal(b, &v)\n\tif err != nil {\n\t\treturn err\n\t}\n\tl.val = v\n\treturn nil\n}\n\nfunc (l linuxSeccompBase) Set() []LinuxSeccompEntry {\n\treturn l.val.Set\n}\n\nfunc (l linuxSeccompBase) Errno() LinuxSeccompErrno {\n\treturn l.val.Errno\n}\n\ntype LinuxSeccompRetainSet struct {\n\tlinuxSeccompBase\n}\n\nfunc (l LinuxSeccompRetainSet) Conflicts() []ACIdentifier {\n\treturn []ACIdentifier{LinuxSeccompRemoveSetName}\n}\n\nfunc NewLinuxSeccompRetainSet(errno string, syscall ...string) (*LinuxSeccompRetainSet, error) {\n\tl := LinuxSeccompRetainSet{\n\t\tlinuxSeccompBase{\n\t\t\tlinuxSeccompValue{\n\t\t\t\tmake([]LinuxSeccompEntry, len(syscall)),\n\t\t\t\tLinuxSeccompErrno(errno),\n\t\t\t},\n\t\t},\n\t}\n\tfor i, c := range syscall {\n\t\tl.linuxSeccompBase.val.Set[i] = LinuxSeccompEntry(c)\n\t}\n\tif err := l.AssertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &l, nil\n}\n\nfunc (l LinuxSeccompRetainSet) AsIsolator() (*Isolator, error) {\n\tb, err := json.Marshal(l.linuxSeccompBase.val)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\trm := json.RawMessage(b)\n\treturn &Isolator{\n\t\tName:     LinuxSeccompRetainSetName,\n\t\tValueRaw: &rm,\n\t\tvalue:    &l,\n\t}, nil\n}\n\ntype LinuxSeccompRemoveSet struct {\n\tlinuxSeccompBase\n}\n\nfunc (l LinuxSeccompRemoveSet) Conflicts() []ACIdentifier {\n\treturn []ACIdentifier{LinuxSeccompRetainSetName}\n}\n\nfunc 
NewLinuxSeccompRemoveSet(errno string, syscall ...string) (*LinuxSeccompRemoveSet, error) {\n\tl := LinuxSeccompRemoveSet{\n\t\tlinuxSeccompBase{\n\t\t\tlinuxSeccompValue{\n\t\t\t\tmake([]LinuxSeccompEntry, len(syscall)),\n\t\t\t\tLinuxSeccompErrno(errno),\n\t\t\t},\n\t\t},\n\t}\n\tfor i, c := range syscall {\n\t\tl.linuxSeccompBase.val.Set[i] = LinuxSeccompEntry(c)\n\t}\n\tif err := l.AssertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &l, nil\n}\n\nfunc (l LinuxSeccompRemoveSet) AsIsolator() (*Isolator, error) {\n\tb, err := json.Marshal(l.linuxSeccompBase.val)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\trm := json.RawMessage(b)\n\treturn &Isolator{\n\t\tName:     LinuxSeccompRemoveSetName,\n\t\tValueRaw: &rm,\n\t\tvalue:    &l,\n\t}, nil\n}\n\n// LinuxCPUShares assigns the CPU time share weight to the processes executed.\n// See https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#CPUShares=weight,\n// https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt\ntype LinuxCPUShares int\n\nfunc NewLinuxCPUShares(val int) (*LinuxCPUShares, error) {\n\tl := LinuxCPUShares(val)\n\tif err := l.AssertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &l, nil\n}\n\nfunc (l LinuxCPUShares) AssertValid() error {\n\tif l < 2 || l > 262144 {\n\t\treturn fmt.Errorf(\"%s must be between 2 and 262144, got %d\", LinuxCPUSharesName, l)\n\t}\n\treturn nil\n}\n\nfunc (l LinuxCPUShares) multipleAllowed() bool {\n\treturn false\n}\n\nfunc (l LinuxCPUShares) Conflicts() []ACIdentifier {\n\treturn nil\n}\n\nfunc (l *LinuxCPUShares) UnmarshalJSON(b []byte) error {\n\tvar v int\n\terr := json.Unmarshal(b, &v)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t*l = LinuxCPUShares(v)\n\treturn nil\n}\n\nfunc (l LinuxCPUShares) AsIsolator() Isolator {\n\tb, err := json.Marshal(l)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\trm := json.RawMessage(b)\n\treturn Isolator{\n\t\tName:     LinuxCPUSharesName,\n\t\tValueRaw: &rm,\n\t\tvalue:  
  &l,\n\t}\n}\n\n// LinuxOOMScoreAdj is equivalent to /proc/[pid]/oom_score_adj\ntype LinuxOOMScoreAdj int // -1000 to 1000\n\nfunc NewLinuxOOMScoreAdj(val int) (*LinuxOOMScoreAdj, error) {\n\tl := LinuxOOMScoreAdj(val)\n\tif err := l.AssertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &l, nil\n}\n\nfunc (l LinuxOOMScoreAdj) AssertValid() error {\n\tif l < -1000 || l > 1000 {\n\t\treturn fmt.Errorf(\"%s must be between -1000 and 1000, got %d\", LinuxOOMScoreAdjName, l)\n\t}\n\treturn nil\n}\n\nfunc (l LinuxOOMScoreAdj) multipleAllowed() bool {\n\treturn false\n}\n\nfunc (l LinuxOOMScoreAdj) Conflicts() []ACIdentifier {\n\treturn nil\n}\n\nfunc (l *LinuxOOMScoreAdj) UnmarshalJSON(b []byte) error {\n\tvar v int\n\terr := json.Unmarshal(b, &v)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t*l = LinuxOOMScoreAdj(v)\n\treturn nil\n}\n\nfunc (l LinuxOOMScoreAdj) AsIsolator() Isolator {\n\tb, err := json.Marshal(l)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\trm := json.RawMessage(b)\n\treturn Isolator{\n\t\tName:     LinuxOOMScoreAdjName,\n\t\tValueRaw: &rm,\n\t\tvalue:    &l,\n\t}\n}\n\ntype LinuxSELinuxUser string\ntype LinuxSELinuxRole string\ntype LinuxSELinuxType string\ntype LinuxSELinuxLevel string\n\ntype linuxSELinuxValue struct {\n\tUser  LinuxSELinuxUser  `json:\"user\"`\n\tRole  LinuxSELinuxRole  `json:\"role\"`\n\tType  LinuxSELinuxType  `json:\"type\"`\n\tLevel LinuxSELinuxLevel `json:\"level\"`\n}\n\ntype LinuxSELinuxContext struct {\n\tval linuxSELinuxValue\n}\n\nfunc (l LinuxSELinuxContext) AssertValid() error {\n\tif l.val.User == \"\" || strings.Contains(string(l.val.User), \":\") {\n\t\treturn fmt.Errorf(\"invalid user value %q\", l.val.User)\n\t}\n\tif l.val.Role == \"\" || strings.Contains(string(l.val.Role), \":\") {\n\t\treturn fmt.Errorf(\"invalid role value %q\", l.val.Role)\n\t}\n\tif l.val.Type == \"\" || strings.Contains(string(l.val.Type), \":\") {\n\t\treturn fmt.Errorf(\"invalid type value %q\", l.val.Type)\n\t}\n\tif l.val.Level 
== \"\" {\n\t\treturn fmt.Errorf(\"invalid level value %q\", l.val.Level)\n\t}\n\treturn nil\n}\n\nfunc (l *LinuxSELinuxContext) UnmarshalJSON(b []byte) error {\n\tvar v linuxSELinuxValue\n\terr := json.Unmarshal(b, &v)\n\tif err != nil {\n\t\treturn err\n\t}\n\tl.val = v\n\treturn nil\n}\n\nfunc (l LinuxSELinuxContext) User() LinuxSELinuxUser {\n\treturn l.val.User\n}\n\nfunc (l LinuxSELinuxContext) Role() LinuxSELinuxRole {\n\treturn l.val.Role\n}\n\nfunc (l LinuxSELinuxContext) Type() LinuxSELinuxType {\n\treturn l.val.Type\n}\n\nfunc (l LinuxSELinuxContext) Level() LinuxSELinuxLevel {\n\treturn l.val.Level\n}\n\nfunc (l LinuxSELinuxContext) multipleAllowed() bool {\n\treturn false\n}\n\nfunc (l LinuxSELinuxContext) Conflicts() []ACIdentifier {\n\treturn nil\n}\n\nfunc NewLinuxSELinuxContext(selinuxUser, selinuxRole, selinuxType, selinuxLevel string) (*LinuxSELinuxContext, error) {\n\tl := LinuxSELinuxContext{\n\t\tlinuxSELinuxValue{\n\t\t\tLinuxSELinuxUser(selinuxUser),\n\t\t\tLinuxSELinuxRole(selinuxRole),\n\t\t\tLinuxSELinuxType(selinuxType),\n\t\t\tLinuxSELinuxLevel(selinuxLevel),\n\t\t},\n\t}\n\tif err := l.AssertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &l, nil\n}\n\nfunc (l LinuxSELinuxContext) AsIsolator() (*Isolator, error) {\n\tb, err := json.Marshal(l.val)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\trm := json.RawMessage(b)\n\treturn &Isolator{\n\t\tName:     LinuxSELinuxContextName,\n\t\tValueRaw: &rm,\n\t\tvalue:    &l,\n\t}, nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/isolator_resources.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\n\t\"github.com/appc/spec/schema/types/resource\"\n)\n\nvar (\n\tErrDefaultTrue     = errors.New(\"default must be false\")\n\tErrDefaultRequired = errors.New(\"default must be true\")\n\tErrRequestNonEmpty = errors.New(\"request not supported by this resource, must be empty\")\n\n\tResourceIsolatorNames = make(map[ACIdentifier]struct{})\n)\n\nconst (\n\tResourceBlockBandwidthName   = \"resource/block-bandwidth\"\n\tResourceBlockIOPSName        = \"resource/block-iops\"\n\tResourceCPUName              = \"resource/cpu\"\n\tResourceMemoryName           = \"resource/memory\"\n\tResourceNetworkBandwidthName = \"resource/network-bandwidth\"\n)\n\nfunc init() {\n\tfor name, con := range map[ACIdentifier]IsolatorValueConstructor{\n\t\tResourceBlockBandwidthName:   func() IsolatorValue { return &ResourceBlockBandwidth{} },\n\t\tResourceBlockIOPSName:        func() IsolatorValue { return &ResourceBlockIOPS{} },\n\t\tResourceCPUName:              func() IsolatorValue { return &ResourceCPU{} },\n\t\tResourceMemoryName:           func() IsolatorValue { return &ResourceMemory{} },\n\t\tResourceNetworkBandwidthName: func() IsolatorValue { return &ResourceNetworkBandwidth{} },\n\t} {\n\t\tAddIsolatorName(name, ResourceIsolatorNames)\n\t\tAddIsolatorValueConstructor(name, 
con)\n\t}\n}\n\ntype Resource interface {\n\tLimit() *resource.Quantity\n\tRequest() *resource.Quantity\n\tDefault() bool\n}\n\ntype ResourceBase struct {\n\tval resourceValue\n}\n\ntype resourceValue struct {\n\tDefault bool               `json:\"default\"`\n\tRequest *resource.Quantity `json:\"request\"`\n\tLimit   *resource.Quantity `json:\"limit\"`\n}\n\nfunc (r ResourceBase) Limit() *resource.Quantity {\n\treturn r.val.Limit\n}\nfunc (r ResourceBase) Request() *resource.Quantity {\n\treturn r.val.Request\n}\nfunc (r ResourceBase) Default() bool {\n\treturn r.val.Default\n}\n\nfunc (r *ResourceBase) UnmarshalJSON(b []byte) error {\n\treturn json.Unmarshal(b, &r.val)\n}\n\nfunc (r ResourceBase) AssertValid() error {\n\treturn nil\n}\n\n// TODO(lucab): both need to be clarified in spec,\n// see https://github.com/appc/spec/issues/625\nfunc (l ResourceBase) multipleAllowed() bool {\n\treturn true\n}\nfunc (l ResourceBase) Conflicts() []ACIdentifier {\n\treturn nil\n}\n\ntype ResourceBlockBandwidth struct {\n\tResourceBase\n}\n\nfunc (r ResourceBlockBandwidth) AssertValid() error {\n\tif r.Default() != true {\n\t\treturn ErrDefaultRequired\n\t}\n\tif r.Request() != nil {\n\t\treturn ErrRequestNonEmpty\n\t}\n\treturn nil\n}\n\ntype ResourceBlockIOPS struct {\n\tResourceBase\n}\n\nfunc (r ResourceBlockIOPS) AssertValid() error {\n\tif r.Default() != true {\n\t\treturn ErrDefaultRequired\n\t}\n\tif r.Request() != nil {\n\t\treturn ErrRequestNonEmpty\n\t}\n\treturn nil\n}\n\ntype ResourceCPU struct {\n\tResourceBase\n}\n\nfunc (r ResourceCPU) String() string {\n\treturn fmt.Sprintf(\"ResourceCPU(request=%s, limit=%s)\", r.Request(), r.Limit())\n}\n\nfunc (r ResourceCPU) AssertValid() error {\n\tif r.Default() != false {\n\t\treturn ErrDefaultTrue\n\t}\n\treturn nil\n}\n\nfunc (r ResourceCPU) AsIsolator() Isolator {\n\tisol := isolatorMap[ResourceCPUName]()\n\n\tb, err := json.Marshal(r.val)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tvalRaw := 
json.RawMessage(b)\n\treturn Isolator{\n\t\tName:     ResourceCPUName,\n\t\tValueRaw: &valRaw,\n\t\tvalue:    isol,\n\t}\n}\n\nfunc NewResourceCPUIsolator(request, limit string) (*ResourceCPU, error) {\n\treq, err := resource.ParseQuantity(request)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error parsing request: %v\", err)\n\t}\n\tlim, err := resource.ParseQuantity(limit)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error parsing limit: %v\", err)\n\t}\n\tres := &ResourceCPU{\n\t\tResourceBase{\n\t\t\tresourceValue{\n\t\t\t\tRequest: &req,\n\t\t\t\tLimit:   &lim,\n\t\t\t},\n\t\t},\n\t}\n\tif err := res.AssertValid(); err != nil {\n\t\t// should never happen\n\t\treturn nil, err\n\t}\n\treturn res, nil\n}\n\ntype ResourceMemory struct {\n\tResourceBase\n}\n\nfunc (r ResourceMemory) String() string {\n\treturn fmt.Sprintf(\"ResourceMemory(request=%s, limit=%s)\", r.Request(), r.Limit())\n}\n\nfunc (r ResourceMemory) AssertValid() error {\n\tif r.Default() != false {\n\t\treturn ErrDefaultTrue\n\t}\n\treturn nil\n}\n\nfunc (r ResourceMemory) AsIsolator() Isolator {\n\tisol := isolatorMap[ResourceMemoryName]()\n\n\tb, err := json.Marshal(r.val)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tvalRaw := json.RawMessage(b)\n\treturn Isolator{\n\t\tName:     ResourceMemoryName,\n\t\tValueRaw: &valRaw,\n\t\tvalue:    isol,\n\t}\n}\n\nfunc NewResourceMemoryIsolator(request, limit string) (*ResourceMemory, error) {\n\treq, err := resource.ParseQuantity(request)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error parsing request: %v\", err)\n\t}\n\tlim, err := resource.ParseQuantity(limit)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"error parsing limit: %v\", err)\n\t}\n\tres := &ResourceMemory{\n\t\tResourceBase{\n\t\t\tresourceValue{\n\t\t\t\tRequest: &req,\n\t\t\t\tLimit:   &lim,\n\t\t\t},\n\t\t},\n\t}\n\tif err := res.AssertValid(); err != nil {\n\t\t// should never happen\n\t\treturn nil, err\n\t}\n\treturn res, nil\n}\n\ntype ResourceNetworkBandwidth struct 
{\n\tResourceBase\n}\n\nfunc (r ResourceNetworkBandwidth) AssertValid() error {\n\tif r.Default() != true {\n\t\treturn ErrDefaultRequired\n\t}\n\tif r.Request() != nil {\n\t\treturn ErrRequestNonEmpty\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/isolator_unix.go",
    "content": "// Copyright 2016 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n)\n\nvar (\n\tUnixIsolatorNames = make(map[ACIdentifier]struct{})\n)\n\nconst (\n\t//TODO(lucab): add \"ulimit\" isolators\n\tUnixSysctlName = \"os/unix/sysctl\"\n)\n\nfunc init() {\n\tfor name, con := range map[ACIdentifier]IsolatorValueConstructor{\n\t\tUnixSysctlName: func() IsolatorValue { return &UnixSysctl{} },\n\t} {\n\t\tAddIsolatorName(name, UnixIsolatorNames)\n\t\tAddIsolatorValueConstructor(name, con)\n\t}\n}\n\ntype UnixSysctl map[string]string\n\nfunc (s *UnixSysctl) UnmarshalJSON(b []byte) error {\n\tvar v map[string]string\n\terr := json.Unmarshal(b, &v)\n\tif err != nil {\n\t\treturn err\n\t}\n\t*s = UnixSysctl(v)\n\treturn err\n}\n\nfunc (s UnixSysctl) AssertValid() error {\n\treturn nil\n}\n\nfunc (s UnixSysctl) multipleAllowed() bool {\n\treturn false\n}\nfunc (s UnixSysctl) Conflicts() []ACIdentifier {\n\treturn nil\n}\n\nfunc (s UnixSysctl) AsIsolator() Isolator {\n\tisol := isolatorMap[UnixSysctlName]()\n\n\tb, err := json.Marshal(s)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tvalRaw := json.RawMessage(b)\n\treturn Isolator{\n\t\tName:     UnixSysctlName,\n\t\tValueRaw: &valRaw,\n\t\tvalue:    isol,\n\t}\n}\n\nfunc NewUnixSysctlIsolator(cfg map[string]string) (*UnixSysctl, error) {\n\ts := UnixSysctl(cfg)\n\tif err := s.AssertValid(); err != nil {\n\t\treturn nil, 
err\n\t}\n\treturn &s, nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/labels.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"sort\"\n)\n\nvar ValidOSArch = map[string][]string{\n\t\"linux\":   {\"amd64\", \"i386\", \"aarch64\", \"aarch64_be\", \"armv6l\", \"armv7l\", \"armv7b\", \"ppc64\", \"ppc64le\", \"s390x\"},\n\t\"freebsd\": {\"amd64\", \"i386\", \"arm\"},\n\t\"darwin\":  {\"x86_64\", \"i386\"},\n}\n\ntype Labels []Label\n\ntype labels Labels\n\ntype Label struct {\n\tName  ACIdentifier `json:\"name\"`\n\tValue string       `json:\"value\"`\n}\n\n// {appc,go}ArchTuple are internal helper types used to translate arch tuple between go and appc\ntype appcArchTuple struct {\n\tappcOs   string\n\tappcArch string\n}\ntype goArchTuple struct {\n\tgoOs         string\n\tgoArch       string\n\tgoArchFlavor string\n}\n\n// IsValidOSArch checks if an OS-architecture combination is valid given a map\n// of valid OS-architectures\nfunc IsValidOSArch(labels map[ACIdentifier]string, validOSArch map[string][]string) error {\n\tif os, ok := labels[\"os\"]; ok {\n\t\tif validArchs, ok := validOSArch[os]; !ok {\n\t\t\t// Not a whitelisted OS. 
TODO: how to warn rather than fail?\n\t\t\tvalidOses := make([]string, 0, len(validOSArch))\n\t\t\tfor validOs := range validOSArch {\n\t\t\t\tvalidOses = append(validOses, validOs)\n\t\t\t}\n\t\t\tsort.Strings(validOses)\n\t\t\treturn fmt.Errorf(`bad os %#v (must be one of: %v)`, os, validOses)\n\t\t} else {\n\t\t\t// Whitelisted OS. We check arch here, as arch makes sense only\n\t\t\t// when os is defined.\n\t\t\tif arch, ok := labels[\"arch\"]; ok {\n\t\t\t\tfound := false\n\t\t\t\tfor _, validArch := range validArchs {\n\t\t\t\t\tif arch == validArch {\n\t\t\t\t\t\tfound = true\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif !found {\n\t\t\t\t\treturn fmt.Errorf(`bad arch %#v for %v (must be one of: %v)`, arch, os, validArchs)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (l Labels) assertValid() error {\n\tseen := map[ACIdentifier]string{}\n\tfor _, lbl := range l {\n\t\tif lbl.Name == \"name\" {\n\t\t\treturn fmt.Errorf(`invalid label name: \"name\"`)\n\t\t}\n\t\t_, ok := seen[lbl.Name]\n\t\tif ok {\n\t\t\treturn fmt.Errorf(`duplicate labels of name %q`, lbl.Name)\n\t\t}\n\t\tseen[lbl.Name] = lbl.Value\n\t}\n\treturn IsValidOSArch(seen, ValidOSArch)\n}\n\nfunc (l Labels) MarshalJSON() ([]byte, error) {\n\tif err := l.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(labels(l))\n}\n\nfunc (l *Labels) UnmarshalJSON(data []byte) error {\n\tvar jl labels\n\tif err := json.Unmarshal(data, &jl); err != nil {\n\t\treturn err\n\t}\n\tnl := Labels(jl)\n\tif err := nl.assertValid(); err != nil {\n\t\treturn err\n\t}\n\t*l = nl\n\treturn nil\n}\n\n// Get retrieves the value of the label by the given name from Labels, if it exists\nfunc (l Labels) Get(name string) (val string, ok bool) {\n\tfor _, lbl := range l {\n\t\tif lbl.Name.String() == name {\n\t\t\treturn lbl.Value, true\n\t\t}\n\t}\n\treturn \"\", false\n}\n\n// ToMap creates a map[ACIdentifier]string.\nfunc (l Labels) ToMap() map[ACIdentifier]string {\n\tlabelsMap := 
make(map[ACIdentifier]string)\n\tfor _, lbl := range l {\n\t\tlabelsMap[lbl.Name] = lbl.Value\n\t}\n\treturn labelsMap\n}\n\n// LabelsFromMap creates Labels from a map[ACIdentifier]string\nfunc LabelsFromMap(labelsMap map[ACIdentifier]string) (Labels, error) {\n\tlabels := Labels{}\n\tfor n, v := range labelsMap {\n\t\tlabels = append(labels, Label{Name: n, Value: v})\n\t}\n\tif err := labels.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn labels, nil\n}\n\n// ToAppcOSArch translates a Golang arch tuple (OS, architecture, flavor) into\n// an appc arch tuple (OS, architecture)\nfunc ToAppcOSArch(goOs string, goArch string, goArchFlavor string) (appcOs string, appcArch string, e error) {\n\ttabularAppcToGo := map[goArchTuple]appcArchTuple{\n\t\t{\"linux\", \"amd64\", \"\"}:   {\"linux\", \"amd64\"},\n\t\t{\"linux\", \"386\", \"\"}:     {\"linux\", \"i386\"},\n\t\t{\"linux\", \"arm64\", \"\"}:   {\"linux\", \"aarch64\"},\n\t\t{\"linux\", \"arm\", \"\"}:     {\"linux\", \"armv6l\"},\n\t\t{\"linux\", \"arm\", \"6\"}:    {\"linux\", \"armv6l\"},\n\t\t{\"linux\", \"arm\", \"7\"}:    {\"linux\", \"armv7l\"},\n\t\t{\"linux\", \"ppc64\", \"\"}:   {\"linux\", \"ppc64\"},\n\t\t{\"linux\", \"ppc64le\", \"\"}: {\"linux\", \"ppc64le\"},\n\t\t{\"linux\", \"s390x\", \"\"}:   {\"linux\", \"s390x\"},\n\n\t\t{\"freebsd\", \"amd64\", \"\"}: {\"freebsd\", \"amd64\"},\n\t\t{\"freebsd\", \"386\", \"\"}:   {\"freebsd\", \"i386\"},\n\t\t{\"freebsd\", \"arm\", \"\"}:   {\"freebsd\", \"arm\"},\n\t\t{\"freebsd\", \"arm\", \"5\"}:  {\"freebsd\", \"arm\"},\n\t\t{\"freebsd\", \"arm\", \"6\"}:  {\"freebsd\", \"arm\"},\n\t\t{\"freebsd\", \"arm\", \"7\"}:  {\"freebsd\", \"arm\"},\n\n\t\t{\"darwin\", \"amd64\", \"\"}: {\"darwin\", \"x86_64\"},\n\t\t{\"darwin\", \"386\", \"\"}:   {\"darwin\", \"i386\"},\n\t}\n\tarchTuple, ok := tabularAppcToGo[goArchTuple{goOs, goArch, goArchFlavor}]\n\tif !ok {\n\t\treturn \"\", \"\", fmt.Errorf(\"unknown arch tuple: %q - %q - %q\", goOs, goArch, 
goArchFlavor)\n\t}\n\treturn archTuple.appcOs, archTuple.appcArch, nil\n}\n\n// ToGoOSArch translates an appc arch tuple (OS, architecture) into\n// a Golang arch tuple (OS, architecture, flavor)\nfunc ToGoOSArch(appcOs string, appcArch string) (goOs string, goArch string, goArchFlavor string, e error) {\n\ttabularGoToAppc := map[appcArchTuple]goArchTuple{\n\t\t// {\"linux\", \"aarch64_be\"}: nil,\n\t\t// {\"linux\", \"armv7b\"}: nil,\n\t\t{\"linux\", \"aarch64\"}: {\"linux\", \"arm64\", \"\"},\n\t\t{\"linux\", \"amd64\"}:   {\"linux\", \"amd64\", \"\"},\n\t\t{\"linux\", \"armv6l\"}:  {\"linux\", \"arm\", \"6\"},\n\t\t{\"linux\", \"armv7l\"}:  {\"linux\", \"arm\", \"7\"},\n\t\t{\"linux\", \"i386\"}:    {\"linux\", \"386\", \"\"},\n\t\t{\"linux\", \"ppc64\"}:   {\"linux\", \"ppc64\", \"\"},\n\t\t{\"linux\", \"ppc64le\"}: {\"linux\", \"ppc64le\", \"\"},\n\t\t{\"linux\", \"s390x\"}:   {\"linux\", \"s390x\", \"\"},\n\n\t\t{\"freebsd\", \"amd64\"}: {\"freebsd\", \"amd64\", \"\"},\n\t\t{\"freebsd\", \"arm\"}:   {\"freebsd\", \"arm\", \"6\"},\n\t\t{\"freebsd\", \"386\"}:   {\"freebsd\", \"i386\", \"\"},\n\n\t\t{\"darwin\", \"amd64\"}: {\"darwin\", \"x86_64\", \"\"},\n\t\t{\"darwin\", \"386\"}:   {\"darwin\", \"i386\", \"\"},\n\t}\n\n\tarchTuple, ok := tabularGoToAppc[appcArchTuple{appcOs, appcArch}]\n\tif !ok {\n\t\treturn \"\", \"\", \"\", fmt.Errorf(\"unknown arch tuple: %q - %q\", appcOs, appcArch)\n\t}\n\treturn archTuple.goOs, archTuple.goArch, archTuple.goArchFlavor, nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/mountpoint.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"strconv\"\n\n\t\"github.com/appc/spec/schema/common\"\n)\n\n// MountPoint is the application-side manifestation of a Volume.\ntype MountPoint struct {\n\tName     ACName `json:\"name\"`\n\tPath     string `json:\"path\"`\n\tReadOnly bool   `json:\"readOnly,omitempty\"`\n}\n\nfunc (mount MountPoint) assertValid() error {\n\tif mount.Name.Empty() {\n\t\treturn errors.New(\"name must be set\")\n\t}\n\tif len(mount.Path) == 0 {\n\t\treturn errors.New(\"path must be set\")\n\t}\n\treturn nil\n}\n\n// MountPointFromString takes a command line mountpoint parameter and returns a mountpoint\n//\n// It is useful for actool patch-manifest --mounts\n//\n// Example mountpoint parameters:\n// \tdatabase,path=/tmp,readOnly=true\nfunc MountPointFromString(mp string) (*MountPoint, error) {\n\tvar mount MountPoint\n\n\tmp = \"name=\" + mp\n\tmpQuery, err := common.MakeQueryString(mp)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tv, err := url.ParseQuery(mpQuery)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tfor key, val := range v {\n\t\tif len(val) > 1 {\n\t\t\treturn nil, fmt.Errorf(\"label %s with multiple values %q\", key, val)\n\t\t}\n\n\t\tswitch key {\n\t\tcase \"name\":\n\t\t\tacn, err := NewACName(val[0])\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tmount.Name = 
*acn\n\t\tcase \"path\":\n\t\t\tmount.Path = val[0]\n\t\tcase \"readOnly\":\n\t\t\tro, err := strconv.ParseBool(val[0])\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tmount.ReadOnly = ro\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"unknown mountpoint parameter %q\", key)\n\t\t}\n\t}\n\terr = mount.assertValid()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &mount, nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/port.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/url\"\n\t\"strconv\"\n\n\t\"github.com/appc/spec/schema/common\"\n)\n\n// Port represents a port as offered by an application *inside*\n// the pod.\ntype Port struct {\n\tName            ACName `json:\"name\"`\n\tProtocol        string `json:\"protocol\"`\n\tPort            uint   `json:\"port\"`\n\tCount           uint   `json:\"count\"`\n\tSocketActivated bool   `json:\"socketActivated\"`\n}\n\n// ExposedPort represents a port listening on the host side.\n// The PodPort is optional -- if missing, then try and find the pod-side\n// information by matching names\ntype ExposedPort struct {\n\tName     ACName `json:\"name\"`\n\tHostPort uint   `json:\"hostPort\"`\n\tHostIP   net.IP `json:\"hostIP,omitempty\"`  // optional\n\tPodPort  *Port  `json:\"podPort,omitempty\"` // optional. 
If missing, try and find a corresponding App's port\n}\n\ntype port Port\n\nfunc (p *Port) UnmarshalJSON(data []byte) error {\n\tvar pp port\n\tif err := json.Unmarshal(data, &pp); err != nil {\n\t\treturn err\n\t}\n\tnp := Port(pp)\n\tif err := np.assertValid(); err != nil {\n\t\treturn err\n\t}\n\tif np.Count == 0 {\n\t\tnp.Count = 1\n\t}\n\t*p = np\n\treturn nil\n}\n\nfunc (p Port) MarshalJSON() ([]byte, error) {\n\tif err := p.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(port(p))\n}\n\nfunc (p Port) assertValid() error {\n\t// Although there are no guarantees, most (if not all)\n\t// transport protocols use 16 bit ports\n\tif p.Port > 65535 || p.Port < 1 {\n\t\treturn errors.New(\"port must be in 1-65535 range\")\n\t}\n\tif p.Port+p.Count > 65536 {\n\t\treturn errors.New(\"end of port range must be in 1-65535 range\")\n\t}\n\treturn nil\n}\n\n// PortFromString takes a command line port parameter and returns a port\n//\n// It is useful for actool patch-manifest --ports\n//\n// Example port parameters:\n//      health-check,protocol=udp,port=8000\n// \tquery,protocol=tcp,port=8080,count=1,socketActivated=true\nfunc PortFromString(pt string) (*Port, error) {\n\tvar port Port\n\n\tpt = \"name=\" + pt\n\tptQuery, err := common.MakeQueryString(pt)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tv, err := url.ParseQuery(ptQuery)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tfor key, val := range v {\n\t\tif len(val) > 1 {\n\t\t\treturn nil, fmt.Errorf(\"label %s with multiple values %q\", key, val)\n\t\t}\n\n\t\tswitch key {\n\t\tcase \"name\":\n\t\t\tacn, err := NewACName(val[0])\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tport.Name = *acn\n\t\tcase \"protocol\":\n\t\t\tport.Protocol = val[0]\n\t\tcase \"port\":\n\t\t\tp, err := strconv.ParseUint(val[0], 10, 16)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tport.Port = uint(p)\n\t\tcase \"count\":\n\t\t\tcnt, err := strconv.ParseUint(val[0], 
10, 16)\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tport.Count = uint(cnt)\n\t\tcase \"socketActivated\":\n\t\t\tsa, err := strconv.ParseBool(val[0])\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tport.SocketActivated = sa\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"unknown port parameter %q\", key)\n\t\t}\n\t}\n\terr = port.assertValid()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &port, nil\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/resource/amount.go",
    "content": "/*\nCopyright 2014 The Kubernetes Authors All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage resource\n\nimport (\n\t\"math/big\"\n\t\"strconv\"\n\n\tinf \"gopkg.in/inf.v0\"\n)\n\n// Scale is used for getting and setting the base-10 scaled value.\n// Base-2 scales are omitted for mathematical simplicity.\n// See Quantity.ScaledValue for more details.\ntype Scale int32\n\n// infScale adapts a Scale value to an inf.Scale value.\nfunc (s Scale) infScale() inf.Scale {\n\treturn inf.Scale(-s) // inf.Scale is upside-down\n}\n\nconst (\n\tNano  Scale = -9\n\tMicro Scale = -6\n\tMilli Scale = -3\n\tKilo  Scale = 3\n\tMega  Scale = 6\n\tGiga  Scale = 9\n\tTera  Scale = 12\n\tPeta  Scale = 15\n\tExa   Scale = 18\n)\n\nvar (\n\tZero = int64Amount{}\n\n\t// Used by quantity strings - treat as read only\n\tzeroBytes = []byte(\"0\")\n)\n\n// int64Amount represents a fixed precision numerator and arbitrary scale exponent. 
It is faster\n// than operations on inf.Dec for values that can be represented as int64.\ntype int64Amount struct {\n\tvalue int64\n\tscale Scale\n}\n\n// Sign returns 0 if the value is zero, -1 if it is less than 0, or 1 if it is greater than 0.\nfunc (a int64Amount) Sign() int {\n\tswitch {\n\tcase a.value == 0:\n\t\treturn 0\n\tcase a.value > 0:\n\t\treturn 1\n\tdefault:\n\t\treturn -1\n\t}\n}\n\n// AsInt64 returns the current amount as an int64 at scale 0, or false if the value cannot be\n// represented in an int64 OR would result in a loss of precision. This method is intended as\n// an optimization to avoid calling AsDec.\nfunc (a int64Amount) AsInt64() (int64, bool) {\n\tif a.scale == 0 {\n\t\treturn a.value, true\n\t}\n\tif a.scale < 0 {\n\t\t// TODO: attempt to reduce factors, although it is assumed that factors are reduced prior\n\t\t// to the int64Amount being created.\n\t\treturn 0, false\n\t}\n\treturn positiveScaleInt64(a.value, a.scale)\n}\n\n// AsScaledInt64 returns an int64 representing the value of this amount at the specified scale,\n// rounding up, or false if that would result in overflow. (1e20).AsScaledInt64(1) would result\n// in overflow because 1e19 is not representable as an int64. Note that setting a scale larger\n// than the current value may result in loss of precision - i.e. 
(1e-6).AsScaledInt64(0) would\n// return 1, because 0.000001 is rounded up to 1.\nfunc (a int64Amount) AsScaledInt64(scale Scale) (result int64, ok bool) {\n\tif a.scale < scale {\n\t\tresult, _ = negativeScaleInt64(a.value, scale-a.scale)\n\t\treturn result, true\n\t}\n\treturn positiveScaleInt64(a.value, a.scale-scale)\n}\n\n// AsDec returns an inf.Dec representation of this value.\nfunc (a int64Amount) AsDec() *inf.Dec {\n\tvar base inf.Dec\n\tbase.SetUnscaled(a.value)\n\tbase.SetScale(inf.Scale(-a.scale))\n\treturn &base\n}\n\n// Cmp returns 0 if a and b are equal, 1 if a is greater than b, or -1 if a is less than b.\nfunc (a int64Amount) Cmp(b int64Amount) int {\n\tswitch {\n\tcase a.scale == b.scale:\n\t\t// compare only the unscaled portion\n\tcase a.scale > b.scale:\n\t\tresult, remainder, exact := divideByScaleInt64(b.value, a.scale-b.scale)\n\t\tif !exact {\n\t\t\treturn a.AsDec().Cmp(b.AsDec())\n\t\t}\n\t\tif result == a.value {\n\t\t\tswitch {\n\t\t\tcase remainder == 0:\n\t\t\t\treturn 0\n\t\t\tcase remainder > 0:\n\t\t\t\treturn -1\n\t\t\tdefault:\n\t\t\t\treturn 1\n\t\t\t}\n\t\t}\n\t\tb.value = result\n\tdefault:\n\t\tresult, remainder, exact := divideByScaleInt64(a.value, b.scale-a.scale)\n\t\tif !exact {\n\t\t\treturn a.AsDec().Cmp(b.AsDec())\n\t\t}\n\t\tif result == b.value {\n\t\t\tswitch {\n\t\t\tcase remainder == 0:\n\t\t\t\treturn 0\n\t\t\tcase remainder > 0:\n\t\t\t\treturn 1\n\t\t\tdefault:\n\t\t\t\treturn -1\n\t\t\t}\n\t\t}\n\t\ta.value = result\n\t}\n\n\tswitch {\n\tcase a.value == b.value:\n\t\treturn 0\n\tcase a.value < b.value:\n\t\treturn -1\n\tdefault:\n\t\treturn 1\n\t}\n}\n\n// Add adds two int64Amounts together, matching scales. 
It will return false and not mutate\n// a if overflow or underflow would result.\nfunc (a *int64Amount) Add(b int64Amount) bool {\n\tswitch {\n\tcase b.value == 0:\n\t\treturn true\n\tcase a.value == 0:\n\t\ta.value = b.value\n\t\ta.scale = b.scale\n\t\treturn true\n\tcase a.scale == b.scale:\n\t\tc, ok := int64Add(a.value, b.value)\n\t\tif !ok {\n\t\t\treturn false\n\t\t}\n\t\ta.value = c\n\tcase a.scale > b.scale:\n\t\tc, ok := positiveScaleInt64(a.value, a.scale-b.scale)\n\t\tif !ok {\n\t\t\treturn false\n\t\t}\n\t\tc, ok = int64Add(c, b.value)\n\t\tif !ok {\n\t\t\treturn false\n\t\t}\n\t\ta.scale = b.scale\n\t\ta.value = c\n\tdefault:\n\t\tc, ok := positiveScaleInt64(b.value, b.scale-a.scale)\n\t\tif !ok {\n\t\t\treturn false\n\t\t}\n\t\tc, ok = int64Add(a.value, c)\n\t\tif !ok {\n\t\t\treturn false\n\t\t}\n\t\ta.value = c\n\t}\n\treturn true\n}\n\n// Sub removes the value of b from the current amount, or returns false if underflow would result.\nfunc (a *int64Amount) Sub(b int64Amount) bool {\n\treturn a.Add(int64Amount{value: -b.value, scale: b.scale})\n}\n\n// AsScale adjusts this amount to set a minimum scale, rounding up, and returns true iff no precision\n// was lost. (1.1e5).AsScale(5) would return 1.1e5, but (1.1e5).AsScale(6) would return 1e6.\nfunc (a int64Amount) AsScale(scale Scale) (int64Amount, bool) {\n\tif a.scale >= scale {\n\t\treturn a, true\n\t}\n\tresult, exact := negativeScaleInt64(a.value, scale-a.scale)\n\treturn int64Amount{value: result, scale: scale}, exact\n}\n\n// AsCanonicalBytes accepts a buffer to write the base-10 string value of this field to, and returns\n// either that buffer or a larger buffer and the current exponent of the value. The value is adjusted\n// until the exponent is a multiple of 3 - i.e. 
1.1e5 would return \"110\", 3.\nfunc (a int64Amount) AsCanonicalBytes(out []byte) (result []byte, exponent int32) {\n\tmantissa := a.value\n\texponent = int32(a.scale)\n\n\tamount, times := removeInt64Factors(mantissa, 10)\n\texponent += int32(times)\n\n\t// make sure exponent is a multiple of 3\n\tvar ok bool\n\tswitch exponent % 3 {\n\tcase 1, -2:\n\t\tamount, ok = int64MultiplyScale10(amount)\n\t\tif !ok {\n\t\t\treturn infDecAmount{a.AsDec()}.AsCanonicalBytes(out)\n\t\t}\n\t\texponent = exponent - 1\n\tcase 2, -1:\n\t\tamount, ok = int64MultiplyScale100(amount)\n\t\tif !ok {\n\t\t\treturn infDecAmount{a.AsDec()}.AsCanonicalBytes(out)\n\t\t}\n\t\texponent = exponent - 2\n\t}\n\treturn strconv.AppendInt(out, amount, 10), exponent\n}\n\n// AsCanonicalBase1024Bytes accepts a buffer to write the base-1024 string value of this field to, and returns\n// either that buffer or a larger buffer and the current exponent of the value. 2048 is 2 * 1024 ^ 1 and would\n// return []byte(\"2048\"), 1.\nfunc (a int64Amount) AsCanonicalBase1024Bytes(out []byte) (result []byte, exponent int32) {\n\tvalue, ok := a.AsScaledInt64(0)\n\tif !ok {\n\t\treturn infDecAmount{a.AsDec()}.AsCanonicalBase1024Bytes(out)\n\t}\n\tamount, exponent := removeInt64Factors(value, 1024)\n\treturn strconv.AppendInt(out, amount, 10), exponent\n}\n\n// infDecAmount implements common operations over an inf.Dec that are specific to the quantity\n// representation.\ntype infDecAmount struct {\n\t*inf.Dec\n}\n\n// AsScale adjusts this amount to set a minimum scale, rounding up, and returns true iff no precision\n// was lost. 
(1.1e5).AsScale(5) would return 1.1e5, but (1.1e5).AsScale(6) would return 1e6.\nfunc (a infDecAmount) AsScale(scale Scale) (infDecAmount, bool) {\n\ttmp := &inf.Dec{}\n\ttmp.Round(a.Dec, scale.infScale(), inf.RoundUp)\n\treturn infDecAmount{tmp}, tmp.Cmp(a.Dec) == 0\n}\n\n// AsCanonicalBytes accepts a buffer to write the base-10 string value of this field to, and returns\n// either that buffer or a larger buffer and the current exponent of the value. The value is adjusted\n// until the exponent is a multiple of 3 - i.e. 1.1e5 would return \"110\", 3.\nfunc (a infDecAmount) AsCanonicalBytes(out []byte) (result []byte, exponent int32) {\n\tmantissa := a.Dec.UnscaledBig()\n\texponent = int32(-a.Dec.Scale())\n\tamount := big.NewInt(0).Set(mantissa)\n\t// move all factors of 10 into the exponent for easy reasoning\n\tamount, times := removeBigIntFactors(amount, bigTen)\n\texponent += times\n\n\t// make sure exponent is a multiple of 3\n\tfor exponent%3 != 0 {\n\t\tamount.Mul(amount, bigTen)\n\t\texponent--\n\t}\n\n\treturn append(out, amount.String()...), exponent\n}\n\n// AsCanonicalBase1024Bytes accepts a buffer to write the base-1024 string value of this field to, and returns\n// either that buffer or a larger buffer and the current exponent of the value. 2048 is 2 * 1024 ^ 1 and would\n// return []byte(\"2048\"), 1.\nfunc (a infDecAmount) AsCanonicalBase1024Bytes(out []byte) (result []byte, exponent int32) {\n\ttmp := &inf.Dec{}\n\ttmp.Round(a.Dec, 0, inf.RoundUp)\n\tamount, exponent := removeBigIntFactors(tmp.UnscaledBig(), big1024)\n\treturn append(out, amount.String()...), exponent\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/resource/math.go",
    "content": "/*\nCopyright 2014 The Kubernetes Authors All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage resource\n\nimport (\n\t\"math/big\"\n\n\tinf \"gopkg.in/inf.v0\"\n)\n\nconst (\n\t// maxInt64Factors is the highest value that will be checked when removing factors of 10 from an int64.\n\t// It is also the maximum decimal digits that can be represented with an int64.\n\tmaxInt64Factors = 18\n)\n\nvar (\n\t// Commonly needed big.Int values-- treat as read only!\n\tbigTen      = big.NewInt(10)\n\tbigZero     = big.NewInt(0)\n\tbigOne      = big.NewInt(1)\n\tbigThousand = big.NewInt(1000)\n\tbig1024     = big.NewInt(1024)\n\n\t// Commonly needed inf.Dec values-- treat as read only!\n\tdecZero      = inf.NewDec(0, 0)\n\tdecOne       = inf.NewDec(1, 0)\n\tdecMinusOne  = inf.NewDec(-1, 0)\n\tdecThousand  = inf.NewDec(1000, 0)\n\tdec1024      = inf.NewDec(1024, 0)\n\tdecMinus1024 = inf.NewDec(-1024, 0)\n\n\t// Largest (in magnitude) number allowed.\n\tmaxAllowed = infDecAmount{inf.NewDec((1<<63)-1, 0)} // == max int64\n\n\t// The maximum value we can represent milli-units for.\n\t// Compare with the return value of Quantity.Value() to\n\t// see if it's safe to use Quantity.MilliValue().\n\tMaxMilliValue = int64(((1 << 63) - 1) / 1000)\n)\n\nconst mostNegative = -(mostPositive + 1)\nconst mostPositive = 1<<63 - 1\n\n// int64Add returns a+b, or false if that would overflow int64.\nfunc int64Add(a, b int64) (int64, bool) {\n\tc := a + b\n\tswitch 
{\n\tcase a > 0 && b > 0:\n\t\tif c < 0 {\n\t\t\treturn 0, false\n\t\t}\n\tcase a < 0 && b < 0:\n\t\tif c > 0 {\n\t\t\treturn 0, false\n\t\t}\n\t\tif a == mostNegative && b == mostNegative {\n\t\t\treturn 0, false\n\t\t}\n\t}\n\treturn c, true\n}\n\n// int64Multiply returns a*b, or false if that would overflow or underflow int64.\nfunc int64Multiply(a, b int64) (int64, bool) {\n\tif a == 0 || b == 0 || a == 1 || b == 1 {\n\t\treturn a * b, true\n\t}\n\tif a == mostNegative || b == mostNegative {\n\t\treturn 0, false\n\t}\n\tc := a * b\n\treturn c, c/b == a\n}\n\n// int64MultiplyScale returns a*b, assuming b is greater than one, or false if that would overflow or underflow int64.\n// Use when b is known to be greater than one.\nfunc int64MultiplyScale(a int64, b int64) (int64, bool) {\n\tif a == 0 || a == 1 {\n\t\treturn a * b, true\n\t}\n\tif a == mostNegative && b != 1 {\n\t\treturn 0, false\n\t}\n\tc := a * b\n\treturn c, c/b == a\n}\n\n// int64MultiplyScale10 multiplies a by 10, or returns false if that would overflow. This method is faster than\n// int64Multiply(a, 10) because the compiler can optimize constant factor multiplication.\nfunc int64MultiplyScale10(a int64) (int64, bool) {\n\tif a == 0 || a == 1 {\n\t\treturn a * 10, true\n\t}\n\tif a == mostNegative {\n\t\treturn 0, false\n\t}\n\tc := a * 10\n\treturn c, c/10 == a\n}\n\n// int64MultiplyScale100 multiplies a by 100, or returns false if that would overflow. This method is faster than\n// int64Multiply(a, 100) because the compiler can optimize constant factor multiplication.\nfunc int64MultiplyScale100(a int64) (int64, bool) {\n\tif a == 0 || a == 1 {\n\t\treturn a * 100, true\n\t}\n\tif a == mostNegative {\n\t\treturn 0, false\n\t}\n\tc := a * 100\n\treturn c, c/100 == a\n}\n\n// int64MultiplyScale1000 multiplies a by 1000, or returns false if that would overflow. 
This method is faster than\n// int64Multiply(a, 1000) because the compiler can optimize constant factor multiplication.\nfunc int64MultiplyScale1000(a int64) (int64, bool) {\n\tif a == 0 || a == 1 {\n\t\treturn a * 1000, true\n\t}\n\tif a == mostNegative {\n\t\treturn 0, false\n\t}\n\tc := a * 1000\n\treturn c, c/1000 == a\n}\n\n// positiveScaleInt64 multiplies base by 10^scale, returning false if the\n// value overflows. Passing a negative scale is undefined.\nfunc positiveScaleInt64(base int64, scale Scale) (int64, bool) {\n\tswitch scale {\n\tcase 0:\n\t\treturn base, true\n\tcase 1:\n\t\treturn int64MultiplyScale10(base)\n\tcase 2:\n\t\treturn int64MultiplyScale100(base)\n\tcase 3:\n\t\treturn int64MultiplyScale1000(base)\n\tcase 6:\n\t\treturn int64MultiplyScale(base, 1000000)\n\tcase 9:\n\t\treturn int64MultiplyScale(base, 1000000000)\n\tdefault:\n\t\tvalue := base\n\t\tvar ok bool\n\t\tfor i := Scale(0); i < scale; i++ {\n\t\t\tif value, ok = int64MultiplyScale(value, 10); !ok {\n\t\t\t\treturn 0, false\n\t\t\t}\n\t\t}\n\t\treturn value, true\n\t}\n}\n\n// negativeScaleInt64 reduces base by the provided scale, rounding up, until the\n// value is zero or the scale is reached. 
Passing a negative scale is undefined.\n// The value returned, if not exact, is rounded away from zero.\nfunc negativeScaleInt64(base int64, scale Scale) (result int64, exact bool) {\n\tif scale == 0 {\n\t\treturn base, true\n\t}\n\n\tvalue := base\n\tvar fraction bool\n\tfor i := Scale(0); i < scale; i++ {\n\t\tif !fraction && value%10 != 0 {\n\t\t\tfraction = true\n\t\t}\n\t\tvalue = value / 10\n\t\tif value == 0 {\n\t\t\tif fraction {\n\t\t\t\tif base > 0 {\n\t\t\t\t\treturn 1, false\n\t\t\t\t}\n\t\t\t\treturn -1, false\n\t\t\t}\n\t\t\treturn 0, true\n\t\t}\n\t}\n\tif fraction {\n\t\tif base > 0 {\n\t\t\tvalue += 1\n\t\t} else {\n\t\t\tvalue += -1\n\t\t}\n\t}\n\treturn value, !fraction\n}\n\nfunc pow10Int64(b int64) int64 {\n\tswitch b {\n\tcase 0:\n\t\treturn 1\n\tcase 1:\n\t\treturn 10\n\tcase 2:\n\t\treturn 100\n\tcase 3:\n\t\treturn 1000\n\tcase 4:\n\t\treturn 10000\n\tcase 5:\n\t\treturn 100000\n\tcase 6:\n\t\treturn 1000000\n\tcase 7:\n\t\treturn 10000000\n\tcase 8:\n\t\treturn 100000000\n\tcase 9:\n\t\treturn 1000000000\n\tcase 10:\n\t\treturn 10000000000\n\tcase 11:\n\t\treturn 100000000000\n\tcase 12:\n\t\treturn 1000000000000\n\tcase 13:\n\t\treturn 10000000000000\n\tcase 14:\n\t\treturn 100000000000000\n\tcase 15:\n\t\treturn 1000000000000000\n\tcase 16:\n\t\treturn 10000000000000000\n\tcase 17:\n\t\treturn 100000000000000000\n\tcase 18:\n\t\treturn 1000000000000000000\n\tdefault:\n\t\treturn 0\n\t}\n}\n\n// powInt64 raises a to the bth power. Is not overflow aware.\nfunc powInt64(a, b int64) int64 {\n\tp := int64(1)\n\tfor b > 0 {\n\t\tif b&1 != 0 {\n\t\t\tp *= a\n\t\t}\n\t\tb >>= 1\n\t\ta *= a\n\t}\n\treturn p\n}\n\n// divideByScaleInt64 returns the result of dividing base by 10^scale and the remainder, or\n// false if no such division is possible. 
Dividing by negative scales is undefined.\nfunc divideByScaleInt64(base int64, scale Scale) (result, remainder int64, exact bool) {\n\tif scale == 0 {\n\t\treturn base, 0, true\n\t}\n\t// the max scale representable in base 10 in an int64 is 18 decimal places\n\tif scale >= 18 {\n\t\treturn 0, base, false\n\t}\n\tdivisor := pow10Int64(int64(scale))\n\treturn base / divisor, base % divisor, true\n}\n\n// removeInt64Factors divides in a loop; the return values have the property that\n// value == result * base ^ scale\nfunc removeInt64Factors(value int64, base int64) (result int64, times int32) {\n\ttimes = 0\n\tresult = value\n\tnegative := result < 0\n\tif negative {\n\t\tresult = -result\n\t}\n\tswitch base {\n\t// allow the compiler to optimize the common cases\n\tcase 10:\n\t\tfor result >= 10 && result%10 == 0 {\n\t\t\ttimes++\n\t\t\tresult = result / 10\n\t\t}\n\t// allow the compiler to optimize the common cases\n\tcase 1024:\n\t\tfor result >= 1024 && result%1024 == 0 {\n\t\t\ttimes++\n\t\t\tresult = result / 1024\n\t\t}\n\tdefault:\n\t\tfor result >= base && result%base == 0 {\n\t\t\ttimes++\n\t\t\tresult = result / base\n\t\t}\n\t}\n\tif negative {\n\t\tresult = -result\n\t}\n\treturn result, times\n}\n\n// removeBigIntFactors divides in a loop; the return values have the property that\n// d == result * factor ^ times\n// d may be modified in place.\n// If d == 0, then the return values will be (0, 0)\nfunc removeBigIntFactors(d, factor *big.Int) (result *big.Int, times int32) {\n\tq := big.NewInt(0)\n\tm := big.NewInt(0)\n\tfor d.Cmp(bigZero) != 0 {\n\t\tq.DivMod(d, factor, m)\n\t\tif m.Cmp(bigZero) != 0 {\n\t\t\tbreak\n\t\t}\n\t\ttimes++\n\t\td, q = q, d\n\t}\n\treturn d, times\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/resource/quantity.go",
    "content": "/*\nCopyright 2014 The Kubernetes Authors All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage resource\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"math/big\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\n\tflag \"github.com/spf13/pflag\"\n\n\tinf \"gopkg.in/inf.v0\"\n)\n\n// Quantity is a fixed-point representation of a number.\n// It provides convenient marshaling/unmarshaling in JSON and YAML,\n// in addition to String() and Int64() accessors.\n//\n// The serialization format is:\n//\n// <quantity>        ::= <signedNumber><suffix>\n//   (Note that <suffix> may be empty, from the \"\" case in <decimalSI>.)\n// <digit>           ::= 0 | 1 | ... | 9\n// <digits>          ::= <digit> | <digit><digits>\n// <number>          ::= <digits> | <digits>.<digits> | <digits>. 
| .<digits>\n// <sign>            ::= \"+\" | \"-\"\n// <signedNumber>    ::= <number> | <sign><number>\n// <suffix>          ::= <binarySI> | <decimalExponent> | <decimalSI>\n// <binarySI>        ::= Ki | Mi | Gi | Ti | Pi | Ei\n//   (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)\n// <decimalSI>       ::= m | \"\" | k | M | G | T | P | E\n//   (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)\n// <decimalExponent> ::= \"e\" <signedNumber> | \"E\" <signedNumber>\n//\n// No matter which of the three exponent forms is used, no quantity may represent\n// a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal\n// places. Numbers larger or more precise will be capped or rounded up.\n// (E.g.: 0.1m will be rounded up to 1m.)\n// This may be extended in the future if we require larger or smaller quantities.\n//\n// When a Quantity is parsed from a string, it will remember the type of suffix\n// it had, and will use the same type again when it is serialized.\n//\n// Before serializing, Quantity will be put in \"canonical form\".\n// This means that Exponent/suffix will be adjusted up or down (with a\n// corresponding increase or decrease in Mantissa) such that:\n//   a. No precision is lost\n//   b. No fractional digits will be emitted\n//   c. The exponent (or suffix) is as large as possible.\n// The sign will be omitted unless the number is negative.\n//\n// Examples:\n//   1.5 will be serialized as \"1500m\"\n//   1.5Gi will be serialized as \"1536Mi\"\n//\n// NOTE: We reserve the right to amend this canonical format, perhaps to\n//   allow 1.5 to be canonical.\n// TODO: Remove above disclaimer after all bikeshedding about format is over,\n//   or after March 2015.\n//\n// Note that the quantity will NEVER be internally represented by a\n// floating point number. 
That is the whole point of this exercise.\n//\n// Non-canonical values will still parse as long as they are well formed,\n// but will be re-emitted in their canonical form. (So always use canonical\n// form, or don't diff.)\n//\n// This format is intended to make it difficult to use these numbers without\n// writing some sort of special handling code in the hopes that that will\n// cause implementors to also use a fixed point implementation.\n//\n// +gencopy=false\n// +protobuf=true\n// +protobuf.embed=string\n// +protobuf.options.marshal=false\n// +protobuf.options.(gogoproto.goproto_stringer)=false\ntype Quantity struct {\n\t// i is the quantity in int64 scaled form, if d.Dec == nil\n\ti int64Amount\n\t// d is the quantity in inf.Dec form if d.Dec != nil\n\td infDecAmount\n\t// s is the generated value of this quantity to avoid recalculation\n\ts string\n\n\t// Change Format at will. See the comment for Canonicalize for\n\t// more details.\n\tFormat\n}\n\n// CanonicalValue allows a quantity amount to be converted to a string.\ntype CanonicalValue interface {\n\t// AsCanonicalBytes returns a byte array representing the string representation\n\t// of the value mantissa and an int32 representing its exponent in base-10. Callers may\n\t// pass a byte slice to the method to avoid allocations.\n\tAsCanonicalBytes(out []byte) ([]byte, int32)\n\t// AsCanonicalBase1024Bytes returns a byte array representing the string representation\n\t// of the value mantissa and an int32 representing its exponent in base-1024. 
Callers\n\t// may pass a byte slice to the method to avoid allocations.\n\tAsCanonicalBase1024Bytes(out []byte) ([]byte, int32)\n}\n\n// Format lists the three possible formattings of a quantity.\ntype Format string\n\nconst (\n\tDecimalExponent = Format(\"DecimalExponent\") // e.g., 12e6\n\tBinarySI        = Format(\"BinarySI\")        // e.g., 12Mi (12 * 2^20)\n\tDecimalSI       = Format(\"DecimalSI\")       // e.g., 12M  (12 * 10^6)\n)\n\n// MustParse turns the given string into a quantity or panics; for tests\n// or other cases where you know the string is valid.\nfunc MustParse(str string) Quantity {\n\tq, err := ParseQuantity(str)\n\tif err != nil {\n\t\tpanic(fmt.Errorf(\"cannot parse '%v': %v\", str, err))\n\t}\n\treturn q\n}\n\nconst (\n\t// splitREString is used to separate a number from its suffix; as such,\n\t// this is overly permissive, but that's OK-- it will be checked later.\n\tsplitREString = \"^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$\"\n)\n\nvar (\n\t// splitRE is used to get the various parts of a number.\n\tsplitRE = regexp.MustCompile(splitREString)\n\n\t// Errors that could happen while parsing a string.\n\tErrFormatWrong = errors.New(\"quantities must match the regular expression '\" + splitREString + \"'\")\n\tErrNumeric     = errors.New(\"unable to parse numeric part of quantity\")\n\tErrSuffix      = errors.New(\"unable to parse quantity's suffix\")\n)\n\n// parseQuantityString is a fast scanner for quantity values.\nfunc parseQuantityString(str string) (positive bool, value, num, denom, suffix string, err error) {\n\tpositive = true\n\tpos := 0\n\tend := len(str)\n\n\t// handle leading sign\n\tif pos < end {\n\t\tswitch str[0] {\n\t\tcase '-':\n\t\t\tpositive = false\n\t\t\tpos++\n\t\tcase '+':\n\t\t\tpos++\n\t\t}\n\t}\n\n\t// strip leading zeros\nZeroes:\n\tfor i := pos; ; i++ {\n\t\tif i >= end {\n\t\t\tnum = \"0\"\n\t\t\tvalue = num\n\t\t\treturn\n\t\t}\n\t\tswitch str[i] {\n\t\tcase '0':\n\t\t\tpos++\n\t\tdefault:\n\t\t\tbreak 
Zeroes\n\t\t}\n\t}\n\n\t// extract the numerator\nNum:\n\tfor i := pos; ; i++ {\n\t\tif i >= end {\n\t\t\tnum = str[pos:end]\n\t\t\tvalue = str[0:end]\n\t\t\treturn\n\t\t}\n\t\tswitch str[i] {\n\t\tcase '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':\n\t\tdefault:\n\t\t\tnum = str[pos:i]\n\t\t\tpos = i\n\t\t\tbreak Num\n\t\t}\n\t}\n\n\t// if we stripped all numerator positions, always return 0\n\tif len(num) == 0 {\n\t\tnum = \"0\"\n\t}\n\n\t// handle a denominator\n\tif pos < end && str[pos] == '.' {\n\t\tpos++\n\tDenom:\n\t\tfor i := pos; ; i++ {\n\t\t\tif i >= end {\n\t\t\t\tdenom = str[pos:end]\n\t\t\t\tvalue = str[0:end]\n\t\t\t\treturn\n\t\t\t}\n\t\t\tswitch str[i] {\n\t\t\tcase '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':\n\t\t\tdefault:\n\t\t\t\tdenom = str[pos:i]\n\t\t\t\tpos = i\n\t\t\t\tbreak Denom\n\t\t\t}\n\t\t}\n\t\t// TODO: we currently allow 1.G, but we may not want to in the future.\n\t\t// if len(denom) == 0 {\n\t\t// \terr = ErrFormatWrong\n\t\t// \treturn\n\t\t// }\n\t}\n\tvalue = str[0:pos]\n\n\t// grab the elements of the suffix\n\tsuffixStart := pos\n\tfor i := pos; ; i++ {\n\t\tif i >= end {\n\t\t\tsuffix = str[suffixStart:end]\n\t\t\treturn\n\t\t}\n\t\tif !strings.ContainsAny(str[i:i+1], \"eEinumkKMGTP\") {\n\t\t\tpos = i\n\t\t\tbreak\n\t\t}\n\t}\n\tif pos < end {\n\t\tswitch str[pos] {\n\t\tcase '-', '+':\n\t\t\tpos++\n\t\t}\n\t}\nSuffix:\n\tfor i := pos; ; i++ {\n\t\tif i >= end {\n\t\t\tsuffix = str[suffixStart:end]\n\t\t\treturn\n\t\t}\n\t\tswitch str[i] {\n\t\tcase '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':\n\t\tdefault:\n\t\t\tbreak Suffix\n\t\t}\n\t}\n\t// we encountered a non decimal in the Suffix loop, but the last character\n\t// was not a valid exponent\n\terr = ErrFormatWrong\n\treturn\n}\n\n// ParseQuantity turns str into a Quantity, or returns an error.\nfunc ParseQuantity(str string) (Quantity, error) {\n\tif len(str) == 0 {\n\t\treturn Quantity{}, ErrFormatWrong\n\t}\n\tif str == \"0\" {\n\t\treturn 
Quantity{Format: DecimalSI, s: str}, nil\n\t}\n\n\tpositive, value, num, denom, suf, err := parseQuantityString(str)\n\tif err != nil {\n\t\treturn Quantity{}, err\n\t}\n\n\tbase, exponent, format, ok := quantitySuffixer.interpret(suffix(suf))\n\tif !ok {\n\t\treturn Quantity{}, ErrSuffix\n\t}\n\n\tprecision := int32(0)\n\tscale := int32(0)\n\tmantissa := int64(1)\n\tswitch format {\n\tcase DecimalExponent, DecimalSI:\n\t\tscale = exponent\n\t\tprecision = maxInt64Factors - int32(len(num)+len(denom))\n\tcase BinarySI:\n\t\tscale = 0\n\t\tswitch {\n\t\tcase exponent >= 0 && len(denom) == 0:\n\t\t\t// only handle positive binary numbers with the fast path\n\t\t\tmantissa = int64(int64(mantissa) << uint64(exponent))\n\t\t\t// 1Mi (2^20) has ~6 digits of decimal precision, so exponent*3/10 -1 is roughly the precision\n\t\t\tprecision = 15 - int32(len(num)) - int32(float32(exponent)*3/10) - 1\n\t\tdefault:\n\t\t\tprecision = -1\n\t\t}\n\t}\n\n\tif precision >= 0 {\n\t\t// if we have a denominator, shift the entire value to the left by the number of places in the\n\t\t// denominator\n\t\tscale -= int32(len(denom))\n\t\tif scale >= int32(Nano) {\n\t\t\tshifted := num + denom\n\n\t\t\tvar value int64\n\t\t\tvalue, err := strconv.ParseInt(shifted, 10, 64)\n\t\t\tif err != nil {\n\t\t\t\treturn Quantity{}, ErrNumeric\n\t\t\t}\n\t\t\tif result, ok := int64Multiply(value, int64(mantissa)); ok {\n\t\t\t\tif !positive {\n\t\t\t\t\tresult = -result\n\t\t\t\t}\n\t\t\t\t// if the number is in canonical form, reuse the string\n\t\t\t\tswitch format {\n\t\t\t\tcase BinarySI:\n\t\t\t\t\tif exponent%10 == 0 && (value&0x07 != 0) {\n\t\t\t\t\t\treturn Quantity{i: int64Amount{value: result, scale: Scale(scale)}, Format: format, s: str}, nil\n\t\t\t\t\t}\n\t\t\t\tdefault:\n\t\t\t\t\tif scale%3 == 0 && !strings.HasSuffix(shifted, \"000\") && shifted[0] != '0' {\n\t\t\t\t\t\treturn Quantity{i: int64Amount{value: result, scale: Scale(scale)}, Format: format, s: str}, 
nil\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn Quantity{i: int64Amount{value: result, scale: Scale(scale)}, Format: format}, nil\n\t\t\t}\n\t\t}\n\t}\n\n\tamount := new(inf.Dec)\n\tif _, ok := amount.SetString(value); !ok {\n\t\treturn Quantity{}, ErrNumeric\n\t}\n\n\t// So that no one but us has to think about suffixes, remove it.\n\tif base == 10 {\n\t\tamount.SetScale(amount.Scale() + Scale(exponent).infScale())\n\t} else if base == 2 {\n\t\t// numericSuffix = 2 ** exponent\n\t\tnumericSuffix := big.NewInt(1).Lsh(bigOne, uint(exponent))\n\t\tub := amount.UnscaledBig()\n\t\tamount.SetUnscaledBig(ub.Mul(ub, numericSuffix))\n\t}\n\n\t// Cap at min/max bounds.\n\tsign := amount.Sign()\n\tif sign == -1 {\n\t\tamount.Neg(amount)\n\t}\n\n\t// This rounds non-zero values up to the minimum representable value, under the theory that\n\t// if you want some resources, you should get some resources, even if you asked for way too small\n\t// of an amount.  Arguably, this should be inf.RoundHalfUp (normal rounding), but that would have\n\t// the side effect of rounding values < .5n to zero.\n\tif v, ok := amount.Unscaled(); v != int64(0) || !ok {\n\t\tamount.Round(amount, Nano.infScale(), inf.RoundUp)\n\t}\n\n\t// The max is just a simple cap.\n\t// TODO: this prevents accumulating quantities greater than int64, for instance quota across a cluster\n\tif format == BinarySI && amount.Cmp(maxAllowed.Dec) > 0 {\n\t\tamount.Set(maxAllowed.Dec)\n\t}\n\n\tif format == BinarySI && amount.Cmp(decOne) < 0 && amount.Cmp(decZero) > 0 {\n\t\t// This avoids rounding and hopefully confusion, too.\n\t\tformat = DecimalSI\n\t}\n\tif sign == -1 {\n\t\tamount.Neg(amount)\n\t}\n\n\treturn Quantity{d: infDecAmount{amount}, Format: format}, nil\n}\n\n// CanonicalizeBytes returns the canonical form of q and its suffix (see comment on Quantity).\n//\n// Note about BinarySI:\n// * If q.Format is set to BinarySI and q.Amount represents a non-zero value between\n//   -1 and +1, it will be emitted as if 
q.Format were DecimalSI.\n// * Otherwise, if q.Format is set to BinarySI, fractional parts of q.Amount will be\n//   rounded up. (1.1i becomes 2i.)\nfunc (q *Quantity) CanonicalizeBytes(out []byte) (result, suffix []byte) {\n\tif q.IsZero() {\n\t\treturn zeroBytes, nil\n\t}\n\n\tvar rounded CanonicalValue\n\tformat := q.Format\n\tswitch format {\n\tcase DecimalExponent, DecimalSI:\n\tcase BinarySI:\n\t\tif q.CmpInt64(-1024) > 0 && q.CmpInt64(1024) < 0 {\n\t\t\t// This avoids rounding and hopefully confusion, too.\n\t\t\tformat = DecimalSI\n\t\t} else {\n\t\t\tvar exact bool\n\t\t\tif rounded, exact = q.AsScale(0); !exact {\n\t\t\t\t// Don't lose precision-- show as DecimalSI\n\t\t\t\tformat = DecimalSI\n\t\t\t}\n\t\t}\n\tdefault:\n\t\tformat = DecimalExponent\n\t}\n\n\t// TODO: If BinarySI formatting is requested but would cause rounding, upgrade to\n\t// one of the other formats.\n\tswitch format {\n\tcase DecimalExponent, DecimalSI:\n\t\tnumber, exponent := q.AsCanonicalBytes(out)\n\t\tsuffix, _ := quantitySuffixer.constructBytes(10, exponent, format)\n\t\treturn number, suffix\n\tdefault:\n\t\t// format must be BinarySI\n\t\tnumber, exponent := rounded.AsCanonicalBase1024Bytes(out)\n\t\tsuffix, _ := quantitySuffixer.constructBytes(2, exponent*10, format)\n\t\treturn number, suffix\n\t}\n}\n\n// AsInt64 returns a representation of the current value as an int64 if a fast conversion\n// is possible. 
If false is returned, callers must use the inf.Dec form of this quantity.\nfunc (q *Quantity) AsInt64() (int64, bool) {\n\tif q.d.Dec != nil {\n\t\treturn 0, false\n\t}\n\treturn q.i.AsInt64()\n}\n\n// ToDec promotes the quantity in place to use an inf.Dec representation and returns itself.\nfunc (q *Quantity) ToDec() *Quantity {\n\tif q.d.Dec == nil {\n\t\tq.d.Dec = q.i.AsDec()\n\t\tq.i = int64Amount{}\n\t}\n\treturn q\n}\n\n// AsDec returns the quantity as represented by a scaled inf.Dec.\nfunc (q *Quantity) AsDec() *inf.Dec {\n\tif q.d.Dec != nil {\n\t\treturn q.d.Dec\n\t}\n\tq.d.Dec = q.i.AsDec()\n\tq.i = int64Amount{}\n\treturn q.d.Dec\n}\n\n// AsCanonicalBytes returns the canonical byte representation of this quantity as a mantissa\n// and base 10 exponent. The out byte slice may be passed to the method to avoid an extra\n// allocation.\nfunc (q *Quantity) AsCanonicalBytes(out []byte) (result []byte, exponent int32) {\n\tif q.d.Dec != nil {\n\t\treturn q.d.AsCanonicalBytes(out)\n\t}\n\treturn q.i.AsCanonicalBytes(out)\n}\n\n// IsZero returns true if the quantity is equal to zero.\nfunc (q *Quantity) IsZero() bool {\n\tif q.d.Dec != nil {\n\t\treturn q.d.Dec.Sign() == 0\n\t}\n\treturn q.i.value == 0\n}\n\n// Sign returns 0 if the quantity is zero, -1 if the quantity is less than zero, or 1 if the\n// quantity is greater than zero.\nfunc (q *Quantity) Sign() int {\n\tif q.d.Dec != nil {\n\t\treturn q.d.Dec.Sign()\n\t}\n\treturn q.i.Sign()\n}\n\n// AsScale returns the current value, rounded up to the provided scale, and returns\n// false if the scale resulted in a loss of precision.\nfunc (q *Quantity) AsScale(scale Scale) (CanonicalValue, bool) {\n\tif q.d.Dec != nil {\n\t\treturn q.d.AsScale(scale)\n\t}\n\treturn q.i.AsScale(scale)\n}\n\n// RoundUp updates the quantity to the provided scale, ensuring that the value is at\n// least 1. 
False is returned if the rounding operation resulted in a loss of precision.\n// Negative numbers are rounded away from zero (-9 scale 1 rounds to -10).\nfunc (q *Quantity) RoundUp(scale Scale) bool {\n\tif q.d.Dec != nil {\n\t\tq.s = \"\"\n\t\td, exact := q.d.AsScale(scale)\n\t\tq.d = d\n\t\treturn exact\n\t}\n\t// avoid clearing the string value if we have already calculated it\n\tif q.i.scale >= scale {\n\t\treturn true\n\t}\n\tq.s = \"\"\n\ti, exact := q.i.AsScale(scale)\n\tq.i = i\n\treturn exact\n}\n\n// Add adds the provided y quantity to the current value. If the current value is zero,\n// the format of the quantity will be updated to the format of y.\nfunc (q *Quantity) Add(y Quantity) {\n\tq.s = \"\"\n\tif q.d.Dec == nil && y.d.Dec == nil {\n\t\tif q.i.value == 0 {\n\t\t\tq.Format = y.Format\n\t\t}\n\t\tif q.i.Add(y.i) {\n\t\t\treturn\n\t\t}\n\t} else if q.IsZero() {\n\t\tq.Format = y.Format\n\t}\n\tq.ToDec().d.Dec.Add(q.d.Dec, y.AsDec())\n}\n\n// Sub subtracts the provided quantity from the current value in place. 
If the current\n// value is zero, the format of the quantity will be updated to the format of y.\nfunc (q *Quantity) Sub(y Quantity) {\n\tq.s = \"\"\n\tif q.IsZero() {\n\t\tq.Format = y.Format\n\t}\n\tif q.d.Dec == nil && y.d.Dec == nil && q.i.Sub(y.i) {\n\t\treturn\n\t}\n\tq.ToDec().d.Dec.Sub(q.d.Dec, y.AsDec())\n}\n\n// Cmp returns 0 if the quantity is equal to y, -1 if the quantity is less than y, or 1 if the\n// quantity is greater than y.\nfunc (q *Quantity) Cmp(y Quantity) int {\n\tif q.d.Dec == nil && y.d.Dec == nil {\n\t\treturn q.i.Cmp(y.i)\n\t}\n\treturn q.AsDec().Cmp(y.AsDec())\n}\n\n// CmpInt64 returns 0 if the quantity is equal to y, -1 if the quantity is less than y, or 1 if the\n// quantity is greater than y.\nfunc (q *Quantity) CmpInt64(y int64) int {\n\tif q.d.Dec != nil {\n\t\treturn q.d.Dec.Cmp(inf.NewDec(y, inf.Scale(0)))\n\t}\n\treturn q.i.Cmp(int64Amount{value: y})\n}\n\n// Neg sets quantity to be the negative value of itself.\nfunc (q *Quantity) Neg() {\n\tq.s = \"\"\n\tif q.d.Dec == nil {\n\t\tq.i.value = -q.i.value\n\t\treturn\n\t}\n\tq.d.Dec.Neg(q.d.Dec)\n}\n\n// int64QuantityExpectedBytes is the expected width in bytes of the canonical string representation\n// of most Quantity values.\nconst int64QuantityExpectedBytes = 18\n\n// String formats the Quantity as a string, caching the result if not calculated.\n// String is an expensive operation and caching this result significantly reduces the cost of\n// normal parse / marshal operations on Quantity.\nfunc (q *Quantity) String() string {\n\tif len(q.s) == 0 {\n\t\tresult := make([]byte, 0, int64QuantityExpectedBytes)\n\t\tnumber, suffix := q.CanonicalizeBytes(result)\n\t\tnumber = append(number, suffix...)\n\t\tq.s = string(number)\n\t}\n\treturn q.s\n}\n\n// MarshalJSON implements the json.Marshaller interface.\nfunc (q Quantity) MarshalJSON() ([]byte, error) {\n\tif len(q.s) > 0 {\n\t\tout := make([]byte, len(q.s)+2)\n\t\tout[0], out[len(out)-1] = '\"', '\"'\n\t\tcopy(out[1:], 
q.s)\n\t\treturn out, nil\n\t}\n\tresult := make([]byte, int64QuantityExpectedBytes, int64QuantityExpectedBytes)\n\tresult[0] = '\"'\n\tnumber, suffix := q.CanonicalizeBytes(result[1:1])\n\t// if the same slice was returned to us that we passed in, avoid another allocation by copying number into\n\t// the source slice and returning that\n\tif len(number) > 0 && &number[0] == &result[1] && (len(number)+len(suffix)+2) <= int64QuantityExpectedBytes {\n\t\tnumber = append(number, suffix...)\n\t\tnumber = append(number, '\"')\n\t\treturn result[:1+len(number)], nil\n\t}\n\t// if CanonicalizeBytes needed more space than our slice provided, we may need to allocate again so use\n\t// append\n\tresult = result[:1]\n\tresult = append(result, number...)\n\tresult = append(result, suffix...)\n\tresult = append(result, '\"')\n\treturn result, nil\n}\n\n// UnmarshalJSON implements the json.Unmarshaller interface.\n// TODO: Remove support for leading/trailing whitespace\nfunc (q *Quantity) UnmarshalJSON(value []byte) error {\n\tl := len(value)\n\tif l == 4 && bytes.Equal(value, []byte(\"null\")) {\n\t\tq.d.Dec = nil\n\t\tq.i = int64Amount{}\n\t\treturn nil\n\t}\n\tif l >= 2 && value[0] == '\"' && value[l-1] == '\"' {\n\t\tvalue = value[1 : l-1]\n\t}\n\n\tparsed, err := ParseQuantity(strings.TrimSpace(string(value)))\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// This copy is safe because parsed will not be referred to again.\n\t*q = parsed\n\treturn nil\n}\n\n// NewQuantity returns a new Quantity representing the given\n// value in the given format.\nfunc NewQuantity(value int64, format Format) *Quantity {\n\treturn &Quantity{\n\t\ti:      int64Amount{value: value},\n\t\tFormat: format,\n\t}\n}\n\n// NewMilliQuantity returns a new Quantity representing the given\n// value * 1/1000 in the given format. 
Note that BinarySI formatting\n// will round fractional values, and will be changed to DecimalSI for\n// values x where (-1 < x < 1) && (x != 0).\nfunc NewMilliQuantity(value int64, format Format) *Quantity {\n\treturn &Quantity{\n\t\ti:      int64Amount{value: value, scale: -3},\n\t\tFormat: format,\n\t}\n}\n\n// NewScaledQuantity returns a new Quantity representing the given\n// value * 10^scale in DecimalSI format.\nfunc NewScaledQuantity(value int64, scale Scale) *Quantity {\n\treturn &Quantity{\n\t\ti:      int64Amount{value: value, scale: scale},\n\t\tFormat: DecimalSI,\n\t}\n}\n\n// Value returns the value of q; any fractional part will be lost.\nfunc (q *Quantity) Value() int64 {\n\treturn q.ScaledValue(0)\n}\n\n// MilliValue returns the value of ceil(q * 1000); this could overflow an int64;\n// if that's a concern, call Value() first to verify the number is small enough.\nfunc (q *Quantity) MilliValue() int64 {\n\treturn q.ScaledValue(Milli)\n}\n\n// ScaledValue returns the value of ceil(q * 10^scale); this could overflow an int64.\n// To detect overflow, call Value() first and verify the expected magnitude.\nfunc (q *Quantity) ScaledValue(scale Scale) int64 {\n\tif q.d.Dec == nil {\n\t\ti, _ := q.i.AsScaledInt64(scale)\n\t\treturn i\n\t}\n\tdec := q.d.Dec\n\treturn scaledValue(dec.UnscaledBig(), int(dec.Scale()), int(scale.infScale()))\n}\n\n// Set sets q's value to be value.\nfunc (q *Quantity) Set(value int64) {\n\tq.SetScaled(value, 0)\n}\n\n// SetMilli sets q's value to be value * 1/1000.\nfunc (q *Quantity) SetMilli(value int64) {\n\tq.SetScaled(value, Milli)\n}\n\n// SetScaled sets q's value to be value * 10^scale\nfunc (q *Quantity) SetScaled(value int64, scale Scale) {\n\tq.s = \"\"\n\tq.d.Dec = nil\n\tq.i = int64Amount{value: value, scale: scale}\n}\n\n// Copy is a convenience function that makes a deep copy for you. 
Non-deep\n// copies of quantities share pointers and you will regret that.\nfunc (q *Quantity) Copy() *Quantity {\n\tif q.d.Dec == nil {\n\t\treturn &Quantity{\n\t\t\ts:      q.s,\n\t\t\ti:      q.i,\n\t\t\tFormat: q.Format,\n\t\t}\n\t}\n\ttmp := &inf.Dec{}\n\treturn &Quantity{\n\t\ts:      q.s,\n\t\td:      infDecAmount{tmp.Set(q.d.Dec)},\n\t\tFormat: q.Format,\n\t}\n}\n\n// qFlag is a helper type for the Flag function\ntype qFlag struct {\n\tdest *Quantity\n}\n\n// Sets the value of the internal Quantity. (used by flag & pflag)\nfunc (qf qFlag) Set(val string) error {\n\tq, err := ParseQuantity(val)\n\tif err != nil {\n\t\treturn err\n\t}\n\t// This copy is OK because q will not be referenced again.\n\t*qf.dest = q\n\treturn nil\n}\n\n// Converts the value of the internal Quantity to a string. (used by flag & pflag)\nfunc (qf qFlag) String() string {\n\treturn qf.dest.String()\n}\n\n// States the type of flag this is (Quantity). (used by pflag)\nfunc (qf qFlag) Type() string {\n\treturn \"quantity\"\n}\n\n// QuantityFlag is a helper that makes a quantity flag (using standard flag package).\n// Will panic if defaultValue is not a valid quantity.\nfunc QuantityFlag(flagName, defaultValue, description string) *Quantity {\n\tq := MustParse(defaultValue)\n\tflag.Var(NewQuantityFlagValue(&q), flagName, description)\n\treturn &q\n}\n\n// NewQuantityFlagValue returns an object that can be used to back a flag,\n// pointing at the given Quantity variable.\nfunc NewQuantityFlagValue(q *Quantity) flag.Value {\n\treturn qFlag{q}\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/resource/scale_int.go",
    "content": "/*\nCopyright 2015 The Kubernetes Authors All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage resource\n\nimport (\n\t\"math\"\n\t\"math/big\"\n\t\"sync\"\n)\n\nvar (\n\t// A sync pool to reduce allocation.\n\tintPool  sync.Pool\n\tmaxInt64 = big.NewInt(math.MaxInt64)\n)\n\nfunc init() {\n\tintPool.New = func() interface{} {\n\t\treturn &big.Int{}\n\t}\n}\n\n// scaledValue scales given unscaled value from scale to new Scale and returns\n// an int64. It ALWAYS rounds up the result when scaling down. 
The final result might\n// overflow.\n//\n// scale, newScale represent the scale of the unscaled decimal.\n// The mathematical value of the decimal is unscaled * 10**(-scale).\nfunc scaledValue(unscaled *big.Int, scale, newScale int) int64 {\n\tdif := scale - newScale\n\tif dif == 0 {\n\t\treturn unscaled.Int64()\n\t}\n\n\t// Handle scale up\n\t// This is an easy case, we do not need to care about rounding and overflow.\n\t// If any intermediate operation causes overflow, the result will overflow.\n\tif dif < 0 {\n\t\treturn unscaled.Int64() * int64(math.Pow10(-dif))\n\t}\n\n\t// Handle scale down\n\t// We have to be careful about the intermediate operations.\n\n\t// fast path when unscaled < max.Int64 and exp(10,dif) < max.Int64\n\tconst log10MaxInt64 = 19\n\tif unscaled.Cmp(maxInt64) < 0 && dif < log10MaxInt64 {\n\t\tdivide := int64(math.Pow10(dif))\n\t\tresult := unscaled.Int64() / divide\n\t\tmod := unscaled.Int64() % divide\n\t\tif mod != 0 {\n\t\t\treturn result + 1\n\t\t}\n\t\treturn result\n\t}\n\n\t// We should only convert back to int64 when getting the result.\n\tdivisor := intPool.Get().(*big.Int)\n\texp := intPool.Get().(*big.Int)\n\tresult := intPool.Get().(*big.Int)\n\tdefer func() {\n\t\tintPool.Put(divisor)\n\t\tintPool.Put(exp)\n\t\tintPool.Put(result)\n\t}()\n\n\t// divisor = 10^(dif)\n\t// TODO: create a lookup table if exp costs too much.\n\tdivisor.Exp(bigTen, exp.SetInt64(int64(dif)), nil)\n\t// reuse exp\n\tremainder := exp\n\n\t// result = unscaled / divisor\n\t// remainder = unscaled % divisor\n\tresult.DivMod(unscaled, divisor, remainder)\n\tif remainder.Sign() != 0 {\n\t\treturn result.Int64() + 1\n\t}\n\n\treturn result.Int64()\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/resource/suffix.go",
    "content": "/*\nCopyright 2014 The Kubernetes Authors All rights reserved.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\npackage resource\n\nimport (\n\t\"strconv\"\n)\n\ntype suffix string\n\n// suffixer can interpret and construct suffixes.\ntype suffixer interface {\n\tinterpret(suffix) (base, exponent int32, fmt Format, ok bool)\n\tconstruct(base, exponent int32, fmt Format) (s suffix, ok bool)\n\tconstructBytes(base, exponent int32, fmt Format) (s []byte, ok bool)\n}\n\n// quantitySuffixer handles suffixes for all three formats that quantity\n// can handle.\nvar quantitySuffixer = newSuffixer()\n\ntype bePair struct {\n\tbase, exponent int32\n}\n\ntype listSuffixer struct {\n\tsuffixToBE      map[suffix]bePair\n\tbeToSuffix      map[bePair]suffix\n\tbeToSuffixBytes map[bePair][]byte\n}\n\nfunc (ls *listSuffixer) addSuffix(s suffix, pair bePair) {\n\tif ls.suffixToBE == nil {\n\t\tls.suffixToBE = map[suffix]bePair{}\n\t}\n\tif ls.beToSuffix == nil {\n\t\tls.beToSuffix = map[bePair]suffix{}\n\t}\n\tif ls.beToSuffixBytes == nil {\n\t\tls.beToSuffixBytes = map[bePair][]byte{}\n\t}\n\tls.suffixToBE[s] = pair\n\tls.beToSuffix[pair] = s\n\tls.beToSuffixBytes[pair] = []byte(s)\n}\n\nfunc (ls *listSuffixer) lookup(s suffix) (base, exponent int32, ok bool) {\n\tpair, ok := ls.suffixToBE[s]\n\tif !ok {\n\t\treturn 0, 0, false\n\t}\n\treturn pair.base, pair.exponent, true\n}\n\nfunc (ls *listSuffixer) construct(base, exponent int32) (s suffix, ok bool) {\n\ts, ok = 
ls.beToSuffix[bePair{base, exponent}]\n\treturn\n}\n\nfunc (ls *listSuffixer) constructBytes(base, exponent int32) (s []byte, ok bool) {\n\ts, ok = ls.beToSuffixBytes[bePair{base, exponent}]\n\treturn\n}\n\ntype suffixHandler struct {\n\tdecSuffixes listSuffixer\n\tbinSuffixes listSuffixer\n}\n\ntype fastLookup struct {\n\t*suffixHandler\n}\n\nfunc (l fastLookup) interpret(s suffix) (base, exponent int32, format Format, ok bool) {\n\tswitch s {\n\tcase \"\":\n\t\treturn 10, 0, DecimalSI, true\n\tcase \"n\":\n\t\treturn 10, -9, DecimalSI, true\n\tcase \"u\":\n\t\treturn 10, -6, DecimalSI, true\n\tcase \"m\":\n\t\treturn 10, -3, DecimalSI, true\n\tcase \"k\":\n\t\treturn 10, 3, DecimalSI, true\n\tcase \"M\":\n\t\treturn 10, 6, DecimalSI, true\n\tcase \"G\":\n\t\treturn 10, 9, DecimalSI, true\n\t}\n\treturn l.suffixHandler.interpret(s)\n}\n\nfunc newSuffixer() suffixer {\n\tsh := &suffixHandler{}\n\n\t// IMPORTANT: if you change this section you must change fastLookup\n\n\tsh.binSuffixes.addSuffix(\"Ki\", bePair{2, 10})\n\tsh.binSuffixes.addSuffix(\"Mi\", bePair{2, 20})\n\tsh.binSuffixes.addSuffix(\"Gi\", bePair{2, 30})\n\tsh.binSuffixes.addSuffix(\"Ti\", bePair{2, 40})\n\tsh.binSuffixes.addSuffix(\"Pi\", bePair{2, 50})\n\tsh.binSuffixes.addSuffix(\"Ei\", bePair{2, 60})\n\t// Don't emit an error when trying to produce\n\t// a suffix for 2^0.\n\tsh.decSuffixes.addSuffix(\"\", bePair{2, 0})\n\n\tsh.decSuffixes.addSuffix(\"n\", bePair{10, -9})\n\tsh.decSuffixes.addSuffix(\"u\", bePair{10, -6})\n\tsh.decSuffixes.addSuffix(\"m\", bePair{10, -3})\n\tsh.decSuffixes.addSuffix(\"\", bePair{10, 0})\n\tsh.decSuffixes.addSuffix(\"k\", bePair{10, 3})\n\tsh.decSuffixes.addSuffix(\"M\", bePair{10, 6})\n\tsh.decSuffixes.addSuffix(\"G\", bePair{10, 9})\n\tsh.decSuffixes.addSuffix(\"T\", bePair{10, 12})\n\tsh.decSuffixes.addSuffix(\"P\", bePair{10, 15})\n\tsh.decSuffixes.addSuffix(\"E\", bePair{10, 18})\n\n\treturn fastLookup{sh}\n}\n\nfunc (sh *suffixHandler) construct(base, exponent 
int32, fmt Format) (s suffix, ok bool) {\n\tswitch fmt {\n\tcase DecimalSI:\n\t\treturn sh.decSuffixes.construct(base, exponent)\n\tcase BinarySI:\n\t\treturn sh.binSuffixes.construct(base, exponent)\n\tcase DecimalExponent:\n\t\tif base != 10 {\n\t\t\treturn \"\", false\n\t\t}\n\t\tif exponent == 0 {\n\t\t\treturn \"\", true\n\t\t}\n\t\treturn suffix(\"e\" + strconv.FormatInt(int64(exponent), 10)), true\n\t}\n\treturn \"\", false\n}\n\nfunc (sh *suffixHandler) constructBytes(base, exponent int32, format Format) (s []byte, ok bool) {\n\tswitch format {\n\tcase DecimalSI:\n\t\treturn sh.decSuffixes.constructBytes(base, exponent)\n\tcase BinarySI:\n\t\treturn sh.binSuffixes.constructBytes(base, exponent)\n\tcase DecimalExponent:\n\t\tif base != 10 {\n\t\t\treturn nil, false\n\t\t}\n\t\tif exponent == 0 {\n\t\t\treturn nil, true\n\t\t}\n\t\tresult := make([]byte, 8, 8)\n\t\tresult[0] = 'e'\n\t\tnumber := strconv.AppendInt(result[1:1], int64(exponent), 10)\n\t\tif &result[1] == &number[0] {\n\t\t\treturn result[:1+len(number)], true\n\t\t}\n\t\tresult = append(result[:1], number...)\n\t\treturn result, true\n\t}\n\treturn nil, false\n}\n\nfunc (sh *suffixHandler) interpret(suffix suffix) (base, exponent int32, fmt Format, ok bool) {\n\t// Try lookup tables first\n\tif b, e, ok := sh.decSuffixes.lookup(suffix); ok {\n\t\treturn b, e, DecimalSI, true\n\t}\n\tif b, e, ok := sh.binSuffixes.lookup(suffix); ok {\n\t\treturn b, e, BinarySI, true\n\t}\n\n\tif len(suffix) > 1 && (suffix[0] == 'E' || suffix[0] == 'e') {\n\t\tparsed, err := strconv.ParseInt(string(suffix[1:]), 10, 64)\n\t\tif err != nil {\n\t\t\treturn 0, 0, DecimalExponent, false\n\t\t}\n\t\treturn 10, int32(parsed), DecimalExponent, true\n\t}\n\n\treturn 0, 0, DecimalExponent, false\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/semver.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\n\t\"github.com/coreos/go-semver/semver\"\n)\n\nvar (\n\tErrNoZeroSemVer = ACVersionError(\"SemVer cannot be zero\")\n\tErrBadSemVer    = ACVersionError(\"SemVer is bad\")\n)\n\n// SemVer implements the Unmarshaler interface to define a field that must be\n// a semantic version string\n// TODO(jonboulle): extend upstream instead of wrapping?\ntype SemVer semver.Version\n\n// NewSemVer generates a new SemVer from a string. 
If the given string does\n// not represent a valid SemVer, nil and an error are returned.\nfunc NewSemVer(s string) (*SemVer, error) {\n\tnsv, err := semver.NewVersion(s)\n\tif err != nil {\n\t\treturn nil, ErrBadSemVer\n\t}\n\tv := SemVer(*nsv)\n\tif v.Empty() {\n\t\treturn nil, ErrNoZeroSemVer\n\t}\n\treturn &v, nil\n}\n\nfunc (sv SemVer) LessThanMajor(versionB SemVer) bool {\n\tmajorA := semver.Version(sv).Major\n\tmajorB := semver.Version(versionB).Major\n\tif majorA < majorB {\n\t\treturn true\n\t}\n\treturn false\n}\n\nfunc (sv SemVer) LessThanExact(versionB SemVer) bool {\n\tvA := semver.Version(sv)\n\tvB := semver.Version(versionB)\n\treturn vA.LessThan(vB)\n}\n\nfunc (sv SemVer) String() string {\n\ts := semver.Version(sv)\n\treturn s.String()\n}\n\nfunc (sv SemVer) Empty() bool {\n\treturn semver.Version(sv) == semver.Version{}\n}\n\n// UnmarshalJSON implements the json.Unmarshaler interface\nfunc (sv *SemVer) UnmarshalJSON(data []byte) error {\n\tvar s string\n\tif err := json.Unmarshal(data, &s); err != nil {\n\t\treturn err\n\t}\n\tv, err := NewSemVer(s)\n\tif err != nil {\n\t\treturn err\n\t}\n\t*sv = *v\n\treturn nil\n}\n\n// MarshalJSON implements the json.Marshaler interface\nfunc (sv SemVer) MarshalJSON() ([]byte, error) {\n\tif sv.Empty() {\n\t\treturn nil, ErrNoZeroSemVer\n\t}\n\treturn json.Marshal(sv.String())\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/url.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"net/url\"\n)\n\n// URL wraps url.URL to marshal/unmarshal to/from JSON strings and enforce\n// that the scheme is HTTP/HTTPS only\ntype URL url.URL\n\nfunc NewURL(s string) (*URL, error) {\n\tuu, err := url.Parse(s)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"bad URL: %v\", err)\n\t}\n\tnu := URL(*uu)\n\tif err := nu.assertValidScheme(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &nu, nil\n}\n\nfunc (u URL) String() string {\n\tuu := url.URL(u)\n\treturn uu.String()\n}\n\nfunc (u URL) assertValidScheme() error {\n\tswitch u.Scheme {\n\tcase \"http\", \"https\":\n\t\treturn nil\n\tdefault:\n\t\treturn fmt.Errorf(\"bad URL scheme, must be http/https\")\n\t}\n}\n\nfunc (u *URL) UnmarshalJSON(data []byte) error {\n\tvar s string\n\tif err := json.Unmarshal(data, &s); err != nil {\n\t\treturn err\n\t}\n\tnu, err := NewURL(s)\n\tif err != nil {\n\t\treturn err\n\t}\n\t*u = *nu\n\treturn nil\n}\n\nfunc (u URL) MarshalJSON() ([]byte, error) {\n\tif err := u.assertValidScheme(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(u.String())\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/user_annotations.go",
    "content": "// Copyright 2016 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\n// UserAnnotations are arbitrary key-value pairs, to be supplied and interpreted by the user\ntype UserAnnotations map[string]string\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/user_labels.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\n// UserLabels are arbitrary key-value pairs, to be supplied and interpreted by the user\ntype UserLabels map[string]string\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/uuid.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/hex\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"reflect\"\n\t\"strings\"\n)\n\nvar (\n\tErrNoEmptyUUID = errors.New(\"UUID cannot be empty\")\n)\n\n// UUID encodes an RFC4122-compliant UUID, marshaled to/from a string\n// TODO(jonboulle): vendor a package for this?\n// TODO(jonboulle): consider more flexibility in input string formats.\n// Right now, we only accept:\n//   \"6733C088-A507-4694-AABF-EDBE4FC5266F\"\n//   \"6733C088A5074694AABFEDBE4FC5266F\"\ntype UUID [16]byte\n\nfunc (u UUID) String() string {\n\treturn fmt.Sprintf(\"%x-%x-%x-%x-%x\", u[0:4], u[4:6], u[6:8], u[8:10], u[10:16])\n}\n\nfunc (u *UUID) Set(s string) error {\n\tnu, err := NewUUID(s)\n\tif err == nil {\n\t\t*u = *nu\n\t}\n\treturn err\n}\n\n// NewUUID generates a new UUID from the given string. 
If the string does not\n// represent a valid UUID, nil and an error are returned.\nfunc NewUUID(s string) (*UUID, error) {\n\ts = strings.Replace(s, \"-\", \"\", -1)\n\tif len(s) != 32 {\n\t\treturn nil, errors.New(\"bad UUID length != 32\")\n\t}\n\tdec, err := hex.DecodeString(s)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tvar u UUID\n\tfor i, b := range dec {\n\t\tu[i] = b\n\t}\n\treturn &u, nil\n}\n\nfunc (u UUID) Empty() bool {\n\treturn reflect.DeepEqual(u, UUID{})\n}\n\nfunc (u *UUID) UnmarshalJSON(data []byte) error {\n\tvar s string\n\tif err := json.Unmarshal(data, &s); err != nil {\n\t\treturn err\n\t}\n\tuu, err := NewUUID(s)\n\t// Check the parse error before touching uu: on failure NewUUID\n\t// returns nil, and calling Empty() on a nil *UUID would panic.\n\tif err != nil {\n\t\treturn err\n\t}\n\tif uu.Empty() {\n\t\treturn ErrNoEmptyUUID\n\t}\n\t*u = *uu\n\treturn nil\n}\n\nfunc (u UUID) MarshalJSON() ([]byte, error) {\n\tif u.Empty() {\n\t\treturn nil, ErrNoEmptyUUID\n\t}\n\treturn json.Marshal(u.String())\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/types/volume.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage types\n\nimport (\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"path/filepath\"\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/appc/spec/schema/common\"\n)\n\nconst (\n\temptyVolumeDefaultMode = \"0755\"\n\temptyVolumeDefaultUID  = 0\n\temptyVolumeDefaultGID  = 0\n)\n\n// Volume encapsulates a volume which should be mounted into the filesystem\n// of all apps in a PodManifest\ntype Volume struct {\n\tName ACName `json:\"name\"`\n\tKind string `json:\"kind\"`\n\n\t// currently used only by \"host\"\n\t// TODO(jonboulle): factor out?\n\tSource    string `json:\"source,omitempty\"`\n\tReadOnly  *bool  `json:\"readOnly,omitempty\"`\n\tRecursive *bool  `json:\"recursive,omitempty\"`\n\n\t// currently used only by \"empty\"\n\tMode *string `json:\"mode,omitempty\"`\n\tUID  *int    `json:\"uid,omitempty\"`\n\tGID  *int    `json:\"gid,omitempty\"`\n}\n\ntype volume Volume\n\nfunc (v Volume) assertValid() error {\n\tif v.Name.Empty() {\n\t\treturn errors.New(\"name must be set\")\n\t}\n\n\tswitch v.Kind {\n\tcase \"empty\":\n\t\tif v.Source != \"\" {\n\t\t\treturn errors.New(\"source for empty volume must be empty\")\n\t\t}\n\t\tif v.Mode == nil {\n\t\t\treturn errors.New(\"mode for empty volume must be set\")\n\t\t}\n\t\tif v.UID == nil {\n\t\t\treturn errors.New(\"uid for empty volume must be set\")\n\t\t}\n\t\tif v.GID == nil 
{\n\t\t\treturn errors.New(\"gid for empty volume must be set\")\n\t\t}\n\t\treturn nil\n\tcase \"host\":\n\t\tif v.Source == \"\" {\n\t\t\treturn errors.New(\"source for host volume cannot be empty\")\n\t\t}\n\t\tif v.Mode != nil {\n\t\t\treturn errors.New(\"mode for host volume cannot be set\")\n\t\t}\n\t\tif v.UID != nil {\n\t\t\treturn errors.New(\"uid for host volume cannot be set\")\n\t\t}\n\t\tif v.GID != nil {\n\t\t\treturn errors.New(\"gid for host volume cannot be set\")\n\t\t}\n\t\tif !filepath.IsAbs(v.Source) {\n\t\t\treturn errors.New(\"source for host volume must be absolute path\")\n\t\t}\n\t\treturn nil\n\tdefault:\n\t\treturn errors.New(`unrecognized volume kind: should be one of \"empty\", \"host\"`)\n\t}\n}\n\nfunc (v *Volume) UnmarshalJSON(data []byte) error {\n\tvar vv volume\n\tif err := json.Unmarshal(data, &vv); err != nil {\n\t\treturn err\n\t}\n\tnv := Volume(vv)\n\tmaybeSetDefaults(&nv)\n\tif err := nv.assertValid(); err != nil {\n\t\treturn err\n\t}\n\t*v = nv\n\treturn nil\n}\n\nfunc (v Volume) MarshalJSON() ([]byte, error) {\n\tif err := v.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\treturn json.Marshal(volume(v))\n}\n\nfunc (v Volume) String() string {\n\ts := []string{\n\t\tv.Name.String(),\n\t\t\",kind=\",\n\t\tv.Kind,\n\t}\n\tif v.Source != \"\" {\n\t\ts = append(s, \",source=\")\n\t\ts = append(s, v.Source)\n\t}\n\tif v.ReadOnly != nil {\n\t\ts = append(s, \",readOnly=\")\n\t\ts = append(s, strconv.FormatBool(*v.ReadOnly))\n\t}\n\tif v.Recursive != nil {\n\t\ts = append(s, \",recursive=\")\n\t\ts = append(s, strconv.FormatBool(*v.Recursive))\n\t}\n\tswitch v.Kind {\n\tcase \"empty\":\n\t\tif *v.Mode != emptyVolumeDefaultMode {\n\t\t\ts = append(s, \",mode=\")\n\t\t\ts = append(s, *v.Mode)\n\t\t}\n\t\tif *v.UID != emptyVolumeDefaultUID {\n\t\t\ts = append(s, \",uid=\")\n\t\t\ts = append(s, strconv.Itoa(*v.UID))\n\t\t}\n\t\tif *v.GID != emptyVolumeDefaultGID {\n\t\t\ts = append(s, \",gid=\")\n\t\t\ts = append(s, 
strconv.Itoa(*v.GID))\n\t\t}\n\t}\n\treturn strings.Join(s, \"\")\n}\n\n// VolumeFromString takes a command line volume parameter and returns a volume\n//\n// Example volume parameters:\n// \tdatabase,kind=host,source=/tmp,readOnly=true,recursive=true\nfunc VolumeFromString(vp string) (*Volume, error) {\n\tvp = \"name=\" + vp\n\tvpQuery, err := common.MakeQueryString(vp)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tv, err := url.ParseQuery(vpQuery)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn VolumeFromParams(v)\n}\n\nfunc VolumeFromParams(params map[string][]string) (*Volume, error) {\n\tvar vol Volume\n\tfor key, val := range params {\n\t\tval := val\n\t\tif len(val) > 1 {\n\t\t\treturn nil, fmt.Errorf(\"label %s with multiple values %q\", key, val)\n\t\t}\n\n\t\tswitch key {\n\t\tcase \"name\":\n\t\t\tacn, err := NewACName(val[0])\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tvol.Name = *acn\n\t\tcase \"kind\":\n\t\t\tvol.Kind = val[0]\n\t\tcase \"source\":\n\t\t\tvol.Source = val[0]\n\t\tcase \"readOnly\":\n\t\t\tro, err := strconv.ParseBool(val[0])\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tvol.ReadOnly = &ro\n\t\tcase \"recursive\":\n\t\t\trec, err := strconv.ParseBool(val[0])\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tvol.Recursive = &rec\n\t\tcase \"mode\":\n\t\t\tvol.Mode = &val[0]\n\t\tcase \"uid\":\n\t\t\tu, err := strconv.Atoi(val[0])\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tvol.UID = &u\n\t\tcase \"gid\":\n\t\t\tg, err := strconv.Atoi(val[0])\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tvol.GID = &g\n\t\tdefault:\n\t\t\treturn nil, fmt.Errorf(\"unknown volume parameter %q\", key)\n\t\t}\n\t}\n\n\tmaybeSetDefaults(&vol)\n\n\tif err := vol.assertValid(); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &vol, nil\n}\n\n// maybeSetDefaults sets the correct default values for certain fields on a\n// Volume if they are not already been set. 
These fields are not\n// pre-populated on all Volumes as the Volume type is polymorphic.\nfunc maybeSetDefaults(vol *Volume) {\n\tif vol.Kind == \"empty\" {\n\t\tif vol.Mode == nil {\n\t\t\tm := emptyVolumeDefaultMode\n\t\t\tvol.Mode = &m\n\t\t}\n\t\tif vol.UID == nil {\n\t\t\tu := emptyVolumeDefaultUID\n\t\t\tvol.UID = &u\n\t\t}\n\t\tif vol.GID == nil {\n\t\t\tg := emptyVolumeDefaultGID\n\t\t\tvol.GID = &g\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/appc/spec/schema/version.go",
    "content": "// Copyright 2015 The appc Authors\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage schema\n\nimport (\n\t\"github.com/appc/spec/schema/types\"\n)\n\nconst (\n\t// version represents the canonical version of the appc spec and tooling.\n\t// For now, the schema and tooling is coupled with the spec itself, so\n\t// this must be kept in sync with the VERSION file in the root of the repo.\n\tversion string = \"0.8.10\"\n)\n\nvar (\n\t// AppContainerVersion is the SemVer representation of version\n\tAppContainerVersion types.SemVer\n)\n\nfunc init() {\n\tv, err := types.NewSemVer(version)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tAppContainerVersion = *v\n}\n"
  },
  {
    "path": "vendor/github.com/coreos/go-semver/LICENSE",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "vendor/github.com/coreos/go-semver/example.go",
    "content": "package main\n\nimport (\n\t\"fmt\"\n\t\"github.com/coreos/go-semver/semver\"\n\t\"os\"\n)\n\nfunc main() {\n\tvA, err := semver.NewVersion(os.Args[1])\n\tif err != nil {\n\t\tfmt.Println(err.Error())\n\t}\n\tvB, err := semver.NewVersion(os.Args[2])\n\tif err != nil {\n\t\tfmt.Println(err.Error())\n\t}\n\n\tfmt.Printf(\"%s < %s == %t\\n\", vA, vB, vA.LessThan(*vB))\n}\n"
  },
  {
    "path": "vendor/github.com/coreos/go-semver/semver/semver.go",
    "content": "// Copyright 2013-2015 CoreOS, Inc.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\n// Semantic Versions http://semver.org\npackage semver\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"strings\"\n)\n\ntype Version struct {\n\tMajor      int64\n\tMinor      int64\n\tPatch      int64\n\tPreRelease PreRelease\n\tMetadata   string\n}\n\ntype PreRelease string\n\nfunc splitOff(input *string, delim string) (val string) {\n\tparts := strings.SplitN(*input, delim, 2)\n\n\tif len(parts) == 2 {\n\t\t*input = parts[0]\n\t\tval = parts[1]\n\t}\n\n\treturn val\n}\n\nfunc NewVersion(version string) (*Version, error) {\n\tv := Version{}\n\n\tv.Metadata = splitOff(&version, \"+\")\n\tv.PreRelease = PreRelease(splitOff(&version, \"-\"))\n\n\tdotParts := strings.SplitN(version, \".\", 3)\n\n\tif len(dotParts) != 3 {\n\t\treturn nil, errors.New(fmt.Sprintf(\"%s is not in dotted-tri format\", version))\n\t}\n\n\tparsed := make([]int64, 3, 3)\n\n\tfor i, v := range dotParts[:3] {\n\t\tval, err := strconv.ParseInt(v, 10, 64)\n\t\tparsed[i] = val\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tv.Major = parsed[0]\n\tv.Minor = parsed[1]\n\tv.Patch = parsed[2]\n\n\treturn &v, nil\n}\n\nfunc Must(v *Version, err error) *Version {\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\treturn v\n}\n\nfunc (v Version) String() string {\n\tvar buffer bytes.Buffer\n\n\tfmt.Fprintf(&buffer, \"%d.%d.%d\", v.Major, v.Minor, 
v.Patch)\n\n\tif v.PreRelease != \"\" {\n\t\tfmt.Fprintf(&buffer, \"-%s\", v.PreRelease)\n\t}\n\n\tif v.Metadata != \"\" {\n\t\tfmt.Fprintf(&buffer, \"+%s\", v.Metadata)\n\t}\n\n\treturn buffer.String()\n}\n\nfunc (v *Version) UnmarshalYAML(unmarshal func(interface{}) error) error {\n\tvar data string\n\tif err := unmarshal(&data); err != nil {\n\t\treturn err\n\t}\n\tvv, err := NewVersion(data)\n\tif err != nil {\n\t\treturn err\n\t}\n\t*v = *vv\n\treturn nil\n}\n\nfunc (v Version) MarshalJSON() ([]byte, error) {\n\treturn []byte(`\"` + v.String() + `\"`), nil\n}\n\nfunc (v *Version) UnmarshalJSON(data []byte) error {\n\tl := len(data)\n\tif l == 0 || string(data) == `\"\"` {\n\t\treturn nil\n\t}\n\tif l < 2 || data[0] != '\"' || data[l-1] != '\"' {\n\t\treturn errors.New(\"invalid semver string\")\n\t}\n\tvv, err := NewVersion(string(data[1 : l-1]))\n\tif err != nil {\n\t\treturn err\n\t}\n\t*v = *vv\n\treturn nil\n}\n\nfunc (v Version) LessThan(versionB Version) bool {\n\tversionA := v\n\tcmp := recursiveCompare(versionA.Slice(), versionB.Slice())\n\n\tif cmp == 0 {\n\t\tcmp = preReleaseCompare(versionA, versionB)\n\t}\n\n\tif cmp == -1 {\n\t\treturn true\n\t}\n\n\treturn false\n}\n\n/* Slice converts the comparable parts of the semver into a slice of strings */\nfunc (v Version) Slice() []int64 {\n\treturn []int64{v.Major, v.Minor, v.Patch}\n}\n\nfunc (p PreRelease) Slice() []string {\n\tpreRelease := string(p)\n\treturn strings.Split(preRelease, \".\")\n}\n\nfunc preReleaseCompare(versionA Version, versionB Version) int {\n\ta := versionA.PreRelease\n\tb := versionB.PreRelease\n\n\t/* Handle the case where if two versions are otherwise equal it is the\n\t * one without a PreRelease that is greater */\n\tif len(a) == 0 && (len(b) > 0) {\n\t\treturn 1\n\t} else if len(b) == 0 && (len(a) > 0) {\n\t\treturn -1\n\t}\n\n\t// If there is a prelease, check and compare each part.\n\treturn recursivePreReleaseCompare(a.Slice(), b.Slice())\n}\n\nfunc 
recursiveCompare(versionA []int64, versionB []int64) int {\n\tif len(versionA) == 0 {\n\t\treturn 0\n\t}\n\n\ta := versionA[0]\n\tb := versionB[0]\n\n\tif a > b {\n\t\treturn 1\n\t} else if a < b {\n\t\treturn -1\n\t}\n\n\treturn recursiveCompare(versionA[1:], versionB[1:])\n}\n\nfunc recursivePreReleaseCompare(versionA []string, versionB []string) int {\n\t// Handle slice length disparity.\n\tif len(versionA) == 0 {\n\t\t// Nothing to compare too, so we return 0\n\t\treturn 0\n\t} else if len(versionB) == 0 {\n\t\t// We're longer than versionB so return 1.\n\t\treturn 1\n\t}\n\n\ta := versionA[0]\n\tb := versionB[0]\n\n\taInt := false\n\tbInt := false\n\n\taI, err := strconv.Atoi(versionA[0])\n\tif err == nil {\n\t\taInt = true\n\t}\n\n\tbI, err := strconv.Atoi(versionB[0])\n\tif err == nil {\n\t\tbInt = true\n\t}\n\n\t// Handle Integer Comparison\n\tif aInt && bInt {\n\t\tif aI > bI {\n\t\t\treturn 1\n\t\t} else if aI < bI {\n\t\t\treturn -1\n\t\t}\n\t}\n\n\t// Handle String Comparison\n\tif a > b {\n\t\treturn 1\n\t} else if a < b {\n\t\treturn -1\n\t}\n\n\treturn recursivePreReleaseCompare(versionA[1:], versionB[1:])\n}\n\n// BumpMajor increments the Major field by 1 and resets all other fields to their default values\nfunc (v *Version) BumpMajor() {\n\tv.Major += 1\n\tv.Minor = 0\n\tv.Patch = 0\n\tv.PreRelease = PreRelease(\"\")\n\tv.Metadata = \"\"\n}\n\n// BumpMinor increments the Minor field by 1 and resets all other fields to their default values\nfunc (v *Version) BumpMinor() {\n\tv.Minor += 1\n\tv.Patch = 0\n\tv.PreRelease = PreRelease(\"\")\n\tv.Metadata = \"\"\n}\n\n// BumpPatch increments the Patch field by 1 and resets all other fields to their default values\nfunc (v *Version) BumpPatch() {\n\tv.Patch += 1\n\tv.PreRelease = PreRelease(\"\")\n\tv.Metadata = \"\"\n}\n"
  },
  {
    "path": "vendor/github.com/coreos/go-semver/semver/sort.go",
    "content": "// Copyright 2013-2015 CoreOS, Inc.\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage semver\n\nimport (\n\t\"sort\"\n)\n\ntype Versions []*Version\n\nfunc (s Versions) Len() int {\n\treturn len(s)\n}\n\nfunc (s Versions) Swap(i, j int) {\n\ts[i], s[j] = s[j], s[i]\n}\n\nfunc (s Versions) Less(i, j int) bool {\n\treturn s[i].LessThan(*s[j])\n}\n\n// Sort sorts the given slice of Version\nfunc Sort(versions []*Version) {\n\tsort.Sort(Versions(versions))\n}\n"
  },
  {
    "path": "vendor/github.com/coreos/ioprogress/LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2014 Mitchell Hashimoto\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n"
  },
  {
    "path": "vendor/github.com/coreos/ioprogress/draw.go",
    "content": "package ioprogress\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"strings\"\n\n\t\"golang.org/x/crypto/ssh/terminal\"\n)\n\n// DrawFunc is the callback type for drawing progress.\ntype DrawFunc func(int64, int64) error\n\n// DrawTextFormatFunc is a callback used by DrawFuncs that draw text in\n// order to format the text into some more human friendly format.\ntype DrawTextFormatFunc func(int64, int64) string\n\nvar defaultDrawFunc DrawFunc\n\nfunc init() {\n\tdefaultDrawFunc = DrawTerminal(os.Stdout)\n}\n\n// isTerminal returns True when w is going to a tty, and false otherwise.\nfunc isTerminal(w io.Writer) bool {\n\tif f, ok := w.(*os.File); ok {\n\t\treturn terminal.IsTerminal(int(f.Fd()))\n\t}\n\treturn false\n}\n\n// DrawTerminal returns a DrawFunc that draws a progress bar to an io.Writer\n// that is assumed to be a terminal (and therefore respects carriage returns).\nfunc DrawTerminal(w io.Writer) DrawFunc {\n\treturn DrawTerminalf(w, func(progress, total int64) string {\n\t\treturn fmt.Sprintf(\"%d/%d\", progress, total)\n\t})\n}\n\n// DrawTerminalf returns a DrawFunc that draws a progress bar to an io.Writer\n// that is formatted with the given formatting function.\nfunc DrawTerminalf(w io.Writer, f DrawTextFormatFunc) DrawFunc {\n\tvar maxLength int\n\n\treturn func(progress, total int64) error {\n\t\tif progress == -1 && total == -1 {\n\t\t\t_, err := fmt.Fprintf(w, \"\\n\")\n\t\t\treturn err\n\t\t}\n\n\t\t// Make sure we pad it to the max length we've ever drawn so that\n\t\t// we don't have trailing characters.\n\t\tline := f(progress, total)\n\t\tif len(line) < maxLength {\n\t\t\tline = fmt.Sprintf(\n\t\t\t\t\"%s%s\",\n\t\t\t\tline,\n\t\t\t\tstrings.Repeat(\" \", maxLength-len(line)))\n\t\t}\n\t\tmaxLength = len(line)\n\n\t\tterminate := \"\\r\"\n\t\tif !isTerminal(w) {\n\t\t\tterminate = \"\\n\"\n\t\t}\n\t\t_, err := fmt.Fprint(w, line+terminate)\n\t\treturn err\n\t}\n}\n\nvar byteUnits = []string{\"B\", \"KB\", \"MB\", \"GB\", 
\"TB\", \"PB\"}\n\n// DrawTextFormatBytes is a DrawTextFormatFunc that formats the progress\n// and total into human-friendly byte formats.\nfunc DrawTextFormatBytes(progress, total int64) string {\n\treturn fmt.Sprintf(\"%s/%s\", ByteUnitStr(progress), ByteUnitStr(total))\n}\n\n// DrawTextFormatBar returns a DrawTextFormatFunc that draws a progress\n// bar with the given width (in characters). This can be used in conjunction\n// with another DrawTextFormatFunc to create a progress bar with bytes, for\n// example:\n//\n//     bar := DrawTextFormatBar(20)\n//     func(progress, total int64) string {\n//         return fmt.Sprintf(\n//           \"%s %s\",\n//           bar(progress, total),\n//           DrawTextFormatBytes(progress, total))\n//     }\n//\nfunc DrawTextFormatBar(width int64) DrawTextFormatFunc {\n\treturn DrawTextFormatBarForW(width, nil)\n}\n\n// DrawTextFormatBarForW returns a DrawTextFormatFunc as described in the docs\n// for DrawTextFormatBar, however if the io.Writer passed in is not a tty then\n// the returned function will always return \"\".\nfunc DrawTextFormatBarForW(width int64, w io.Writer) DrawTextFormatFunc {\n\tif w != nil && !isTerminal(w) {\n\t\treturn func(progress, total int64) string {\n\t\t\treturn \"\"\n\t\t}\n\t}\n\n\twidth -= 2\n\n\treturn func(progress, total int64) string {\n\t\tcurrent := int64((float64(progress) / float64(total)) * float64(width))\n\t\tif current < 0 || current > width {\n\t\t\treturn fmt.Sprintf(\"[%s]\", strings.Repeat(\" \", int(width)))\n\t\t}\n\t\treturn fmt.Sprintf(\n\t\t\t\"[%s%s]\",\n\t\t\tstrings.Repeat(\"=\", int(current)),\n\t\t\tstrings.Repeat(\" \", int(width-current)))\n\t}\n}\n\n// ByteUnitStr pretty prints a number of bytes.\nfunc ByteUnitStr(n int64) string {\n\tvar unit string\n\tsize := float64(n)\n\tfor i := 1; i < len(byteUnits); i++ {\n\t\tif size < 1000 {\n\t\t\tunit = byteUnits[i-1]\n\t\t\tbreak\n\t\t}\n\n\t\tsize = size / 1000\n\t}\n\n\treturn fmt.Sprintf(\"%.3g %s\", size, 
unit)\n}\n"
  },
  {
    "path": "vendor/github.com/coreos/ioprogress/reader.go",
    "content": "package ioprogress\n\nimport (\n\t\"io\"\n\t\"time\"\n)\n\n// Reader is an implementation of io.Reader that draws the progress of\n// reading some data.\ntype Reader struct {\n\t// Reader is the underlying reader to read from\n\tReader io.Reader\n\n\t// Size is the total size of the data coming out of the reader.\n\tSize int64\n\n\t// DrawFunc is the callback to invoke to draw the progress bar. By\n\t// default, this will be DrawTerminal(os.Stdout).\n\t//\n\t// DrawInterval is the minimum time to wait between reads to update the\n\t// progress bar.\n\tDrawFunc     DrawFunc\n\tDrawInterval time.Duration\n\n\tprogress int64\n\tlastDraw time.Time\n}\n\n// Read reads from the underlying reader and invokes the DrawFunc if\n// appropriate. The DrawFunc is executed when there is data that is\n// read (progress is made) and at least DrawInterval time has passed.\nfunc (r *Reader) Read(p []byte) (int, error) {\n\t// If we haven't drawn before, initialize the progress bar\n\tif r.lastDraw.IsZero() {\n\t\tr.initProgress()\n\t}\n\n\t// Read from the underlying source\n\tn, err := r.Reader.Read(p)\n\n\t// Always increment the progress even if there was an error\n\tr.progress += int64(n)\n\n\t// If we don't have any errors, then draw the progress. 
If we are\n\t// at the end of the data, then finish the progress.\n\tif err == nil {\n\t\t// Only draw if we read data or we've never read data before (to\n\t\t// initialize the progress bar).\n\t\tif n > 0 {\n\t\t\tr.drawProgress()\n\t\t}\n\t}\n\tif err == io.EOF {\n\t\tr.finishProgress()\n\t}\n\n\treturn n, err\n}\n\nfunc (r *Reader) drawProgress() {\n\t// If we've drawn before, then make sure that the draw interval\n\t// has passed before we draw again.\n\tinterval := r.DrawInterval\n\tif interval == 0 {\n\t\tinterval = time.Second\n\t}\n\tif !r.lastDraw.IsZero() {\n\t\tnextDraw := r.lastDraw.Add(interval)\n\t\tif time.Now().Before(nextDraw) {\n\t\t\treturn\n\t\t}\n\t}\n\n\t// Draw\n\tf := r.drawFunc()\n\tf(r.progress, r.Size)\n\n\t// Record this draw so that we don't draw again really quickly\n\tr.lastDraw = time.Now()\n}\n\nfunc (r *Reader) finishProgress() {\n\tf := r.drawFunc()\n\tf(r.progress, r.Size)\n\n\t// Print a newline\n\tf(-1, -1)\n\n\t// Reset lastDraw so we don't finish again\n\tvar zeroDraw time.Time\n\tr.lastDraw = zeroDraw\n}\n\nfunc (r *Reader) initProgress() {\n\tvar zeroDraw time.Time\n\tr.lastDraw = zeroDraw\n\tr.drawProgress()\n\tr.lastDraw = zeroDraw\n}\n\nfunc (r *Reader) drawFunc() DrawFunc {\n\tif r.DrawFunc == nil {\n\t\treturn defaultDrawFunc\n\t}\n\n\treturn r.DrawFunc\n}\n"
  },
  {
    "path": "vendor/github.com/coreos/pkg/LICENSE",
    "content": "Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived 
from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright {yyyy} {name of copyright owner}\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n\n"
  },
  {
    "path": "vendor/github.com/coreos/pkg/NOTICE",
    "content": "CoreOS Project\nCopyright 2014 CoreOS, Inc\n\nThis product includes software developed at CoreOS, Inc.\n(http://www.coreos.com/).\n"
  },
  {
    "path": "vendor/github.com/coreos/pkg/progressutil/iocopy.go",
    "content": "// Copyright 2016 CoreOS Inc\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage progressutil\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"sync\"\n\t\"time\"\n)\n\nvar (\n\tErrAlreadyStarted = errors.New(\"cannot add copies after PrintAndWait has been called\")\n)\n\ntype copyReader struct {\n\treader  io.Reader\n\tcurrent int64\n\ttotal   int64\n\tpb      *ProgressBar\n}\n\nfunc (cr *copyReader) Read(p []byte) (int, error) {\n\tn, err := cr.reader.Read(p)\n\tcr.current += int64(n)\n\terr1 := cr.updateProgressBar()\n\tif err == nil {\n\t\terr = err1\n\t}\n\treturn n, err\n}\n\nfunc (cr *copyReader) updateProgressBar() error {\n\tcr.pb.SetPrintAfter(cr.formattedProgress())\n\n\tprogress := float64(cr.current) / float64(cr.total)\n\tif progress > 1 {\n\t\tprogress = 1\n\t}\n\treturn cr.pb.SetCurrentProgress(progress)\n}\n\n// NewCopyProgressPrinter returns a new CopyProgressPrinter\nfunc NewCopyProgressPrinter() *CopyProgressPrinter {\n\treturn &CopyProgressPrinter{results: make(chan error), cancel: make(chan struct{})}\n}\n\n// CopyProgressPrinter will perform an arbitrary number of io.Copy calls, while\n// continually printing the progress of each copy.\ntype CopyProgressPrinter struct {\n\tresults chan error\n\tcancel  chan struct{}\n\n\t// `lock` mutex protects all fields below it in CopyProgressPrinter struct\n\tlock    sync.Mutex\n\treaders []*copyReader\n\tstarted bool\n\tpbp     *ProgressBarPrinter\n}\n\n// 
AddCopy adds a copy for this CopyProgressPrinter to perform. An io.Copy call\n// will be made to copy bytes from reader to dest, and name and size will be\n// used to label the progress bar and display how much progress has been made.\n// If size is 0, the total size of the reader is assumed to be unknown.\n// AddCopy can only be called before PrintAndWait; otherwise, ErrAlreadyStarted\n// will be returned.\nfunc (cpp *CopyProgressPrinter) AddCopy(reader io.Reader, name string, size int64, dest io.Writer) error {\n\tcpp.lock.Lock()\n\tdefer cpp.lock.Unlock()\n\n\tif cpp.started {\n\t\treturn ErrAlreadyStarted\n\t}\n\tif cpp.pbp == nil {\n\t\tcpp.pbp = &ProgressBarPrinter{}\n\t\tcpp.pbp.PadToBeEven = true\n\t}\n\n\tcr := &copyReader{\n\t\treader:  reader,\n\t\tcurrent: 0,\n\t\ttotal:   size,\n\t\tpb:      cpp.pbp.AddProgressBar(),\n\t}\n\tcr.pb.SetPrintBefore(name)\n\tcr.pb.SetPrintAfter(cr.formattedProgress())\n\n\tcpp.readers = append(cpp.readers, cr)\n\n\tgo func() {\n\t\t_, err := io.Copy(dest, cr)\n\t\tselect {\n\t\tcase <-cpp.cancel:\n\t\t\treturn\n\t\tcase cpp.results <- err:\n\t\t\treturn\n\t\t}\n\t}()\n\treturn nil\n}\n\n// PrintAndWait will print the progress for each copy operation added with\n// AddCopy to printTo every printInterval. This will continue until every added\n// copy is finished, or until cancel is written to.\n// PrintAndWait may only be called once; any subsequent calls will immediately\n// return ErrAlreadyStarted.  
After PrintAndWait has been called, no more\n// copies may be added to the CopyProgressPrinter.\nfunc (cpp *CopyProgressPrinter) PrintAndWait(printTo io.Writer, printInterval time.Duration, cancel chan struct{}) error {\n\tcpp.lock.Lock()\n\tif cpp.started {\n\t\tcpp.lock.Unlock()\n\t\treturn ErrAlreadyStarted\n\t}\n\tcpp.started = true\n\tcpp.lock.Unlock()\n\n\tn := len(cpp.readers)\n\tif n == 0 {\n\t\t// Nothing to do.\n\t\treturn nil\n\t}\n\n\tdefer close(cpp.cancel)\n\tt := time.NewTicker(printInterval)\n\tallDone := false\n\tfor i := 0; i < n; {\n\t\tselect {\n\t\tcase <-cancel:\n\t\t\treturn nil\n\t\tcase <-t.C:\n\t\t\t_, err := cpp.pbp.Print(printTo)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\tcase err := <-cpp.results:\n\t\t\ti++\n\t\t\t// Once completion is signaled, further on this just drains\n\t\t\t// (unlikely) errors from the channel.\n\t\t\tif err == nil && !allDone {\n\t\t\t\tallDone, err = cpp.pbp.Print(printTo)\n\t\t\t}\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc (cr *copyReader) formattedProgress() string {\n\tvar totalStr string\n\tif cr.total == 0 {\n\t\ttotalStr = \"?\"\n\t} else {\n\t\ttotalStr = ByteUnitStr(cr.total)\n\t}\n\treturn fmt.Sprintf(\"%s / %s\", ByteUnitStr(cr.current), totalStr)\n}\n\nvar byteUnits = []string{\"B\", \"KB\", \"MB\", \"GB\", \"TB\", \"PB\"}\n\n// ByteUnitStr pretty prints a number of bytes.\nfunc ByteUnitStr(n int64) string {\n\tvar unit string\n\tsize := float64(n)\n\tfor i := 1; i < len(byteUnits); i++ {\n\t\tif size < 1000 {\n\t\t\tunit = byteUnits[i-1]\n\t\t\tbreak\n\t\t}\n\n\t\tsize = size / 1000\n\t}\n\n\treturn fmt.Sprintf(\"%.3g %s\", size, unit)\n}\n"
  },
  {
    "path": "vendor/github.com/coreos/pkg/progressutil/progressbar.go",
    "content": "// Copyright 2016 CoreOS Inc\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage progressutil\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"strings\"\n\t\"sync\"\n\n\t\"golang.org/x/crypto/ssh/terminal\"\n)\n\nvar (\n\t// ErrorProgressOutOfBounds is returned if the progress is set to a value\n\t// not between 0 and 1.\n\tErrorProgressOutOfBounds = fmt.Errorf(\"progress is out of bounds (0 to 1)\")\n\n\t// ErrorNoBarsAdded is returned when no progress bars have been added to a\n\t// ProgressBarPrinter before PrintAndWait is called.\n\tErrorNoBarsAdded = fmt.Errorf(\"AddProgressBar hasn't been called yet\")\n)\n\n// ProgressBar represents one progress bar in a ProgressBarPrinter. Should not\n// be created directly, use the AddProgressBar on a ProgressBarPrinter to\n// create these.\ntype ProgressBar struct {\n\tlock sync.Mutex\n\n\tcurrentProgress float64\n\tprintBefore     string\n\tprintAfter      string\n\tdone            bool\n}\n\nfunc (pb *ProgressBar) clone() *ProgressBar {\n\tpb.lock.Lock()\n\tpbClone := &ProgressBar{\n\t\tcurrentProgress: pb.currentProgress,\n\t\tprintBefore:     pb.printBefore,\n\t\tprintAfter:      pb.printAfter,\n\t\tdone:            pb.done,\n\t}\n\tpb.lock.Unlock()\n\treturn pbClone\n}\n\nfunc (pb *ProgressBar) GetCurrentProgress() float64 {\n\tpb.lock.Lock()\n\tval := pb.currentProgress\n\tpb.lock.Unlock()\n\treturn val\n}\n\n// SetCurrentProgress sets the progress of this ProgressBar. 
The progress must\n// be between 0 and 1 inclusive.\nfunc (pb *ProgressBar) SetCurrentProgress(progress float64) error {\n\tif progress < 0 || progress > 1 {\n\t\treturn ErrorProgressOutOfBounds\n\t}\n\tpb.lock.Lock()\n\tpb.currentProgress = progress\n\tpb.lock.Unlock()\n\treturn nil\n}\n\n// GetDone returns whether or not this progress bar is done\nfunc (pb *ProgressBar) GetDone() bool {\n\tpb.lock.Lock()\n\tval := pb.done\n\tpb.lock.Unlock()\n\treturn val\n}\n\n// SetDone sets whether or not this progress bar is done\nfunc (pb *ProgressBar) SetDone(val bool) {\n\tpb.lock.Lock()\n\tpb.done = val\n\tpb.lock.Unlock()\n}\n\n// GetPrintBefore gets the text printed on the line before the progress bar.\nfunc (pb *ProgressBar) GetPrintBefore() string {\n\tpb.lock.Lock()\n\tval := pb.printBefore\n\tpb.lock.Unlock()\n\treturn val\n}\n\n// SetPrintBefore sets the text printed on the line before the progress bar.\nfunc (pb *ProgressBar) SetPrintBefore(before string) {\n\tpb.lock.Lock()\n\tpb.printBefore = before\n\tpb.lock.Unlock()\n}\n\n// GetPrintAfter gets the text printed on the line after the progress bar.\nfunc (pb *ProgressBar) GetPrintAfter() string {\n\tpb.lock.Lock()\n\tval := pb.printAfter\n\tpb.lock.Unlock()\n\treturn val\n}\n\n// SetPrintAfter sets the text printed on the line after the progress bar.\nfunc (pb *ProgressBar) SetPrintAfter(after string) {\n\tpb.lock.Lock()\n\tpb.printAfter = after\n\tpb.lock.Unlock()\n}\n\n// ProgressBarPrinter will print out the progress of some number of\n// ProgressBars.\ntype ProgressBarPrinter struct {\n\tlock sync.Mutex\n\n\t// DisplayWidth can be set to influence how large the progress bars are.\n\t// The bars will be scaled to attempt to produce lines of this number of\n\t// characters, but lines of different lengths may still be printed. 
When\n\t// this value is 0 (aka unset), 80 character columns are assumed.\n\tDisplayWidth int\n\t// PadToBeEven, when set to true, will make Print pad the printBefore text\n\t// with trailing spaces and the printAfter text with leading spaces to make\n\t// the progress bars the same length.\n\tPadToBeEven         bool\n\tnumLinesInLastPrint int\n\tprogressBars        []*ProgressBar\n\tmaxBefore           int\n\tmaxAfter            int\n}\n\n// AddProgressBar will create a new ProgressBar, register it with this\n// ProgressBarPrinter, and return it. This must be called at least once before\n// PrintAndWait is called.\nfunc (pbp *ProgressBarPrinter) AddProgressBar() *ProgressBar {\n\tpb := &ProgressBar{}\n\tpbp.lock.Lock()\n\tpbp.progressBars = append(pbp.progressBars, pb)\n\tpbp.lock.Unlock()\n\treturn pb\n}\n\n// Print will print out progress information for each ProgressBar that has been\n// added to this ProgressBarPrinter. The progress will be written to printTo,\n// and if printTo is a terminal it will draw progress bars.  AddProgressBar\n// must be called at least once before Print is called. 
If printing to a\n// terminal, all draws after the first one will move the cursor up to draw over\n// the previously printed bars.\nfunc (pbp *ProgressBarPrinter) Print(printTo io.Writer) (bool, error) {\n\tpbp.lock.Lock()\n\tvar bars []*ProgressBar\n\tfor _, bar := range pbp.progressBars {\n\t\tbars = append(bars, bar.clone())\n\t}\n\tnumColumns := pbp.DisplayWidth\n\tpbp.lock.Unlock()\n\n\tif len(bars) == 0 {\n\t\treturn false, ErrorNoBarsAdded\n\t}\n\n\tif numColumns == 0 {\n\t\tnumColumns = 80\n\t}\n\n\tif isTerminal(printTo) {\n\t\tmoveCursorUp(printTo, pbp.numLinesInLastPrint)\n\t}\n\n\tfor _, bar := range bars {\n\t\tbeforeSize := len(bar.GetPrintBefore())\n\t\tafterSize := len(bar.GetPrintAfter())\n\t\tif beforeSize > pbp.maxBefore {\n\t\t\tpbp.maxBefore = beforeSize\n\t\t}\n\t\tif afterSize > pbp.maxAfter {\n\t\t\tpbp.maxAfter = afterSize\n\t\t}\n\t}\n\n\tallDone := true\n\tfor _, bar := range bars {\n\t\tif isTerminal(printTo) {\n\t\t\tbar.printToTerminal(printTo, numColumns, pbp.PadToBeEven, pbp.maxBefore, pbp.maxAfter)\n\t\t} else {\n\t\t\tbar.printToNonTerminal(printTo)\n\t\t}\n\t\tallDone = allDone && bar.GetCurrentProgress() == 1\n\t}\n\n\tpbp.numLinesInLastPrint = len(bars)\n\n\treturn allDone, nil\n}\n\n// moveCursorUp moves the cursor up numLines in the terminal\nfunc moveCursorUp(printTo io.Writer, numLines int) {\n\tif numLines > 0 {\n\t\tfmt.Fprintf(printTo, \"\\033[%dA\", numLines)\n\t}\n}\n\nfunc (pb *ProgressBar) printToTerminal(printTo io.Writer, numColumns int, padding bool, maxBefore, maxAfter int) {\n\tbefore := pb.GetPrintBefore()\n\tafter := pb.GetPrintAfter()\n\n\tif padding {\n\t\tbefore = before + strings.Repeat(\" \", maxBefore-len(before))\n\t\tafter = strings.Repeat(\" \", maxAfter-len(after)) + after\n\t}\n\n\tprogressBarSize := numColumns - (len(fmt.Sprintf(\"%s [] %s\", before, after)))\n\tprogressBar := \"\"\n\tif progressBarSize > 0 {\n\t\tcurrentProgress := int(pb.GetCurrentProgress() * 
float64(progressBarSize))\n\t\tprogressBar = fmt.Sprintf(\"[%s%s] \",\n\t\t\tstrings.Repeat(\"=\", currentProgress),\n\t\t\tstrings.Repeat(\" \", progressBarSize-currentProgress))\n\t} else {\n\t\t// If we can't fit the progress bar, better to not pad the before/after.\n\t\tbefore = pb.GetPrintBefore()\n\t\tafter = pb.GetPrintAfter()\n\t}\n\n\tfmt.Fprintf(printTo, \"%s %s%s\\n\", before, progressBar, after)\n}\n\nfunc (pb *ProgressBar) printToNonTerminal(printTo io.Writer) {\n\tif !pb.GetDone() {\n\t\tfmt.Fprintf(printTo, \"%s %s\\n\", pb.printBefore, pb.printAfter)\n\t\tif pb.GetCurrentProgress() == 1 {\n\t\t\tpb.SetDone(true)\n\t\t}\n\t}\n}\n\n// isTerminal returns true when w is going to a tty, and false otherwise.\nfunc isTerminal(w io.Writer) bool {\n\tif f, ok := w.(*os.File); ok {\n\t\treturn terminal.IsTerminal(int(f.Fd()))\n\t}\n\treturn false\n}\n"
  },
  {
    "path": "vendor/github.com/docker/distribution/LICENSE",
    "content": "Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived 
from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright {yyyy} {name of copyright owner}\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n\n"
  },
  {
    "path": "vendor/github.com/docker/distribution/blobs.go",
    "content": "package distribution\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net/http\"\n\t\"time\"\n\n\t\"github.com/docker/distribution/reference\"\n\t\"github.com/opencontainers/go-digest\"\n)\n\nvar (\n\t// ErrBlobExists returned when blob already exists\n\tErrBlobExists = errors.New(\"blob exists\")\n\n\t// ErrBlobDigestUnsupported when blob digest is an unsupported version.\n\tErrBlobDigestUnsupported = errors.New(\"unsupported blob digest\")\n\n\t// ErrBlobUnknown when blob is not found.\n\tErrBlobUnknown = errors.New(\"unknown blob\")\n\n\t// ErrBlobUploadUnknown returned when upload is not found.\n\tErrBlobUploadUnknown = errors.New(\"blob upload unknown\")\n\n\t// ErrBlobInvalidLength returned when the blob has an expected length on\n\t// commit, meaning mismatched with the descriptor or an invalid value.\n\tErrBlobInvalidLength = errors.New(\"blob invalid length\")\n)\n\n// ErrBlobInvalidDigest returned when digest check fails.\ntype ErrBlobInvalidDigest struct {\n\tDigest digest.Digest\n\tReason error\n}\n\nfunc (err ErrBlobInvalidDigest) Error() string {\n\treturn fmt.Sprintf(\"invalid digest for referenced layer: %v, %v\",\n\t\terr.Digest, err.Reason)\n}\n\n// ErrBlobMounted returned when a blob is mounted from another repository\n// instead of initiating an upload session.\ntype ErrBlobMounted struct {\n\tFrom       reference.Canonical\n\tDescriptor Descriptor\n}\n\nfunc (err ErrBlobMounted) Error() string {\n\treturn fmt.Sprintf(\"blob mounted from: %v to: %v\",\n\t\terr.From, err.Descriptor)\n}\n\n// Descriptor describes targeted content. Used in conjunction with a blob\n// store, a descriptor can be used to fetch, store and target any kind of\n// blob. The struct also describes the wire protocol format. Fields should\n// only be added but never changed.\ntype Descriptor struct {\n\t// MediaType describe the type of the content. 
All text based formats are\n\t// encoded as utf-8.\n\tMediaType string `json:\"mediaType,omitempty\"`\n\n\t// Size in bytes of content.\n\tSize int64 `json:\"size,omitempty\"`\n\n\t// Digest uniquely identifies the content. A byte stream can be verified\n\t// against this digest.\n\tDigest digest.Digest `json:\"digest,omitempty\"`\n\n\t// URLs contains the source URLs of this content.\n\tURLs []string `json:\"urls,omitempty\"`\n\n\t// NOTE: Before adding a field here, please ensure that all\n\t// other options have been exhausted. Many of the type relationships\n\t// depend on the simplicity of this type.\n}\n\n// Descriptor returns the descriptor, to make it satisfy the Describable\n// interface. Note that implementations of Describable are generally objects\n// which can be described, not simply descriptors; this exception is in place\n// to make it more convenient to pass actual descriptors to functions that\n// expect Describable objects.\nfunc (d Descriptor) Descriptor() Descriptor {\n\treturn d\n}\n\n// BlobStatter makes blob descriptors available by digest. The service may\n// provide a descriptor of a different digest if the provided digest is not\n// canonical.\ntype BlobStatter interface {\n\t// Stat provides metadata about a blob identified by the digest. If the\n\t// blob is unknown to the describer, ErrBlobUnknown will be returned.\n\tStat(ctx context.Context, dgst digest.Digest) (Descriptor, error)\n}\n\n// BlobDeleter enables deleting blobs from storage.\ntype BlobDeleter interface {\n\tDelete(ctx context.Context, dgst digest.Digest) error\n}\n\n// BlobEnumerator enables iterating over blobs from storage\ntype BlobEnumerator interface {\n\tEnumerate(ctx context.Context, ingester func(dgst digest.Digest) error) error\n}\n\n// BlobDescriptorService manages metadata about a blob by digest. Most\n// implementations will not expose such an interface explicitly. Such mappings\n// should be maintained by interacting with the BlobIngester. 
Hence, this is\n// left off of BlobService and BlobStore.\ntype BlobDescriptorService interface {\n\tBlobStatter\n\n\t// SetDescriptor assigns the descriptor to the digest. The provided digest and\n\t// the digest in the descriptor must map to identical content but they may\n\t// differ on their algorithm. The descriptor must have the canonical\n\t// digest of the content and the digest algorithm must match the\n\t// annotator's canonical algorithm.\n\t//\n\t// Such a facility can be used to map blobs between digest domains, with\n\t// the restriction that the algorithm of the descriptor must match the\n\t// canonical algorithm (ie sha256) of the annotator.\n\tSetDescriptor(ctx context.Context, dgst digest.Digest, desc Descriptor) error\n\n\t// Clear enables descriptors to be unlinked\n\tClear(ctx context.Context, dgst digest.Digest) error\n}\n\n// BlobDescriptorServiceFactory creates middleware for BlobDescriptorService.\ntype BlobDescriptorServiceFactory interface {\n\tBlobAccessController(svc BlobDescriptorService) BlobDescriptorService\n}\n\n// ReadSeekCloser is the primary reader type for blob data, combining\n// io.ReadSeeker with io.Closer.\ntype ReadSeekCloser interface {\n\tio.ReadSeeker\n\tio.Closer\n}\n\n// BlobProvider describes operations for getting blob data.\ntype BlobProvider interface {\n\t// Get returns the entire blob identified by digest along with the descriptor.\n\tGet(ctx context.Context, dgst digest.Digest) ([]byte, error)\n\n\t// Open provides a ReadSeekCloser to the blob identified by the provided\n\t// descriptor. If the blob is not known to the service, an error will be\n\t// returned.\n\tOpen(ctx context.Context, dgst digest.Digest) (ReadSeekCloser, error)\n}\n\n// BlobServer can serve blobs via http.\ntype BlobServer interface {\n\t// ServeBlob attempts to serve the blob, identified by dgst, via http. 
The\n\t// service may decide to redirect the client elsewhere or serve the data\n\t// directly.\n\t//\n\t// This handler only issues successful responses, such as 2xx or 3xx,\n\t// meaning it serves data or issues a redirect. If the blob is not\n\t// available, an error will be returned and the caller may still issue a\n\t// response.\n\t//\n\t// The implementation may serve the same blob from a different digest\n\t// domain. The appropriate headers will be set for the blob, unless they\n\t// have already been set by the caller.\n\tServeBlob(ctx context.Context, w http.ResponseWriter, r *http.Request, dgst digest.Digest) error\n}\n\n// BlobIngester ingests blob data.\ntype BlobIngester interface {\n\t// Put inserts the content p into the blob service, returning a descriptor\n\t// or an error.\n\tPut(ctx context.Context, mediaType string, p []byte) (Descriptor, error)\n\n\t// Create allocates a new blob writer to add a blob to this service. The\n\t// returned handle can be written to and later resumed using an opaque\n\t// identifier. With this approach, one can Close and Resume a BlobWriter\n\t// multiple times until the BlobWriter is committed or cancelled.\n\tCreate(ctx context.Context, options ...BlobCreateOption) (BlobWriter, error)\n\n\t// Resume attempts to resume a write to a blob, identified by an id.\n\tResume(ctx context.Context, id string) (BlobWriter, error)\n}\n\n// BlobCreateOption is a general extensible function argument for blob creation\n// methods. 
A BlobIngester may choose to honor any or none of the given\n// BlobCreateOptions, which can be specific to the implementation of the\n// BlobIngester receiving them.\n// TODO (brianbland): unify this with ManifestServiceOption in the future\ntype BlobCreateOption interface {\n\tApply(interface{}) error\n}\n\n// CreateOptions is a collection of blob creation modifiers relevant to general\n// blob storage intended to be configured by the BlobCreateOption.Apply method.\ntype CreateOptions struct {\n\tMount struct {\n\t\tShouldMount bool\n\t\tFrom        reference.Canonical\n\t\t// Stat allows to pass precalculated descriptor to link and return.\n\t\t// Blob access check will be skipped if set.\n\t\tStat *Descriptor\n\t}\n}\n\n// BlobWriter provides a handle for inserting data into a blob store.\n// Instances should be obtained from BlobWriteService.Writer and\n// BlobWriteService.Resume. If supported by the store, a writer can be\n// recovered with the id.\ntype BlobWriter interface {\n\tio.WriteCloser\n\tio.ReaderFrom\n\n\t// Size returns the number of bytes written to this blob.\n\tSize() int64\n\n\t// ID returns the identifier for this writer. The ID can be used with the\n\t// Blob service to later resume the write.\n\tID() string\n\n\t// StartedAt returns the time this blob write was started.\n\tStartedAt() time.Time\n\n\t// Commit completes the blob writer process. The content is verified\n\t// against the provided provisional descriptor, which may result in an\n\t// error. Depending on the implementation, written data may be validated\n\t// against the provisional descriptor fields. If MediaType is not present,\n\t// the implementation may reject the commit or assign \"application/octet-\n\t// stream\" to the blob. 
The returned descriptor may have a different\n\t// digest depending on the blob store, referred to as the canonical\n\t// descriptor.\n\tCommit(ctx context.Context, provisional Descriptor) (canonical Descriptor, err error)\n\n\t// Cancel ends the blob write without storing any data and frees any\n\t// associated resources. Any data written thus far will be lost. Cancel\n\t// implementations should allow multiple calls even after a commit that\n\t// result in a no-op. This allows use of Cancel in a defer statement,\n\t// increasing the assurance that it is correctly called.\n\tCancel(ctx context.Context) error\n}\n\n// BlobService combines the operations to access, read and write blobs. This\n// can be used to describe remote blob services.\ntype BlobService interface {\n\tBlobStatter\n\tBlobProvider\n\tBlobIngester\n}\n\n// BlobStore represent the entire suite of blob related operations. Such an\n// implementation can access, read, write, delete and serve blobs.\ntype BlobStore interface {\n\tBlobService\n\tBlobServer\n\tBlobDeleter\n}\n"
  },
  {
    "path": "vendor/github.com/docker/distribution/digestset/set.go",
    "content": "package digestset\n\nimport (\n\t\"errors\"\n\t\"sort\"\n\t\"strings\"\n\t\"sync\"\n\n\tdigest \"github.com/opencontainers/go-digest\"\n)\n\nvar (\n\t// ErrDigestNotFound is used when a matching digest\n\t// could not be found in a set.\n\tErrDigestNotFound = errors.New(\"digest not found\")\n\n\t// ErrDigestAmbiguous is used when multiple digests\n\t// are found in a set. None of the matching digests\n\t// should be considered valid matches.\n\tErrDigestAmbiguous = errors.New(\"ambiguous digest string\")\n)\n\n// Set is used to hold a unique set of digests which\n// may be easily referenced by easily  referenced by a string\n// representation of the digest as well as short representation.\n// The uniqueness of the short representation is based on other\n// digests in the set. If digests are omitted from this set,\n// collisions in a larger set may not be detected, therefore it\n// is important to always do short representation lookups on\n// the complete set of digests. To mitigate collisions, an\n// appropriately long short code should be used.\ntype Set struct {\n\tmutex   sync.RWMutex\n\tentries digestEntries\n}\n\n// NewSet creates an empty set of digests\n// which may have digests added.\nfunc NewSet() *Set {\n\treturn &Set{\n\t\tentries: digestEntries{},\n\t}\n}\n\n// checkShortMatch checks whether two digests match as either whole\n// values or short values. 
This function does not test equality,\n// rather whether the second value could match against the first\n// value.\nfunc checkShortMatch(alg digest.Algorithm, hex, shortAlg, shortHex string) bool {\n\tif len(hex) == len(shortHex) {\n\t\tif hex != shortHex {\n\t\t\treturn false\n\t\t}\n\t\tif len(shortAlg) > 0 && string(alg) != shortAlg {\n\t\t\treturn false\n\t\t}\n\t} else if !strings.HasPrefix(hex, shortHex) {\n\t\treturn false\n\t} else if len(shortAlg) > 0 && string(alg) != shortAlg {\n\t\treturn false\n\t}\n\treturn true\n}\n\n// Lookup looks for a digest matching the given string representation.\n// If no digests could be found ErrDigestNotFound will be returned\n// with an empty digest value. If multiple matches are found\n// ErrDigestAmbiguous will be returned with an empty digest value.\nfunc (dst *Set) Lookup(d string) (digest.Digest, error) {\n\tdst.mutex.RLock()\n\tdefer dst.mutex.RUnlock()\n\tif len(dst.entries) == 0 {\n\t\treturn \"\", ErrDigestNotFound\n\t}\n\tvar (\n\t\tsearchFunc func(int) bool\n\t\talg        digest.Algorithm\n\t\thex        string\n\t)\n\tdgst, err := digest.Parse(d)\n\tif err == digest.ErrDigestInvalidFormat {\n\t\thex = d\n\t\tsearchFunc = func(i int) bool {\n\t\t\treturn dst.entries[i].val >= d\n\t\t}\n\t} else {\n\t\thex = dgst.Hex()\n\t\talg = dgst.Algorithm()\n\t\tsearchFunc = func(i int) bool {\n\t\t\tif dst.entries[i].val == hex {\n\t\t\t\treturn dst.entries[i].alg >= alg\n\t\t\t}\n\t\t\treturn dst.entries[i].val >= hex\n\t\t}\n\t}\n\tidx := sort.Search(len(dst.entries), searchFunc)\n\tif idx == len(dst.entries) || !checkShortMatch(dst.entries[idx].alg, dst.entries[idx].val, string(alg), hex) {\n\t\treturn \"\", ErrDigestNotFound\n\t}\n\tif dst.entries[idx].alg == alg && dst.entries[idx].val == hex {\n\t\treturn dst.entries[idx].digest, nil\n\t}\n\tif idx+1 < len(dst.entries) && checkShortMatch(dst.entries[idx+1].alg, dst.entries[idx+1].val, string(alg), hex) {\n\t\treturn \"\", ErrDigestAmbiguous\n\t}\n\n\treturn 
dst.entries[idx].digest, nil\n}\n\n// Add adds the given digest to the set. An error will be returned\n// if the given digest is invalid. If the digest already exists in the\n// set, this operation will be a no-op.\nfunc (dst *Set) Add(d digest.Digest) error {\n\tif err := d.Validate(); err != nil {\n\t\treturn err\n\t}\n\tdst.mutex.Lock()\n\tdefer dst.mutex.Unlock()\n\tentry := &digestEntry{alg: d.Algorithm(), val: d.Hex(), digest: d}\n\tsearchFunc := func(i int) bool {\n\t\tif dst.entries[i].val == entry.val {\n\t\t\treturn dst.entries[i].alg >= entry.alg\n\t\t}\n\t\treturn dst.entries[i].val >= entry.val\n\t}\n\tidx := sort.Search(len(dst.entries), searchFunc)\n\tif idx == len(dst.entries) {\n\t\tdst.entries = append(dst.entries, entry)\n\t\treturn nil\n\t} else if dst.entries[idx].digest == d {\n\t\treturn nil\n\t}\n\n\tentries := append(dst.entries, nil)\n\tcopy(entries[idx+1:], entries[idx:len(entries)-1])\n\tentries[idx] = entry\n\tdst.entries = entries\n\treturn nil\n}\n\n// Remove removes the given digest from the set. An error will be\n// returned if the given digest is invalid. 
If the digest does\n// not exist in the set, this operation will be a no-op.\nfunc (dst *Set) Remove(d digest.Digest) error {\n\tif err := d.Validate(); err != nil {\n\t\treturn err\n\t}\n\tdst.mutex.Lock()\n\tdefer dst.mutex.Unlock()\n\tentry := &digestEntry{alg: d.Algorithm(), val: d.Hex(), digest: d}\n\tsearchFunc := func(i int) bool {\n\t\tif dst.entries[i].val == entry.val {\n\t\t\treturn dst.entries[i].alg >= entry.alg\n\t\t}\n\t\treturn dst.entries[i].val >= entry.val\n\t}\n\tidx := sort.Search(len(dst.entries), searchFunc)\n\t// Not found if idx is after or value at idx is not digest\n\tif idx == len(dst.entries) || dst.entries[idx].digest != d {\n\t\treturn nil\n\t}\n\n\tentries := dst.entries\n\tcopy(entries[idx:], entries[idx+1:])\n\tentries = entries[:len(entries)-1]\n\tdst.entries = entries\n\n\treturn nil\n}\n\n// All returns all the digests in the set\nfunc (dst *Set) All() []digest.Digest {\n\tdst.mutex.RLock()\n\tdefer dst.mutex.RUnlock()\n\tretValues := make([]digest.Digest, len(dst.entries))\n\tfor i := range dst.entries {\n\t\tretValues[i] = dst.entries[i].digest\n\t}\n\n\treturn retValues\n}\n\n// ShortCodeTable returns a map of Digest to unique short codes. The\n// length represents the minimum value, the maximum length may be the\n// entire value of digest if uniqueness cannot be achieved without the\n// full value. 
This function will attempt to make short codes as short\n// as possible to be unique.\nfunc ShortCodeTable(dst *Set, length int) map[digest.Digest]string {\n\tdst.mutex.RLock()\n\tdefer dst.mutex.RUnlock()\n\tm := make(map[digest.Digest]string, len(dst.entries))\n\tl := length\n\tresetIdx := 0\n\tfor i := 0; i < len(dst.entries); i++ {\n\t\tvar short string\n\t\textended := true\n\t\tfor extended {\n\t\t\textended = false\n\t\t\tif len(dst.entries[i].val) <= l {\n\t\t\t\tshort = dst.entries[i].digest.String()\n\t\t\t} else {\n\t\t\t\tshort = dst.entries[i].val[:l]\n\t\t\t\tfor j := i + 1; j < len(dst.entries); j++ {\n\t\t\t\t\tif checkShortMatch(dst.entries[j].alg, dst.entries[j].val, \"\", short) {\n\t\t\t\t\t\tif j > resetIdx {\n\t\t\t\t\t\t\tresetIdx = j\n\t\t\t\t\t\t}\n\t\t\t\t\t\textended = true\n\t\t\t\t\t} else {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif extended {\n\t\t\t\t\tl++\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tm[dst.entries[i].digest] = short\n\t\tif i >= resetIdx {\n\t\t\tl = length\n\t\t}\n\t}\n\treturn m\n}\n\ntype digestEntry struct {\n\talg    digest.Algorithm\n\tval    string\n\tdigest digest.Digest\n}\n\ntype digestEntries []*digestEntry\n\nfunc (d digestEntries) Len() int {\n\treturn len(d)\n}\n\nfunc (d digestEntries) Less(i, j int) bool {\n\tif d[i].val != d[j].val {\n\t\treturn d[i].val < d[j].val\n\t}\n\treturn d[i].alg < d[j].alg\n}\n\nfunc (d digestEntries) Swap(i, j int) {\n\td[i], d[j] = d[j], d[i]\n}\n"
  },
  {
    "path": "vendor/github.com/docker/distribution/doc.go",
    "content": "// Package distribution will define the interfaces for the components of\n// docker distribution. The goal is to allow users to reliably package, ship\n// and store content related to docker images.\n//\n// This is currently a work in progress. More details are available in the\n// README.md.\npackage distribution\n"
  },
  {
    "path": "vendor/github.com/docker/distribution/errors.go",
    "content": "package distribution\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/opencontainers/go-digest\"\n)\n\n// ErrAccessDenied is returned when an access to a requested resource is\n// denied.\nvar ErrAccessDenied = errors.New(\"access denied\")\n\n// ErrManifestNotModified is returned when a conditional manifest GetByTag\n// returns nil due to the client indicating it has the latest version\nvar ErrManifestNotModified = errors.New(\"manifest not modified\")\n\n// ErrUnsupported is returned when an unimplemented or unsupported action is\n// performed\nvar ErrUnsupported = errors.New(\"operation unsupported\")\n\n// ErrTagUnknown is returned if the given tag is not known by the tag service\ntype ErrTagUnknown struct {\n\tTag string\n}\n\nfunc (err ErrTagUnknown) Error() string {\n\treturn fmt.Sprintf(\"unknown tag=%s\", err.Tag)\n}\n\n// ErrRepositoryUnknown is returned if the named repository is not known by\n// the registry.\ntype ErrRepositoryUnknown struct {\n\tName string\n}\n\nfunc (err ErrRepositoryUnknown) Error() string {\n\treturn fmt.Sprintf(\"unknown repository name=%s\", err.Name)\n}\n\n// ErrRepositoryNameInvalid should be used to denote an invalid repository\n// name. 
Reason may be set, indicating the cause of invalidity.\ntype ErrRepositoryNameInvalid struct {\n\tName   string\n\tReason error\n}\n\nfunc (err ErrRepositoryNameInvalid) Error() string {\n\treturn fmt.Sprintf(\"repository name %q invalid: %v\", err.Name, err.Reason)\n}\n\n// ErrManifestUnknown is returned if the manifest is not known by the\n// registry.\ntype ErrManifestUnknown struct {\n\tName string\n\tTag  string\n}\n\nfunc (err ErrManifestUnknown) Error() string {\n\treturn fmt.Sprintf(\"unknown manifest name=%s tag=%s\", err.Name, err.Tag)\n}\n\n// ErrManifestUnknownRevision is returned when a manifest cannot be found by\n// revision within a repository.\ntype ErrManifestUnknownRevision struct {\n\tName     string\n\tRevision digest.Digest\n}\n\nfunc (err ErrManifestUnknownRevision) Error() string {\n\treturn fmt.Sprintf(\"unknown manifest name=%s revision=%s\", err.Name, err.Revision)\n}\n\n// ErrManifestUnverified is returned when the registry is unable to verify\n// the manifest.\ntype ErrManifestUnverified struct{}\n\nfunc (ErrManifestUnverified) Error() string {\n\treturn \"unverified manifest\"\n}\n\n// ErrManifestVerification provides a type to collect errors encountered\n// during manifest verification. Currently, it accepts errors of all types,\n// but it may be narrowed to those involving manifest verification.\ntype ErrManifestVerification []error\n\nfunc (errs ErrManifestVerification) Error() string {\n\tvar parts []string\n\tfor _, err := range errs {\n\t\tparts = append(parts, err.Error())\n\t}\n\n\treturn fmt.Sprintf(\"errors verifying manifest: %v\", strings.Join(parts, \",\"))\n}\n\n// ErrManifestBlobUnknown is returned when a referenced blob cannot be found.\ntype ErrManifestBlobUnknown struct {\n\tDigest digest.Digest\n}\n\nfunc (err ErrManifestBlobUnknown) Error() string {\n\treturn fmt.Sprintf(\"unknown blob %v on manifest\", err.Digest)\n}\n\n// ErrManifestNameInvalid should be used to denote an invalid manifest\n// name. 
Reason may be set, indicating the cause of invalidity.\ntype ErrManifestNameInvalid struct {\n\tName   string\n\tReason error\n}\n\nfunc (err ErrManifestNameInvalid) Error() string {\n\treturn fmt.Sprintf(\"manifest name %q invalid: %v\", err.Name, err.Reason)\n}\n"
  },
  {
    "path": "vendor/github.com/docker/distribution/manifests.go",
    "content": "package distribution\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"mime\"\n\n\t\"github.com/opencontainers/go-digest\"\n)\n\n// Manifest represents a registry object specifying a set of\n// references and an optional target\ntype Manifest interface {\n\t// References returns a list of objects which make up this manifest.\n\t// A reference is anything which can be represented by a\n\t// distribution.Descriptor. These can consist of layers, resources or other\n\t// manifests.\n\t//\n\t// While no particular order is required, implementations should return\n\t// them from highest to lowest priority. For example, one might want to\n\t// return the base layer before the top layer.\n\tReferences() []Descriptor\n\n\t// Payload provides the serialized format of the manifest, in addition to\n\t// the media type.\n\tPayload() (mediaType string, payload []byte, err error)\n}\n\n// ManifestBuilder creates a manifest allowing one to include dependencies.\n// Instances can be obtained from a version-specific manifest package.  Manifest\n// specific data is passed into the function which creates the builder.\ntype ManifestBuilder interface {\n\t// Build creates the manifest from this builder.\n\tBuild(ctx context.Context) (Manifest, error)\n\n\t// References returns a list of objects which have been added to this\n\t// builder. The dependencies are returned in the order they were added,\n\t// which should be from base to head.\n\tReferences() []Descriptor\n\n\t// AppendReference includes the given object in the manifest after any\n\t// existing dependencies. 
If the add fails, such as when adding an\n\t// unsupported dependency, an error may be returned.\n\t//\n\t// The destination of the reference is dependent on the manifest type and\n\t// the dependency type.\n\tAppendReference(dependency Describable) error\n}\n\n// ManifestService describes operations on image manifests.\ntype ManifestService interface {\n\t// Exists returns true if the manifest exists.\n\tExists(ctx context.Context, dgst digest.Digest) (bool, error)\n\n\t// Get retrieves the manifest specified by the given digest\n\tGet(ctx context.Context, dgst digest.Digest, options ...ManifestServiceOption) (Manifest, error)\n\n\t// Put creates or updates the given manifest returning the manifest digest\n\tPut(ctx context.Context, manifest Manifest, options ...ManifestServiceOption) (digest.Digest, error)\n\n\t// Delete removes the manifest specified by the given digest. Deleting\n\t// a manifest that doesn't exist will return ErrManifestNotFound\n\tDelete(ctx context.Context, dgst digest.Digest) error\n}\n\n// ManifestEnumerator enables iterating over manifests\ntype ManifestEnumerator interface {\n\t// Enumerate calls ingester for each manifest.\n\tEnumerate(ctx context.Context, ingester func(digest.Digest) error) error\n}\n\n// Describable is an interface for descriptors\ntype Describable interface {\n\tDescriptor() Descriptor\n}\n\n// ManifestMediaTypes returns the supported media types for manifests.\nfunc ManifestMediaTypes() (mediaTypes []string) {\n\tfor t := range mappings {\n\t\tif t != \"\" {\n\t\t\tmediaTypes = append(mediaTypes, t)\n\t\t}\n\t}\n\treturn\n}\n\n// UnmarshalFunc implements manifest unmarshalling for a given MediaType\ntype UnmarshalFunc func([]byte) (Manifest, Descriptor, error)\n\nvar mappings = make(map[string]UnmarshalFunc, 0)\n\n// UnmarshalManifest looks up manifest unmarshal functions based on\n// MediaType\nfunc UnmarshalManifest(ctHeader string, p []byte) (Manifest, Descriptor, error) {\n\t// Need to look up by the actual media 
type, not the raw contents of\n\t// the header. Strip semicolons and anything following them.\n\tvar mediaType string\n\tif ctHeader != \"\" {\n\t\tvar err error\n\t\tmediaType, _, err = mime.ParseMediaType(ctHeader)\n\t\tif err != nil {\n\t\t\treturn nil, Descriptor{}, err\n\t\t}\n\t}\n\n\tunmarshalFunc, ok := mappings[mediaType]\n\tif !ok {\n\t\tunmarshalFunc, ok = mappings[\"\"]\n\t\tif !ok {\n\t\t\treturn nil, Descriptor{}, fmt.Errorf(\"unsupported manifest media type and no default available: %s\", mediaType)\n\t\t}\n\t}\n\n\treturn unmarshalFunc(p)\n}\n\n// RegisterManifestSchema registers an UnmarshalFunc for a given schema type.  This\n// should be called from specific manifest packages.\nfunc RegisterManifestSchema(mediaType string, u UnmarshalFunc) error {\n\tif _, ok := mappings[mediaType]; ok {\n\t\treturn fmt.Errorf(\"manifest media type registration would overwrite existing: %s\", mediaType)\n\t}\n\tmappings[mediaType] = u\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/github.com/docker/distribution/reference/helpers.go",
    "content": "package reference\n\nimport \"path\"\n\n// IsNameOnly returns true if reference only contains a repo name.\nfunc IsNameOnly(ref Named) bool {\n\tif _, ok := ref.(NamedTagged); ok {\n\t\treturn false\n\t}\n\tif _, ok := ref.(Canonical); ok {\n\t\treturn false\n\t}\n\treturn true\n}\n\n// FamiliarName returns the familiar name string\n// for the given named, familiarizing if needed.\nfunc FamiliarName(ref Named) string {\n\tif nn, ok := ref.(normalizedNamed); ok {\n\t\treturn nn.Familiar().Name()\n\t}\n\treturn ref.Name()\n}\n\n// FamiliarString returns the familiar string representation\n// for the given reference, familiarizing if needed.\nfunc FamiliarString(ref Reference) string {\n\tif nn, ok := ref.(normalizedNamed); ok {\n\t\treturn nn.Familiar().String()\n\t}\n\treturn ref.String()\n}\n\n// FamiliarMatch reports whether ref matches the specified pattern.\n// See https://godoc.org/path#Match for supported patterns.\nfunc FamiliarMatch(pattern string, ref Reference) (bool, error) {\n\tmatched, err := path.Match(pattern, FamiliarString(ref))\n\tif namedRef, isNamed := ref.(Named); isNamed && !matched {\n\t\tmatched, _ = path.Match(pattern, FamiliarName(namedRef))\n\t}\n\treturn matched, err\n}\n"
  },
  {
    "path": "vendor/github.com/docker/distribution/reference/normalize.go",
    "content": "package reference\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/docker/distribution/digestset\"\n\t\"github.com/opencontainers/go-digest\"\n)\n\nvar (\n\tlegacyDefaultDomain = \"index.docker.io\"\n\tdefaultDomain       = \"docker.io\"\n\tofficialRepoName    = \"library\"\n\tdefaultTag          = \"latest\"\n)\n\n// normalizedNamed represents a name which has been\n// normalized and has a familiar form. A familiar name\n// is what is used in Docker UI. An example normalized\n// name is \"docker.io/library/ubuntu\" and corresponding\n// familiar name of \"ubuntu\".\ntype normalizedNamed interface {\n\tNamed\n\tFamiliar() Named\n}\n\n// ParseNormalizedNamed parses a string into a named reference\n// transforming a familiar name from Docker UI to a fully\n// qualified reference. If the value may be an identifier\n// use ParseAnyReference.\nfunc ParseNormalizedNamed(s string) (Named, error) {\n\tif ok := anchoredIdentifierRegexp.MatchString(s); ok {\n\t\treturn nil, fmt.Errorf(\"invalid repository name (%s), cannot specify 64-byte hexadecimal strings\", s)\n\t}\n\tdomain, remainder := splitDockerDomain(s)\n\tvar remoteName string\n\tif tagSep := strings.IndexRune(remainder, ':'); tagSep > -1 {\n\t\tremoteName = remainder[:tagSep]\n\t} else {\n\t\tremoteName = remainder\n\t}\n\tif strings.ToLower(remoteName) != remoteName {\n\t\treturn nil, errors.New(\"invalid reference format: repository name must be lowercase\")\n\t}\n\n\tref, err := Parse(domain + \"/\" + remainder)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tnamed, isNamed := ref.(Named)\n\tif !isNamed {\n\t\treturn nil, fmt.Errorf(\"reference %s has no name\", ref.String())\n\t}\n\treturn named, nil\n}\n\n// splitDockerDomain splits a repository name to domain and remotename string.\n// If no valid domain is found, the default domain is used. 
Repository name\n// needs to be already validated before.\nfunc splitDockerDomain(name string) (domain, remainder string) {\n\ti := strings.IndexRune(name, '/')\n\tif i == -1 || (!strings.ContainsAny(name[:i], \".:\") && name[:i] != \"localhost\") {\n\t\tdomain, remainder = defaultDomain, name\n\t} else {\n\t\tdomain, remainder = name[:i], name[i+1:]\n\t}\n\tif domain == legacyDefaultDomain {\n\t\tdomain = defaultDomain\n\t}\n\tif domain == defaultDomain && !strings.ContainsRune(remainder, '/') {\n\t\tremainder = officialRepoName + \"/\" + remainder\n\t}\n\treturn\n}\n\n// familiarizeName returns a shortened version of the name familiar\n// to the Docker UI. Familiar names have the default domain\n// \"docker.io\" and \"library/\" repository prefix removed.\n// For example, \"docker.io/library/redis\" will have the familiar\n// name \"redis\" and \"docker.io/dmcgowan/myapp\" will be \"dmcgowan/myapp\".\n// Returns a familiarized named only reference.\nfunc familiarizeName(named namedRepository) repository {\n\trepo := repository{\n\t\tdomain: named.Domain(),\n\t\tpath:   named.Path(),\n\t}\n\n\tif repo.domain == defaultDomain {\n\t\trepo.domain = \"\"\n\t\t// Handle official repositories which have the pattern \"library/<official repo name>\"\n\t\tif split := strings.Split(repo.path, \"/\"); len(split) == 2 && split[0] == officialRepoName {\n\t\t\trepo.path = split[1]\n\t\t}\n\t}\n\treturn repo\n}\n\nfunc (r reference) Familiar() Named {\n\treturn reference{\n\t\tnamedRepository: familiarizeName(r.namedRepository),\n\t\ttag:             r.tag,\n\t\tdigest:          r.digest,\n\t}\n}\n\nfunc (r repository) Familiar() Named {\n\treturn familiarizeName(r)\n}\n\nfunc (t taggedReference) Familiar() Named {\n\treturn taggedReference{\n\t\tnamedRepository: familiarizeName(t.namedRepository),\n\t\ttag:             t.tag,\n\t}\n}\n\nfunc (c canonicalReference) Familiar() Named {\n\treturn canonicalReference{\n\t\tnamedRepository: 
familiarizeName(c.namedRepository),\n\t\tdigest:          c.digest,\n\t}\n}\n\n// TagNameOnly adds the default tag \"latest\" to a reference if it only has\n// a repo name.\nfunc TagNameOnly(ref Named) Named {\n\tif IsNameOnly(ref) {\n\t\tnamedTagged, err := WithTag(ref, defaultTag)\n\t\tif err != nil {\n\t\t\t// Default tag must be valid, to create a NamedTagged\n\t\t\t// type with non-validated input the WithTag function\n\t\t\t// should be used instead\n\t\t\tpanic(err)\n\t\t}\n\t\treturn namedTagged\n\t}\n\treturn ref\n}\n\n// ParseAnyReference parses a reference string as a possible identifier,\n// full digest, or familiar name.\nfunc ParseAnyReference(ref string) (Reference, error) {\n\tif ok := anchoredIdentifierRegexp.MatchString(ref); ok {\n\t\treturn digestReference(\"sha256:\" + ref), nil\n\t}\n\tif dgst, err := digest.Parse(ref); err == nil {\n\t\treturn digestReference(dgst), nil\n\t}\n\n\treturn ParseNormalizedNamed(ref)\n}\n\n// ParseAnyReferenceWithSet parses a reference string as a possible short\n// identifier to be matched in a digest set, a full digest, or familiar name.\nfunc ParseAnyReferenceWithSet(ref string, ds *digestset.Set) (Reference, error) {\n\tif ok := anchoredShortIdentifierRegexp.MatchString(ref); ok {\n\t\tdgst, err := ds.Lookup(ref)\n\t\tif err == nil {\n\t\t\treturn digestReference(dgst), nil\n\t\t}\n\t} else {\n\t\tif dgst, err := digest.Parse(ref); err == nil {\n\t\t\treturn digestReference(dgst), nil\n\t\t}\n\t}\n\n\treturn ParseNormalizedNamed(ref)\n}\n"
  },
  {
    "path": "vendor/github.com/docker/distribution/reference/reference.go",
    "content": "// Package reference provides a general type to represent any way of referencing images within the registry.\n// Its main purpose is to abstract tags and digests (content-addressable hash).\n//\n// Grammar\n//\n// \treference                       := name [ \":\" tag ] [ \"@\" digest ]\n//\tname                            := [domain '/'] path-component ['/' path-component]*\n//\tdomain                          := domain-component ['.' domain-component]* [':' port-number]\n//\tdomain-component                := /([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])/\n//\tport-number                     := /[0-9]+/\n//\tpath-component                  := alpha-numeric [separator alpha-numeric]*\n// \talpha-numeric                   := /[a-z0-9]+/\n//\tseparator                       := /[_.]|__|[-]*/\n//\n//\ttag                             := /[\\w][\\w.-]{0,127}/\n//\n//\tdigest                          := digest-algorithm \":\" digest-hex\n//\tdigest-algorithm                := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]*\n//\tdigest-algorithm-separator      := /[+.-_]/\n//\tdigest-algorithm-component      := /[A-Za-z][A-Za-z0-9]*/\n//\tdigest-hex                      := /[0-9a-fA-F]{32,}/ ; At least 128 bit digest value\n//\n//\tidentifier                      := /[a-f0-9]{64}/\n//\tshort-identifier                := /[a-f0-9]{6,64}/\npackage reference\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/opencontainers/go-digest\"\n)\n\nconst (\n\t// NameTotalLengthMax is the maximum total number of characters in a repository name.\n\tNameTotalLengthMax = 255\n)\n\nvar (\n\t// ErrReferenceInvalidFormat represents an error while trying to parse a string as a reference.\n\tErrReferenceInvalidFormat = errors.New(\"invalid reference format\")\n\n\t// ErrTagInvalidFormat represents an error while trying to parse a string as a tag.\n\tErrTagInvalidFormat = errors.New(\"invalid tag format\")\n\n\t// 
ErrDigestInvalidFormat represents an error while trying to parse a string as a digest.\n\tErrDigestInvalidFormat = errors.New(\"invalid digest format\")\n\n\t// ErrNameContainsUppercase is returned for invalid repository names that contain uppercase characters.\n\tErrNameContainsUppercase = errors.New(\"repository name must be lowercase\")\n\n\t// ErrNameEmpty is returned for empty, invalid repository names.\n\tErrNameEmpty = errors.New(\"repository name must have at least one component\")\n\n\t// ErrNameTooLong is returned when a repository name is longer than NameTotalLengthMax.\n\tErrNameTooLong = fmt.Errorf(\"repository name must not be more than %v characters\", NameTotalLengthMax)\n\n\t// ErrNameNotCanonical is returned when a name is not canonical.\n\tErrNameNotCanonical = errors.New(\"repository name must be canonical\")\n)\n\n// Reference is an opaque object reference identifier that may include\n// modifiers such as a hostname, name, tag, and digest.\ntype Reference interface {\n\t// String returns the full reference\n\tString() string\n}\n\n// Field provides a wrapper type for resolving correct reference types when\n// working with encoding.\ntype Field struct {\n\treference Reference\n}\n\n// AsField wraps a reference in a Field for encoding.\nfunc AsField(reference Reference) Field {\n\treturn Field{reference}\n}\n\n// Reference unwraps the reference type from the field to\n// return the Reference object. 
This object should be\n// of the appropriate type to further check for different\n// reference types.\nfunc (f Field) Reference() Reference {\n\treturn f.reference\n}\n\n// MarshalText serializes the field to byte text which\n// is the string of the reference.\nfunc (f Field) MarshalText() (p []byte, err error) {\n\treturn []byte(f.reference.String()), nil\n}\n\n// UnmarshalText parses text bytes by invoking the\n// reference parser to ensure the appropriately\n// typed reference object is wrapped by field.\nfunc (f *Field) UnmarshalText(p []byte) error {\n\tr, err := Parse(string(p))\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tf.reference = r\n\treturn nil\n}\n\n// Named is an object with a full name\ntype Named interface {\n\tReference\n\tName() string\n}\n\n// Tagged is an object which has a tag\ntype Tagged interface {\n\tReference\n\tTag() string\n}\n\n// NamedTagged is an object including a name and tag.\ntype NamedTagged interface {\n\tNamed\n\tTag() string\n}\n\n// Digested is an object which has a digest\n// by which it can be referenced\ntype Digested interface {\n\tReference\n\tDigest() digest.Digest\n}\n\n// Canonical reference is an object with a fully unique\n// name including a name with domain and digest\ntype Canonical interface {\n\tNamed\n\tDigest() digest.Digest\n}\n\n// namedRepository is a reference to a repository with a name.\n// A namedRepository has both domain and path components.\ntype namedRepository interface {\n\tNamed\n\tDomain() string\n\tPath() string\n}\n\n// Domain returns the domain part of the Named reference\nfunc Domain(named Named) string {\n\tif r, ok := named.(namedRepository); ok {\n\t\treturn r.Domain()\n\t}\n\tdomain, _ := splitDomain(named.Name())\n\treturn domain\n}\n\n// Path returns the name without the domain part of the Named reference\nfunc Path(named Named) (name string) {\n\tif r, ok := named.(namedRepository); ok {\n\t\treturn r.Path()\n\t}\n\t_, path := splitDomain(named.Name())\n\treturn path\n}\n\nfunc 
splitDomain(name string) (string, string) {\n\tmatch := anchoredNameRegexp.FindStringSubmatch(name)\n\tif len(match) != 3 {\n\t\treturn \"\", name\n\t}\n\treturn match[1], match[2]\n}\n\n// SplitHostname splits a named reference into a\n// hostname and name string. If no valid hostname is\n// found, the hostname is empty and the full value\n// is returned as name\n// DEPRECATED: Use Domain or Path\nfunc SplitHostname(named Named) (string, string) {\n\tif r, ok := named.(namedRepository); ok {\n\t\treturn r.Domain(), r.Path()\n\t}\n\treturn splitDomain(named.Name())\n}\n\n// Parse parses s and returns a syntactically valid Reference.\n// If an error was encountered it is returned, along with a nil Reference.\n// NOTE: Parse will not handle short digests.\nfunc Parse(s string) (Reference, error) {\n\tmatches := ReferenceRegexp.FindStringSubmatch(s)\n\tif matches == nil {\n\t\tif s == \"\" {\n\t\t\treturn nil, ErrNameEmpty\n\t\t}\n\t\tif ReferenceRegexp.FindStringSubmatch(strings.ToLower(s)) != nil {\n\t\t\treturn nil, ErrNameContainsUppercase\n\t\t}\n\t\treturn nil, ErrReferenceInvalidFormat\n\t}\n\n\tif len(matches[1]) > NameTotalLengthMax {\n\t\treturn nil, ErrNameTooLong\n\t}\n\n\tvar repo repository\n\n\tnameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])\n\tif nameMatch != nil && len(nameMatch) == 3 {\n\t\trepo.domain = nameMatch[1]\n\t\trepo.path = nameMatch[2]\n\t} else {\n\t\trepo.domain = \"\"\n\t\trepo.path = matches[1]\n\t}\n\n\tref := reference{\n\t\tnamedRepository: repo,\n\t\ttag:             matches[2],\n\t}\n\tif matches[3] != \"\" {\n\t\tvar err error\n\t\tref.digest, err = digest.Parse(matches[3])\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\n\tr := getBestReferenceType(ref)\n\tif r == nil {\n\t\treturn nil, ErrNameEmpty\n\t}\n\n\treturn r, nil\n}\n\n// ParseNamed parses s and returns a syntactically valid reference implementing\n// the Named interface. 
The reference must have a name and be in the canonical\n// form, otherwise an error is returned.\n// If an error was encountered it is returned, along with a nil Reference.\n// NOTE: ParseNamed will not handle short digests.\nfunc ParseNamed(s string) (Named, error) {\n\tnamed, err := ParseNormalizedNamed(s)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif named.String() != s {\n\t\treturn nil, ErrNameNotCanonical\n\t}\n\treturn named, nil\n}\n\n// WithName returns a named object representing the given string. If the input\n// is invalid ErrReferenceInvalidFormat will be returned.\nfunc WithName(name string) (Named, error) {\n\tif len(name) > NameTotalLengthMax {\n\t\treturn nil, ErrNameTooLong\n\t}\n\n\tmatch := anchoredNameRegexp.FindStringSubmatch(name)\n\tif match == nil || len(match) != 3 {\n\t\treturn nil, ErrReferenceInvalidFormat\n\t}\n\treturn repository{\n\t\tdomain: match[1],\n\t\tpath:   match[2],\n\t}, nil\n}\n\n// WithTag combines the name from \"name\" and the tag from \"tag\" to form a\n// reference incorporating both the name and the tag.\nfunc WithTag(name Named, tag string) (NamedTagged, error) {\n\tif !anchoredTagRegexp.MatchString(tag) {\n\t\treturn nil, ErrTagInvalidFormat\n\t}\n\tvar repo repository\n\tif r, ok := name.(namedRepository); ok {\n\t\trepo.domain = r.Domain()\n\t\trepo.path = r.Path()\n\t} else {\n\t\trepo.path = name.Name()\n\t}\n\tif canonical, ok := name.(Canonical); ok {\n\t\treturn reference{\n\t\t\tnamedRepository: repo,\n\t\t\ttag:             tag,\n\t\t\tdigest:          canonical.Digest(),\n\t\t}, nil\n\t}\n\treturn taggedReference{\n\t\tnamedRepository: repo,\n\t\ttag:             tag,\n\t}, nil\n}\n\n// WithDigest combines the name from \"name\" and the digest from \"digest\" to form\n// a reference incorporating both the name and the digest.\nfunc WithDigest(name Named, digest digest.Digest) (Canonical, error) {\n\tif !anchoredDigestRegexp.MatchString(digest.String()) {\n\t\treturn nil, 
ErrDigestInvalidFormat\n\t}\n\tvar repo repository\n\tif r, ok := name.(namedRepository); ok {\n\t\trepo.domain = r.Domain()\n\t\trepo.path = r.Path()\n\t} else {\n\t\trepo.path = name.Name()\n\t}\n\tif tagged, ok := name.(Tagged); ok {\n\t\treturn reference{\n\t\t\tnamedRepository: repo,\n\t\t\ttag:             tagged.Tag(),\n\t\t\tdigest:          digest,\n\t\t}, nil\n\t}\n\treturn canonicalReference{\n\t\tnamedRepository: repo,\n\t\tdigest:          digest,\n\t}, nil\n}\n\n// TrimNamed removes any tag or digest from the named reference.\nfunc TrimNamed(ref Named) Named {\n\tdomain, path := SplitHostname(ref)\n\treturn repository{\n\t\tdomain: domain,\n\t\tpath:   path,\n\t}\n}\n\nfunc getBestReferenceType(ref reference) Reference {\n\tif ref.Name() == \"\" {\n\t\t// Allow digest only references\n\t\tif ref.digest != \"\" {\n\t\t\treturn digestReference(ref.digest)\n\t\t}\n\t\treturn nil\n\t}\n\tif ref.tag == \"\" {\n\t\tif ref.digest != \"\" {\n\t\t\treturn canonicalReference{\n\t\t\t\tnamedRepository: ref.namedRepository,\n\t\t\t\tdigest:          ref.digest,\n\t\t\t}\n\t\t}\n\t\treturn ref.namedRepository\n\t}\n\tif ref.digest == \"\" {\n\t\treturn taggedReference{\n\t\t\tnamedRepository: ref.namedRepository,\n\t\t\ttag:             ref.tag,\n\t\t}\n\t}\n\n\treturn ref\n}\n\ntype reference struct {\n\tnamedRepository\n\ttag    string\n\tdigest digest.Digest\n}\n\nfunc (r reference) String() string {\n\treturn r.Name() + \":\" + r.tag + \"@\" + r.digest.String()\n}\n\nfunc (r reference) Tag() string {\n\treturn r.tag\n}\n\nfunc (r reference) Digest() digest.Digest {\n\treturn r.digest\n}\n\ntype repository struct {\n\tdomain string\n\tpath   string\n}\n\nfunc (r repository) String() string {\n\treturn r.Name()\n}\n\nfunc (r repository) Name() string {\n\tif r.domain == \"\" {\n\t\treturn r.path\n\t}\n\treturn r.domain + \"/\" + r.path\n}\n\nfunc (r repository) Domain() string {\n\treturn r.domain\n}\n\nfunc (r repository) Path() string {\n\treturn 
r.path\n}\n\ntype digestReference digest.Digest\n\nfunc (d digestReference) String() string {\n\treturn digest.Digest(d).String()\n}\n\nfunc (d digestReference) Digest() digest.Digest {\n\treturn digest.Digest(d)\n}\n\ntype taggedReference struct {\n\tnamedRepository\n\ttag string\n}\n\nfunc (t taggedReference) String() string {\n\treturn t.Name() + \":\" + t.tag\n}\n\nfunc (t taggedReference) Tag() string {\n\treturn t.tag\n}\n\ntype canonicalReference struct {\n\tnamedRepository\n\tdigest digest.Digest\n}\n\nfunc (c canonicalReference) String() string {\n\treturn c.Name() + \"@\" + c.digest.String()\n}\n\nfunc (c canonicalReference) Digest() digest.Digest {\n\treturn c.digest\n}\n"
  },
  {
    "path": "vendor/github.com/docker/distribution/reference/regexp.go",
    "content": "package reference\n\nimport \"regexp\"\n\nvar (\n\t// alphaNumericRegexp defines the alpha numeric atom, typically a\n\t// component of names. This only allows lower case characters and digits.\n\talphaNumericRegexp = match(`[a-z0-9]+`)\n\n\t// separatorRegexp defines the separators allowed to be embedded in name\n\t// components. This allows one period, one or two underscores and multiple\n\t// dashes.\n\tseparatorRegexp = match(`(?:[._]|__|[-]*)`)\n\n\t// nameComponentRegexp restricts registry path component names to start\n\t// with at least one letter or number, with following parts able to be\n\t// separated by one period, one or two underscores and multiple dashes.\n\tnameComponentRegexp = expression(\n\t\talphaNumericRegexp,\n\t\toptional(repeated(separatorRegexp, alphaNumericRegexp)))\n\n\t// domainComponentRegexp restricts the registry domain component of a\n\t// repository name to start with a component as defined by DomainRegexp\n\t// and followed by an optional port.\n\tdomainComponentRegexp = match(`(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])`)\n\n\t// DomainRegexp defines the structure of potential domain components\n\t// that may be part of image names. This is purposely a subset of what is\n\t// allowed by DNS to ensure backwards compatibility with Docker image\n\t// names.\n\tDomainRegexp = expression(\n\t\tdomainComponentRegexp,\n\t\toptional(repeated(literal(`.`), domainComponentRegexp)),\n\t\toptional(literal(`:`), match(`[0-9]+`)))\n\n\t// TagRegexp matches valid tag names. 
From docker/docker:graph/tags.go.\n\tTagRegexp = match(`[\\w][\\w.-]{0,127}`)\n\n\t// anchoredTagRegexp matches valid tag names, anchored at the start and\n\t// end of the matched string.\n\tanchoredTagRegexp = anchored(TagRegexp)\n\n\t// DigestRegexp matches valid digests.\n\tDigestRegexp = match(`[A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][[:xdigit:]]{32,}`)\n\n\t// anchoredDigestRegexp matches valid digests, anchored at the start and\n\t// end of the matched string.\n\tanchoredDigestRegexp = anchored(DigestRegexp)\n\n\t// NameRegexp is the format for the name component of references. The\n\t// regexp has capturing groups for the domain and name part omitting\n\t// the separating forward slash from either.\n\tNameRegexp = expression(\n\t\toptional(DomainRegexp, literal(`/`)),\n\t\tnameComponentRegexp,\n\t\toptional(repeated(literal(`/`), nameComponentRegexp)))\n\n\t// anchoredNameRegexp is used to parse a name value, capturing the\n\t// domain and trailing components.\n\tanchoredNameRegexp = anchored(\n\t\toptional(capture(DomainRegexp), literal(`/`)),\n\t\tcapture(nameComponentRegexp,\n\t\t\toptional(repeated(literal(`/`), nameComponentRegexp))))\n\n\t// ReferenceRegexp is the full supported format of a reference. The regexp\n\t// is anchored and has capturing groups for name, tag, and digest\n\t// components.\n\tReferenceRegexp = anchored(capture(NameRegexp),\n\t\toptional(literal(\":\"), capture(TagRegexp)),\n\t\toptional(literal(\"@\"), capture(DigestRegexp)))\n\n\t// IdentifierRegexp is the format for string identifier used as a\n\t// content addressable identifier using sha256. These identifiers\n\t// are like digests without the algorithm, since sha256 is used.\n\tIdentifierRegexp = match(`([a-f0-9]{64})`)\n\n\t// ShortIdentifierRegexp is the format used to represent a prefix\n\t// of an identifier. 
A prefix may be used to match a sha256 identifier\n\t// within a list of trusted identifiers.\n\tShortIdentifierRegexp = match(`([a-f0-9]{6,64})`)\n\n\t// anchoredIdentifierRegexp is used to check or match an\n\t// identifier value, anchored at start and end of string.\n\tanchoredIdentifierRegexp = anchored(IdentifierRegexp)\n\n\t// anchoredShortIdentifierRegexp is used to check if a value\n\t// is a possible identifier prefix, anchored at start and end\n\t// of string.\n\tanchoredShortIdentifierRegexp = anchored(ShortIdentifierRegexp)\n)\n\n// match compiles the string to a regular expression.\nvar match = regexp.MustCompile\n\n// literal compiles s into a literal regular expression, escaping any regexp\n// reserved characters.\nfunc literal(s string) *regexp.Regexp {\n\tre := match(regexp.QuoteMeta(s))\n\n\tif _, complete := re.LiteralPrefix(); !complete {\n\t\tpanic(\"must be a literal\")\n\t}\n\n\treturn re\n}\n\n// expression defines a full expression, where each regular expression must\n// follow the previous.\nfunc expression(res ...*regexp.Regexp) *regexp.Regexp {\n\tvar s string\n\tfor _, re := range res {\n\t\ts += re.String()\n\t}\n\n\treturn match(s)\n}\n\n// optional wraps the expression in a non-capturing group and makes the\n// production optional.\nfunc optional(res ...*regexp.Regexp) *regexp.Regexp {\n\treturn match(group(expression(res...)).String() + `?`)\n}\n\n// repeated wraps the regexp in a non-capturing group to get one or more\n// matches.\nfunc repeated(res ...*regexp.Regexp) *regexp.Regexp {\n\treturn match(group(expression(res...)).String() + `+`)\n}\n\n// group wraps the regexp in a non-capturing group.\nfunc group(res ...*regexp.Regexp) *regexp.Regexp {\n\treturn match(`(?:` + expression(res...).String() + `)`)\n}\n\n// capture wraps the expression in a capturing group.\nfunc capture(res ...*regexp.Regexp) *regexp.Regexp {\n\treturn match(`(` + expression(res...).String() + `)`)\n}\n\n// anchored anchors the regular expression by 
adding start and end delimiters.\nfunc anchored(res ...*regexp.Regexp) *regexp.Regexp {\n\treturn match(`^` + expression(res...).String() + `$`)\n}\n"
  },
  {
    "path": "vendor/github.com/docker/distribution/registry.go",
    "content": "package distribution\n\nimport (\n\t\"context\"\n\n\t\"github.com/docker/distribution/reference\"\n)\n\n// Scope defines the set of items that match a namespace.\ntype Scope interface {\n\t// Contains returns true if the name belongs to the namespace.\n\tContains(name string) bool\n}\n\ntype fullScope struct{}\n\nfunc (f fullScope) Contains(string) bool {\n\treturn true\n}\n\n// GlobalScope represents the full namespace scope which contains\n// all other scopes.\nvar GlobalScope = Scope(fullScope{})\n\n// Namespace represents a collection of repositories, addressable by name.\n// Generally, a namespace is backed by a set of one or more services,\n// providing facilities such as registry access, trust, and indexing.\ntype Namespace interface {\n\t// Scope describes the names that can be used with this Namespace. The\n\t// global namespace will have a scope that matches all names. The scope\n\t// effectively provides an identity for the namespace.\n\tScope() Scope\n\n\t// Repository should return a reference to the named repository. The\n\t// registry may or may not have the repository but should always return a\n\t// reference.\n\tRepository(ctx context.Context, name reference.Named) (Repository, error)\n\n\t// Repositories fills 'repos' with a lexicographically sorted catalog of repositories\n\t// up to the size of 'repos' and returns the value 'n' for the number of entries\n\t// which were filled.  
'last' contains an offset in the catalog, and 'err' will be\n\t// set to io.EOF if there are no more entries to obtain.\n\tRepositories(ctx context.Context, repos []string, last string) (n int, err error)\n\n\t// Blobs returns a blob enumerator to access all blobs\n\tBlobs() BlobEnumerator\n\n\t// BlobStatter returns a BlobStatter to control\n\tBlobStatter() BlobStatter\n}\n\n// RepositoryEnumerator describes an operation to enumerate repositories\ntype RepositoryEnumerator interface {\n\tEnumerate(ctx context.Context, ingester func(string) error) error\n}\n\n// ManifestServiceOption is a function argument for Manifest Service methods\ntype ManifestServiceOption interface {\n\tApply(ManifestService) error\n}\n\n// WithTag allows a tag to be passed into Put\nfunc WithTag(tag string) ManifestServiceOption {\n\treturn WithTagOption{tag}\n}\n\n// WithTagOption holds a tag\ntype WithTagOption struct{ Tag string }\n\n// Apply conforms to the ManifestServiceOption interface\nfunc (o WithTagOption) Apply(m ManifestService) error {\n\t// no implementation\n\treturn nil\n}\n\n// Repository is a named collection of manifests and layers.\ntype Repository interface {\n\t// Named returns the name of the repository.\n\tNamed() reference.Named\n\n\t// Manifests returns a reference to this repository's manifest service,\n\t// with the supplied options applied.\n\tManifests(ctx context.Context, options ...ManifestServiceOption) (ManifestService, error)\n\n\t// Blobs returns a reference to this repository's blob service.\n\tBlobs(ctx context.Context) BlobStore\n\n\t// TODO(stevvooe): The above BlobStore return can probably be relaxed to\n\t// be a BlobService for use with clients. This will allow such\n\t// implementations to avoid implementing ServeBlob.\n\n\t// Tags returns a reference to this repository's tag service\n\tTags(ctx context.Context) TagService\n}\n\n// TODO(stevvooe): Must add close methods to all these. 
May want to change the\n// way instances are created to better reflect internal dependency\n// relationships.\n"
  },
  {
    "path": "vendor/github.com/docker/distribution/tags.go",
    "content": "package distribution\n\nimport (\n\t\"context\"\n)\n\n// TagService provides access to information about tagged objects.\ntype TagService interface {\n\t// Get retrieves the descriptor identified by the tag. Some\n\t// implementations may differentiate between \"trusted\" tags and\n\t// \"untrusted\" tags. If a tag is \"untrusted\", the mapping will be returned\n\t// as an ErrTagUntrusted error, with the target descriptor.\n\tGet(ctx context.Context, tag string) (Descriptor, error)\n\n\t// Tag associates the tag with the provided descriptor, updating the\n\t// current association, if needed.\n\tTag(ctx context.Context, tag string, desc Descriptor) error\n\n\t// Untag removes the given tag association\n\tUntag(ctx context.Context, tag string) error\n\n\t// All returns the set of tags managed by this tag service\n\tAll(ctx context.Context) ([]string, error)\n\n\t// Lookup returns the set of tags referencing the given digest.\n\tLookup(ctx context.Context, digest Descriptor) ([]string, error)\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/LICENSE",
    "content": "Copyright (c) 2012 The Go Authors. All rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are\nmet:\n\n   * Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n   * Redistributions in binary form must reproduce the above\ncopyright notice, this list of conditions and the following disclaimer\nin the documentation and/or other materials provided with the\ndistribution.\n   * Neither the name of Google Inc. nor the names of its\ncontributors may be used to endorse or promote products derived from\nthis software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n\"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\nA PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\nOWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\nLIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\nTHEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/copy.go",
    "content": "// Copyright 2012 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage flate\n\n// forwardCopy is like the built-in copy function except that it always goes\n// forward from the start, even if the dst and src overlap.\n// It is equivalent to:\n//   for i := 0; i < n; i++ {\n//     mem[dst+i] = mem[src+i]\n//   }\nfunc forwardCopy(mem []byte, dst, src, n int) {\n\tif dst <= src {\n\t\tcopy(mem[dst:dst+n], mem[src:src+n])\n\t\treturn\n\t}\n\tfor {\n\t\tif dst >= src+n {\n\t\t\tcopy(mem[dst:dst+n], mem[src:src+n])\n\t\t\treturn\n\t\t}\n\t\t// There is some forward overlap.  The destination\n\t\t// will be filled with a repeated pattern of mem[src:src+k].\n\t\t// We copy one instance of the pattern here, then repeat.\n\t\t// Each time around this loop k will double.\n\t\tk := dst - src\n\t\tcopy(mem[dst:dst+k], mem[src:src+k])\n\t\tn -= k\n\t\tdst += k\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/crc32_amd64.go",
    "content": "//+build !noasm\n//+build !appengine\n\n// Copyright 2015, Klaus Post, see LICENSE for details.\n\npackage flate\n\nimport (\n\t\"github.com/klauspost/cpuid\"\n)\n\n// crc32sse returns a hash for the first 4 bytes of the slice\n// len(a) must be >= 4.\n//go:noescape\nfunc crc32sse(a []byte) uint32\n\n// crc32sseAll calculates hashes for each 4-byte set in a.\n// dst must be at least len(a) - 4 in size.\n// The size is not checked by the assembly.\n//go:noescape\nfunc crc32sseAll(a []byte, dst []uint32)\n\n// matchLenSSE4 returns the number of matching bytes in a and b\n// up to length 'max'. Both slices must be at least 'max'\n// bytes in size.\n//\n// TODO: drop the \"SSE4\" name, since it doesn't use any SSE instructions.\n//\n//go:noescape\nfunc matchLenSSE4(a, b []byte, max int) int\n\n// histogram accumulates a histogram of b in h.\n// h must be at least 256 entries in length,\n// and must be cleared before calling this function.\n//go:noescape\nfunc histogram(b []byte, h []int32)\n\n// Detect SSE 4.2 feature.\nfunc init() {\n\tuseSSE42 = cpuid.CPU.SSE42()\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/crc32_amd64.s",
    "content": "//+build !noasm\n//+build !appengine\n\n// Copyright 2015, Klaus Post, see LICENSE for details.\n\n// func crc32sse(a []byte) uint32\nTEXT ·crc32sse(SB), 4, $0\n\tMOVQ a+0(FP), R10\n\tXORQ BX, BX\n\n\t// CRC32   dword (R10), EBX\n\tBYTE $0xF2; BYTE $0x41; BYTE $0x0f\n\tBYTE $0x38; BYTE $0xf1; BYTE $0x1a\n\n\tMOVL BX, ret+24(FP)\n\tRET\n\n// func crc32sseAll(a []byte, dst []uint32)\nTEXT ·crc32sseAll(SB), 4, $0\n\tMOVQ  a+0(FP), R8      // R8: src\n\tMOVQ  a_len+8(FP), R10 // input length\n\tMOVQ  dst+24(FP), R9   // R9: dst\n\tSUBQ  $4, R10\n\tJS    end\n\tJZ    one_crc\n\tMOVQ  R10, R13\n\tSHRQ  $2, R10          // len/4\n\tANDQ  $3, R13          // len&3\n\tXORQ  BX, BX\n\tADDQ  $1, R13\n\tTESTQ R10, R10\n\tJZ    rem_loop\n\ncrc_loop:\n\tMOVQ (R8), R11\n\tXORQ BX, BX\n\tXORQ DX, DX\n\tXORQ DI, DI\n\tMOVQ R11, R12\n\tSHRQ $8, R11\n\tMOVQ R12, AX\n\tMOVQ R11, CX\n\tSHRQ $16, R12\n\tSHRQ $16, R11\n\tMOVQ R12, SI\n\n\t// CRC32   EAX, EBX\n\tBYTE $0xF2; BYTE $0x0f\n\tBYTE $0x38; BYTE $0xf1; BYTE $0xd8\n\n\t// CRC32   ECX, EDX\n\tBYTE $0xF2; BYTE $0x0f\n\tBYTE $0x38; BYTE $0xf1; BYTE $0xd1\n\n\t// CRC32   ESI, EDI\n\tBYTE $0xF2; BYTE $0x0f\n\tBYTE $0x38; BYTE $0xf1; BYTE $0xfe\n\tMOVL BX, (R9)\n\tMOVL DX, 4(R9)\n\tMOVL DI, 8(R9)\n\n\tXORQ BX, BX\n\tMOVL R11, AX\n\n\t// CRC32   EAX, EBX\n\tBYTE $0xF2; BYTE $0x0f\n\tBYTE $0x38; BYTE $0xf1; BYTE $0xd8\n\tMOVL BX, 12(R9)\n\n\tADDQ $16, R9\n\tADDQ $4, R8\n\tXORQ BX, BX\n\tSUBQ $1, R10\n\tJNZ  crc_loop\n\nrem_loop:\n\tMOVL (R8), AX\n\n\t// CRC32   EAX, EBX\n\tBYTE $0xF2; BYTE $0x0f\n\tBYTE $0x38; BYTE $0xf1; BYTE $0xd8\n\n\tMOVL BX, (R9)\n\tADDQ $4, R9\n\tADDQ $1, R8\n\tXORQ BX, BX\n\tSUBQ $1, R13\n\tJNZ  rem_loop\n\nend:\n\tRET\n\none_crc:\n\tMOVQ $1, R13\n\tXORQ BX, BX\n\tJMP  rem_loop\n\n// func matchLenSSE4(a, b []byte, max int) int\nTEXT ·matchLenSSE4(SB), 4, $0\n\tMOVQ a_base+0(FP), SI\n\tMOVQ b_base+24(FP), DI\n\tMOVQ DI, DX\n\tMOVQ max+48(FP), CX\n\ncmp8:\n\t// As long as we are 8 or more bytes before 
the end of max, we can load and\n\t// compare 8 bytes at a time. If those 8 bytes are equal, repeat.\n\tCMPQ CX, $8\n\tJLT  cmp1\n\tMOVQ (SI), AX\n\tMOVQ (DI), BX\n\tCMPQ AX, BX\n\tJNE  bsf\n\tADDQ $8, SI\n\tADDQ $8, DI\n\tSUBQ $8, CX\n\tJMP  cmp8\n\nbsf:\n\t// If those 8 bytes were not equal, XOR the two 8 byte values, and return\n\t// the index of the first byte that differs. The BSF instruction finds the\n\t// least significant 1 bit, the amd64 architecture is little-endian, and\n\t// the shift by 3 converts a bit index to a byte index.\n\tXORQ AX, BX\n\tBSFQ BX, BX\n\tSHRQ $3, BX\n\tADDQ BX, DI\n\n\t// Subtract off &b[0] to convert from &b[ret] to ret, and return.\n\tSUBQ DX, DI\n\tMOVQ DI, ret+56(FP)\n\tRET\n\ncmp1:\n\t// In the slices' tail, compare 1 byte at a time.\n\tCMPQ CX, $0\n\tJEQ  matchLenEnd\n\tMOVB (SI), AX\n\tMOVB (DI), BX\n\tCMPB AX, BX\n\tJNE  matchLenEnd\n\tADDQ $1, SI\n\tADDQ $1, DI\n\tSUBQ $1, CX\n\tJMP  cmp1\n\nmatchLenEnd:\n\t// Subtract off &b[0] to convert from &b[ret] to ret, and return.\n\tSUBQ DX, DI\n\tMOVQ DI, ret+56(FP)\n\tRET\n\n// func histogram(b []byte, h []int32)\nTEXT ·histogram(SB), 4, $0\n\tMOVQ b+0(FP), SI     // SI: &b\n\tMOVQ b_len+8(FP), R9 // R9: len(b)\n\tMOVQ h+24(FP), DI    // DI: Histogram\n\tMOVQ R9, R8\n\tSHRQ $3, R8\n\tJZ   hist1\n\tXORQ R11, R11\n\nloop_hist8:\n\tMOVQ (SI), R10\n\n\tMOVB R10, R11\n\tINCL (DI)(R11*4)\n\tSHRQ $8, R10\n\n\tMOVB R10, R11\n\tINCL (DI)(R11*4)\n\tSHRQ $8, R10\n\n\tMOVB R10, R11\n\tINCL (DI)(R11*4)\n\tSHRQ $8, R10\n\n\tMOVB R10, R11\n\tINCL (DI)(R11*4)\n\tSHRQ $8, R10\n\n\tMOVB R10, R11\n\tINCL (DI)(R11*4)\n\tSHRQ $8, R10\n\n\tMOVB R10, R11\n\tINCL (DI)(R11*4)\n\tSHRQ $8, R10\n\n\tMOVB R10, R11\n\tINCL (DI)(R11*4)\n\tSHRQ $8, R10\n\n\tINCL (DI)(R10*4)\n\n\tADDQ $8, SI\n\tDECQ R8\n\tJNZ  loop_hist8\n\nhist1:\n\tANDQ $7, R9\n\tJZ   end_hist\n\tXORQ R10, R10\n\nloop_hist1:\n\tMOVB (SI), R10\n\tINCL (DI)(R10*4)\n\tINCQ SI\n\tDECQ R9\n\tJNZ  loop_hist1\n\nend_hist:\n\tRET\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/crc32_noasm.go",
    "content": "//+build !amd64 noasm appengine\n\n// Copyright 2015, Klaus Post, see LICENSE for details.\n\npackage flate\n\nfunc init() {\n\tuseSSE42 = false\n}\n\n// crc32sse should never be called.\nfunc crc32sse(a []byte) uint32 {\n\tpanic(\"no assembler\")\n}\n\n// crc32sseAll should never be called.\nfunc crc32sseAll(a []byte, dst []uint32) {\n\tpanic(\"no assembler\")\n}\n\n// matchLenSSE4 should never be called.\nfunc matchLenSSE4(a, b []byte, max int) int {\n\tpanic(\"no assembler\")\n\treturn 0\n}\n\n// histogram accumulates a histogram of b in h.\n//\n// len(h) must be >= 256, and h's elements must be all zeroes.\nfunc histogram(b []byte, h []int32) {\n\th = h[:256]\n\tfor _, t := range b {\n\t\th[t]++\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/deflate.go",
    "content": "// Copyright 2009 The Go Authors. All rights reserved.\n// Copyright (c) 2015 Klaus Post\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage flate\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"math\"\n)\n\nconst (\n\tNoCompression      = 0\n\tBestSpeed          = 1\n\tBestCompression    = 9\n\tDefaultCompression = -1\n\n\t// HuffmanOnly disables Lempel-Ziv match searching and only performs Huffman\n\t// entropy encoding. This mode is useful in compressing data that has\n\t// already been compressed with an LZ style algorithm (e.g. Snappy or LZ4)\n\t// that lacks an entropy encoder. Compression gains are achieved when\n\t// certain bytes in the input stream occur more frequently than others.\n\t//\n\t// Note that HuffmanOnly produces a compressed output that is\n\t// RFC 1951 compliant. That is, any valid DEFLATE decompressor will\n\t// continue to be able to decompress this output.\n\tHuffmanOnly         = -2\n\tConstantCompression = HuffmanOnly // compatibility alias.\n\n\tlogWindowSize    = 15\n\twindowSize       = 1 << logWindowSize\n\twindowMask       = windowSize - 1\n\tlogMaxOffsetSize = 15  // Standard DEFLATE\n\tminMatchLength   = 4   // The smallest match that the compressor looks for\n\tmaxMatchLength   = 258 // The longest match for the compressor\n\tminOffsetSize    = 1   // The shortest offset that makes any sense\n\n\t// The maximum number of tokens we put into a single flate block, just to\n\t// stop things from getting too large.\n\tmaxFlateBlockTokens = 1 << 14\n\tmaxStoreBlockSize   = 65535\n\thashBits            = 17 // After 17 performance degrades\n\thashSize            = 1 << hashBits\n\thashMask            = (1 << hashBits) - 1\n\thashShift           = (hashBits + minMatchLength - 1) / minMatchLength\n\tmaxHashOffset       = 1 << 24\n\n\tskipNever = math.MaxInt32\n)\n\nvar useSSE42 bool\n\ntype compressionLevel struct {\n\tgood, lazy, nice, chain, fastSkipHashing, level 
int\n}\n\n// Compression levels have been rebalanced from zlib deflate defaults\n// to give a bigger spread in speed and compression.\n// See https://blog.klauspost.com/rebalancing-deflate-compression-levels/\nvar levels = []compressionLevel{\n\t{}, // 0\n\t// Level 1-4 uses specialized algorithm - values not used\n\t{0, 0, 0, 0, 0, 1},\n\t{0, 0, 0, 0, 0, 2},\n\t{0, 0, 0, 0, 0, 3},\n\t{0, 0, 0, 0, 0, 4},\n\t// For levels 5-6 we don't bother trying with lazy matches.\n\t// Lazy matching is at least 30% slower, with 1.5% increase.\n\t{6, 0, 12, 8, 12, 5},\n\t{8, 0, 24, 16, 16, 6},\n\t// Levels 7-9 use increasingly more lazy matching\n\t// and increasingly stringent conditions for \"good enough\".\n\t{8, 8, 24, 16, skipNever, 7},\n\t{10, 16, 24, 64, skipNever, 8},\n\t{32, 258, 258, 4096, skipNever, 9},\n}\n\ntype compressor struct {\n\tcompressionLevel\n\n\tw          *huffmanBitWriter\n\tbulkHasher func([]byte, []uint32)\n\n\t// compression algorithm\n\tfill func(*compressor, []byte) int // copy data to window\n\tstep func(*compressor)             // process window\n\tsync bool                          // requesting flush\n\n\t// Input hash chains\n\t// hashHead[hashValue] contains the largest inputIndex with the specified hash value\n\t// If hashHead[hashValue] is within the current window, then\n\t// hashPrev[hashHead[hashValue] & windowMask] contains the previous index\n\t// with the same hash value.\n\tchainHead  int\n\thashHead   [hashSize]uint32\n\thashPrev   [windowSize]uint32\n\thashOffset int\n\n\t// input window: unprocessed data is window[index:windowEnd]\n\tindex         int\n\twindow        []byte\n\twindowEnd     int\n\tblockStart    int  // window index where current tokens start\n\tbyteAvailable bool // if true, still need to process window[index-1].\n\n\t// queued output tokens\n\ttokens tokens\n\n\t// deflate state\n\tlength         int\n\toffset         int\n\thash           uint32\n\tmaxInsertIndex int\n\terr            error\n\tii             
uint16 // position of last match, intended to overflow to reset.\n\n\tsnap      snappyEnc\n\thashMatch [maxMatchLength + minMatchLength]uint32\n}\n\nfunc (d *compressor) fillDeflate(b []byte) int {\n\tif d.index >= 2*windowSize-(minMatchLength+maxMatchLength) {\n\t\t// shift the window by windowSize\n\t\tcopy(d.window[:], d.window[windowSize:2*windowSize])\n\t\td.index -= windowSize\n\t\td.windowEnd -= windowSize\n\t\tif d.blockStart >= windowSize {\n\t\t\td.blockStart -= windowSize\n\t\t} else {\n\t\t\td.blockStart = math.MaxInt32\n\t\t}\n\t\td.hashOffset += windowSize\n\t\tif d.hashOffset > maxHashOffset {\n\t\t\tdelta := d.hashOffset - 1\n\t\t\td.hashOffset -= delta\n\t\t\td.chainHead -= delta\n\t\t\tfor i, v := range d.hashPrev {\n\t\t\t\tif int(v) > delta {\n\t\t\t\t\td.hashPrev[i] = uint32(int(v) - delta)\n\t\t\t\t} else {\n\t\t\t\t\td.hashPrev[i] = 0\n\t\t\t\t}\n\t\t\t}\n\t\t\tfor i, v := range d.hashHead {\n\t\t\t\tif int(v) > delta {\n\t\t\t\t\td.hashHead[i] = uint32(int(v) - delta)\n\t\t\t\t} else {\n\t\t\t\t\td.hashHead[i] = 0\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tn := copy(d.window[d.windowEnd:], b)\n\td.windowEnd += n\n\treturn n\n}\n\nfunc (d *compressor) writeBlock(tok tokens, index int, eof bool) error {\n\tif index > 0 || eof {\n\t\tvar window []byte\n\t\tif d.blockStart <= index {\n\t\t\twindow = d.window[d.blockStart:index]\n\t\t}\n\t\td.blockStart = index\n\t\td.w.writeBlock(tok.tokens[:tok.n], eof, window)\n\t\treturn d.w.err\n\t}\n\treturn nil\n}\n\n// writeBlockSkip writes the current block and uses the number of tokens\n// to determine if the block should be stored on no matches, or\n// only huffman encoded.\nfunc (d *compressor) writeBlockSkip(tok tokens, index int, eof bool) error {\n\tif index > 0 || eof {\n\t\tif d.blockStart <= index {\n\t\t\twindow := d.window[d.blockStart:index]\n\t\t\t// If we removed less than a 64th of all literals\n\t\t\t// we huffman compress the block.\n\t\t\tif int(tok.n) > len(window)-int(tok.n>>6) 
{\n\t\t\t\td.w.writeBlockHuff(eof, window)\n\t\t\t} else {\n\t\t\t\t// Write a dynamic huffman block.\n\t\t\t\td.w.writeBlockDynamic(tok.tokens[:tok.n], eof, window)\n\t\t\t}\n\t\t} else {\n\t\t\td.w.writeBlock(tok.tokens[:tok.n], eof, nil)\n\t\t}\n\t\td.blockStart = index\n\t\treturn d.w.err\n\t}\n\treturn nil\n}\n\n// fillWindow will fill the current window with the supplied\n// dictionary and calculate all hashes.\n// This is much faster than doing a full encode.\n// Should only be used after a start/reset.\nfunc (d *compressor) fillWindow(b []byte) {\n\t// Do not fill the window if we are in store-only,\n\t// constant, or Snappy compression mode.\n\tswitch d.compressionLevel.level {\n\tcase 0, 1, 2:\n\t\treturn\n\t}\n\t// If we are given too much, cut it.\n\tif len(b) > windowSize {\n\t\tb = b[len(b)-windowSize:]\n\t}\n\t// Add all to window.\n\tn := copy(d.window[d.windowEnd:], b)\n\n\t// Calculate 256 hashes at a time (more L1 cache hits)\n\tloops := (n + 256 - minMatchLength) / 256\n\tfor j := 0; j < loops; j++ {\n\t\tstartindex := j * 256\n\t\tend := startindex + 256 + minMatchLength - 1\n\t\tif end > n {\n\t\t\tend = n\n\t\t}\n\t\ttocheck := d.window[startindex:end]\n\t\tdstSize := len(tocheck) - minMatchLength + 1\n\n\t\tif dstSize <= 0 {\n\t\t\tcontinue\n\t\t}\n\n\t\tdst := d.hashMatch[:dstSize]\n\t\td.bulkHasher(tocheck, dst)\n\t\tvar newH uint32\n\t\tfor i, val := range dst {\n\t\t\tdi := i + startindex\n\t\t\tnewH = val & hashMask\n\t\t\t// Get previous value with the same hash.\n\t\t\t// Our chain should point to the previous value.\n\t\t\td.hashPrev[di&windowMask] = d.hashHead[newH]\n\t\t\t// Set the head of the hash chain to us.\n\t\t\td.hashHead[newH] = uint32(di + d.hashOffset)\n\t\t}\n\t\td.hash = newH\n\t}\n\t// Update window information.\n\td.windowEnd += n\n\td.index = n\n}\n\n// Try to find a match starting at index whose length is greater than prevSize.\n// We only look at chainCount possibilities before giving up.\n// pos = d.index, 
prevHead = d.chainHead-d.hashOffset, prevLength=minMatchLength-1, lookahead\nfunc (d *compressor) findMatch(pos int, prevHead int, prevLength int, lookahead int) (length, offset int, ok bool) {\n\tminMatchLook := maxMatchLength\n\tif lookahead < minMatchLook {\n\t\tminMatchLook = lookahead\n\t}\n\n\twin := d.window[0 : pos+minMatchLook]\n\n\t// We quit when we get a match that's at least nice long\n\tnice := len(win) - pos\n\tif d.nice < nice {\n\t\tnice = d.nice\n\t}\n\n\t// If we've got a match that's good enough, only look in 1/4 the chain.\n\ttries := d.chain\n\tlength = prevLength\n\tif length >= d.good {\n\t\ttries >>= 2\n\t}\n\n\twEnd := win[pos+length]\n\twPos := win[pos:]\n\tminIndex := pos - windowSize\n\n\tfor i := prevHead; tries > 0; tries-- {\n\t\tif wEnd == win[i+length] {\n\t\t\tn := matchLen(win[i:], wPos, minMatchLook)\n\n\t\t\tif n > length && (n > minMatchLength || pos-i <= 4096) {\n\t\t\t\tlength = n\n\t\t\t\toffset = pos - i\n\t\t\t\tok = true\n\t\t\t\tif n >= nice {\n\t\t\t\t\t// The match is good enough that we don't try to find a better one.\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\twEnd = win[pos+n]\n\t\t\t}\n\t\t}\n\t\tif i == minIndex {\n\t\t\t// hashPrev[i & windowMask] has already been overwritten, so stop now.\n\t\t\tbreak\n\t\t}\n\t\ti = int(d.hashPrev[i&windowMask]) - d.hashOffset\n\t\tif i < minIndex || i < 0 {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn\n}\n\n// Try to find a match starting at index whose length is greater than prevSize.\n// We only look at chainCount possibilities before giving up.\n// pos = d.index, prevHead = d.chainHead-d.hashOffset, prevLength=minMatchLength-1, lookahead\nfunc (d *compressor) findMatchSSE(pos int, prevHead int, prevLength int, lookahead int) (length, offset int, ok bool) {\n\tminMatchLook := maxMatchLength\n\tif lookahead < minMatchLook {\n\t\tminMatchLook = lookahead\n\t}\n\n\twin := d.window[0 : pos+minMatchLook]\n\n\t// We quit when we get a match that's at least nice long\n\tnice := len(win) - 
pos\n\tif d.nice < nice {\n\t\tnice = d.nice\n\t}\n\n\t// If we've got a match that's good enough, only look in 1/4 the chain.\n\ttries := d.chain\n\tlength = prevLength\n\tif length >= d.good {\n\t\ttries >>= 2\n\t}\n\n\twEnd := win[pos+length]\n\twPos := win[pos:]\n\tminIndex := pos - windowSize\n\n\tfor i := prevHead; tries > 0; tries-- {\n\t\tif wEnd == win[i+length] {\n\t\t\tn := matchLenSSE4(win[i:], wPos, minMatchLook)\n\n\t\t\tif n > length && (n > minMatchLength || pos-i <= 4096) {\n\t\t\t\tlength = n\n\t\t\t\toffset = pos - i\n\t\t\t\tok = true\n\t\t\t\tif n >= nice {\n\t\t\t\t\t// The match is good enough that we don't try to find a better one.\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t\twEnd = win[pos+n]\n\t\t\t}\n\t\t}\n\t\tif i == minIndex {\n\t\t\t// hashPrev[i & windowMask] has already been overwritten, so stop now.\n\t\t\tbreak\n\t\t}\n\t\ti = int(d.hashPrev[i&windowMask]) - d.hashOffset\n\t\tif i < minIndex || i < 0 {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn\n}\n\nfunc (d *compressor) writeStoredBlock(buf []byte) error {\n\tif d.w.writeStoredHeader(len(buf), false); d.w.err != nil {\n\t\treturn d.w.err\n\t}\n\td.w.writeBytes(buf)\n\treturn d.w.err\n}\n\nconst hashmul = 0x1e35a7bd\n\n// hash4 returns a hash representation of the first 4 bytes\n// of the supplied slice.\n// The caller must ensure that len(b) >= 4.\nfunc hash4(b []byte) uint32 {\n\treturn ((uint32(b[3]) | uint32(b[2])<<8 | uint32(b[1])<<16 | uint32(b[0])<<24) * hashmul) >> (32 - hashBits)\n}\n\n// bulkHash4 will compute hashes using the same\n// algorithm as hash4\nfunc bulkHash4(b []byte, dst []uint32) {\n\tif len(b) < minMatchLength {\n\t\treturn\n\t}\n\thb := uint32(b[3]) | uint32(b[2])<<8 | uint32(b[1])<<16 | uint32(b[0])<<24\n\tdst[0] = (hb * hashmul) >> (32 - hashBits)\n\tend := len(b) - minMatchLength + 1\n\tfor i := 1; i < end; i++ {\n\t\thb = (hb << 8) | uint32(b[i+3])\n\t\tdst[i] = (hb * hashmul) >> (32 - hashBits)\n\t}\n}\n\n// matchLen returns the number of matching bytes in a and 
b\n// up to length 'max'. Both slices must be at least 'max'\n// bytes in size.\nfunc matchLen(a, b []byte, max int) int {\n\ta = a[:max]\n\tb = b[:len(a)]\n\tfor i, av := range a {\n\t\tif b[i] != av {\n\t\t\treturn i\n\t\t}\n\t}\n\treturn max\n}\n\nfunc (d *compressor) initDeflate() {\n\td.window = make([]byte, 2*windowSize)\n\td.hashOffset = 1\n\td.length = minMatchLength - 1\n\td.offset = 0\n\td.byteAvailable = false\n\td.index = 0\n\td.hash = 0\n\td.chainHead = -1\n\td.bulkHasher = bulkHash4\n\tif useSSE42 {\n\t\td.bulkHasher = crc32sseAll\n\t}\n}\n\n// Assumes that d.fastSkipHashing != skipNever,\n// otherwise use deflateLazy\nfunc (d *compressor) deflate() {\n\n\t// Sanity enables additional runtime tests.\n\t// It's intended to be used during development\n\t// to supplement the currently ad-hoc unit tests.\n\tconst sanity = false\n\n\tif d.windowEnd-d.index < minMatchLength+maxMatchLength && !d.sync {\n\t\treturn\n\t}\n\n\td.maxInsertIndex = d.windowEnd - (minMatchLength - 1)\n\tif d.index < d.maxInsertIndex {\n\t\td.hash = hash4(d.window[d.index : d.index+minMatchLength])\n\t}\n\n\tfor {\n\t\tif sanity && d.index > d.windowEnd {\n\t\t\tpanic(\"index > windowEnd\")\n\t\t}\n\t\tlookahead := d.windowEnd - d.index\n\t\tif lookahead < minMatchLength+maxMatchLength {\n\t\t\tif !d.sync {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif sanity && d.index > d.windowEnd {\n\t\t\t\tpanic(\"index > windowEnd\")\n\t\t\t}\n\t\t\tif lookahead == 0 {\n\t\t\t\tif d.tokens.n > 0 {\n\t\t\t\t\tif d.err = d.writeBlockSkip(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\td.tokens.n = 0\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t\tif d.index < d.maxInsertIndex {\n\t\t\t// Update the hash\n\t\t\td.hash = hash4(d.window[d.index : d.index+minMatchLength])\n\t\t\tch := d.hashHead[d.hash&hashMask]\n\t\t\td.chainHead = int(ch)\n\t\t\td.hashPrev[d.index&windowMask] = ch\n\t\t\td.hashHead[d.hash&hashMask] = uint32(d.index + d.hashOffset)\n\t\t}\n\t\td.length = 
minMatchLength - 1\n\t\td.offset = 0\n\t\tminIndex := d.index - windowSize\n\t\tif minIndex < 0 {\n\t\t\tminIndex = 0\n\t\t}\n\n\t\tif d.chainHead-d.hashOffset >= minIndex && lookahead > minMatchLength-1 {\n\t\t\tif newLength, newOffset, ok := d.findMatch(d.index, d.chainHead-d.hashOffset, minMatchLength-1, lookahead); ok {\n\t\t\t\td.length = newLength\n\t\t\t\td.offset = newOffset\n\t\t\t}\n\t\t}\n\t\tif d.length >= minMatchLength {\n\t\t\td.ii = 0\n\t\t\t// There was a match at the previous step, and the current match is\n\t\t\t// not better. Output the previous match.\n\t\t\t// \"d.length-3\" should NOT be \"d.length-minMatchLength\", since the format always assumes 3\n\t\t\td.tokens.tokens[d.tokens.n] = matchToken(uint32(d.length-3), uint32(d.offset-minOffsetSize))\n\t\t\td.tokens.n++\n\t\t\t// Insert in the hash table all strings up to the end of the match.\n\t\t\t// index and index-1 are already inserted. If there is not enough\n\t\t\t// lookahead, the last two strings are not inserted into the hash\n\t\t\t// table.\n\t\t\tif d.length <= d.fastSkipHashing {\n\t\t\t\tvar newIndex int\n\t\t\t\tnewIndex = d.index + d.length\n\t\t\t\t// Calculate missing hashes\n\t\t\t\tend := newIndex\n\t\t\t\tif end > d.maxInsertIndex {\n\t\t\t\t\tend = d.maxInsertIndex\n\t\t\t\t}\n\t\t\t\tend += minMatchLength - 1\n\t\t\t\tstartindex := d.index + 1\n\t\t\t\tif startindex > d.maxInsertIndex {\n\t\t\t\t\tstartindex = d.maxInsertIndex\n\t\t\t\t}\n\t\t\t\ttocheck := d.window[startindex:end]\n\t\t\t\tdstSize := len(tocheck) - minMatchLength + 1\n\t\t\t\tif dstSize > 0 {\n\t\t\t\t\tdst := d.hashMatch[:dstSize]\n\t\t\t\t\tbulkHash4(tocheck, dst)\n\t\t\t\t\tvar newH uint32\n\t\t\t\t\tfor i, val := range dst {\n\t\t\t\t\t\tdi := i + startindex\n\t\t\t\t\t\tnewH = val & hashMask\n\t\t\t\t\t\t// Get previous value with the same hash.\n\t\t\t\t\t\t// Our chain should point to the previous value.\n\t\t\t\t\t\td.hashPrev[di&windowMask] = d.hashHead[newH]\n\t\t\t\t\t\t// Set the head of the 
hash chain to us.\n\t\t\t\t\t\td.hashHead[newH] = uint32(di + d.hashOffset)\n\t\t\t\t\t}\n\t\t\t\t\td.hash = newH\n\t\t\t\t}\n\t\t\t\td.index = newIndex\n\t\t\t} else {\n\t\t\t\t// For matches this long, we don't bother inserting each individual\n\t\t\t\t// item into the table.\n\t\t\t\td.index += d.length\n\t\t\t\tif d.index < d.maxInsertIndex {\n\t\t\t\t\td.hash = hash4(d.window[d.index : d.index+minMatchLength])\n\t\t\t\t}\n\t\t\t}\n\t\t\tif d.tokens.n == maxFlateBlockTokens {\n\t\t\t\t// The block includes the current character\n\t\t\t\tif d.err = d.writeBlockSkip(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\td.tokens.n = 0\n\t\t\t}\n\t\t} else {\n\t\t\td.ii++\n\t\t\tend := d.index + int(d.ii>>uint(d.fastSkipHashing)) + 1\n\t\t\tif end > d.windowEnd {\n\t\t\t\tend = d.windowEnd\n\t\t\t}\n\t\t\tfor i := d.index; i < end; i++ {\n\t\t\t\td.tokens.tokens[d.tokens.n] = literalToken(uint32(d.window[i]))\n\t\t\t\td.tokens.n++\n\t\t\t\tif d.tokens.n == maxFlateBlockTokens {\n\t\t\t\t\tif d.err = d.writeBlockSkip(d.tokens, i+1, false); d.err != nil {\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\td.tokens.n = 0\n\t\t\t\t}\n\t\t\t}\n\t\t\td.index = end\n\t\t}\n\t}\n}\n\n// deflateLazy is the same as deflate, but with d.fastSkipHashing == skipNever,\n// meaning it always has lazy matching on.\nfunc (d *compressor) deflateLazy() {\n\t// Sanity enables additional runtime tests.\n\t// It's intended to be used during development\n\t// to supplement the currently ad-hoc unit tests.\n\tconst sanity = false\n\n\tif d.windowEnd-d.index < minMatchLength+maxMatchLength && !d.sync {\n\t\treturn\n\t}\n\n\td.maxInsertIndex = d.windowEnd - (minMatchLength - 1)\n\tif d.index < d.maxInsertIndex {\n\t\td.hash = hash4(d.window[d.index : d.index+minMatchLength])\n\t}\n\n\tfor {\n\t\tif sanity && d.index > d.windowEnd {\n\t\t\tpanic(\"index > windowEnd\")\n\t\t}\n\t\tlookahead := d.windowEnd - d.index\n\t\tif lookahead < minMatchLength+maxMatchLength 
{\n\t\t\tif !d.sync {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif sanity && d.index > d.windowEnd {\n\t\t\t\tpanic(\"index > windowEnd\")\n\t\t\t}\n\t\t\tif lookahead == 0 {\n\t\t\t\t// Flush current output block if any.\n\t\t\t\tif d.byteAvailable {\n\t\t\t\t\t// There is still one pending token that needs to be flushed\n\t\t\t\t\td.tokens.tokens[d.tokens.n] = literalToken(uint32(d.window[d.index-1]))\n\t\t\t\t\td.tokens.n++\n\t\t\t\t\td.byteAvailable = false\n\t\t\t\t}\n\t\t\t\tif d.tokens.n > 0 {\n\t\t\t\t\tif d.err = d.writeBlock(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\td.tokens.n = 0\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t\tif d.index < d.maxInsertIndex {\n\t\t\t// Update the hash\n\t\t\td.hash = hash4(d.window[d.index : d.index+minMatchLength])\n\t\t\tch := d.hashHead[d.hash&hashMask]\n\t\t\td.chainHead = int(ch)\n\t\t\td.hashPrev[d.index&windowMask] = ch\n\t\t\td.hashHead[d.hash&hashMask] = uint32(d.index + d.hashOffset)\n\t\t}\n\t\tprevLength := d.length\n\t\tprevOffset := d.offset\n\t\td.length = minMatchLength - 1\n\t\td.offset = 0\n\t\tminIndex := d.index - windowSize\n\t\tif minIndex < 0 {\n\t\t\tminIndex = 0\n\t\t}\n\n\t\tif d.chainHead-d.hashOffset >= minIndex && lookahead > prevLength && prevLength < d.lazy {\n\t\t\tif newLength, newOffset, ok := d.findMatch(d.index, d.chainHead-d.hashOffset, minMatchLength-1, lookahead); ok {\n\t\t\t\td.length = newLength\n\t\t\t\td.offset = newOffset\n\t\t\t}\n\t\t}\n\t\tif prevLength >= minMatchLength && d.length <= prevLength {\n\t\t\t// There was a match at the previous step, and the current match is\n\t\t\t// not better. Output the previous match.\n\t\t\td.tokens.tokens[d.tokens.n] = matchToken(uint32(prevLength-3), uint32(prevOffset-minOffsetSize))\n\t\t\td.tokens.n++\n\n\t\t\t// Insert in the hash table all strings up to the end of the match.\n\t\t\t// index and index-1 are already inserted. 
If there is not enough\n\t\t\t// lookahead, the last two strings are not inserted into the hash\n\t\t\t// table.\n\t\t\tvar newIndex int\n\t\t\tnewIndex = d.index + prevLength - 1\n\t\t\t// Calculate missing hashes\n\t\t\tend := newIndex\n\t\t\tif end > d.maxInsertIndex {\n\t\t\t\tend = d.maxInsertIndex\n\t\t\t}\n\t\t\tend += minMatchLength - 1\n\t\t\tstartindex := d.index + 1\n\t\t\tif startindex > d.maxInsertIndex {\n\t\t\t\tstartindex = d.maxInsertIndex\n\t\t\t}\n\t\t\ttocheck := d.window[startindex:end]\n\t\t\tdstSize := len(tocheck) - minMatchLength + 1\n\t\t\tif dstSize > 0 {\n\t\t\t\tdst := d.hashMatch[:dstSize]\n\t\t\t\tbulkHash4(tocheck, dst)\n\t\t\t\tvar newH uint32\n\t\t\t\tfor i, val := range dst {\n\t\t\t\t\tdi := i + startindex\n\t\t\t\t\tnewH = val & hashMask\n\t\t\t\t\t// Get previous value with the same hash.\n\t\t\t\t\t// Our chain should point to the previous value.\n\t\t\t\t\td.hashPrev[di&windowMask] = d.hashHead[newH]\n\t\t\t\t\t// Set the head of the hash chain to us.\n\t\t\t\t\td.hashHead[newH] = uint32(di + d.hashOffset)\n\t\t\t\t}\n\t\t\t\td.hash = newH\n\t\t\t}\n\n\t\t\td.index = newIndex\n\t\t\td.byteAvailable = false\n\t\t\td.length = minMatchLength - 1\n\t\t\tif d.tokens.n == maxFlateBlockTokens {\n\t\t\t\t// The block includes the current character\n\t\t\t\tif d.err = d.writeBlock(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\td.tokens.n = 0\n\t\t\t}\n\t\t} else {\n\t\t\t// Reset, if we got a match this run.\n\t\t\tif d.length >= minMatchLength {\n\t\t\t\td.ii = 0\n\t\t\t}\n\t\t\t// We have a byte waiting. 
Emit it.\n\t\t\tif d.byteAvailable {\n\t\t\t\td.ii++\n\t\t\t\td.tokens.tokens[d.tokens.n] = literalToken(uint32(d.window[d.index-1]))\n\t\t\t\td.tokens.n++\n\t\t\t\tif d.tokens.n == maxFlateBlockTokens {\n\t\t\t\t\tif d.err = d.writeBlock(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\td.tokens.n = 0\n\t\t\t\t}\n\t\t\t\td.index++\n\n\t\t\t\t// If we have a long run of no matches, skip additional bytes\n\t\t\t\t// Resets when d.ii overflows after 64KB.\n\t\t\t\tif d.ii > 31 {\n\t\t\t\t\tn := int(d.ii >> 5)\n\t\t\t\t\tfor j := 0; j < n; j++ {\n\t\t\t\t\t\tif d.index >= d.windowEnd-1 {\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\td.tokens.tokens[d.tokens.n] = literalToken(uint32(d.window[d.index-1]))\n\t\t\t\t\t\td.tokens.n++\n\t\t\t\t\t\tif d.tokens.n == maxFlateBlockTokens {\n\t\t\t\t\t\t\tif d.err = d.writeBlock(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\t\t\t\treturn\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\td.tokens.n = 0\n\t\t\t\t\t\t}\n\t\t\t\t\t\td.index++\n\t\t\t\t\t}\n\t\t\t\t\t// Flush last byte\n\t\t\t\t\td.tokens.tokens[d.tokens.n] = literalToken(uint32(d.window[d.index-1]))\n\t\t\t\t\td.tokens.n++\n\t\t\t\t\td.byteAvailable = false\n\t\t\t\t\t// d.length = minMatchLength - 1 // not needed, since d.ii is reset above, so it should never be > minMatchLength\n\t\t\t\t\tif d.tokens.n == maxFlateBlockTokens {\n\t\t\t\t\t\tif d.err = d.writeBlock(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\t\t\treturn\n\t\t\t\t\t\t}\n\t\t\t\t\t\td.tokens.n = 0\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\td.index++\n\t\t\t\td.byteAvailable = true\n\t\t\t}\n\t\t}\n\t}\n}\n\n// Assumes that d.fastSkipHashing != skipNever,\n// otherwise use deflateLazySSE\nfunc (d *compressor) deflateSSE() {\n\n\t// Sanity enables additional runtime tests.\n\t// It's intended to be used during development\n\t// to supplement the currently ad-hoc unit tests.\n\tconst sanity = false\n\n\tif d.windowEnd-d.index < minMatchLength+maxMatchLength && 
!d.sync {\n\t\treturn\n\t}\n\n\td.maxInsertIndex = d.windowEnd - (minMatchLength - 1)\n\tif d.index < d.maxInsertIndex {\n\t\td.hash = crc32sse(d.window[d.index:d.index+minMatchLength]) & hashMask\n\t}\n\n\tfor {\n\t\tif sanity && d.index > d.windowEnd {\n\t\t\tpanic(\"index > windowEnd\")\n\t\t}\n\t\tlookahead := d.windowEnd - d.index\n\t\tif lookahead < minMatchLength+maxMatchLength {\n\t\t\tif !d.sync {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif sanity && d.index > d.windowEnd {\n\t\t\t\tpanic(\"index > windowEnd\")\n\t\t\t}\n\t\t\tif lookahead == 0 {\n\t\t\t\tif d.tokens.n > 0 {\n\t\t\t\t\tif d.err = d.writeBlockSkip(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\td.tokens.n = 0\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t\tif d.index < d.maxInsertIndex {\n\t\t\t// Update the hash\n\t\t\td.hash = crc32sse(d.window[d.index:d.index+minMatchLength]) & hashMask\n\t\t\tch := d.hashHead[d.hash]\n\t\t\td.chainHead = int(ch)\n\t\t\td.hashPrev[d.index&windowMask] = ch\n\t\t\td.hashHead[d.hash] = uint32(d.index + d.hashOffset)\n\t\t}\n\t\td.length = minMatchLength - 1\n\t\td.offset = 0\n\t\tminIndex := d.index - windowSize\n\t\tif minIndex < 0 {\n\t\t\tminIndex = 0\n\t\t}\n\n\t\tif d.chainHead-d.hashOffset >= minIndex && lookahead > minMatchLength-1 {\n\t\t\tif newLength, newOffset, ok := d.findMatchSSE(d.index, d.chainHead-d.hashOffset, minMatchLength-1, lookahead); ok {\n\t\t\t\td.length = newLength\n\t\t\t\td.offset = newOffset\n\t\t\t}\n\t\t}\n\t\tif d.length >= minMatchLength {\n\t\t\td.ii = 0\n\t\t\t// There was a match at the previous step, and the current match is\n\t\t\t// not better. 
Output the previous match.\n\t\t\t// \"d.length-3\" should NOT be \"d.length-minMatchLength\", since the format always assumes 3\n\t\t\td.tokens.tokens[d.tokens.n] = matchToken(uint32(d.length-3), uint32(d.offset-minOffsetSize))\n\t\t\td.tokens.n++\n\t\t\t// Insert in the hash table all strings up to the end of the match.\n\t\t\t// index and index-1 are already inserted. If there is not enough\n\t\t\t// lookahead, the last two strings are not inserted into the hash\n\t\t\t// table.\n\t\t\tif d.length <= d.fastSkipHashing {\n\t\t\t\tvar newIndex int\n\t\t\t\tnewIndex = d.index + d.length\n\t\t\t\t// Calculate missing hashes\n\t\t\t\tend := newIndex\n\t\t\t\tif end > d.maxInsertIndex {\n\t\t\t\t\tend = d.maxInsertIndex\n\t\t\t\t}\n\t\t\t\tend += minMatchLength - 1\n\t\t\t\tstartindex := d.index + 1\n\t\t\t\tif startindex > d.maxInsertIndex {\n\t\t\t\t\tstartindex = d.maxInsertIndex\n\t\t\t\t}\n\t\t\t\ttocheck := d.window[startindex:end]\n\t\t\t\tdstSize := len(tocheck) - minMatchLength + 1\n\t\t\t\tif dstSize > 0 {\n\t\t\t\t\tdst := d.hashMatch[:dstSize]\n\n\t\t\t\t\tcrc32sseAll(tocheck, dst)\n\t\t\t\t\tvar newH uint32\n\t\t\t\t\tfor i, val := range dst {\n\t\t\t\t\t\tdi := i + startindex\n\t\t\t\t\t\tnewH = val & hashMask\n\t\t\t\t\t\t// Get previous value with the same hash.\n\t\t\t\t\t\t// Our chain should point to the previous value.\n\t\t\t\t\t\td.hashPrev[di&windowMask] = d.hashHead[newH]\n\t\t\t\t\t\t// Set the head of the hash chain to us.\n\t\t\t\t\t\td.hashHead[newH] = uint32(di + d.hashOffset)\n\t\t\t\t\t}\n\t\t\t\t\td.hash = newH\n\t\t\t\t}\n\t\t\t\td.index = newIndex\n\t\t\t} else {\n\t\t\t\t// For matches this long, we don't bother inserting each individual\n\t\t\t\t// item into the table.\n\t\t\t\td.index += d.length\n\t\t\t\tif d.index < d.maxInsertIndex {\n\t\t\t\t\td.hash = crc32sse(d.window[d.index:d.index+minMatchLength]) & hashMask\n\t\t\t\t}\n\t\t\t}\n\t\t\tif d.tokens.n == maxFlateBlockTokens {\n\t\t\t\t// The block includes the current 
character\n\t\t\t\tif d.err = d.writeBlockSkip(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\td.tokens.n = 0\n\t\t\t}\n\t\t} else {\n\t\t\td.ii++\n\t\t\tend := d.index + int(d.ii>>5) + 1\n\t\t\tif end > d.windowEnd {\n\t\t\t\tend = d.windowEnd\n\t\t\t}\n\t\t\tfor i := d.index; i < end; i++ {\n\t\t\t\td.tokens.tokens[d.tokens.n] = literalToken(uint32(d.window[i]))\n\t\t\t\td.tokens.n++\n\t\t\t\tif d.tokens.n == maxFlateBlockTokens {\n\t\t\t\t\tif d.err = d.writeBlockSkip(d.tokens, i+1, false); d.err != nil {\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\td.tokens.n = 0\n\t\t\t\t}\n\t\t\t}\n\t\t\td.index = end\n\t\t}\n\t}\n}\n\n// deflateLazy is the same as deflate, but with d.fastSkipHashing == skipNever,\n// meaning it always has lazy matching on.\nfunc (d *compressor) deflateLazySSE() {\n\t// Sanity enables additional runtime tests.\n\t// It's intended to be used during development\n\t// to supplement the currently ad-hoc unit tests.\n\tconst sanity = false\n\n\tif d.windowEnd-d.index < minMatchLength+maxMatchLength && !d.sync {\n\t\treturn\n\t}\n\n\td.maxInsertIndex = d.windowEnd - (minMatchLength - 1)\n\tif d.index < d.maxInsertIndex {\n\t\td.hash = crc32sse(d.window[d.index:d.index+minMatchLength]) & hashMask\n\t}\n\n\tfor {\n\t\tif sanity && d.index > d.windowEnd {\n\t\t\tpanic(\"index > windowEnd\")\n\t\t}\n\t\tlookahead := d.windowEnd - d.index\n\t\tif lookahead < minMatchLength+maxMatchLength {\n\t\t\tif !d.sync {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif sanity && d.index > d.windowEnd {\n\t\t\t\tpanic(\"index > windowEnd\")\n\t\t\t}\n\t\t\tif lookahead == 0 {\n\t\t\t\t// Flush current output block if any.\n\t\t\t\tif d.byteAvailable {\n\t\t\t\t\t// There is still one pending token that needs to be flushed\n\t\t\t\t\td.tokens.tokens[d.tokens.n] = literalToken(uint32(d.window[d.index-1]))\n\t\t\t\t\td.tokens.n++\n\t\t\t\t\td.byteAvailable = false\n\t\t\t\t}\n\t\t\t\tif d.tokens.n > 0 {\n\t\t\t\t\tif d.err = d.writeBlock(d.tokens, 
d.index, false); d.err != nil {\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\td.tokens.n = 0\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t\tif d.index < d.maxInsertIndex {\n\t\t\t// Update the hash\n\t\t\td.hash = crc32sse(d.window[d.index:d.index+minMatchLength]) & hashMask\n\t\t\tch := d.hashHead[d.hash]\n\t\t\td.chainHead = int(ch)\n\t\t\td.hashPrev[d.index&windowMask] = ch\n\t\t\td.hashHead[d.hash] = uint32(d.index + d.hashOffset)\n\t\t}\n\t\tprevLength := d.length\n\t\tprevOffset := d.offset\n\t\td.length = minMatchLength - 1\n\t\td.offset = 0\n\t\tminIndex := d.index - windowSize\n\t\tif minIndex < 0 {\n\t\t\tminIndex = 0\n\t\t}\n\n\t\tif d.chainHead-d.hashOffset >= minIndex && lookahead > prevLength && prevLength < d.lazy {\n\t\t\tif newLength, newOffset, ok := d.findMatchSSE(d.index, d.chainHead-d.hashOffset, minMatchLength-1, lookahead); ok {\n\t\t\t\td.length = newLength\n\t\t\t\td.offset = newOffset\n\t\t\t}\n\t\t}\n\t\tif prevLength >= minMatchLength && d.length <= prevLength {\n\t\t\t// There was a match at the previous step, and the current match is\n\t\t\t// not better. Output the previous match.\n\t\t\td.tokens.tokens[d.tokens.n] = matchToken(uint32(prevLength-3), uint32(prevOffset-minOffsetSize))\n\t\t\td.tokens.n++\n\n\t\t\t// Insert in the hash table all strings up to the end of the match.\n\t\t\t// index and index-1 are already inserted. 
If there is not enough\n\t\t\t// lookahead, the last two strings are not inserted into the hash\n\t\t\t// table.\n\t\t\tvar newIndex int\n\t\t\tnewIndex = d.index + prevLength - 1\n\t\t\t// Calculate missing hashes\n\t\t\tend := newIndex\n\t\t\tif end > d.maxInsertIndex {\n\t\t\t\tend = d.maxInsertIndex\n\t\t\t}\n\t\t\tend += minMatchLength - 1\n\t\t\tstartindex := d.index + 1\n\t\t\tif startindex > d.maxInsertIndex {\n\t\t\t\tstartindex = d.maxInsertIndex\n\t\t\t}\n\t\t\ttocheck := d.window[startindex:end]\n\t\t\tdstSize := len(tocheck) - minMatchLength + 1\n\t\t\tif dstSize > 0 {\n\t\t\t\tdst := d.hashMatch[:dstSize]\n\t\t\t\tcrc32sseAll(tocheck, dst)\n\t\t\t\tvar newH uint32\n\t\t\t\tfor i, val := range dst {\n\t\t\t\t\tdi := i + startindex\n\t\t\t\t\tnewH = val & hashMask\n\t\t\t\t\t// Get previous value with the same hash.\n\t\t\t\t\t// Our chain should point to the previous value.\n\t\t\t\t\td.hashPrev[di&windowMask] = d.hashHead[newH]\n\t\t\t\t\t// Set the head of the hash chain to us.\n\t\t\t\t\td.hashHead[newH] = uint32(di + d.hashOffset)\n\t\t\t\t}\n\t\t\t\td.hash = newH\n\t\t\t}\n\n\t\t\td.index = newIndex\n\t\t\td.byteAvailable = false\n\t\t\td.length = minMatchLength - 1\n\t\t\tif d.tokens.n == maxFlateBlockTokens {\n\t\t\t\t// The block includes the current character\n\t\t\t\tif d.err = d.writeBlock(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\td.tokens.n = 0\n\t\t\t}\n\t\t} else {\n\t\t\t// Reset, if we got a match this run.\n\t\t\tif d.length >= minMatchLength {\n\t\t\t\td.ii = 0\n\t\t\t}\n\t\t\t// We have a byte waiting. 
Emit it.\n\t\t\tif d.byteAvailable {\n\t\t\t\td.ii++\n\t\t\t\td.tokens.tokens[d.tokens.n] = literalToken(uint32(d.window[d.index-1]))\n\t\t\t\td.tokens.n++\n\t\t\t\tif d.tokens.n == maxFlateBlockTokens {\n\t\t\t\t\tif d.err = d.writeBlock(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\t\t\t\t\td.tokens.n = 0\n\t\t\t\t}\n\t\t\t\td.index++\n\n\t\t\t\t// If we have a long run of no matches, skip additional bytes\n\t\t\t\t// Resets when d.ii overflows after 64KB.\n\t\t\t\tif d.ii > 31 {\n\t\t\t\t\tn := int(d.ii >> 6)\n\t\t\t\t\tfor j := 0; j < n; j++ {\n\t\t\t\t\t\tif d.index >= d.windowEnd-1 {\n\t\t\t\t\t\t\tbreak\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\td.tokens.tokens[d.tokens.n] = literalToken(uint32(d.window[d.index-1]))\n\t\t\t\t\t\td.tokens.n++\n\t\t\t\t\t\tif d.tokens.n == maxFlateBlockTokens {\n\t\t\t\t\t\t\tif d.err = d.writeBlock(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\t\t\t\treturn\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\td.tokens.n = 0\n\t\t\t\t\t\t}\n\t\t\t\t\t\td.index++\n\t\t\t\t\t}\n\t\t\t\t\t// Flush last byte\n\t\t\t\t\td.tokens.tokens[d.tokens.n] = literalToken(uint32(d.window[d.index-1]))\n\t\t\t\t\td.tokens.n++\n\t\t\t\t\td.byteAvailable = false\n\t\t\t\t\t// d.length = minMatchLength - 1 // not needed, since d.ii is reset above, so it should never be > minMatchLength\n\t\t\t\t\tif d.tokens.n == maxFlateBlockTokens {\n\t\t\t\t\t\tif d.err = d.writeBlock(d.tokens, d.index, false); d.err != nil {\n\t\t\t\t\t\t\treturn\n\t\t\t\t\t\t}\n\t\t\t\t\t\td.tokens.n = 0\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\td.index++\n\t\t\t\td.byteAvailable = true\n\t\t\t}\n\t\t}\n\t}\n}\n\nfunc (d *compressor) store() {\n\tif d.windowEnd > 0 && (d.windowEnd == maxStoreBlockSize || d.sync) {\n\t\td.err = d.writeStoredBlock(d.window[:d.windowEnd])\n\t\td.windowEnd = 0\n\t}\n}\n\n// fillBlock will fill the buffer with data for huffman-only compression.\n// The number of bytes copied is returned.\nfunc (d *compressor) fillBlock(b []byte) int 
{\n\tn := copy(d.window[d.windowEnd:], b)\n\td.windowEnd += n\n\treturn n\n}\n\n// storeHuff will compress and store the currently added data,\n// if enough has been accumulated or we are at the end of the stream.\n// Any error that occurred will be in d.err\nfunc (d *compressor) storeHuff() {\n\tif d.windowEnd < len(d.window) && !d.sync || d.windowEnd == 0 {\n\t\treturn\n\t}\n\td.w.writeBlockHuff(false, d.window[:d.windowEnd])\n\td.err = d.w.err\n\td.windowEnd = 0\n}\n\n// storeSnappy will compress and store the currently added data,\n// if enough has been accumulated or we are at the end of the stream.\n// Any error that occurred will be in d.err\nfunc (d *compressor) storeSnappy() {\n\t// We only compress if we have maxStoreBlockSize.\n\tif d.windowEnd < maxStoreBlockSize {\n\t\tif !d.sync {\n\t\t\treturn\n\t\t}\n\t\t// Handle extremely small sizes.\n\t\tif d.windowEnd < 128 {\n\t\t\tif d.windowEnd == 0 {\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif d.windowEnd <= 32 {\n\t\t\t\td.err = d.writeStoredBlock(d.window[:d.windowEnd])\n\t\t\t\td.tokens.n = 0\n\t\t\t\td.windowEnd = 0\n\t\t\t} else {\n\t\t\t\td.w.writeBlockHuff(false, d.window[:d.windowEnd])\n\t\t\t\td.err = d.w.err\n\t\t\t}\n\t\t\td.tokens.n = 0\n\t\t\td.windowEnd = 0\n\t\t\td.snap.Reset()\n\t\t\treturn\n\t\t}\n\t}\n\n\td.snap.Encode(&d.tokens, d.window[:d.windowEnd])\n\t// If we made zero matches, store the block as is.\n\tif int(d.tokens.n) == d.windowEnd {\n\t\td.err = d.writeStoredBlock(d.window[:d.windowEnd])\n\t\t// If we removed less than 1/16th, huffman compress the block.\n\t} else if int(d.tokens.n) > d.windowEnd-(d.windowEnd>>4) {\n\t\td.w.writeBlockHuff(false, d.window[:d.windowEnd])\n\t\td.err = d.w.err\n\t} else {\n\t\td.w.writeBlockDynamic(d.tokens.tokens[:d.tokens.n], false, d.window[:d.windowEnd])\n\t\td.err = d.w.err\n\t}\n\td.tokens.n = 0\n\td.windowEnd = 0\n}\n\n// write will add input bytes to the stream.\n// Unless an error occurs all bytes will be consumed.\nfunc (d *compressor) write(b []byte) (n 
int, err error) {\n\tif d.err != nil {\n\t\treturn 0, d.err\n\t}\n\tn = len(b)\n\tfor len(b) > 0 {\n\t\td.step(d)\n\t\tb = b[d.fill(d, b):]\n\t\tif d.err != nil {\n\t\t\treturn 0, d.err\n\t\t}\n\t}\n\treturn n, d.err\n}\n\nfunc (d *compressor) syncFlush() error {\n\td.sync = true\n\tif d.err != nil {\n\t\treturn d.err\n\t}\n\td.step(d)\n\tif d.err == nil {\n\t\td.w.writeStoredHeader(0, false)\n\t\td.w.flush()\n\t\td.err = d.w.err\n\t}\n\td.sync = false\n\treturn d.err\n}\n\nfunc (d *compressor) init(w io.Writer, level int) (err error) {\n\td.w = newHuffmanBitWriter(w)\n\n\tswitch {\n\tcase level == NoCompression:\n\t\td.window = make([]byte, maxStoreBlockSize)\n\t\td.fill = (*compressor).fillBlock\n\t\td.step = (*compressor).store\n\tcase level == ConstantCompression:\n\t\td.window = make([]byte, maxStoreBlockSize)\n\t\td.fill = (*compressor).fillBlock\n\t\td.step = (*compressor).storeHuff\n\tcase level >= 1 && level <= 4:\n\t\td.snap = newSnappy(level)\n\t\td.window = make([]byte, maxStoreBlockSize)\n\t\td.fill = (*compressor).fillBlock\n\t\td.step = (*compressor).storeSnappy\n\tcase level == DefaultCompression:\n\t\tlevel = 5\n\t\tfallthrough\n\tcase 5 <= level && level <= 9:\n\t\td.compressionLevel = levels[level]\n\t\td.initDeflate()\n\t\td.fill = (*compressor).fillDeflate\n\t\tif d.fastSkipHashing == skipNever {\n\t\t\tif useSSE42 {\n\t\t\t\td.step = (*compressor).deflateLazySSE\n\t\t\t} else {\n\t\t\t\td.step = (*compressor).deflateLazy\n\t\t\t}\n\t\t} else {\n\t\t\tif useSSE42 {\n\t\t\t\td.step = (*compressor).deflateSSE\n\t\t\t} else {\n\t\t\t\td.step = (*compressor).deflate\n\n\t\t\t}\n\t\t}\n\tdefault:\n\t\treturn fmt.Errorf(\"flate: invalid compression level %d: want value in range [-2, 9]\", level)\n\t}\n\treturn nil\n}\n\n// reset the state of the compressor.\nfunc (d *compressor) reset(w io.Writer) {\n\td.w.reset(w)\n\td.sync = false\n\td.err = nil\n\t// We only need to reset a few things for Snappy.\n\tif d.snap != nil 
{\n\t\td.snap.Reset()\n\t\td.windowEnd = 0\n\t\td.tokens.n = 0\n\t\treturn\n\t}\n\tswitch d.compressionLevel.chain {\n\tcase 0:\n\t\t// level was NoCompression or ConstantCompression.\n\t\td.windowEnd = 0\n\tdefault:\n\t\td.chainHead = -1\n\t\tfor i := range d.hashHead {\n\t\t\td.hashHead[i] = 0\n\t\t}\n\t\tfor i := range d.hashPrev {\n\t\t\td.hashPrev[i] = 0\n\t\t}\n\t\td.hashOffset = 1\n\t\td.index, d.windowEnd = 0, 0\n\t\td.blockStart, d.byteAvailable = 0, false\n\t\td.tokens.n = 0\n\t\td.length = minMatchLength - 1\n\t\td.offset = 0\n\t\td.hash = 0\n\t\td.ii = 0\n\t\td.maxInsertIndex = 0\n\t}\n}\n\nfunc (d *compressor) close() error {\n\tif d.err != nil {\n\t\treturn d.err\n\t}\n\td.sync = true\n\td.step(d)\n\tif d.err != nil {\n\t\treturn d.err\n\t}\n\tif d.w.writeStoredHeader(0, true); d.w.err != nil {\n\t\treturn d.w.err\n\t}\n\td.w.flush()\n\treturn d.w.err\n}\n\n// NewWriter returns a new Writer compressing data at the given level.\n// Following zlib, levels range from 1 (BestSpeed) to 9 (BestCompression);\n// higher levels typically run slower but compress more.\n// Level 0 (NoCompression) does not attempt any compression; it only adds the\n// necessary DEFLATE framing.\n// Level -1 (DefaultCompression) uses the default compression level.\n// Level -2 (ConstantCompression) will use Huffman compression only, giving\n// a very fast compression for all types of input, but sacrificing considerable\n// compression efficiency.\n//\n// If level is in the range [-2, 9] then the error returned will be nil.\n// Otherwise the error returned will be non-nil.\nfunc NewWriter(w io.Writer, level int) (*Writer, error) {\n\tvar dw Writer\n\tif err := dw.d.init(w, level); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &dw, nil\n}\n\n// NewWriterDict is like NewWriter but initializes the new\n// Writer with a preset dictionary.  The returned Writer behaves\n// as if the dictionary had been written to it without producing\n// any compressed output.  
The compressed data written to w\n// can only be decompressed by a Reader initialized with the\n// same dictionary.\nfunc NewWriterDict(w io.Writer, level int, dict []byte) (*Writer, error) {\n\tdw := &dictWriter{w}\n\tzw, err := NewWriter(dw, level)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tzw.d.fillWindow(dict)\n\tzw.dict = append(zw.dict, dict...) // duplicate dictionary for Reset method.\n\treturn zw, err\n}\n\ntype dictWriter struct {\n\tw io.Writer\n}\n\nfunc (w *dictWriter) Write(b []byte) (n int, err error) {\n\treturn w.w.Write(b)\n}\n\n// A Writer takes data written to it and writes the compressed\n// form of that data to an underlying writer (see NewWriter).\ntype Writer struct {\n\td    compressor\n\tdict []byte\n}\n\n// Write writes data to w, which will eventually write the\n// compressed form of data to its underlying writer.\nfunc (w *Writer) Write(data []byte) (n int, err error) {\n\treturn w.d.write(data)\n}\n\n// Flush flushes any pending data to the underlying writer.\n// It is useful mainly in compressed network protocols, to ensure that\n// a remote reader has enough data to reconstruct a packet.\n// Flush does not return until the data has been written.\n// Calling Flush when there is no pending data still causes the Writer\n// to emit a sync marker of at least 4 bytes.\n// If the underlying writer returns an error, Flush returns that error.\n//\n// In the terminology of the zlib library, Flush is equivalent to Z_SYNC_FLUSH.\nfunc (w *Writer) Flush() error {\n\t// For more about flushing:\n\t// http://www.bolet.org/~pornin/deflate-flush.html\n\treturn w.d.syncFlush()\n}\n\n// Close flushes and closes the writer.\nfunc (w *Writer) Close() error {\n\treturn w.d.close()\n}\n\n// Reset discards the writer's state and makes it equivalent to\n// the result of NewWriter or NewWriterDict called with dst\n// and w's level and dictionary.\nfunc (w *Writer) Reset(dst io.Writer) {\n\tif dw, ok := w.d.w.writer.(*dictWriter); ok {\n\t\t// w was 
created with NewWriterDict\n\t\tdw.w = dst\n\t\tw.d.reset(dw)\n\t\tw.d.fillWindow(w.dict)\n\t} else {\n\t\t// w was created with NewWriter\n\t\tw.d.reset(dst)\n\t}\n}\n\n// ResetDict discards the writer's state and makes it equivalent to\n// the result of NewWriter or NewWriterDict called with dst\n// and w's level, but sets a specific dictionary.\nfunc (w *Writer) ResetDict(dst io.Writer, dict []byte) {\n\tw.dict = dict\n\tw.d.reset(dst)\n\tw.d.fillWindow(w.dict)\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/dict_decoder.go",
    "content": "// Copyright 2016 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage flate\n\n// dictDecoder implements the LZ77 sliding dictionary as used in decompression.\n// LZ77 decompresses data through sequences of two forms of commands:\n//\n//\t* Literal insertions: Runs of one or more symbols are inserted into the data\n//\tstream as is. This is accomplished through the writeByte method for a\n//\tsingle symbol, or combinations of writeSlice/writeMark for multiple symbols.\n//\tAny valid stream must start with a literal insertion if no preset dictionary\n//\tis used.\n//\n//\t* Backward copies: Runs of one or more symbols are copied from previously\n//\temitted data. Backward copies come as the tuple (dist, length) where dist\n//\tdetermines how far back in the stream to copy from and length determines how\n//\tmany bytes to copy. Note that it is valid for the length to be greater than\n//\tthe distance. Since LZ77 uses forward copies, that situation is used to\n//\tperform a form of run-length encoding on repeated runs of symbols.\n//\tThe writeCopy and tryWriteCopy are used to implement this command.\n//\n// For performance reasons, this implementation performs little to no sanity\n// checks about the arguments. As such, the invariants documented for each\n// method call must be respected.\ntype dictDecoder struct {\n\thist []byte // Sliding window history\n\n\t// Invariant: 0 <= rdPos <= wrPos <= len(hist)\n\twrPos int  // Current output position in buffer\n\trdPos int  // Have emitted hist[:rdPos] already\n\tfull  bool // Has a full window length been written yet?\n}\n\n// init initializes dictDecoder to have a sliding window dictionary of the given\n// size. 
If a preset dict is provided, it will initialize the dictionary with\n// the contents of dict.\nfunc (dd *dictDecoder) init(size int, dict []byte) {\n\t*dd = dictDecoder{hist: dd.hist}\n\n\tif cap(dd.hist) < size {\n\t\tdd.hist = make([]byte, size)\n\t}\n\tdd.hist = dd.hist[:size]\n\n\tif len(dict) > len(dd.hist) {\n\t\tdict = dict[len(dict)-len(dd.hist):]\n\t}\n\tdd.wrPos = copy(dd.hist, dict)\n\tif dd.wrPos == len(dd.hist) {\n\t\tdd.wrPos = 0\n\t\tdd.full = true\n\t}\n\tdd.rdPos = dd.wrPos\n}\n\n// histSize reports the total amount of historical data in the dictionary.\nfunc (dd *dictDecoder) histSize() int {\n\tif dd.full {\n\t\treturn len(dd.hist)\n\t}\n\treturn dd.wrPos\n}\n\n// availRead reports the number of bytes that can be flushed by readFlush.\nfunc (dd *dictDecoder) availRead() int {\n\treturn dd.wrPos - dd.rdPos\n}\n\n// availWrite reports the available amount of output buffer space.\nfunc (dd *dictDecoder) availWrite() int {\n\treturn len(dd.hist) - dd.wrPos\n}\n\n// writeSlice returns a slice of the available buffer to write data to.\n//\n// This invariant will be kept: len(s) <= availWrite()\nfunc (dd *dictDecoder) writeSlice() []byte {\n\treturn dd.hist[dd.wrPos:]\n}\n\n// writeMark advances the writer pointer by cnt.\n//\n// This invariant must be kept: 0 <= cnt <= availWrite()\nfunc (dd *dictDecoder) writeMark(cnt int) {\n\tdd.wrPos += cnt\n}\n\n// writeByte writes a single byte to the dictionary.\n//\n// This invariant must be kept: 0 < availWrite()\nfunc (dd *dictDecoder) writeByte(c byte) {\n\tdd.hist[dd.wrPos] = c\n\tdd.wrPos++\n}\n\n// writeCopy copies a string at a given (dist, length) to the output.\n// This returns the number of bytes copied and may be less than the requested\n// length if the available space in the output buffer is too small.\n//\n// This invariant must be kept: 0 < dist <= histSize()\nfunc (dd *dictDecoder) writeCopy(dist, length int) int {\n\tdstBase := dd.wrPos\n\tdstPos := dstBase\n\tsrcPos := dstPos - dist\n\tendPos 
:= dstPos + length\n\tif endPos > len(dd.hist) {\n\t\tendPos = len(dd.hist)\n\t}\n\n\t// Copy non-overlapping section after destination position.\n\t//\n\t// This section is non-overlapping in that the copy length for this section\n\t// is always less than or equal to the backwards distance. This can occur\n\t// if a distance refers to data that wraps-around in the buffer.\n\t// Thus, a backwards copy is performed here; that is, the exact bytes in\n\t// the source prior to the copy are placed in the destination.\n\tif srcPos < 0 {\n\t\tsrcPos += len(dd.hist)\n\t\tdstPos += copy(dd.hist[dstPos:endPos], dd.hist[srcPos:])\n\t\tsrcPos = 0\n\t}\n\n\t// Copy possibly overlapping section before destination position.\n\t//\n\t// This section can overlap if the copy length for this section is larger\n\t// than the backwards distance. This is allowed by LZ77 so that repeated\n\t// strings can be succinctly represented using (dist, length) pairs.\n\t// Thus, a forwards copy is performed here; that is, the bytes copied are\n\t// possibly dependent on the resulting bytes in the destination as the copy\n\t// progresses along. This is functionally equivalent to the following:\n\t//\n\t//\tfor i := 0; i < endPos-dstPos; i++ {\n\t//\t\tdd.hist[dstPos+i] = dd.hist[srcPos+i]\n\t//\t}\n\t//\tdstPos = endPos\n\t//\n\tfor dstPos < endPos {\n\t\tdstPos += copy(dd.hist[dstPos:endPos], dd.hist[srcPos:dstPos])\n\t}\n\n\tdd.wrPos = dstPos\n\treturn dstPos - dstBase\n}\n\n// tryWriteCopy tries to copy a string at a given (distance, length) to the\n// output. 
This specialized version is optimized for short distances.\n//\n// This method is designed to be inlined for performance reasons.\n//\n// This invariant must be kept: 0 < dist <= histSize()\nfunc (dd *dictDecoder) tryWriteCopy(dist, length int) int {\n\tdstPos := dd.wrPos\n\tendPos := dstPos + length\n\tif dstPos < dist || endPos > len(dd.hist) {\n\t\treturn 0\n\t}\n\tdstBase := dstPos\n\tsrcPos := dstPos - dist\n\n\t// Copy possibly overlapping section before destination position.\nloop:\n\tdstPos += copy(dd.hist[dstPos:endPos], dd.hist[srcPos:dstPos])\n\tif dstPos < endPos {\n\t\tgoto loop // Avoid for-loop so that this function can be inlined\n\t}\n\n\tdd.wrPos = dstPos\n\treturn dstPos - dstBase\n}\n\n// readFlush returns a slice of the historical buffer that is ready to be\n// emitted to the user. The data returned by readFlush must be fully consumed\n// before calling any other dictDecoder methods.\nfunc (dd *dictDecoder) readFlush() []byte {\n\ttoRead := dd.hist[dd.rdPos:dd.wrPos]\n\tdd.rdPos = dd.wrPos\n\tif dd.wrPos == len(dd.hist) {\n\t\tdd.wrPos, dd.rdPos = 0, 0\n\t\tdd.full = true\n\t}\n\treturn toRead\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/gen.go",
    "content": "// Copyright 2012 The Go Authors.  All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// +build ignore\n\n// This program generates fixedhuff.go\n// Invoke as\n//\n//\tgo run gen.go -output fixedhuff.go\n\npackage main\n\nimport (\n\t\"bytes\"\n\t\"flag\"\n\t\"fmt\"\n\t\"go/format\"\n\t\"io/ioutil\"\n\t\"log\"\n)\n\nvar filename = flag.String(\"output\", \"fixedhuff.go\", \"output file name\")\n\nconst maxCodeLen = 16\n\n// Note: the definition of the huffmanDecoder struct is copied from\n// inflate.go, as it is private to the implementation.\n\n// chunk & 15 is number of bits\n// chunk >> 4 is value, including table link\n\nconst (\n\thuffmanChunkBits  = 9\n\thuffmanNumChunks  = 1 << huffmanChunkBits\n\thuffmanCountMask  = 15\n\thuffmanValueShift = 4\n)\n\ntype huffmanDecoder struct {\n\tmin      int                      // the minimum code length\n\tchunks   [huffmanNumChunks]uint32 // chunks as described above\n\tlinks    [][]uint32               // overflow links\n\tlinkMask uint32                   // mask the width of the link table\n}\n\n// Initialize Huffman decoding tables from array of code lengths.\n// Following this function, h is guaranteed to be initialized into a complete\n// tree (i.e., neither over-subscribed nor under-subscribed). The exception is a\n// degenerate case where the tree has only a single symbol with length 1. Empty\n// trees are permitted.\nfunc (h *huffmanDecoder) init(bits []int) bool {\n\t// Sanity enables additional runtime tests during Huffman\n\t// table construction.  
It's intended to be used during\n\t// development to supplement the currently ad-hoc unit tests.\n\tconst sanity = false\n\n\tif h.min != 0 {\n\t\t*h = huffmanDecoder{}\n\t}\n\n\t// Count number of codes of each length,\n\t// compute min and max length.\n\tvar count [maxCodeLen]int\n\tvar min, max int\n\tfor _, n := range bits {\n\t\tif n == 0 {\n\t\t\tcontinue\n\t\t}\n\t\tif min == 0 || n < min {\n\t\t\tmin = n\n\t\t}\n\t\tif n > max {\n\t\t\tmax = n\n\t\t}\n\t\tcount[n]++\n\t}\n\n\t// Empty tree. The decompressor.huffSym function will fail later if the tree\n\t// is used. Technically, an empty tree is only valid for the HDIST tree and\n\t// not the HCLEN and HLIT tree. However, a stream with an empty HCLEN tree\n\t// is guaranteed to fail since it will attempt to use the tree to decode the\n\t// codes for the HLIT and HDIST trees. Similarly, an empty HLIT tree is\n\t// guaranteed to fail later since the compressed data section must be\n\t// composed of at least one symbol (the end-of-block marker).\n\tif max == 0 {\n\t\treturn true\n\t}\n\n\tcode := 0\n\tvar nextcode [maxCodeLen]int\n\tfor i := min; i <= max; i++ {\n\t\tcode <<= 1\n\t\tnextcode[i] = code\n\t\tcode += count[i]\n\t}\n\n\t// Check that the coding is complete (i.e., that we've\n\t// assigned all 2-to-the-max possible bit sequences).\n\t// Exception: To be compatible with zlib, we also need to\n\t// accept degenerate single-code codings.  
See also\n\t// TestDegenerateHuffmanCoding.\n\tif code != 1<<uint(max) && !(code == 1 && max == 1) {\n\t\treturn false\n\t}\n\n\th.min = min\n\tif max > huffmanChunkBits {\n\t\tnumLinks := 1 << (uint(max) - huffmanChunkBits)\n\t\th.linkMask = uint32(numLinks - 1)\n\n\t\t// create link tables\n\t\tlink := nextcode[huffmanChunkBits+1] >> 1\n\t\th.links = make([][]uint32, huffmanNumChunks-link)\n\t\tfor j := uint(link); j < huffmanNumChunks; j++ {\n\t\t\treverse := int(reverseByte[j>>8]) | int(reverseByte[j&0xff])<<8\n\t\t\treverse >>= uint(16 - huffmanChunkBits)\n\t\t\toff := j - uint(link)\n\t\t\tif sanity && h.chunks[reverse] != 0 {\n\t\t\t\tpanic(\"impossible: overwriting existing chunk\")\n\t\t\t}\n\t\t\th.chunks[reverse] = uint32(off<<huffmanValueShift | (huffmanChunkBits + 1))\n\t\t\th.links[off] = make([]uint32, numLinks)\n\t\t}\n\t}\n\n\tfor i, n := range bits {\n\t\tif n == 0 {\n\t\t\tcontinue\n\t\t}\n\t\tcode := nextcode[n]\n\t\tnextcode[n]++\n\t\tchunk := uint32(i<<huffmanValueShift | n)\n\t\treverse := int(reverseByte[code>>8]) | int(reverseByte[code&0xff])<<8\n\t\treverse >>= uint(16 - n)\n\t\tif n <= huffmanChunkBits {\n\t\t\tfor off := reverse; off < len(h.chunks); off += 1 << uint(n) {\n\t\t\t\t// We should never need to overwrite\n\t\t\t\t// an existing chunk.  
Also, 0 is\n\t\t\t\t// never a valid chunk, because the\n\t\t\t\t// lower 4 \"count\" bits should be\n\t\t\t\t// between 1 and 15.\n\t\t\t\tif sanity && h.chunks[off] != 0 {\n\t\t\t\t\tpanic(\"impossible: overwriting existing chunk\")\n\t\t\t\t}\n\t\t\t\th.chunks[off] = chunk\n\t\t\t}\n\t\t} else {\n\t\t\tj := reverse & (huffmanNumChunks - 1)\n\t\t\tif sanity && h.chunks[j]&huffmanCountMask != huffmanChunkBits+1 {\n\t\t\t\t// Longer codes should have been\n\t\t\t\t// associated with a link table above.\n\t\t\t\tpanic(\"impossible: not an indirect chunk\")\n\t\t\t}\n\t\t\tvalue := h.chunks[j] >> huffmanValueShift\n\t\t\tlinktab := h.links[value]\n\t\t\treverse >>= huffmanChunkBits\n\t\t\tfor off := reverse; off < len(linktab); off += 1 << uint(n-huffmanChunkBits) {\n\t\t\t\tif sanity && linktab[off] != 0 {\n\t\t\t\t\tpanic(\"impossible: overwriting existing chunk\")\n\t\t\t\t}\n\t\t\t\tlinktab[off] = chunk\n\t\t\t}\n\t\t}\n\t}\n\n\tif sanity {\n\t\t// Above we've sanity checked that we never overwrote\n\t\t// an existing entry.  
Here we additionally check that\n\t\t// we filled the tables completely.\n\t\tfor i, chunk := range h.chunks {\n\t\t\tif chunk == 0 {\n\t\t\t\t// As an exception, in the degenerate\n\t\t\t\t// single-code case, we allow odd\n\t\t\t\t// chunks to be missing.\n\t\t\t\tif code == 1 && i%2 == 1 {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tpanic(\"impossible: missing chunk\")\n\t\t\t}\n\t\t}\n\t\tfor _, linktab := range h.links {\n\t\t\tfor _, chunk := range linktab {\n\t\t\t\tif chunk == 0 {\n\t\t\t\t\tpanic(\"impossible: missing chunk\")\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn true\n}\n\nfunc main() {\n\tflag.Parse()\n\n\tvar h huffmanDecoder\n\tvar bits [288]int\n\tinitReverseByte()\n\tfor i := 0; i < 144; i++ {\n\t\tbits[i] = 8\n\t}\n\tfor i := 144; i < 256; i++ {\n\t\tbits[i] = 9\n\t}\n\tfor i := 256; i < 280; i++ {\n\t\tbits[i] = 7\n\t}\n\tfor i := 280; i < 288; i++ {\n\t\tbits[i] = 8\n\t}\n\th.init(bits[:])\n\tif h.links != nil {\n\t\tlog.Fatal(\"Unexpected links table in fixed Huffman decoder\")\n\t}\n\n\tvar buf bytes.Buffer\n\n\tfmt.Fprintf(&buf, `// Copyright 2013 The Go Authors. 
All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.`+\"\\n\\n\")\n\n\tfmt.Fprintln(&buf, \"package flate\")\n\tfmt.Fprintln(&buf)\n\tfmt.Fprintln(&buf, \"// autogenerated by go run gen.go -output fixedhuff.go, DO NOT EDIT\")\n\tfmt.Fprintln(&buf)\n\tfmt.Fprintln(&buf, \"var fixedHuffmanDecoder = huffmanDecoder{\")\n\tfmt.Fprintf(&buf, \"\\t%d,\\n\", h.min)\n\tfmt.Fprintln(&buf, \"\\t[huffmanNumChunks]uint32{\")\n\tfor i := 0; i < huffmanNumChunks; i++ {\n\t\tif i&7 == 0 {\n\t\t\tfmt.Fprintf(&buf, \"\\t\\t\")\n\t\t} else {\n\t\t\tfmt.Fprintf(&buf, \" \")\n\t\t}\n\t\tfmt.Fprintf(&buf, \"0x%04x,\", h.chunks[i])\n\t\tif i&7 == 7 {\n\t\t\tfmt.Fprintln(&buf)\n\t\t}\n\t}\n\tfmt.Fprintln(&buf, \"\\t},\")\n\tfmt.Fprintln(&buf, \"\\tnil, 0,\")\n\tfmt.Fprintln(&buf, \"}\")\n\n\tdata, err := format.Source(buf.Bytes())\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\terr = ioutil.WriteFile(*filename, data, 0644)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n}\n\nvar reverseByte [256]byte\n\nfunc initReverseByte() {\n\tfor x := 0; x < 256; x++ {\n\t\tvar result byte\n\t\tfor i := uint(0); i < 8; i++ {\n\t\t\tresult |= byte(((x >> i) & 1) << (7 - i))\n\t\t}\n\t\treverseByte[x] = result\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/huffman_bit_writer.go",
    "content": "// Copyright 2009 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage flate\n\nimport (\n\t\"io\"\n)\n\nconst (\n\t// The largest offset code.\n\toffsetCodeCount = 30\n\n\t// The special code used to mark the end of a block.\n\tendBlockMarker = 256\n\n\t// The first length code.\n\tlengthCodesStart = 257\n\n\t// The number of codegen codes.\n\tcodegenCodeCount = 19\n\tbadCode          = 255\n\n\t// bufferFlushSize indicates the buffer size\n\t// after which bytes are flushed to the writer.\n\t// Should preferably be a multiple of 6, since\n\t// we accumulate 6 bytes between writes to the buffer.\n\tbufferFlushSize = 240\n\n\t// bufferSize is the actual output byte buffer size.\n\t// It must have additional headroom for a flush\n\t// which can contain up to 8 bytes.\n\tbufferSize = bufferFlushSize + 8\n)\n\n// The number of extra bits needed by length code X - LENGTH_CODES_START.\nvar lengthExtraBits = []int8{\n\t/* 257 */ 0, 0, 0,\n\t/* 260 */ 0, 0, 0, 0, 0, 1, 1, 1, 1, 2,\n\t/* 270 */ 2, 2, 2, 3, 3, 3, 3, 4, 4, 4,\n\t/* 280 */ 4, 5, 5, 5, 5, 0,\n}\n\n// The length indicated by length code X - LENGTH_CODES_START.\nvar lengthBase = []uint32{\n\t0, 1, 2, 3, 4, 5, 6, 7, 8, 10,\n\t12, 14, 16, 20, 24, 28, 32, 40, 48, 56,\n\t64, 80, 96, 112, 128, 160, 192, 224, 255,\n}\n\n// offset code word extra bits.\nvar offsetExtraBits = []int8{\n\t0, 0, 0, 0, 1, 1, 2, 2, 3, 3,\n\t4, 4, 5, 5, 6, 6, 7, 7, 8, 8,\n\t9, 9, 10, 10, 11, 11, 12, 12, 13, 13,\n\t/* extended window */\n\t14, 14, 15, 15, 16, 16, 17, 17, 18, 18, 19, 19, 20, 20,\n}\n\nvar offsetBase = []uint32{\n\t/* normal deflate */\n\t0x000000, 0x000001, 0x000002, 0x000003, 0x000004,\n\t0x000006, 0x000008, 0x00000c, 0x000010, 0x000018,\n\t0x000020, 0x000030, 0x000040, 0x000060, 0x000080,\n\t0x0000c0, 0x000100, 0x000180, 0x000200, 0x000300,\n\t0x000400, 0x000600, 0x000800, 0x000c00, 0x001000,\n\t0x001800, 
0x002000, 0x003000, 0x004000, 0x006000,\n\n\t/* extended window */\n\t0x008000, 0x00c000, 0x010000, 0x018000, 0x020000,\n\t0x030000, 0x040000, 0x060000, 0x080000, 0x0c0000,\n\t0x100000, 0x180000, 0x200000, 0x300000,\n}\n\n// The odd order in which the codegen code sizes are written.\nvar codegenOrder = []uint32{16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15}\n\ntype huffmanBitWriter struct {\n\t// writer is the underlying writer.\n\t// Do not use it directly; use the write method, which ensures\n\t// that Write errors are sticky.\n\twriter io.Writer\n\n\t// Data waiting to be written is bytes[0:nbytes]\n\t// and then the low nbits of bits.\n\tbits            uint64\n\tnbits           uint\n\tbytes           [bufferSize]byte\n\tcodegenFreq     [codegenCodeCount]int32\n\tnbytes          int\n\tliteralFreq     []int32\n\toffsetFreq      []int32\n\tcodegen         []uint8\n\tliteralEncoding *huffmanEncoder\n\toffsetEncoding  *huffmanEncoder\n\tcodegenEncoding *huffmanEncoder\n\terr             error\n}\n\nfunc newHuffmanBitWriter(w io.Writer) *huffmanBitWriter {\n\treturn &huffmanBitWriter{\n\t\twriter:          w,\n\t\tliteralFreq:     make([]int32, maxNumLit),\n\t\toffsetFreq:      make([]int32, offsetCodeCount),\n\t\tcodegen:         make([]uint8, maxNumLit+offsetCodeCount+1),\n\t\tliteralEncoding: newHuffmanEncoder(maxNumLit),\n\t\tcodegenEncoding: newHuffmanEncoder(codegenCodeCount),\n\t\toffsetEncoding:  newHuffmanEncoder(offsetCodeCount),\n\t}\n}\n\nfunc (w *huffmanBitWriter) reset(writer io.Writer) {\n\tw.writer = writer\n\tw.bits, w.nbits, w.nbytes, w.err = 0, 0, 0, nil\n\tw.bytes = [bufferSize]byte{}\n}\n\nfunc (w *huffmanBitWriter) flush() {\n\tif w.err != nil {\n\t\tw.nbits = 0\n\t\treturn\n\t}\n\tn := w.nbytes\n\tfor w.nbits != 0 {\n\t\tw.bytes[n] = byte(w.bits)\n\t\tw.bits >>= 8\n\t\tif w.nbits > 8 { // Avoid underflow\n\t\t\tw.nbits -= 8\n\t\t} else {\n\t\t\tw.nbits = 0\n\t\t}\n\t\tn++\n\t}\n\tw.bits = 
0\n\tw.write(w.bytes[:n])\n\tw.nbytes = 0\n}\n\nfunc (w *huffmanBitWriter) write(b []byte) {\n\tif w.err != nil {\n\t\treturn\n\t}\n\t_, w.err = w.writer.Write(b)\n}\n\nfunc (w *huffmanBitWriter) writeBits(b int32, nb uint) {\n\tif w.err != nil {\n\t\treturn\n\t}\n\tw.bits |= uint64(b) << w.nbits\n\tw.nbits += nb\n\tif w.nbits >= 48 {\n\t\tbits := w.bits\n\t\tw.bits >>= 48\n\t\tw.nbits -= 48\n\t\tn := w.nbytes\n\t\tbytes := w.bytes[n : n+6]\n\t\tbytes[0] = byte(bits)\n\t\tbytes[1] = byte(bits >> 8)\n\t\tbytes[2] = byte(bits >> 16)\n\t\tbytes[3] = byte(bits >> 24)\n\t\tbytes[4] = byte(bits >> 32)\n\t\tbytes[5] = byte(bits >> 40)\n\t\tn += 6\n\t\tif n >= bufferFlushSize {\n\t\t\tw.write(w.bytes[:n])\n\t\t\tn = 0\n\t\t}\n\t\tw.nbytes = n\n\t}\n}\n\nfunc (w *huffmanBitWriter) writeBytes(bytes []byte) {\n\tif w.err != nil {\n\t\treturn\n\t}\n\tn := w.nbytes\n\tif w.nbits&7 != 0 {\n\t\tw.err = InternalError(\"writeBytes with unfinished bits\")\n\t\treturn\n\t}\n\tfor w.nbits != 0 {\n\t\tw.bytes[n] = byte(w.bits)\n\t\tw.bits >>= 8\n\t\tw.nbits -= 8\n\t\tn++\n\t}\n\tif n != 0 {\n\t\tw.write(w.bytes[:n])\n\t}\n\tw.nbytes = 0\n\tw.write(bytes)\n}\n\n// RFC 1951 3.2.7 specifies a special run-length encoding for specifying\n// the literal and offset lengths arrays (which are concatenated into a single\n// array).  This method generates that run-length encoding.\n//\n// The result is written into the codegen array, and the frequencies\n// of each code is written into the codegenFreq array.\n// Codes 0-15 are single byte codes. Codes 16-18 are followed by additional\n// information. 
Code badCode is an end marker\n//\n//  numLiterals      The number of literals in literalEncoding\n//  numOffsets       The number of offsets in offsetEncoding\n//  litenc, offenc   The literal and offset encoder to use\nfunc (w *huffmanBitWriter) generateCodegen(numLiterals int, numOffsets int, litEnc, offEnc *huffmanEncoder) {\n\tfor i := range w.codegenFreq {\n\t\tw.codegenFreq[i] = 0\n\t}\n\t// Note that we are using codegen both as a temporary variable for holding\n\t// a copy of the frequencies, and as the place where we put the result.\n\t// This is fine because the output is always shorter than the input used\n\t// so far.\n\tcodegen := w.codegen // cache\n\t// Copy the concatenated code sizes to codegen. Put a marker at the end.\n\tcgnl := codegen[:numLiterals]\n\tfor i := range cgnl {\n\t\tcgnl[i] = uint8(litEnc.codes[i].len)\n\t}\n\n\tcgnl = codegen[numLiterals : numLiterals+numOffsets]\n\tfor i := range cgnl {\n\t\tcgnl[i] = uint8(offEnc.codes[i].len)\n\t}\n\tcodegen[numLiterals+numOffsets] = badCode\n\n\tsize := codegen[0]\n\tcount := 1\n\toutIndex := 0\n\tfor inIndex := 1; size != badCode; inIndex++ {\n\t\t// INVARIANT: We have seen \"count\" copies of size that have not yet\n\t\t// had output generated for them.\n\t\tnextSize := codegen[inIndex]\n\t\tif nextSize == size {\n\t\t\tcount++\n\t\t\tcontinue\n\t\t}\n\t\t// We need to generate codegen indicating \"count\" of size.\n\t\tif size != 0 {\n\t\t\tcodegen[outIndex] = size\n\t\t\toutIndex++\n\t\t\tw.codegenFreq[size]++\n\t\t\tcount--\n\t\t\tfor count >= 3 {\n\t\t\t\tn := 6\n\t\t\t\tif n > count {\n\t\t\t\t\tn = count\n\t\t\t\t}\n\t\t\t\tcodegen[outIndex] = 16\n\t\t\t\toutIndex++\n\t\t\t\tcodegen[outIndex] = uint8(n - 3)\n\t\t\t\toutIndex++\n\t\t\t\tw.codegenFreq[16]++\n\t\t\t\tcount -= n\n\t\t\t}\n\t\t} else {\n\t\t\tfor count >= 11 {\n\t\t\t\tn := 138\n\t\t\t\tif n > count {\n\t\t\t\t\tn = count\n\t\t\t\t}\n\t\t\t\tcodegen[outIndex] = 18\n\t\t\t\toutIndex++\n\t\t\t\tcodegen[outIndex] = uint8(n - 
11)\n\t\t\t\toutIndex++\n\t\t\t\tw.codegenFreq[18]++\n\t\t\t\tcount -= n\n\t\t\t}\n\t\t\tif count >= 3 {\n\t\t\t\t// count >= 3 && count <= 10\n\t\t\t\tcodegen[outIndex] = 17\n\t\t\t\toutIndex++\n\t\t\t\tcodegen[outIndex] = uint8(count - 3)\n\t\t\t\toutIndex++\n\t\t\t\tw.codegenFreq[17]++\n\t\t\t\tcount = 0\n\t\t\t}\n\t\t}\n\t\tcount--\n\t\tfor ; count >= 0; count-- {\n\t\t\tcodegen[outIndex] = size\n\t\t\toutIndex++\n\t\t\tw.codegenFreq[size]++\n\t\t}\n\t\t// Set up invariant for next time through the loop.\n\t\tsize = nextSize\n\t\tcount = 1\n\t}\n\t// Marker indicating the end of the codegen.\n\tcodegen[outIndex] = badCode\n}\n\n// dynamicSize returns the size of dynamically encoded data in bits.\nfunc (w *huffmanBitWriter) dynamicSize(litEnc, offEnc *huffmanEncoder, extraBits int) (size, numCodegens int) {\n\tnumCodegens = len(w.codegenFreq)\n\tfor numCodegens > 4 && w.codegenFreq[codegenOrder[numCodegens-1]] == 0 {\n\t\tnumCodegens--\n\t}\n\theader := 3 + 5 + 5 + 4 + (3 * numCodegens) +\n\t\tw.codegenEncoding.bitLength(w.codegenFreq[:]) +\n\t\tint(w.codegenFreq[16])*2 +\n\t\tint(w.codegenFreq[17])*3 +\n\t\tint(w.codegenFreq[18])*7\n\tsize = header +\n\t\tlitEnc.bitLength(w.literalFreq) +\n\t\toffEnc.bitLength(w.offsetFreq) +\n\t\textraBits\n\n\treturn size, numCodegens\n}\n\n// fixedSize returns the size of data encoded with fixed Huffman codes in bits.\nfunc (w *huffmanBitWriter) fixedSize(extraBits int) int {\n\treturn 3 +\n\t\tfixedLiteralEncoding.bitLength(w.literalFreq) +\n\t\tfixedOffsetEncoding.bitLength(w.offsetFreq) +\n\t\textraBits\n}\n\n// storedSize calculates the stored size, including header.\n// The function returns the size in bits and whether the input\n// fits inside a single stored block.\nfunc (w *huffmanBitWriter) storedSize(in []byte) (int, bool) {\n\tif in == nil {\n\t\treturn 0, false\n\t}\n\tif len(in) <= maxStoreBlockSize {\n\t\treturn (len(in) + 5) * 8, true\n\t}\n\treturn 0, false\n}\n\nfunc (w *huffmanBitWriter) writeCode(c hcode) {\n\tif w.err != nil 
{\n\t\treturn\n\t}\n\tw.bits |= uint64(c.code) << w.nbits\n\tw.nbits += uint(c.len)\n\tif w.nbits >= 48 {\n\t\tbits := w.bits\n\t\tw.bits >>= 48\n\t\tw.nbits -= 48\n\t\tn := w.nbytes\n\t\tbytes := w.bytes[n : n+6]\n\t\tbytes[0] = byte(bits)\n\t\tbytes[1] = byte(bits >> 8)\n\t\tbytes[2] = byte(bits >> 16)\n\t\tbytes[3] = byte(bits >> 24)\n\t\tbytes[4] = byte(bits >> 32)\n\t\tbytes[5] = byte(bits >> 40)\n\t\tn += 6\n\t\tif n >= bufferFlushSize {\n\t\t\tw.write(w.bytes[:n])\n\t\t\tn = 0\n\t\t}\n\t\tw.nbytes = n\n\t}\n}\n\n// Write the header of a dynamic Huffman block to the output stream.\n//\n//  numLiterals  The number of literals specified in codegen\n//  numOffsets   The number of offsets specified in codegen\n//  numCodegens  The number of codegens used in codegen\nfunc (w *huffmanBitWriter) writeDynamicHeader(numLiterals int, numOffsets int, numCodegens int, isEof bool) {\n\tif w.err != nil {\n\t\treturn\n\t}\n\tvar firstBits int32 = 4\n\tif isEof {\n\t\tfirstBits = 5\n\t}\n\tw.writeBits(firstBits, 3)\n\tw.writeBits(int32(numLiterals-257), 5)\n\tw.writeBits(int32(numOffsets-1), 5)\n\tw.writeBits(int32(numCodegens-4), 4)\n\n\tfor i := 0; i < numCodegens; i++ {\n\t\tvalue := uint(w.codegenEncoding.codes[codegenOrder[i]].len)\n\t\tw.writeBits(int32(value), 3)\n\t}\n\n\ti := 0\n\tfor {\n\t\tvar codeWord int = int(w.codegen[i])\n\t\ti++\n\t\tif codeWord == badCode {\n\t\t\tbreak\n\t\t}\n\t\tw.writeCode(w.codegenEncoding.codes[uint32(codeWord)])\n\n\t\tswitch codeWord {\n\t\tcase 16:\n\t\t\tw.writeBits(int32(w.codegen[i]), 2)\n\t\t\ti++\n\t\t\tbreak\n\t\tcase 17:\n\t\t\tw.writeBits(int32(w.codegen[i]), 3)\n\t\t\ti++\n\t\t\tbreak\n\t\tcase 18:\n\t\t\tw.writeBits(int32(w.codegen[i]), 7)\n\t\t\ti++\n\t\t\tbreak\n\t\t}\n\t}\n}\n\nfunc (w *huffmanBitWriter) writeStoredHeader(length int, isEof bool) {\n\tif w.err != nil {\n\t\treturn\n\t}\n\tvar flag int32\n\tif isEof {\n\t\tflag = 1\n\t}\n\tw.writeBits(flag, 3)\n\tw.flush()\n\tw.writeBits(int32(length), 
16)\n\tw.writeBits(int32(^uint16(length)), 16)\n}\n\nfunc (w *huffmanBitWriter) writeFixedHeader(isEof bool) {\n\tif w.err != nil {\n\t\treturn\n\t}\n\t// Indicate that we are a fixed Huffman block\n\tvar value int32 = 2\n\tif isEof {\n\t\tvalue = 3\n\t}\n\tw.writeBits(value, 3)\n}\n\n// writeBlock will write a block of tokens with the smallest encoding.\n// The original input can be supplied, and if the huffman encoded data\n// is larger than the original bytes, the data will be written as a\n// stored block.\n// If the input is nil, the tokens will always be Huffman encoded.\nfunc (w *huffmanBitWriter) writeBlock(tokens []token, eof bool, input []byte) {\n\tif w.err != nil {\n\t\treturn\n\t}\n\n\ttokens = append(tokens, endBlockMarker)\n\tnumLiterals, numOffsets := w.indexTokens(tokens)\n\n\tvar extraBits int\n\tstoredSize, storable := w.storedSize(input)\n\tif storable {\n\t\t// We only bother calculating the costs of the extra bits required by\n\t\t// the length of offset fields (which will be the same for both fixed\n\t\t// and dynamic encoding), if we need to compare those two encodings\n\t\t// against stored encoding.\n\t\tfor lengthCode := lengthCodesStart + 8; lengthCode < numLiterals; lengthCode++ {\n\t\t\t// First eight length codes have extra size = 0.\n\t\t\textraBits += int(w.literalFreq[lengthCode]) * int(lengthExtraBits[lengthCode-lengthCodesStart])\n\t\t}\n\t\tfor offsetCode := 4; offsetCode < numOffsets; offsetCode++ {\n\t\t\t// First four offset codes have extra size = 0.\n\t\t\textraBits += int(w.offsetFreq[offsetCode]) * int(offsetExtraBits[offsetCode])\n\t\t}\n\t}\n\n\t// Figure out smallest code.\n\t// Fixed Huffman baseline.\n\tvar literalEncoding = fixedLiteralEncoding\n\tvar offsetEncoding = fixedOffsetEncoding\n\tvar size = w.fixedSize(extraBits)\n\n\t// Dynamic Huffman?\n\tvar numCodegens int\n\n\t// Generate codegen and codegenFrequencies, which indicates how to encode\n\t// the literalEncoding and the 
offsetEncoding.\n\tw.generateCodegen(numLiterals, numOffsets, w.literalEncoding, w.offsetEncoding)\n\tw.codegenEncoding.generate(w.codegenFreq[:], 7)\n\tdynamicSize, numCodegens := w.dynamicSize(w.literalEncoding, w.offsetEncoding, extraBits)\n\n\tif dynamicSize < size {\n\t\tsize = dynamicSize\n\t\tliteralEncoding = w.literalEncoding\n\t\toffsetEncoding = w.offsetEncoding\n\t}\n\n\t// Stored bytes?\n\tif storable && storedSize < size {\n\t\tw.writeStoredHeader(len(input), eof)\n\t\tw.writeBytes(input)\n\t\treturn\n\t}\n\n\t// Huffman.\n\tif literalEncoding == fixedLiteralEncoding {\n\t\tw.writeFixedHeader(eof)\n\t} else {\n\t\tw.writeDynamicHeader(numLiterals, numOffsets, numCodegens, eof)\n\t}\n\n\t// Write the tokens.\n\tw.writeTokens(tokens, literalEncoding.codes, offsetEncoding.codes)\n}\n\n// writeBlockDynamic encodes a block using a dynamic Huffman table.\n// This should be used if the symbols used have a disproportionate\n// histogram distribution.\n// If input is supplied and the compression savings are below 1/16th of the\n// input size the block is stored.\nfunc (w *huffmanBitWriter) writeBlockDynamic(tokens []token, eof bool, input []byte) {\n\tif w.err != nil {\n\t\treturn\n\t}\n\n\ttokens = append(tokens, endBlockMarker)\n\tnumLiterals, numOffsets := w.indexTokens(tokens)\n\n\t// Generate codegen and codegenFrequencies, which indicates how to encode\n\t// the literalEncoding and the offsetEncoding.\n\tw.generateCodegen(numLiterals, numOffsets, w.literalEncoding, w.offsetEncoding)\n\tw.codegenEncoding.generate(w.codegenFreq[:], 7)\n\tsize, numCodegens := w.dynamicSize(w.literalEncoding, w.offsetEncoding, 0)\n\n\t// Store bytes, if we don't get a reasonable improvement.\n\tif ssize, storable := w.storedSize(input); storable && ssize < (size+size>>4) {\n\t\tw.writeStoredHeader(len(input), eof)\n\t\tw.writeBytes(input)\n\t\treturn\n\t}\n\n\t// Write Huffman table.\n\tw.writeDynamicHeader(numLiterals, numOffsets, numCodegens, eof)\n\n\t// Write the 
tokens.\n\tw.writeTokens(tokens, w.literalEncoding.codes, w.offsetEncoding.codes)\n}\n\n// indexTokens indexes a slice of tokens, and updates\n// literalFreq and offsetFreq, and generates literalEncoding\n// and offsetEncoding.\n// The number of literal and offset tokens is returned.\nfunc (w *huffmanBitWriter) indexTokens(tokens []token) (numLiterals, numOffsets int) {\n\tfor i := range w.literalFreq {\n\t\tw.literalFreq[i] = 0\n\t}\n\tfor i := range w.offsetFreq {\n\t\tw.offsetFreq[i] = 0\n\t}\n\n\tfor _, t := range tokens {\n\t\tif t < matchType {\n\t\t\tw.literalFreq[t.literal()]++\n\t\t\tcontinue\n\t\t}\n\t\tlength := t.length()\n\t\toffset := t.offset()\n\t\tw.literalFreq[lengthCodesStart+lengthCode(length)]++\n\t\tw.offsetFreq[offsetCode(offset)]++\n\t}\n\n\t// get the number of literals\n\tnumLiterals = len(w.literalFreq)\n\tfor w.literalFreq[numLiterals-1] == 0 {\n\t\tnumLiterals--\n\t}\n\t// get the number of offsets\n\tnumOffsets = len(w.offsetFreq)\n\tfor numOffsets > 0 && w.offsetFreq[numOffsets-1] == 0 {\n\t\tnumOffsets--\n\t}\n\tif numOffsets == 0 {\n\t\t// We haven't found a single match. 
If we want to go with the dynamic encoding,\n\t\t// we should count at least one offset to be sure that the offset huffman tree could be encoded.\n\t\tw.offsetFreq[0] = 1\n\t\tnumOffsets = 1\n\t}\n\tw.literalEncoding.generate(w.literalFreq, 15)\n\tw.offsetEncoding.generate(w.offsetFreq, 15)\n\treturn\n}\n\n// writeTokens writes a slice of tokens to the output.\n// codes for literal and offset encoding must be supplied.\nfunc (w *huffmanBitWriter) writeTokens(tokens []token, leCodes, oeCodes []hcode) {\n\tif w.err != nil {\n\t\treturn\n\t}\n\tfor _, t := range tokens {\n\t\tif t < matchType {\n\t\t\tw.writeCode(leCodes[t.literal()])\n\t\t\tcontinue\n\t\t}\n\t\t// Write the length\n\t\tlength := t.length()\n\t\tlengthCode := lengthCode(length)\n\t\tw.writeCode(leCodes[lengthCode+lengthCodesStart])\n\t\textraLengthBits := uint(lengthExtraBits[lengthCode])\n\t\tif extraLengthBits > 0 {\n\t\t\textraLength := int32(length - lengthBase[lengthCode])\n\t\t\tw.writeBits(extraLength, extraLengthBits)\n\t\t}\n\t\t// Write the offset\n\t\toffset := t.offset()\n\t\toffsetCode := offsetCode(offset)\n\t\tw.writeCode(oeCodes[offsetCode])\n\t\textraOffsetBits := uint(offsetExtraBits[offsetCode])\n\t\tif extraOffsetBits > 0 {\n\t\t\textraOffset := int32(offset - offsetBase[offsetCode])\n\t\t\tw.writeBits(extraOffset, extraOffsetBits)\n\t\t}\n\t}\n}\n\n// huffOffset is a static offset encoder used for huffman only encoding.\n// It can be reused since we will not be encoding offset values.\nvar huffOffset *huffmanEncoder\n\nfunc init() {\n\tw := newHuffmanBitWriter(nil)\n\tw.offsetFreq[0] = 1\n\thuffOffset = newHuffmanEncoder(offsetCodeCount)\n\thuffOffset.generate(w.offsetFreq, 15)\n}\n\n// writeBlockHuff encodes a block of bytes as either\n// Huffman encoded literals or uncompressed bytes if the\n// results only gains very little from compression.\nfunc (w *huffmanBitWriter) writeBlockHuff(eof bool, input []byte) {\n\tif w.err != nil {\n\t\treturn\n\t}\n\n\t// Clear histogram\n\tfor 
i := range w.literalFreq {\n\t\tw.literalFreq[i] = 0\n\t}\n\n\t// Add everything as literals\n\thistogram(input, w.literalFreq)\n\n\tw.literalFreq[endBlockMarker] = 1\n\n\tconst numLiterals = endBlockMarker + 1\n\tconst numOffsets = 1\n\n\tw.literalEncoding.generate(w.literalFreq, 15)\n\n\t// Figure out smallest code.\n\t// Always use dynamic Huffman or Store\n\tvar numCodegens int\n\n\t// Generate codegen and codegenFrequencies, which indicates how to encode\n\t// the literalEncoding and the offsetEncoding.\n\tw.generateCodegen(numLiterals, numOffsets, w.literalEncoding, huffOffset)\n\tw.codegenEncoding.generate(w.codegenFreq[:], 7)\n\tsize, numCodegens := w.dynamicSize(w.literalEncoding, huffOffset, 0)\n\n\t// Store bytes, if we don't get a reasonable improvement.\n\tif ssize, storable := w.storedSize(input); storable && ssize < (size+size>>4) {\n\t\tw.writeStoredHeader(len(input), eof)\n\t\tw.writeBytes(input)\n\t\treturn\n\t}\n\n\t// Huffman.\n\tw.writeDynamicHeader(numLiterals, numOffsets, numCodegens, eof)\n\tencoding := w.literalEncoding.codes[:257]\n\tn := w.nbytes\n\tfor _, t := range input {\n\t\t// Bitwriting inlined, ~30% speedup\n\t\tc := encoding[t]\n\t\tw.bits |= uint64(c.code) << w.nbits\n\t\tw.nbits += uint(c.len)\n\t\tif w.nbits < 48 {\n\t\t\tcontinue\n\t\t}\n\t\t// Store 6 bytes\n\t\tbits := w.bits\n\t\tw.bits >>= 48\n\t\tw.nbits -= 48\n\t\tbytes := w.bytes[n : n+6]\n\t\tbytes[0] = byte(bits)\n\t\tbytes[1] = byte(bits >> 8)\n\t\tbytes[2] = byte(bits >> 16)\n\t\tbytes[3] = byte(bits >> 24)\n\t\tbytes[4] = byte(bits >> 32)\n\t\tbytes[5] = byte(bits >> 40)\n\t\tn += 6\n\t\tif n < bufferFlushSize {\n\t\t\tcontinue\n\t\t}\n\t\tw.write(w.bytes[:n])\n\t\tif w.err != nil {\n\t\t\treturn // Return early in the event of write failures\n\t\t}\n\t\tn = 0\n\t}\n\tw.nbytes = n\n\tw.writeCode(encoding[endBlockMarker])\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/huffman_code.go",
    "content": "// Copyright 2009 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage flate\n\nimport (\n\t\"math\"\n\t\"sort\"\n)\n\n// hcode is a huffman code with a bit code and bit length.\ntype hcode struct {\n\tcode, len uint16\n}\n\ntype huffmanEncoder struct {\n\tcodes     []hcode\n\tfreqcache []literalNode\n\tbitCount  [17]int32\n\tlns       byLiteral // stored to avoid repeated allocation in generate\n\tlfs       byFreq    // stored to avoid repeated allocation in generate\n}\n\ntype literalNode struct {\n\tliteral uint16\n\tfreq    int32\n}\n\n// A levelInfo describes the state of the constructed tree for a given depth.\ntype levelInfo struct {\n\t// Our level.  for better printing\n\tlevel int32\n\n\t// The frequency of the last node at this level\n\tlastFreq int32\n\n\t// The frequency of the next character to add to this level\n\tnextCharFreq int32\n\n\t// The frequency of the next pair (from level below) to add to this level.\n\t// Only valid if the \"needed\" value of the next lower level is 0.\n\tnextPairFreq int32\n\n\t// The number of chains remaining to generate for this level before moving\n\t// up to the next level\n\tneeded int32\n}\n\n// set sets the code and length of an hcode.\nfunc (h *hcode) set(code uint16, length uint16) {\n\th.len = length\n\th.code = code\n}\n\nfunc maxNode() literalNode { return literalNode{math.MaxUint16, math.MaxInt32} }\n\nfunc newHuffmanEncoder(size int) *huffmanEncoder {\n\treturn &huffmanEncoder{codes: make([]hcode, size)}\n}\n\n// Generates a HuffmanCode corresponding to the fixed literal table\nfunc generateFixedLiteralEncoding() *huffmanEncoder {\n\th := newHuffmanEncoder(maxNumLit)\n\tcodes := h.codes\n\tvar ch uint16\n\tfor ch = 0; ch < maxNumLit; ch++ {\n\t\tvar bits uint16\n\t\tvar size uint16\n\t\tswitch {\n\t\tcase ch < 144:\n\t\t\t// size 8, 000110000  .. 
10111111\n\t\t\tbits = ch + 48\n\t\t\tsize = 8\n\t\t\tbreak\n\t\tcase ch < 256:\n\t\t\t// size 9, 110010000 .. 111111111\n\t\t\tbits = ch + 400 - 144\n\t\t\tsize = 9\n\t\t\tbreak\n\t\tcase ch < 280:\n\t\t\t// size 7, 0000000 .. 0010111\n\t\t\tbits = ch - 256\n\t\t\tsize = 7\n\t\t\tbreak\n\t\tdefault:\n\t\t\t// size 8, 11000000 .. 11000111\n\t\t\tbits = ch + 192 - 280\n\t\t\tsize = 8\n\t\t}\n\t\tcodes[ch] = hcode{code: reverseBits(bits, byte(size)), len: size}\n\t}\n\treturn h\n}\n\nfunc generateFixedOffsetEncoding() *huffmanEncoder {\n\th := newHuffmanEncoder(30)\n\tcodes := h.codes\n\tfor ch := range codes {\n\t\tcodes[ch] = hcode{code: reverseBits(uint16(ch), 5), len: 5}\n\t}\n\treturn h\n}\n\nvar fixedLiteralEncoding *huffmanEncoder = generateFixedLiteralEncoding()\nvar fixedOffsetEncoding *huffmanEncoder = generateFixedOffsetEncoding()\n\nfunc (h *huffmanEncoder) bitLength(freq []int32) int {\n\tvar total int\n\tfor i, f := range freq {\n\t\tif f != 0 {\n\t\t\ttotal += int(f) * int(h.codes[i].len)\n\t\t}\n\t}\n\treturn total\n}\n\nconst maxBitsLimit = 16\n\n// Return the number of literals assigned to each bit size in the Huffman encoding\n//\n// This method is only called when list.length >= 3\n// The cases of 0, 1, and 2 literals are handled by special case code.\n//\n// list  An array of the literals with non-zero frequencies\n//             and their associated frequencies. 
The array is in order of increasing\n//             frequency, and has as its last element a special element with frequency\n//             MaxInt32\n// maxBits     The maximum number of bits that should be used to encode any literal.\n//             Must be less than 16.\n// return      An integer array in which array[i] indicates the number of literals\n//             that should be encoded in i bits.\nfunc (h *huffmanEncoder) bitCounts(list []literalNode, maxBits int32) []int32 {\n\tif maxBits >= maxBitsLimit {\n\t\tpanic(\"flate: maxBits too large\")\n\t}\n\tn := int32(len(list))\n\tlist = list[0 : n+1]\n\tlist[n] = maxNode()\n\n\t// The tree can't have greater depth than n - 1, no matter what. This\n\t// saves a little bit of work in some small cases\n\tif maxBits > n-1 {\n\t\tmaxBits = n - 1\n\t}\n\n\t// Create information about each of the levels.\n\t// A bogus \"Level 0\" whose sole purpose is so that\n\t// level1.prev.needed==0.  This makes level1.nextPairFreq\n\t// be a legitimate value that never gets chosen.\n\tvar levels [maxBitsLimit]levelInfo\n\t// leafCounts[i] counts the number of literals at the left\n\t// of ancestors of the rightmost node at level i.\n\t// leafCounts[i][j] is the number of literals at the left\n\t// of the level j ancestor.\n\tvar leafCounts [maxBitsLimit][maxBitsLimit]int32\n\n\tfor level := int32(1); level <= maxBits; level++ {\n\t\t// For every level, the first two items are the first two characters.\n\t\t// We initialize the levels as if we had already figured this out.\n\t\tlevels[level] = levelInfo{\n\t\t\tlevel:        level,\n\t\t\tlastFreq:     list[1].freq,\n\t\t\tnextCharFreq: list[2].freq,\n\t\t\tnextPairFreq: list[0].freq + list[1].freq,\n\t\t}\n\t\tleafCounts[level][level] = 2\n\t\tif level == 1 {\n\t\t\tlevels[level].nextPairFreq = math.MaxInt32\n\t\t}\n\t}\n\n\t// We need a total of 2*n - 2 items at top level and have already generated 2.\n\tlevels[maxBits].needed = 2*n - 4\n\n\tlevel := maxBits\n\tfor {\n\t\tl 
:= &levels[level]\n\t\tif l.nextPairFreq == math.MaxInt32 && l.nextCharFreq == math.MaxInt32 {\n\t\t\t// We've run out of both leaves and pairs.\n\t\t\t// End all calculations for this level.\n\t\t\t// To make sure we never come back to this level or any lower level,\n\t\t\t// set nextPairFreq impossibly large.\n\t\t\tl.needed = 0\n\t\t\tlevels[level+1].nextPairFreq = math.MaxInt32\n\t\t\tlevel++\n\t\t\tcontinue\n\t\t}\n\n\t\tprevFreq := l.lastFreq\n\t\tif l.nextCharFreq < l.nextPairFreq {\n\t\t\t// The next item on this row is a leaf node.\n\t\t\tn := leafCounts[level][level] + 1\n\t\t\tl.lastFreq = l.nextCharFreq\n\t\t\t// Lower leafCounts are the same as the previous node's.\n\t\t\tleafCounts[level][level] = n\n\t\t\tl.nextCharFreq = list[n].freq\n\t\t} else {\n\t\t\t// The next item on this row is a pair from the previous row.\n\t\t\t// nextPairFreq isn't valid until we generate two\n\t\t\t// more values in the level below\n\t\t\tl.lastFreq = l.nextPairFreq\n\t\t\t// Take leaf counts from the lower level, except counts[level] remains the same.\n\t\t\tcopy(leafCounts[level][:level], leafCounts[level-1][:level])\n\t\t\tlevels[l.level-1].needed = 2\n\t\t}\n\n\t\tif l.needed--; l.needed == 0 {\n\t\t\t// We've done everything we need to do for this level.\n\t\t\t// Continue calculating one level up. 
Fill in nextPairFreq\n\t\t\t// of that level with the sum of the two nodes we've just calculated on\n\t\t\t// this level.\n\t\t\tif l.level == maxBits {\n\t\t\t\t// All done!\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tlevels[l.level+1].nextPairFreq = prevFreq + l.lastFreq\n\t\t\tlevel++\n\t\t} else {\n\t\t\t// If we stole from below, move down temporarily to replenish it.\n\t\t\tfor levels[level-1].needed > 0 {\n\t\t\t\tlevel--\n\t\t\t}\n\t\t}\n\t}\n\n\t// Somethings is wrong if at the end, the top level is null or hasn't used\n\t// all of the leaves.\n\tif leafCounts[maxBits][maxBits] != n {\n\t\tpanic(\"leafCounts[maxBits][maxBits] != n\")\n\t}\n\n\tbitCount := h.bitCount[:maxBits+1]\n\tbits := 1\n\tcounts := &leafCounts[maxBits]\n\tfor level := maxBits; level > 0; level-- {\n\t\t// chain.leafCount gives the number of literals requiring at least \"bits\"\n\t\t// bits to encode.\n\t\tbitCount[bits] = counts[level] - counts[level-1]\n\t\tbits++\n\t}\n\treturn bitCount\n}\n\n// Look at the leaves and assign them a bit count and an encoding as specified\n// in RFC 1951 3.2.2\nfunc (h *huffmanEncoder) assignEncodingAndSize(bitCount []int32, list []literalNode) {\n\tcode := uint16(0)\n\tfor n, bits := range bitCount {\n\t\tcode <<= 1\n\t\tif n == 0 || bits == 0 {\n\t\t\tcontinue\n\t\t}\n\t\t// The literals list[len(list)-bits] .. list[len(list)-bits]\n\t\t// are encoded using \"bits\" bits, and get the values\n\t\t// code, code + 1, ....  
The code values are\n\t\t// assigned in literal order (not frequency order).\n\t\tchunk := list[len(list)-int(bits):]\n\n\t\th.lns.sort(chunk)\n\t\tfor _, node := range chunk {\n\t\t\th.codes[node.literal] = hcode{code: reverseBits(code, uint8(n)), len: uint16(n)}\n\t\t\tcode++\n\t\t}\n\t\tlist = list[0 : len(list)-int(bits)]\n\t}\n}\n\n// Update this Huffman Code object to be the minimum code for the specified frequency count.\n//\n// freq  An array of frequencies, in which frequency[i] gives the frequency of literal i.\n// maxBits  The maximum number of bits to use for any literal.\nfunc (h *huffmanEncoder) generate(freq []int32, maxBits int32) {\n\tif h.freqcache == nil {\n\t\t// Allocate a reusable buffer with the longest possible frequency table.\n\t\t// Possible lengths are codegenCodeCount, offsetCodeCount and maxNumLit.\n\t\t// The largest of these is maxNumLit, so we allocate for that case.\n\t\th.freqcache = make([]literalNode, maxNumLit+1)\n\t}\n\tlist := h.freqcache[:len(freq)+1]\n\t// Number of non-zero literals\n\tcount := 0\n\t// Set list to be the set of all non-zero literals and their frequencies\n\tfor i, f := range freq {\n\t\tif f != 0 {\n\t\t\tlist[count] = literalNode{uint16(i), f}\n\t\t\tcount++\n\t\t} else {\n\t\t\tlist[count] = literalNode{}\n\t\t\th.codes[i].len = 0\n\t\t}\n\t}\n\tlist[len(freq)] = literalNode{}\n\n\tlist = list[:count]\n\tif count <= 2 {\n\t\t// Handle the small cases here, because they are awkward for the general case code. 
With\n\t\t// two or fewer literals, everything has bit length 1.\n\t\tfor i, node := range list {\n\t\t\t// \"list\" is in order of increasing literal value.\n\t\t\th.codes[node.literal].set(uint16(i), 1)\n\t\t}\n\t\treturn\n\t}\n\th.lfs.sort(list)\n\n\t// Get the number of literals for each bit count\n\tbitCount := h.bitCounts(list, maxBits)\n\t// And do the assignment\n\th.assignEncodingAndSize(bitCount, list)\n}\n\ntype byLiteral []literalNode\n\nfunc (s *byLiteral) sort(a []literalNode) {\n\t*s = byLiteral(a)\n\tsort.Sort(s)\n}\n\nfunc (s byLiteral) Len() int { return len(s) }\n\nfunc (s byLiteral) Less(i, j int) bool {\n\treturn s[i].literal < s[j].literal\n}\n\nfunc (s byLiteral) Swap(i, j int) { s[i], s[j] = s[j], s[i] }\n\ntype byFreq []literalNode\n\nfunc (s *byFreq) sort(a []literalNode) {\n\t*s = byFreq(a)\n\tsort.Sort(s)\n}\n\nfunc (s byFreq) Len() int { return len(s) }\n\nfunc (s byFreq) Less(i, j int) bool {\n\tif s[i].freq == s[j].freq {\n\t\treturn s[i].literal < s[j].literal\n\t}\n\treturn s[i].freq < s[j].freq\n}\n\nfunc (s byFreq) Swap(i, j int) { s[i], s[j] = s[j], s[i] }\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/inflate.go",
    "content": "// Copyright 2009 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// Package flate implements the DEFLATE compressed data format, described in\n// RFC 1951.  The gzip and zlib packages implement access to DEFLATE-based file\n// formats.\npackage flate\n\nimport (\n\t\"bufio\"\n\t\"io\"\n\t\"strconv\"\n\t\"sync\"\n)\n\nconst (\n\tmaxCodeLen = 16 // max length of Huffman code\n\t// The next three numbers come from the RFC section 3.2.7, with the\n\t// additional proviso in section 3.2.5 which implies that distance codes\n\t// 30 and 31 should never occur in compressed data.\n\tmaxNumLit  = 286\n\tmaxNumDist = 30\n\tnumCodes   = 19 // number of codes in Huffman meta-code\n)\n\n// Initialize the fixedHuffmanDecoder only once upon first use.\nvar fixedOnce sync.Once\nvar fixedHuffmanDecoder huffmanDecoder\n\n// A CorruptInputError reports the presence of corrupt input at a given offset.\ntype CorruptInputError int64\n\nfunc (e CorruptInputError) Error() string {\n\treturn \"flate: corrupt input before offset \" + strconv.FormatInt(int64(e), 10)\n}\n\n// An InternalError reports an error in the flate code itself.\ntype InternalError string\n\nfunc (e InternalError) Error() string { return \"flate: internal error: \" + string(e) }\n\n// A ReadError reports an error encountered while reading input.\n//\n// Deprecated: No longer returned.\ntype ReadError struct {\n\tOffset int64 // byte offset where error occurred\n\tErr    error // error returned by underlying Read\n}\n\nfunc (e *ReadError) Error() string {\n\treturn \"flate: read error at offset \" + strconv.FormatInt(e.Offset, 10) + \": \" + e.Err.Error()\n}\n\n// A WriteError reports an error encountered while writing output.\n//\n// Deprecated: No longer returned.\ntype WriteError struct {\n\tOffset int64 // byte offset where error occurred\n\tErr    error // error returned by underlying Write\n}\n\nfunc (e 
*WriteError) Error() string {\n\treturn \"flate: write error at offset \" + strconv.FormatInt(e.Offset, 10) + \": \" + e.Err.Error()\n}\n\n// Resetter resets a ReadCloser returned by NewReader or NewReaderDict to\n// to switch to a new underlying Reader. This permits reusing a ReadCloser\n// instead of allocating a new one.\ntype Resetter interface {\n\t// Reset discards any buffered data and resets the Resetter as if it was\n\t// newly initialized with the given reader.\n\tReset(r io.Reader, dict []byte) error\n}\n\n// The data structure for decoding Huffman tables is based on that of\n// zlib. There is a lookup table of a fixed bit width (huffmanChunkBits),\n// For codes smaller than the table width, there are multiple entries\n// (each combination of trailing bits has the same value). For codes\n// larger than the table width, the table contains a link to an overflow\n// table. The width of each entry in the link table is the maximum code\n// size minus the chunk width.\n//\n// Note that you can do a lookup in the table even without all bits\n// filled. 
Since the extra bits are zero, and the DEFLATE Huffman codes\n// have the property that shorter codes come before longer ones, the\n// bit length estimate in the result is a lower bound on the actual\n// number of bits.\n//\n// See the following:\n//\thttp://www.gzip.org/algorithm.txt\n\n// chunk & 15 is number of bits\n// chunk >> 4 is value, including table link\n\nconst (\n\thuffmanChunkBits  = 9\n\thuffmanNumChunks  = 1 << huffmanChunkBits\n\thuffmanCountMask  = 15\n\thuffmanValueShift = 4\n)\n\ntype huffmanDecoder struct {\n\tmin      int                      // the minimum code length\n\tchunks   [huffmanNumChunks]uint32 // chunks as described above\n\tlinks    [][]uint32               // overflow links\n\tlinkMask uint32                   // mask the width of the link table\n}\n\n// Initialize Huffman decoding tables from array of code lengths.\n// Following this function, h is guaranteed to be initialized into a complete\n// tree (i.e., neither over-subscribed nor under-subscribed). The exception is a\n// degenerate case where the tree has only a single symbol with length 1. Empty\n// trees are permitted.\nfunc (h *huffmanDecoder) init(bits []int) bool {\n\t// Sanity enables additional runtime tests during Huffman\n\t// table construction. It's intended to be used during\n\t// development to supplement the currently ad-hoc unit tests.\n\tconst sanity = false\n\n\tif h.min != 0 {\n\t\t*h = huffmanDecoder{}\n\t}\n\n\t// Count number of codes of each length,\n\t// compute min and max length.\n\tvar count [maxCodeLen]int\n\tvar min, max int\n\tfor _, n := range bits {\n\t\tif n == 0 {\n\t\t\tcontinue\n\t\t}\n\t\tif min == 0 || n < min {\n\t\t\tmin = n\n\t\t}\n\t\tif n > max {\n\t\t\tmax = n\n\t\t}\n\t\tcount[n]++\n\t}\n\n\t// Empty tree. The decompressor.huffSym function will fail later if the tree\n\t// is used. Technically, an empty tree is only valid for the HDIST tree and\n\t// not the HCLEN and HLIT tree. 
However, a stream with an empty HCLEN tree\n\t// is guaranteed to fail since it will attempt to use the tree to decode the\n\t// codes for the HLIT and HDIST trees. Similarly, an empty HLIT tree is\n\t// guaranteed to fail later since the compressed data section must be\n\t// composed of at least one symbol (the end-of-block marker).\n\tif max == 0 {\n\t\treturn true\n\t}\n\n\tcode := 0\n\tvar nextcode [maxCodeLen]int\n\tfor i := min; i <= max; i++ {\n\t\tcode <<= 1\n\t\tnextcode[i] = code\n\t\tcode += count[i]\n\t}\n\n\t// Check that the coding is complete (i.e., that we've\n\t// assigned all 2-to-the-max possible bit sequences).\n\t// Exception: To be compatible with zlib, we also need to\n\t// accept degenerate single-code codings. See also\n\t// TestDegenerateHuffmanCoding.\n\tif code != 1<<uint(max) && !(code == 1 && max == 1) {\n\t\treturn false\n\t}\n\n\th.min = min\n\tif max > huffmanChunkBits {\n\t\tnumLinks := 1 << (uint(max) - huffmanChunkBits)\n\t\th.linkMask = uint32(numLinks - 1)\n\n\t\t// create link tables\n\t\tlink := nextcode[huffmanChunkBits+1] >> 1\n\t\th.links = make([][]uint32, huffmanNumChunks-link)\n\t\tfor j := uint(link); j < huffmanNumChunks; j++ {\n\t\t\treverse := int(reverseByte[j>>8]) | int(reverseByte[j&0xff])<<8\n\t\t\treverse >>= uint(16 - huffmanChunkBits)\n\t\t\toff := j - uint(link)\n\t\t\tif sanity && h.chunks[reverse] != 0 {\n\t\t\t\tpanic(\"impossible: overwriting existing chunk\")\n\t\t\t}\n\t\t\th.chunks[reverse] = uint32(off<<huffmanValueShift | (huffmanChunkBits + 1))\n\t\t\th.links[off] = make([]uint32, numLinks)\n\t\t}\n\t}\n\n\tfor i, n := range bits {\n\t\tif n == 0 {\n\t\t\tcontinue\n\t\t}\n\t\tcode := nextcode[n]\n\t\tnextcode[n]++\n\t\tchunk := uint32(i<<huffmanValueShift | n)\n\t\treverse := int(reverseByte[code>>8]) | int(reverseByte[code&0xff])<<8\n\t\treverse >>= uint(16 - n)\n\t\tif n <= huffmanChunkBits {\n\t\t\tfor off := reverse; off < len(h.chunks); off += 1 << uint(n) {\n\t\t\t\t// We should never need to 
overwrite\n\t\t\t\t// an existing chunk. Also, 0 is\n\t\t\t\t// never a valid chunk, because the\n\t\t\t\t// lower 4 \"count\" bits should be\n\t\t\t\t// between 1 and 15.\n\t\t\t\tif sanity && h.chunks[off] != 0 {\n\t\t\t\t\tpanic(\"impossible: overwriting existing chunk\")\n\t\t\t\t}\n\t\t\t\th.chunks[off] = chunk\n\t\t\t}\n\t\t} else {\n\t\t\tj := reverse & (huffmanNumChunks - 1)\n\t\t\tif sanity && h.chunks[j]&huffmanCountMask != huffmanChunkBits+1 {\n\t\t\t\t// Longer codes should have been\n\t\t\t\t// associated with a link table above.\n\t\t\t\tpanic(\"impossible: not an indirect chunk\")\n\t\t\t}\n\t\t\tvalue := h.chunks[j] >> huffmanValueShift\n\t\t\tlinktab := h.links[value]\n\t\t\treverse >>= huffmanChunkBits\n\t\t\tfor off := reverse; off < len(linktab); off += 1 << uint(n-huffmanChunkBits) {\n\t\t\t\tif sanity && linktab[off] != 0 {\n\t\t\t\t\tpanic(\"impossible: overwriting existing chunk\")\n\t\t\t\t}\n\t\t\t\tlinktab[off] = chunk\n\t\t\t}\n\t\t}\n\t}\n\n\tif sanity {\n\t\t// Above we've sanity checked that we never overwrote\n\t\t// an existing entry. 
Here we additionally check that\n\t\t// we filled the tables completely.\n\t\tfor i, chunk := range h.chunks {\n\t\t\tif chunk == 0 {\n\t\t\t\t// As an exception, in the degenerate\n\t\t\t\t// single-code case, we allow odd\n\t\t\t\t// chunks to be missing.\n\t\t\t\tif code == 1 && i%2 == 1 {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tpanic(\"impossible: missing chunk\")\n\t\t\t}\n\t\t}\n\t\tfor _, linktab := range h.links {\n\t\t\tfor _, chunk := range linktab {\n\t\t\t\tif chunk == 0 {\n\t\t\t\t\tpanic(\"impossible: missing chunk\")\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn true\n}\n\n// The actual read interface needed by NewReader.\n// If the passed in io.Reader does not also have ReadByte,\n// the NewReader will introduce its own buffering.\ntype Reader interface {\n\tio.Reader\n\tio.ByteReader\n}\n\n// Decompress state.\ntype decompressor struct {\n\t// Input source.\n\tr       Reader\n\troffset int64\n\n\t// Input bits, in top of b.\n\tb  uint32\n\tnb uint\n\n\t// Huffman decoders for literal/length, distance.\n\th1, h2 huffmanDecoder\n\n\t// Length arrays used to define Huffman codes.\n\tbits     *[maxNumLit + maxNumDist]int\n\tcodebits *[numCodes]int\n\n\t// Output history, buffer.\n\tdict dictDecoder\n\n\t// Temporary buffer (avoids repeated allocation).\n\tbuf [4]byte\n\n\t// Next step in the decompression,\n\t// and decompression state.\n\tstep      func(*decompressor)\n\tstepState int\n\tfinal     bool\n\terr       error\n\ttoRead    []byte\n\thl, hd    *huffmanDecoder\n\tcopyLen   int\n\tcopyDist  int\n}\n\nfunc (f *decompressor) nextBlock() {\n\tfor f.nb < 1+2 {\n\t\tif f.err = f.moreBits(); f.err != nil {\n\t\t\treturn\n\t\t}\n\t}\n\tf.final = f.b&1 == 1\n\tf.b >>= 1\n\ttyp := f.b & 3\n\tf.b >>= 2\n\tf.nb -= 1 + 2\n\tswitch typ {\n\tcase 0:\n\t\tf.dataBlock()\n\tcase 1:\n\t\t// compressed, fixed Huffman tables\n\t\tf.hl = &fixedHuffmanDecoder\n\t\tf.hd = nil\n\t\tf.huffmanBlock()\n\tcase 2:\n\t\t// compressed, dynamic Huffman tables\n\t\tif f.err = 
f.readHuffman(); f.err != nil {\n\t\t\tbreak\n\t\t}\n\t\tf.hl = &f.h1\n\t\tf.hd = &f.h2\n\t\tf.huffmanBlock()\n\tdefault:\n\t\t// 3 is reserved.\n\t\tf.err = CorruptInputError(f.roffset)\n\t}\n}\n\nfunc (f *decompressor) Read(b []byte) (int, error) {\n\tfor {\n\t\tif len(f.toRead) > 0 {\n\t\t\tn := copy(b, f.toRead)\n\t\t\tf.toRead = f.toRead[n:]\n\t\t\tif len(f.toRead) == 0 {\n\t\t\t\treturn n, f.err\n\t\t\t}\n\t\t\treturn n, nil\n\t\t}\n\t\tif f.err != nil {\n\t\t\treturn 0, f.err\n\t\t}\n\t\tf.step(f)\n\t\tif f.err != nil && len(f.toRead) == 0 {\n\t\t\tf.toRead = f.dict.readFlush() // Flush what's left in case of error\n\t\t}\n\t}\n}\n\n// Support the io.WriteTo interface for io.Copy and friends.\nfunc (f *decompressor) WriteTo(w io.Writer) (int64, error) {\n\ttotal := int64(0)\n\tflushed := false\n\tfor {\n\t\tif len(f.toRead) > 0 {\n\t\t\tn, err := w.Write(f.toRead)\n\t\t\ttotal += int64(n)\n\t\t\tif err != nil {\n\t\t\t\tf.err = err\n\t\t\t\treturn total, err\n\t\t\t}\n\t\t\tif n != len(f.toRead) {\n\t\t\t\treturn total, io.ErrShortWrite\n\t\t\t}\n\t\t\tf.toRead = f.toRead[:0]\n\t\t}\n\t\tif f.err != nil && flushed {\n\t\t\tif f.err == io.EOF {\n\t\t\t\treturn total, nil\n\t\t\t}\n\t\t\treturn total, f.err\n\t\t}\n\t\tif f.err == nil {\n\t\t\tf.step(f)\n\t\t}\n\t\tif len(f.toRead) == 0 && f.err != nil && !flushed {\n\t\t\tf.toRead = f.dict.readFlush() // Flush what's left in case of error\n\t\t\tflushed = true\n\t\t}\n\t}\n}\n\nfunc (f *decompressor) Close() error {\n\tif f.err == io.EOF {\n\t\treturn nil\n\t}\n\treturn f.err\n}\n\n// RFC 1951 section 3.2.7.\n// Compression with dynamic Huffman codes\n\nvar codeOrder = [...]int{16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15}\n\nfunc (f *decompressor) readHuffman() error {\n\t// HLIT[5], HDIST[5], HCLEN[4].\n\tfor f.nb < 5+5+4 {\n\t\tif err := f.moreBits(); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\tnlit := int(f.b&0x1F) + 257\n\tif nlit > maxNumLit {\n\t\treturn 
CorruptInputError(f.roffset)\n\t}\n\tf.b >>= 5\n\tndist := int(f.b&0x1F) + 1\n\tif ndist > maxNumDist {\n\t\treturn CorruptInputError(f.roffset)\n\t}\n\tf.b >>= 5\n\tnclen := int(f.b&0xF) + 4\n\t// numCodes is 19, so nclen is always valid.\n\tf.b >>= 4\n\tf.nb -= 5 + 5 + 4\n\n\t// (HCLEN+4)*3 bits: code lengths in the magic codeOrder order.\n\tfor i := 0; i < nclen; i++ {\n\t\tfor f.nb < 3 {\n\t\t\tif err := f.moreBits(); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\tf.codebits[codeOrder[i]] = int(f.b & 0x7)\n\t\tf.b >>= 3\n\t\tf.nb -= 3\n\t}\n\tfor i := nclen; i < len(codeOrder); i++ {\n\t\tf.codebits[codeOrder[i]] = 0\n\t}\n\tif !f.h1.init(f.codebits[0:]) {\n\t\treturn CorruptInputError(f.roffset)\n\t}\n\n\t// HLIT + 257 code lengths, HDIST + 1 code lengths,\n\t// using the code length Huffman code.\n\tfor i, n := 0, nlit+ndist; i < n; {\n\t\tx, err := f.huffSym(&f.h1)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif x < 16 {\n\t\t\t// Actual length.\n\t\t\tf.bits[i] = x\n\t\t\ti++\n\t\t\tcontinue\n\t\t}\n\t\t// Repeat previous length or zero.\n\t\tvar rep int\n\t\tvar nb uint\n\t\tvar b int\n\t\tswitch x {\n\t\tdefault:\n\t\t\treturn InternalError(\"unexpected length code\")\n\t\tcase 16:\n\t\t\trep = 3\n\t\t\tnb = 2\n\t\t\tif i == 0 {\n\t\t\t\treturn CorruptInputError(f.roffset)\n\t\t\t}\n\t\t\tb = f.bits[i-1]\n\t\tcase 17:\n\t\t\trep = 3\n\t\t\tnb = 3\n\t\t\tb = 0\n\t\tcase 18:\n\t\t\trep = 11\n\t\t\tnb = 7\n\t\t\tb = 0\n\t\t}\n\t\tfor f.nb < nb {\n\t\t\tif err := f.moreBits(); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t\trep += int(f.b & uint32(1<<nb-1))\n\t\tf.b >>= nb\n\t\tf.nb -= nb\n\t\tif i+rep > n {\n\t\t\treturn CorruptInputError(f.roffset)\n\t\t}\n\t\tfor j := 0; j < rep; j++ {\n\t\t\tf.bits[i] = b\n\t\t\ti++\n\t\t}\n\t}\n\n\tif !f.h1.init(f.bits[0:nlit]) || !f.h2.init(f.bits[nlit:nlit+ndist]) {\n\t\treturn CorruptInputError(f.roffset)\n\t}\n\n\t// As an optimization, we can initialize the min bits to read at a time\n\t// for the 
HLIT tree to the length of the EOB marker since we know that\n\t// every block must terminate with one. This preserves the property that\n\t// we never read any extra bytes after the end of the DEFLATE stream.\n\tif f.h1.min < f.bits[endBlockMarker] {\n\t\tf.h1.min = f.bits[endBlockMarker]\n\t}\n\n\treturn nil\n}\n\n// Decode a single Huffman block from f.\n// hl and hd are the Huffman states for the lit/length values\n// and the distance values, respectively. If hd == nil, using the\n// fixed distance encoding associated with fixed Huffman blocks.\nfunc (f *decompressor) huffmanBlock() {\n\tconst (\n\t\tstateInit = iota // Zero value must be stateInit\n\t\tstateDict\n\t)\n\n\tswitch f.stepState {\n\tcase stateInit:\n\t\tgoto readLiteral\n\tcase stateDict:\n\t\tgoto copyHistory\n\t}\n\nreadLiteral:\n\t// Read literal and/or (length, distance) according to RFC section 3.2.3.\n\t{\n\t\tv, err := f.huffSym(f.hl)\n\t\tif err != nil {\n\t\t\tf.err = err\n\t\t\treturn\n\t\t}\n\t\tvar n uint // number of bits extra\n\t\tvar length int\n\t\tswitch {\n\t\tcase v < 256:\n\t\t\tf.dict.writeByte(byte(v))\n\t\t\tif f.dict.availWrite() == 0 {\n\t\t\t\tf.toRead = f.dict.readFlush()\n\t\t\t\tf.step = (*decompressor).huffmanBlock\n\t\t\t\tf.stepState = stateInit\n\t\t\t\treturn\n\t\t\t}\n\t\t\tgoto readLiteral\n\t\tcase v == 256:\n\t\t\tf.finishBlock()\n\t\t\treturn\n\t\t// otherwise, reference to older data\n\t\tcase v < 265:\n\t\t\tlength = v - (257 - 3)\n\t\t\tn = 0\n\t\tcase v < 269:\n\t\t\tlength = v*2 - (265*2 - 11)\n\t\t\tn = 1\n\t\tcase v < 273:\n\t\t\tlength = v*4 - (269*4 - 19)\n\t\t\tn = 2\n\t\tcase v < 277:\n\t\t\tlength = v*8 - (273*8 - 35)\n\t\t\tn = 3\n\t\tcase v < 281:\n\t\t\tlength = v*16 - (277*16 - 67)\n\t\t\tn = 4\n\t\tcase v < 285:\n\t\t\tlength = v*32 - (281*32 - 131)\n\t\t\tn = 5\n\t\tcase v < maxNumLit:\n\t\t\tlength = 258\n\t\t\tn = 0\n\t\tdefault:\n\t\t\tf.err = CorruptInputError(f.roffset)\n\t\t\treturn\n\t\t}\n\t\tif n > 0 {\n\t\t\tfor f.nb < n 
{\n\t\t\t\tif err = f.moreBits(); err != nil {\n\t\t\t\t\tf.err = err\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t}\n\t\t\tlength += int(f.b & uint32(1<<n-1))\n\t\t\tf.b >>= n\n\t\t\tf.nb -= n\n\t\t}\n\n\t\tvar dist int\n\t\tif f.hd == nil {\n\t\t\tfor f.nb < 5 {\n\t\t\t\tif err = f.moreBits(); err != nil {\n\t\t\t\t\tf.err = err\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t}\n\t\t\tdist = int(reverseByte[(f.b&0x1F)<<3])\n\t\t\tf.b >>= 5\n\t\t\tf.nb -= 5\n\t\t} else {\n\t\t\tif dist, err = f.huffSym(f.hd); err != nil {\n\t\t\t\tf.err = err\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\n\t\tswitch {\n\t\tcase dist < 4:\n\t\t\tdist++\n\t\tcase dist < maxNumDist:\n\t\t\tnb := uint(dist-2) >> 1\n\t\t\t// have 1 bit in bottom of dist, need nb more.\n\t\t\textra := (dist & 1) << nb\n\t\t\tfor f.nb < nb {\n\t\t\t\tif err = f.moreBits(); err != nil {\n\t\t\t\t\tf.err = err\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t}\n\t\t\textra |= int(f.b & uint32(1<<nb-1))\n\t\t\tf.b >>= nb\n\t\t\tf.nb -= nb\n\t\t\tdist = 1<<(nb+1) + 1 + extra\n\t\tdefault:\n\t\t\tf.err = CorruptInputError(f.roffset)\n\t\t\treturn\n\t\t}\n\n\t\t// No check on length; encoding can be prescient.\n\t\tif dist > f.dict.histSize() {\n\t\t\tf.err = CorruptInputError(f.roffset)\n\t\t\treturn\n\t\t}\n\n\t\tf.copyLen, f.copyDist = length, dist\n\t\tgoto copyHistory\n\t}\n\ncopyHistory:\n\t// Perform a backwards copy according to RFC section 3.2.3.\n\t{\n\t\tcnt := f.dict.tryWriteCopy(f.copyDist, f.copyLen)\n\t\tif cnt == 0 {\n\t\t\tcnt = f.dict.writeCopy(f.copyDist, f.copyLen)\n\t\t}\n\t\tf.copyLen -= cnt\n\n\t\tif f.dict.availWrite() == 0 || f.copyLen > 0 {\n\t\t\tf.toRead = f.dict.readFlush()\n\t\t\tf.step = (*decompressor).huffmanBlock // We need to continue this work\n\t\t\tf.stepState = stateDict\n\t\t\treturn\n\t\t}\n\t\tgoto readLiteral\n\t}\n}\n\n// Copy a single uncompressed data block from input to output.\nfunc (f *decompressor) dataBlock() {\n\t// Uncompressed.\n\t// Discard current half-byte.\n\tf.nb = 0\n\tf.b = 0\n\n\t// Length then 
ones-complement of length.\n\tnr, err := io.ReadFull(f.r, f.buf[0:4])\n\tf.roffset += int64(nr)\n\tif err != nil {\n\t\tif err == io.EOF {\n\t\t\terr = io.ErrUnexpectedEOF\n\t\t}\n\t\tf.err = err\n\t\treturn\n\t}\n\tn := int(f.buf[0]) | int(f.buf[1])<<8\n\tnn := int(f.buf[2]) | int(f.buf[3])<<8\n\tif uint16(nn) != uint16(^n) {\n\t\tf.err = CorruptInputError(f.roffset)\n\t\treturn\n\t}\n\n\tif n == 0 {\n\t\tf.toRead = f.dict.readFlush()\n\t\tf.finishBlock()\n\t\treturn\n\t}\n\n\tf.copyLen = n\n\tf.copyData()\n}\n\n// copyData copies f.copyLen bytes from the underlying reader into f.hist.\n// It pauses for reads when f.hist is full.\nfunc (f *decompressor) copyData() {\n\tbuf := f.dict.writeSlice()\n\tif len(buf) > f.copyLen {\n\t\tbuf = buf[:f.copyLen]\n\t}\n\n\tcnt, err := io.ReadFull(f.r, buf)\n\tf.roffset += int64(cnt)\n\tf.copyLen -= cnt\n\tf.dict.writeMark(cnt)\n\tif err != nil {\n\t\tif err == io.EOF {\n\t\t\terr = io.ErrUnexpectedEOF\n\t\t}\n\t\tf.err = err\n\t\treturn\n\t}\n\n\tif f.dict.availWrite() == 0 || f.copyLen > 0 {\n\t\tf.toRead = f.dict.readFlush()\n\t\tf.step = (*decompressor).copyData\n\t\treturn\n\t}\n\tf.finishBlock()\n}\n\nfunc (f *decompressor) finishBlock() {\n\tif f.final {\n\t\tif f.dict.availRead() > 0 {\n\t\t\tf.toRead = f.dict.readFlush()\n\t\t}\n\t\tf.err = io.EOF\n\t}\n\tf.step = (*decompressor).nextBlock\n}\n\nfunc (f *decompressor) moreBits() error {\n\tc, err := f.r.ReadByte()\n\tif err != nil {\n\t\tif err == io.EOF {\n\t\t\terr = io.ErrUnexpectedEOF\n\t\t}\n\t\treturn err\n\t}\n\tf.roffset++\n\tf.b |= uint32(c) << f.nb\n\tf.nb += 8\n\treturn nil\n}\n\n// Read the next Huffman-encoded symbol from f according to h.\nfunc (f *decompressor) huffSym(h *huffmanDecoder) (int, error) {\n\t// Since a huffmanDecoder can be empty or be composed of a degenerate tree\n\t// with single element, huffSym must error on these two edge cases. 
In both\n\t// cases, the chunks slice will be 0 for the invalid sequence, leading it\n\t// satisfy the n == 0 check below.\n\tn := uint(h.min)\n\tfor {\n\t\tfor f.nb < n {\n\t\t\tif err := f.moreBits(); err != nil {\n\t\t\t\treturn 0, err\n\t\t\t}\n\t\t}\n\t\tchunk := h.chunks[f.b&(huffmanNumChunks-1)]\n\t\tn = uint(chunk & huffmanCountMask)\n\t\tif n > huffmanChunkBits {\n\t\t\tchunk = h.links[chunk>>huffmanValueShift][(f.b>>huffmanChunkBits)&h.linkMask]\n\t\t\tn = uint(chunk & huffmanCountMask)\n\t\t}\n\t\tif n <= f.nb {\n\t\t\tif n == 0 {\n\t\t\t\tf.err = CorruptInputError(f.roffset)\n\t\t\t\treturn 0, f.err\n\t\t\t}\n\t\t\tf.b >>= n\n\t\t\tf.nb -= n\n\t\t\treturn int(chunk >> huffmanValueShift), nil\n\t\t}\n\t}\n}\n\nfunc makeReader(r io.Reader) Reader {\n\tif rr, ok := r.(Reader); ok {\n\t\treturn rr\n\t}\n\treturn bufio.NewReader(r)\n}\n\nfunc fixedHuffmanDecoderInit() {\n\tfixedOnce.Do(func() {\n\t\t// These come from the RFC section 3.2.6.\n\t\tvar bits [288]int\n\t\tfor i := 0; i < 144; i++ {\n\t\t\tbits[i] = 8\n\t\t}\n\t\tfor i := 144; i < 256; i++ {\n\t\t\tbits[i] = 9\n\t\t}\n\t\tfor i := 256; i < 280; i++ {\n\t\t\tbits[i] = 7\n\t\t}\n\t\tfor i := 280; i < 288; i++ {\n\t\t\tbits[i] = 8\n\t\t}\n\t\tfixedHuffmanDecoder.init(bits[:])\n\t})\n}\n\nfunc (f *decompressor) Reset(r io.Reader, dict []byte) error {\n\t*f = decompressor{\n\t\tr:        makeReader(r),\n\t\tbits:     f.bits,\n\t\tcodebits: f.codebits,\n\t\tdict:     f.dict,\n\t\tstep:     (*decompressor).nextBlock,\n\t}\n\tf.dict.init(maxMatchOffset, dict)\n\treturn nil\n}\n\n// NewReader returns a new ReadCloser that can be used\n// to read the uncompressed version of r.\n// If r does not also implement io.ByteReader,\n// the decompressor may read more data than necessary from r.\n// It is the caller's responsibility to call Close on the ReadCloser\n// when finished reading.\n//\n// The ReadCloser returned by NewReader also implements Resetter.\nfunc NewReader(r io.Reader) io.ReadCloser 
{\n\tfixedHuffmanDecoderInit()\n\n\tvar f decompressor\n\tf.r = makeReader(r)\n\tf.bits = new([maxNumLit + maxNumDist]int)\n\tf.codebits = new([numCodes]int)\n\tf.step = (*decompressor).nextBlock\n\tf.dict.init(maxMatchOffset, nil)\n\treturn &f\n}\n\n// NewReaderDict is like NewReader but initializes the reader\n// with a preset dictionary. The returned Reader behaves as if\n// the uncompressed data stream started with the given dictionary,\n// which has already been read. NewReaderDict is typically used\n// to read data compressed by NewWriterDict.\n//\n// The ReadCloser returned by NewReader also implements Resetter.\nfunc NewReaderDict(r io.Reader, dict []byte) io.ReadCloser {\n\tfixedHuffmanDecoderInit()\n\n\tvar f decompressor\n\tf.r = makeReader(r)\n\tf.bits = new([maxNumLit + maxNumDist]int)\n\tf.codebits = new([numCodes]int)\n\tf.step = (*decompressor).nextBlock\n\tf.dict.init(maxMatchOffset, dict)\n\treturn &f\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/reverse_bits.go",
    "content": "// Copyright 2009 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage flate\n\nvar reverseByte = [256]byte{\n\t0x00, 0x80, 0x40, 0xc0, 0x20, 0xa0, 0x60, 0xe0,\n\t0x10, 0x90, 0x50, 0xd0, 0x30, 0xb0, 0x70, 0xf0,\n\t0x08, 0x88, 0x48, 0xc8, 0x28, 0xa8, 0x68, 0xe8,\n\t0x18, 0x98, 0x58, 0xd8, 0x38, 0xb8, 0x78, 0xf8,\n\t0x04, 0x84, 0x44, 0xc4, 0x24, 0xa4, 0x64, 0xe4,\n\t0x14, 0x94, 0x54, 0xd4, 0x34, 0xb4, 0x74, 0xf4,\n\t0x0c, 0x8c, 0x4c, 0xcc, 0x2c, 0xac, 0x6c, 0xec,\n\t0x1c, 0x9c, 0x5c, 0xdc, 0x3c, 0xbc, 0x7c, 0xfc,\n\t0x02, 0x82, 0x42, 0xc2, 0x22, 0xa2, 0x62, 0xe2,\n\t0x12, 0x92, 0x52, 0xd2, 0x32, 0xb2, 0x72, 0xf2,\n\t0x0a, 0x8a, 0x4a, 0xca, 0x2a, 0xaa, 0x6a, 0xea,\n\t0x1a, 0x9a, 0x5a, 0xda, 0x3a, 0xba, 0x7a, 0xfa,\n\t0x06, 0x86, 0x46, 0xc6, 0x26, 0xa6, 0x66, 0xe6,\n\t0x16, 0x96, 0x56, 0xd6, 0x36, 0xb6, 0x76, 0xf6,\n\t0x0e, 0x8e, 0x4e, 0xce, 0x2e, 0xae, 0x6e, 0xee,\n\t0x1e, 0x9e, 0x5e, 0xde, 0x3e, 0xbe, 0x7e, 0xfe,\n\t0x01, 0x81, 0x41, 0xc1, 0x21, 0xa1, 0x61, 0xe1,\n\t0x11, 0x91, 0x51, 0xd1, 0x31, 0xb1, 0x71, 0xf1,\n\t0x09, 0x89, 0x49, 0xc9, 0x29, 0xa9, 0x69, 0xe9,\n\t0x19, 0x99, 0x59, 0xd9, 0x39, 0xb9, 0x79, 0xf9,\n\t0x05, 0x85, 0x45, 0xc5, 0x25, 0xa5, 0x65, 0xe5,\n\t0x15, 0x95, 0x55, 0xd5, 0x35, 0xb5, 0x75, 0xf5,\n\t0x0d, 0x8d, 0x4d, 0xcd, 0x2d, 0xad, 0x6d, 0xed,\n\t0x1d, 0x9d, 0x5d, 0xdd, 0x3d, 0xbd, 0x7d, 0xfd,\n\t0x03, 0x83, 0x43, 0xc3, 0x23, 0xa3, 0x63, 0xe3,\n\t0x13, 0x93, 0x53, 0xd3, 0x33, 0xb3, 0x73, 0xf3,\n\t0x0b, 0x8b, 0x4b, 0xcb, 0x2b, 0xab, 0x6b, 0xeb,\n\t0x1b, 0x9b, 0x5b, 0xdb, 0x3b, 0xbb, 0x7b, 0xfb,\n\t0x07, 0x87, 0x47, 0xc7, 0x27, 0xa7, 0x67, 0xe7,\n\t0x17, 0x97, 0x57, 0xd7, 0x37, 0xb7, 0x77, 0xf7,\n\t0x0f, 0x8f, 0x4f, 0xcf, 0x2f, 0xaf, 0x6f, 0xef,\n\t0x1f, 0x9f, 0x5f, 0xdf, 0x3f, 0xbf, 0x7f, 0xff,\n}\n\nfunc reverseUint16(v uint16) uint16 {\n\treturn uint16(reverseByte[v>>8]) | uint16(reverseByte[v&0xFF])<<8\n}\n\nfunc 
reverseBits(number uint16, bitLength byte) uint16 {\n\treturn reverseUint16(number << uint8(16-bitLength))\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/snappy.go",
    "content": "// Copyright 2011 The Snappy-Go Authors. All rights reserved.\n// Modified for deflate by Klaus Post (c) 2015.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage flate\n\n// emitLiteral writes a literal chunk and returns the number of bytes written.\nfunc emitLiteral(dst *tokens, lit []byte) {\n\tol := int(dst.n)\n\tfor i, v := range lit {\n\t\tdst.tokens[(i+ol)&maxStoreBlockSize] = token(v)\n\t}\n\tdst.n += uint16(len(lit))\n}\n\n// emitCopy writes a copy chunk and returns the number of bytes written.\nfunc emitCopy(dst *tokens, offset, length int) {\n\tdst.tokens[dst.n] = matchToken(uint32(length-3), uint32(offset-minOffsetSize))\n\tdst.n++\n}\n\ntype snappyEnc interface {\n\tEncode(dst *tokens, src []byte)\n\tReset()\n}\n\nfunc newSnappy(level int) snappyEnc {\n\tswitch level {\n\tcase 1:\n\t\treturn &snappyL1{}\n\tcase 2:\n\t\treturn &snappyL2{snappyGen: snappyGen{cur: maxStoreBlockSize, prev: make([]byte, 0, maxStoreBlockSize)}}\n\tcase 3:\n\t\treturn &snappyL3{snappyGen: snappyGen{cur: maxStoreBlockSize, prev: make([]byte, 0, maxStoreBlockSize)}}\n\tcase 4:\n\t\treturn &snappyL4{snappyL3{snappyGen: snappyGen{cur: maxStoreBlockSize, prev: make([]byte, 0, maxStoreBlockSize)}}}\n\tdefault:\n\t\tpanic(\"invalid level specified\")\n\t}\n}\n\nconst (\n\ttableBits       = 14             // Bits used in the table\n\ttableSize       = 1 << tableBits // Size of the table\n\ttableMask       = tableSize - 1  // Mask for table indices. 
Redundant, but can eliminate bounds checks.\n\ttableShift      = 32 - tableBits // Right-shift to get the tableBits most significant bits of a uint32.\n\tbaseMatchOffset = 1              // The smallest match offset\n\tbaseMatchLength = 3              // The smallest match length per the RFC section 3.2.5\n\tmaxMatchOffset  = 1 << 15        // The largest match offset\n)\n\nfunc load32(b []byte, i int) uint32 {\n\tb = b[i : i+4 : len(b)] // Help the compiler eliminate bounds checks on the next line.\n\treturn uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24\n}\n\nfunc load64(b []byte, i int) uint64 {\n\tb = b[i : i+8 : len(b)] // Help the compiler eliminate bounds checks on the next line.\n\treturn uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 |\n\t\tuint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56\n}\n\nfunc hash(u uint32) uint32 {\n\treturn (u * 0x1e35a7bd) >> tableShift\n}\n\n// snappyL1 encapsulates level 1 compression\ntype snappyL1 struct{}\n\nfunc (e *snappyL1) Reset() {}\n\nfunc (e *snappyL1) Encode(dst *tokens, src []byte) {\n\tconst (\n\t\tinputMargin            = 16 - 1\n\t\tminNonLiteralBlockSize = 1 + 1 + inputMargin\n\t)\n\n\t// This check isn't in the Snappy implementation, but there, the caller\n\t// instead of the callee handles this case.\n\tif len(src) < minNonLiteralBlockSize {\n\t\t// We do not fill the token table.\n\t\t// This will be picked up by caller.\n\t\tdst.n = uint16(len(src))\n\t\treturn\n\t}\n\n\t// Initialize the hash table.\n\t//\n\t// The table element type is uint16, as s < sLimit and sLimit < len(src)\n\t// and len(src) <= maxStoreBlockSize and maxStoreBlockSize == 65535.\n\tvar table [tableSize]uint16\n\n\t// sLimit is when to stop looking for offset/length copies. 
The inputMargin\n\t// lets us use a fast path for emitLiteral in the main loop, while we are\n\t// looking for copies.\n\tsLimit := len(src) - inputMargin\n\n\t// nextEmit is where in src the next emitLiteral should start from.\n\tnextEmit := 0\n\n\t// The encoded form must start with a literal, as there are no previous\n\t// bytes to copy, so we start looking for hash matches at s == 1.\n\ts := 1\n\tnextHash := hash(load32(src, s))\n\n\tfor {\n\t\t// Copied from the C++ snappy implementation:\n\t\t//\n\t\t// Heuristic match skipping: If 32 bytes are scanned with no matches\n\t\t// found, start looking only at every other byte. If 32 more bytes are\n\t\t// scanned (or skipped), look at every third byte, etc.. When a match\n\t\t// is found, immediately go back to looking at every byte. This is a\n\t\t// small loss (~5% performance, ~0.1% density) for compressible data\n\t\t// due to more bookkeeping, but for non-compressible data (such as\n\t\t// JPEG) it's a huge win since the compressor quickly \"realizes\" the\n\t\t// data is incompressible and doesn't bother looking for matches\n\t\t// everywhere.\n\t\t//\n\t\t// The \"skip\" variable keeps track of how many bytes there are since\n\t\t// the last match; dividing it by 32 (ie. right-shifting by five) gives\n\t\t// the number of bytes to move ahead for each iteration.\n\t\tskip := 32\n\n\t\tnextS := s\n\t\tcandidate := 0\n\t\tfor {\n\t\t\ts = nextS\n\t\t\tbytesBetweenHashLookups := skip >> 5\n\t\t\tnextS = s + bytesBetweenHashLookups\n\t\t\tskip += bytesBetweenHashLookups\n\t\t\tif nextS > sLimit {\n\t\t\t\tgoto emitRemainder\n\t\t\t}\n\t\t\tcandidate = int(table[nextHash&tableMask])\n\t\t\ttable[nextHash&tableMask] = uint16(s)\n\t\t\tnextHash = hash(load32(src, nextS))\n\t\t\tif s-candidate <= maxMatchOffset && load32(src, s) == load32(src, candidate) {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\n\t\t// A 4-byte match has been found. We'll later see if more than 4 bytes\n\t\t// match. 
But, prior to the match, src[nextEmit:s] are unmatched. Emit\n\t\t// them as literal bytes.\n\t\temitLiteral(dst, src[nextEmit:s])\n\n\t\t// Call emitCopy, and then see if another emitCopy could be our next\n\t\t// move. Repeat until we find no match for the input immediately after\n\t\t// what was consumed by the last emitCopy call.\n\t\t//\n\t\t// If we exit this loop normally then we need to call emitLiteral next,\n\t\t// though we don't yet know how big the literal will be. We handle that\n\t\t// by proceeding to the next iteration of the main loop. We also can\n\t\t// exit this loop via goto if we get close to exhausting the input.\n\t\tfor {\n\t\t\t// Invariant: we have a 4-byte match at s, and no need to emit any\n\t\t\t// literal bytes prior to s.\n\t\t\tbase := s\n\n\t\t\t// Extend the 4-byte match as long as possible.\n\t\t\t//\n\t\t\t// This is an inlined version of Snappy's:\n\t\t\t//\ts = extendMatch(src, candidate+4, s+4)\n\t\t\ts += 4\n\t\t\ts1 := base + maxMatchLength\n\t\t\tif s1 > len(src) {\n\t\t\t\ts1 = len(src)\n\t\t\t}\n\t\t\ta := src[s:s1]\n\t\t\tb := src[candidate+4:]\n\t\t\tb = b[:len(a)]\n\t\t\tl := len(a)\n\t\t\tfor i := range a {\n\t\t\t\tif a[i] != b[i] {\n\t\t\t\t\tl = i\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\ts += l\n\n\t\t\t// matchToken is flate's equivalent of Snappy's emitCopy.\n\t\t\tdst.tokens[dst.n] = matchToken(uint32(s-base-baseMatchLength), uint32(base-candidate-baseMatchOffset))\n\t\t\tdst.n++\n\t\t\tnextEmit = s\n\t\t\tif s >= sLimit {\n\t\t\t\tgoto emitRemainder\n\t\t\t}\n\n\t\t\t// We could immediately start working at s now, but to improve\n\t\t\t// compression we first update the hash table at s-1 and at s. If\n\t\t\t// another emitCopy is not our next move, also calculate nextHash\n\t\t\t// at s+1. 
At least on GOARCH=amd64, these three hash calculations\n\t\t\t// are faster as one load64 call (with some shifts) instead of\n\t\t\t// three load32 calls.\n\t\t\tx := load64(src, s-1)\n\t\t\tprevHash := hash(uint32(x >> 0))\n\t\t\ttable[prevHash&tableMask] = uint16(s - 1)\n\t\t\tcurrHash := hash(uint32(x >> 8))\n\t\t\tcandidate = int(table[currHash&tableMask])\n\t\t\ttable[currHash&tableMask] = uint16(s)\n\t\t\tif s-candidate > maxMatchOffset || uint32(x>>8) != load32(src, candidate) {\n\t\t\t\tnextHash = hash(uint32(x >> 16))\n\t\t\t\ts++\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\nemitRemainder:\n\tif nextEmit < len(src) {\n\t\temitLiteral(dst, src[nextEmit:])\n\t}\n}\n\ntype tableEntry struct {\n\tval    uint32\n\toffset int32\n}\n\nfunc load3232(b []byte, i int32) uint32 {\n\tb = b[i : i+4 : len(b)] // Help the compiler eliminate bounds checks on the next line.\n\treturn uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24\n}\n\nfunc load6432(b []byte, i int32) uint64 {\n\tb = b[i : i+8 : len(b)] // Help the compiler eliminate bounds checks on the next line.\n\treturn uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 |\n\t\tuint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56\n}\n\n// snappyGen maintains the table for matches,\n// and the previous byte block for level 2.\n// This is the generic implementation.\ntype snappyGen struct {\n\tprev []byte\n\tcur  int32\n}\n\n// snappyGen maintains the table for matches,\n// and the previous byte block for level 2.\n// This is the generic implementation.\ntype snappyL2 struct {\n\tsnappyGen\n\ttable [tableSize]tableEntry\n}\n\n// EncodeL2 uses a similar algorithm to level 1, but is capable\n// of matching across blocks giving better compression at a small slowdown.\nfunc (e *snappyL2) Encode(dst *tokens, src []byte) {\n\tconst (\n\t\tinputMargin            = 8 - 1\n\t\tminNonLiteralBlockSize = 1 + 1 + inputMargin\n\t)\n\n\t// Protect against e.cur wraparound.\n\tif 
e.cur > 1<<30 {\n\t\tfor i := range e.table {\n\t\t\te.table[i] = tableEntry{}\n\t\t}\n\t\te.cur = maxStoreBlockSize\n\t}\n\n\t// This check isn't in the Snappy implementation, but there, the caller\n\t// instead of the callee handles this case.\n\tif len(src) < minNonLiteralBlockSize {\n\t\t// We do not fill the token table.\n\t\t// This will be picked up by caller.\n\t\tdst.n = uint16(len(src))\n\t\te.cur += maxStoreBlockSize\n\t\te.prev = e.prev[:0]\n\t\treturn\n\t}\n\n\t// sLimit is when to stop looking for offset/length copies. The inputMargin\n\t// lets us use a fast path for emitLiteral in the main loop, while we are\n\t// looking for copies.\n\tsLimit := int32(len(src) - inputMargin)\n\n\t// nextEmit is where in src the next emitLiteral should start from.\n\tnextEmit := int32(0)\n\ts := int32(0)\n\tcv := load3232(src, s)\n\tnextHash := hash(cv)\n\n\tfor {\n\t\t// Copied from the C++ snappy implementation:\n\t\t//\n\t\t// Heuristic match skipping: If 32 bytes are scanned with no matches\n\t\t// found, start looking only at every other byte. If 32 more bytes are\n\t\t// scanned (or skipped), look at every third byte, etc.. When a match\n\t\t// is found, immediately go back to looking at every byte. This is a\n\t\t// small loss (~5% performance, ~0.1% density) for compressible data\n\t\t// due to more bookkeeping, but for non-compressible data (such as\n\t\t// JPEG) it's a huge win since the compressor quickly \"realizes\" the\n\t\t// data is incompressible and doesn't bother looking for matches\n\t\t// everywhere.\n\t\t//\n\t\t// The \"skip\" variable keeps track of how many bytes there are since\n\t\t// the last match; dividing it by 32 (ie. 
right-shifting by five) gives\n\t\t// the number of bytes to move ahead for each iteration.\n\t\tskip := int32(32)\n\n\t\tnextS := s\n\t\tvar candidate tableEntry\n\t\tfor {\n\t\t\ts = nextS\n\t\t\tbytesBetweenHashLookups := skip >> 5\n\t\t\tnextS = s + bytesBetweenHashLookups\n\t\t\tskip += bytesBetweenHashLookups\n\t\t\tif nextS > sLimit {\n\t\t\t\tgoto emitRemainder\n\t\t\t}\n\t\t\tcandidate = e.table[nextHash&tableMask]\n\t\t\tnow := load3232(src, nextS)\n\t\t\te.table[nextHash&tableMask] = tableEntry{offset: s + e.cur, val: cv}\n\t\t\tnextHash = hash(now)\n\n\t\t\toffset := s - (candidate.offset - e.cur)\n\t\t\tif offset > maxMatchOffset || cv != candidate.val {\n\t\t\t\t// Out of range or not matched.\n\t\t\t\tcv = now\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\n\t\t// A 4-byte match has been found. We'll later see if more than 4 bytes\n\t\t// match. But, prior to the match, src[nextEmit:s] are unmatched. Emit\n\t\t// them as literal bytes.\n\t\temitLiteral(dst, src[nextEmit:s])\n\n\t\t// Call emitCopy, and then see if another emitCopy could be our next\n\t\t// move. Repeat until we find no match for the input immediately after\n\t\t// what was consumed by the last emitCopy call.\n\t\t//\n\t\t// If we exit this loop normally then we need to call emitLiteral next,\n\t\t// though we don't yet know how big the literal will be. We handle that\n\t\t// by proceeding to the next iteration of the main loop. We also can\n\t\t// exit this loop via goto if we get close to exhausting the input.\n\t\tfor {\n\t\t\t// Invariant: we have a 4-byte match at s, and no need to emit any\n\t\t\t// literal bytes prior to s.\n\n\t\t\t// Extend the 4-byte match as long as possible.\n\t\t\t//\n\t\t\ts += 4\n\t\t\tt := candidate.offset - e.cur + 4\n\t\t\tl := e.matchlen(s, t, src)\n\n\t\t\t// matchToken is flate's equivalent of Snappy's emitCopy. 
(length,offset)\n\t\t\tdst.tokens[dst.n] = matchToken(uint32(l+4-baseMatchLength), uint32(s-t-baseMatchOffset))\n\t\t\tdst.n++\n\t\t\ts += l\n\t\t\tnextEmit = s\n\t\t\tif s >= sLimit {\n\t\t\t\tt += l\n\t\t\t\t// Index first pair after match end.\n\t\t\t\tif int(t+4) < len(src) && t > 0 {\n\t\t\t\t\tcv := load3232(src, t)\n\t\t\t\t\te.table[hash(cv)&tableMask] = tableEntry{offset: t + e.cur, val: cv}\n\t\t\t\t}\n\t\t\t\tgoto emitRemainder\n\t\t\t}\n\n\t\t\t// We could immediately start working at s now, but to improve\n\t\t\t// compression we first update the hash table at s-1 and at s. If\n\t\t\t// another emitCopy is not our next move, also calculate nextHash\n\t\t\t// at s+1. At least on GOARCH=amd64, these three hash calculations\n\t\t\t// are faster as one load64 call (with some shifts) instead of\n\t\t\t// three load32 calls.\n\t\t\tx := load6432(src, s-1)\n\t\t\tprevHash := hash(uint32(x))\n\t\t\te.table[prevHash&tableMask] = tableEntry{offset: e.cur + s - 1, val: uint32(x)}\n\t\t\tx >>= 8\n\t\t\tcurrHash := hash(uint32(x))\n\t\t\tcandidate = e.table[currHash&tableMask]\n\t\t\te.table[currHash&tableMask] = tableEntry{offset: e.cur + s, val: uint32(x)}\n\n\t\t\toffset := s - (candidate.offset - e.cur)\n\t\t\tif offset > maxMatchOffset || uint32(x) != candidate.val {\n\t\t\t\tcv = uint32(x >> 8)\n\t\t\t\tnextHash = hash(cv)\n\t\t\t\ts++\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\nemitRemainder:\n\tif int(nextEmit) < len(src) {\n\t\temitLiteral(dst, src[nextEmit:])\n\t}\n\te.cur += int32(len(src))\n\te.prev = e.prev[:len(src)]\n\tcopy(e.prev, src)\n}\n\ntype tableEntryPrev struct {\n\tCur  tableEntry\n\tPrev tableEntry\n}\n\n// snappyL3\ntype snappyL3 struct {\n\tsnappyGen\n\ttable [tableSize]tableEntryPrev\n}\n\n// Encode uses a similar algorithm to level 2, will check up to two candidates.\nfunc (e *snappyL3) Encode(dst *tokens, src []byte) {\n\tconst (\n\t\tinputMargin            = 8 - 1\n\t\tminNonLiteralBlockSize = 1 + 1 + inputMargin\n\t)\n\n\t// Protect 
against e.cur wraparound.\n\tif e.cur > 1<<30 {\n\t\tfor i := range e.table {\n\t\t\te.table[i] = tableEntryPrev{}\n\t\t}\n\t\te.snappyGen = snappyGen{cur: maxStoreBlockSize, prev: e.prev[:0]}\n\t}\n\n\t// This check isn't in the Snappy implementation, but there, the caller\n\t// instead of the callee handles this case.\n\tif len(src) < minNonLiteralBlockSize {\n\t\t// We do not fill the token table.\n\t\t// This will be picked up by caller.\n\t\tdst.n = uint16(len(src))\n\t\te.cur += maxStoreBlockSize\n\t\te.prev = e.prev[:0]\n\t\treturn\n\t}\n\n\t// sLimit is when to stop looking for offset/length copies. The inputMargin\n\t// lets us use a fast path for emitLiteral in the main loop, while we are\n\t// looking for copies.\n\tsLimit := int32(len(src) - inputMargin)\n\n\t// nextEmit is where in src the next emitLiteral should start from.\n\tnextEmit := int32(0)\n\ts := int32(0)\n\tcv := load3232(src, s)\n\tnextHash := hash(cv)\n\n\tfor {\n\t\t// Copied from the C++ snappy implementation:\n\t\t//\n\t\t// Heuristic match skipping: If 32 bytes are scanned with no matches\n\t\t// found, start looking only at every other byte. If 32 more bytes are\n\t\t// scanned (or skipped), look at every third byte, etc.. When a match\n\t\t// is found, immediately go back to looking at every byte. This is a\n\t\t// small loss (~5% performance, ~0.1% density) for compressible data\n\t\t// due to more bookkeeping, but for non-compressible data (such as\n\t\t// JPEG) it's a huge win since the compressor quickly \"realizes\" the\n\t\t// data is incompressible and doesn't bother looking for matches\n\t\t// everywhere.\n\t\t//\n\t\t// The \"skip\" variable keeps track of how many bytes there are since\n\t\t// the last match; dividing it by 32 (ie. 
right-shifting by five) gives\n\t\t// the number of bytes to move ahead for each iteration.\n\t\tskip := int32(32)\n\n\t\tnextS := s\n\t\tvar candidate tableEntry\n\t\tfor {\n\t\t\ts = nextS\n\t\t\tbytesBetweenHashLookups := skip >> 5\n\t\t\tnextS = s + bytesBetweenHashLookups\n\t\t\tskip += bytesBetweenHashLookups\n\t\t\tif nextS > sLimit {\n\t\t\t\tgoto emitRemainder\n\t\t\t}\n\t\t\tcandidates := e.table[nextHash&tableMask]\n\t\t\tnow := load3232(src, nextS)\n\t\t\te.table[nextHash&tableMask] = tableEntryPrev{Prev: candidates.Cur, Cur: tableEntry{offset: s + e.cur, val: cv}}\n\t\t\tnextHash = hash(now)\n\n\t\t\t// Check both candidates\n\t\t\tcandidate = candidates.Cur\n\t\t\tif cv == candidate.val {\n\t\t\t\toffset := s - (candidate.offset - e.cur)\n\t\t\t\tif offset <= maxMatchOffset {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t// We only check if value mismatches.\n\t\t\t\t// Offset will always be invalid in other cases.\n\t\t\t\tcandidate = candidates.Prev\n\t\t\t\tif cv == candidate.val {\n\t\t\t\t\toffset := s - (candidate.offset - e.cur)\n\t\t\t\t\tif offset <= maxMatchOffset {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tcv = now\n\t\t}\n\n\t\t// A 4-byte match has been found. We'll later see if more than 4 bytes\n\t\t// match. But, prior to the match, src[nextEmit:s] are unmatched. Emit\n\t\t// them as literal bytes.\n\t\temitLiteral(dst, src[nextEmit:s])\n\n\t\t// Call emitCopy, and then see if another emitCopy could be our next\n\t\t// move. Repeat until we find no match for the input immediately after\n\t\t// what was consumed by the last emitCopy call.\n\t\t//\n\t\t// If we exit this loop normally then we need to call emitLiteral next,\n\t\t// though we don't yet know how big the literal will be. We handle that\n\t\t// by proceeding to the next iteration of the main loop. 
We also can\n\t\t// exit this loop via goto if we get close to exhausting the input.\n\t\tfor {\n\t\t\t// Invariant: we have a 4-byte match at s, and no need to emit any\n\t\t\t// literal bytes prior to s.\n\n\t\t\t// Extend the 4-byte match as long as possible.\n\t\t\t//\n\t\t\ts += 4\n\t\t\tt := candidate.offset - e.cur + 4\n\t\t\tl := e.matchlen(s, t, src)\n\n\t\t\t// matchToken is flate's equivalent of Snappy's emitCopy. (length,offset)\n\t\t\tdst.tokens[dst.n] = matchToken(uint32(l+4-baseMatchLength), uint32(s-t-baseMatchOffset))\n\t\t\tdst.n++\n\t\t\ts += l\n\t\t\tnextEmit = s\n\t\t\tif s >= sLimit {\n\t\t\t\tt += l\n\t\t\t\t// Index first pair after match end.\n\t\t\t\tif int(t+4) < len(src) && t > 0 {\n\t\t\t\t\tcv := load3232(src, t)\n\t\t\t\t\tnextHash = hash(cv)\n\t\t\t\t\te.table[nextHash&tableMask] = tableEntryPrev{\n\t\t\t\t\t\tPrev: e.table[nextHash&tableMask].Cur,\n\t\t\t\t\t\tCur:  tableEntry{offset: e.cur + t, val: cv},\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tgoto emitRemainder\n\t\t\t}\n\n\t\t\t// We could immediately start working at s now, but to improve\n\t\t\t// compression we first update the hash table at s-3 to s. If\n\t\t\t// another emitCopy is not our next move, also calculate nextHash\n\t\t\t// at s+1. 
At least on GOARCH=amd64, these three hash calculations\n\t\t\t// are faster as one load64 call (with some shifts) instead of\n\t\t\t// three load32 calls.\n\t\t\tx := load6432(src, s-3)\n\t\t\tprevHash := hash(uint32(x))\n\t\t\te.table[prevHash&tableMask] = tableEntryPrev{\n\t\t\t\tPrev: e.table[prevHash&tableMask].Cur,\n\t\t\t\tCur:  tableEntry{offset: e.cur + s - 3, val: uint32(x)},\n\t\t\t}\n\t\t\tx >>= 8\n\t\t\tprevHash = hash(uint32(x))\n\n\t\t\te.table[prevHash&tableMask] = tableEntryPrev{\n\t\t\t\tPrev: e.table[prevHash&tableMask].Cur,\n\t\t\t\tCur:  tableEntry{offset: e.cur + s - 2, val: uint32(x)},\n\t\t\t}\n\t\t\tx >>= 8\n\t\t\tprevHash = hash(uint32(x))\n\n\t\t\te.table[prevHash&tableMask] = tableEntryPrev{\n\t\t\t\tPrev: e.table[prevHash&tableMask].Cur,\n\t\t\t\tCur:  tableEntry{offset: e.cur + s - 1, val: uint32(x)},\n\t\t\t}\n\t\t\tx >>= 8\n\t\t\tcurrHash := hash(uint32(x))\n\t\t\tcandidates := e.table[currHash&tableMask]\n\t\t\tcv = uint32(x)\n\t\t\te.table[currHash&tableMask] = tableEntryPrev{\n\t\t\t\tPrev: candidates.Cur,\n\t\t\t\tCur:  tableEntry{offset: s + e.cur, val: cv},\n\t\t\t}\n\n\t\t\t// Check both candidates\n\t\t\tcandidate = candidates.Cur\n\t\t\tif cv == candidate.val {\n\t\t\t\toffset := s - (candidate.offset - e.cur)\n\t\t\t\tif offset <= maxMatchOffset {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t// We only check if value mismatches.\n\t\t\t\t// Offset will always be invalid in other cases.\n\t\t\t\tcandidate = candidates.Prev\n\t\t\t\tif cv == candidate.val {\n\t\t\t\t\toffset := s - (candidate.offset - e.cur)\n\t\t\t\t\tif offset <= maxMatchOffset {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tcv = uint32(x >> 8)\n\t\t\tnextHash = hash(cv)\n\t\t\ts++\n\t\t\tbreak\n\t\t}\n\t}\n\nemitRemainder:\n\tif int(nextEmit) < len(src) {\n\t\temitLiteral(dst, src[nextEmit:])\n\t}\n\te.cur += int32(len(src))\n\te.prev = e.prev[:len(src)]\n\tcopy(e.prev, src)\n}\n\n// snappyL4\ntype snappyL4 struct 
{\n\tsnappyL3\n}\n\n// Encode uses a similar algorithm to level 3,\n// but will check up to two candidates if first isn't long enough.\nfunc (e *snappyL4) Encode(dst *tokens, src []byte) {\n\tconst (\n\t\tinputMargin            = 8 - 3\n\t\tminNonLiteralBlockSize = 1 + 1 + inputMargin\n\t\tmatchLenGood           = 12\n\t)\n\n\t// Protect against e.cur wraparound.\n\tif e.cur > 1<<30 {\n\t\tfor i := range e.table {\n\t\t\te.table[i] = tableEntryPrev{}\n\t\t}\n\t\te.snappyGen = snappyGen{cur: maxStoreBlockSize, prev: e.prev[:0]}\n\t}\n\n\t// This check isn't in the Snappy implementation, but there, the caller\n\t// instead of the callee handles this case.\n\tif len(src) < minNonLiteralBlockSize {\n\t\t// We do not fill the token table.\n\t\t// This will be picked up by caller.\n\t\tdst.n = uint16(len(src))\n\t\te.cur += maxStoreBlockSize\n\t\te.prev = e.prev[:0]\n\t\treturn\n\t}\n\n\t// sLimit is when to stop looking for offset/length copies. The inputMargin\n\t// lets us use a fast path for emitLiteral in the main loop, while we are\n\t// looking for copies.\n\tsLimit := int32(len(src) - inputMargin)\n\n\t// nextEmit is where in src the next emitLiteral should start from.\n\tnextEmit := int32(0)\n\ts := int32(0)\n\tcv := load3232(src, s)\n\tnextHash := hash(cv)\n\n\tfor {\n\t\t// Copied from the C++ snappy implementation:\n\t\t//\n\t\t// Heuristic match skipping: If 32 bytes are scanned with no matches\n\t\t// found, start looking only at every other byte. If 32 more bytes are\n\t\t// scanned (or skipped), look at every third byte, etc.. When a match\n\t\t// is found, immediately go back to looking at every byte. 
This is a\n\t\t// small loss (~5% performance, ~0.1% density) for compressible data\n\t\t// due to more bookkeeping, but for non-compressible data (such as\n\t\t// JPEG) it's a huge win since the compressor quickly \"realizes\" the\n\t\t// data is incompressible and doesn't bother looking for matches\n\t\t// everywhere.\n\t\t//\n\t\t// The \"skip\" variable keeps track of how many bytes there are since\n\t\t// the last match; dividing it by 32 (ie. right-shifting by five) gives\n\t\t// the number of bytes to move ahead for each iteration.\n\t\tskip := int32(32)\n\n\t\tnextS := s\n\t\tvar candidate tableEntry\n\t\tvar candidateAlt tableEntry\n\t\tfor {\n\t\t\ts = nextS\n\t\t\tbytesBetweenHashLookups := skip >> 5\n\t\t\tnextS = s + bytesBetweenHashLookups\n\t\t\tskip += bytesBetweenHashLookups\n\t\t\tif nextS > sLimit {\n\t\t\t\tgoto emitRemainder\n\t\t\t}\n\t\t\tcandidates := e.table[nextHash&tableMask]\n\t\t\tnow := load3232(src, nextS)\n\t\t\te.table[nextHash&tableMask] = tableEntryPrev{Prev: candidates.Cur, Cur: tableEntry{offset: s + e.cur, val: cv}}\n\t\t\tnextHash = hash(now)\n\n\t\t\t// Check both candidates\n\t\t\tcandidate = candidates.Cur\n\t\t\tif cv == candidate.val {\n\t\t\t\toffset := s - (candidate.offset - e.cur)\n\t\t\t\tif offset < maxMatchOffset {\n\t\t\t\t\toffset = s - (candidates.Prev.offset - e.cur)\n\t\t\t\t\tif cv == candidates.Prev.val && offset < maxMatchOffset {\n\t\t\t\t\t\tcandidateAlt = candidates.Prev\n\t\t\t\t\t}\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t// We only check if value mismatches.\n\t\t\t\t// Offset will always be invalid in other cases.\n\t\t\t\tcandidate = candidates.Prev\n\t\t\t\tif cv == candidate.val {\n\t\t\t\t\toffset := s - (candidate.offset - e.cur)\n\t\t\t\t\tif offset < maxMatchOffset {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tcv = now\n\t\t}\n\n\t\t// A 4-byte match has been found. We'll later see if more than 4 bytes\n\t\t// match. 
But, prior to the match, src[nextEmit:s] are unmatched. Emit\n\t\t// them as literal bytes.\n\t\temitLiteral(dst, src[nextEmit:s])\n\n\t\t// Call emitCopy, and then see if another emitCopy could be our next\n\t\t// move. Repeat until we find no match for the input immediately after\n\t\t// what was consumed by the last emitCopy call.\n\t\t//\n\t\t// If we exit this loop normally then we need to call emitLiteral next,\n\t\t// though we don't yet know how big the literal will be. We handle that\n\t\t// by proceeding to the next iteration of the main loop. We also can\n\t\t// exit this loop via goto if we get close to exhausting the input.\n\t\tfor {\n\t\t\t// Invariant: we have a 4-byte match at s, and no need to emit any\n\t\t\t// literal bytes prior to s.\n\n\t\t\t// Extend the 4-byte match as long as possible.\n\t\t\t//\n\t\t\ts += 4\n\t\t\tt := candidate.offset - e.cur + 4\n\t\t\tl := e.matchlen(s, t, src)\n\t\t\t// Try alternative candidate if match length < matchLenGood.\n\t\t\tif l < matchLenGood-4 && candidateAlt.offset != 0 {\n\t\t\t\tt2 := candidateAlt.offset - e.cur + 4\n\t\t\t\tl2 := e.matchlen(s, t2, src)\n\t\t\t\tif l2 > l {\n\t\t\t\t\tl = l2\n\t\t\t\t\tt = t2\n\t\t\t\t}\n\t\t\t}\n\t\t\t// matchToken is flate's equivalent of Snappy's emitCopy. (length,offset)\n\t\t\tdst.tokens[dst.n] = matchToken(uint32(l+4-baseMatchLength), uint32(s-t-baseMatchOffset))\n\t\t\tdst.n++\n\t\t\ts += l\n\t\t\tnextEmit = s\n\t\t\tif s >= sLimit {\n\t\t\t\tt += l\n\t\t\t\t// Index first pair after match end.\n\t\t\t\tif int(t+4) < len(src) && t > 0 {\n\t\t\t\t\tcv := load3232(src, t)\n\t\t\t\t\tnextHash = hash(cv)\n\t\t\t\t\te.table[nextHash&tableMask] = tableEntryPrev{\n\t\t\t\t\t\tPrev: e.table[nextHash&tableMask].Cur,\n\t\t\t\t\t\tCur:  tableEntry{offset: e.cur + t, val: cv},\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tgoto emitRemainder\n\t\t\t}\n\n\t\t\t// We could immediately start working at s now, but to improve\n\t\t\t// compression we first update the hash table at s-3 to s. 
If\n\t\t\t// another emitCopy is not our next move, also calculate nextHash\n\t\t\t// at s+1. At least on GOARCH=amd64, these three hash calculations\n\t\t\t// are faster as one load64 call (with some shifts) instead of\n\t\t\t// three load32 calls.\n\t\t\tx := load6432(src, s-3)\n\t\t\tprevHash := hash(uint32(x))\n\t\t\te.table[prevHash&tableMask] = tableEntryPrev{\n\t\t\t\tPrev: e.table[prevHash&tableMask].Cur,\n\t\t\t\tCur:  tableEntry{offset: e.cur + s - 3, val: uint32(x)},\n\t\t\t}\n\t\t\tx >>= 8\n\t\t\tprevHash = hash(uint32(x))\n\n\t\t\te.table[prevHash&tableMask] = tableEntryPrev{\n\t\t\t\tPrev: e.table[prevHash&tableMask].Cur,\n\t\t\t\tCur:  tableEntry{offset: e.cur + s - 2, val: uint32(x)},\n\t\t\t}\n\t\t\tx >>= 8\n\t\t\tprevHash = hash(uint32(x))\n\n\t\t\te.table[prevHash&tableMask] = tableEntryPrev{\n\t\t\t\tPrev: e.table[prevHash&tableMask].Cur,\n\t\t\t\tCur:  tableEntry{offset: e.cur + s - 1, val: uint32(x)},\n\t\t\t}\n\t\t\tx >>= 8\n\t\t\tcurrHash := hash(uint32(x))\n\t\t\tcandidates := e.table[currHash&tableMask]\n\t\t\tcv = uint32(x)\n\t\t\te.table[currHash&tableMask] = tableEntryPrev{\n\t\t\t\tPrev: candidates.Cur,\n\t\t\t\tCur:  tableEntry{offset: s + e.cur, val: cv},\n\t\t\t}\n\n\t\t\t// Check both candidates\n\t\t\tcandidate = candidates.Cur\n\t\t\tcandidateAlt = tableEntry{}\n\t\t\tif cv == candidate.val {\n\t\t\t\toffset := s - (candidate.offset - e.cur)\n\t\t\t\tif offset <= maxMatchOffset {\n\t\t\t\t\toffset = s - (candidates.Prev.offset - e.cur)\n\t\t\t\t\tif cv == candidates.Prev.val && offset <= maxMatchOffset {\n\t\t\t\t\t\tcandidateAlt = candidates.Prev\n\t\t\t\t\t}\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t// We only check if value mismatches.\n\t\t\t\t// Offset will always be invalid in other cases.\n\t\t\t\tcandidate = candidates.Prev\n\t\t\t\tif cv == candidate.val {\n\t\t\t\t\toffset := s - (candidate.offset - e.cur)\n\t\t\t\t\tif offset <= maxMatchOffset 
{\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tcv = uint32(x >> 8)\n\t\t\tnextHash = hash(cv)\n\t\t\ts++\n\t\t\tbreak\n\t\t}\n\t}\n\nemitRemainder:\n\tif int(nextEmit) < len(src) {\n\t\temitLiteral(dst, src[nextEmit:])\n\t}\n\te.cur += int32(len(src))\n\te.prev = e.prev[:len(src)]\n\tcopy(e.prev, src)\n}\n\nfunc (e *snappyGen) matchlen(s, t int32, src []byte) int32 {\n\ts1 := int(s) + maxMatchLength - 4\n\tif s1 > len(src) {\n\t\ts1 = len(src)\n\t}\n\n\t// If we are inside the current block\n\tif t >= 0 {\n\t\tb := src[t:]\n\t\ta := src[s:s1]\n\t\tb = b[:len(a)]\n\t\t// Extend the match to be as long as possible.\n\t\tfor i := range a {\n\t\t\tif a[i] != b[i] {\n\t\t\t\treturn int32(i)\n\t\t\t}\n\t\t}\n\t\treturn int32(len(a))\n\t}\n\n\t// We found a match in the previous block.\n\ttp := int32(len(e.prev)) + t\n\tif tp < 0 {\n\t\treturn 0\n\t}\n\n\t// Extend the match to be as long as possible.\n\ta := src[s:s1]\n\tb := e.prev[tp:]\n\tif len(b) > len(a) {\n\t\tb = b[:len(a)]\n\t}\n\ta = a[:len(b)]\n\tfor i := range b {\n\t\tif a[i] != b[i] {\n\t\t\treturn int32(i)\n\t\t}\n\t}\n\n\t// If we reached our limit, we matched everything we are\n\t// allowed to in the previous block and we return.\n\tn := int32(len(b))\n\tif int(s+n) == s1 {\n\t\treturn n\n\t}\n\n\t// Continue looking for more matches in the current block.\n\ta = src[s+n : s1]\n\tb = src[:len(a)]\n\tfor i := range a {\n\t\tif a[i] != b[i] {\n\t\t\treturn int32(i) + n\n\t\t}\n\t}\n\treturn int32(len(a)) + n\n}\n\n// Reset the encoding table.\nfunc (e *snappyGen) Reset() {\n\te.prev = e.prev[:0]\n\te.cur += maxMatchOffset\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/compress/flate/token.go",
    "content": "// Copyright 2009 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage flate\n\nimport \"fmt\"\n\nconst (\n\t// 2 bits:   type   0 = literal  1=EOF  2=Match   3=Unused\n\t// 8 bits:   xlength = length - MIN_MATCH_LENGTH\n\t// 22 bits   xoffset = offset - MIN_OFFSET_SIZE, or literal\n\tlengthShift = 22\n\toffsetMask  = 1<<lengthShift - 1\n\ttypeMask    = 3 << 30\n\tliteralType = 0 << 30\n\tmatchType   = 1 << 30\n)\n\n// The length code for length X (MIN_MATCH_LENGTH <= X <= MAX_MATCH_LENGTH)\n// is lengthCodes[length - MIN_MATCH_LENGTH]\nvar lengthCodes = [...]uint32{\n\t0, 1, 2, 3, 4, 5, 6, 7, 8, 8,\n\t9, 9, 10, 10, 11, 11, 12, 12, 12, 12,\n\t13, 13, 13, 13, 14, 14, 14, 14, 15, 15,\n\t15, 15, 16, 16, 16, 16, 16, 16, 16, 16,\n\t17, 17, 17, 17, 17, 17, 17, 17, 18, 18,\n\t18, 18, 18, 18, 18, 18, 19, 19, 19, 19,\n\t19, 19, 19, 19, 20, 20, 20, 20, 20, 20,\n\t20, 20, 20, 20, 20, 20, 20, 20, 20, 20,\n\t21, 21, 21, 21, 21, 21, 21, 21, 21, 21,\n\t21, 21, 21, 21, 21, 21, 22, 22, 22, 22,\n\t22, 22, 22, 22, 22, 22, 22, 22, 22, 22,\n\t22, 22, 23, 23, 23, 23, 23, 23, 23, 23,\n\t23, 23, 23, 23, 23, 23, 23, 23, 24, 24,\n\t24, 24, 24, 24, 24, 24, 24, 24, 24, 24,\n\t24, 24, 24, 24, 24, 24, 24, 24, 24, 24,\n\t24, 24, 24, 24, 24, 24, 24, 24, 24, 24,\n\t25, 25, 25, 25, 25, 25, 25, 25, 25, 25,\n\t25, 25, 25, 25, 25, 25, 25, 25, 25, 25,\n\t25, 25, 25, 25, 25, 25, 25, 25, 25, 25,\n\t25, 25, 26, 26, 26, 26, 26, 26, 26, 26,\n\t26, 26, 26, 26, 26, 26, 26, 26, 26, 26,\n\t26, 26, 26, 26, 26, 26, 26, 26, 26, 26,\n\t26, 26, 26, 26, 27, 27, 27, 27, 27, 27,\n\t27, 27, 27, 27, 27, 27, 27, 27, 27, 27,\n\t27, 27, 27, 27, 27, 27, 27, 27, 27, 27,\n\t27, 27, 27, 27, 27, 28,\n}\n\nvar offsetCodes = [...]uint32{\n\t0, 1, 2, 3, 4, 4, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7,\n\t8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9,\n\t10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,\n\t11, 11, 
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,\n\t12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,\n\t12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,\n\t13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,\n\t13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,\n\t14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14,\n\t14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14,\n\t14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14,\n\t14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14,\n\t15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15,\n\t15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15,\n\t15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15,\n\t15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15,\n}\n\ntype token uint32\n\ntype tokens struct {\n\ttokens [maxStoreBlockSize + 1]token\n\tn      uint16 // Must be able to contain maxStoreBlockSize\n}\n\n// Convert a literal into a literal token.\nfunc literalToken(literal uint32) token { return token(literalType + literal) }\n\n// Convert a < xlength, xoffset > pair into a match token.\nfunc matchToken(xlength uint32, xoffset uint32) token {\n\treturn token(matchType + xlength<<lengthShift + xoffset)\n}\n\nfunc matchTokend(xlength uint32, xoffset uint32) token {\n\tif xlength > maxMatchLength || xoffset > maxMatchOffset {\n\t\tpanic(fmt.Sprintf(\"Invalid match: len: %d, offset: %d\\n\", xlength, xoffset))\n\t\treturn token(matchType)\n\t}\n\treturn token(matchType + xlength<<lengthShift + xoffset)\n}\n\n// Returns the type of a token\nfunc (t token) typ() uint32 { return uint32(t) & typeMask }\n\n// Returns the literal of a literal token\nfunc (t token) literal() uint32 { return uint32(t - literalType) }\n\n// Returns the extra offset of a match token\nfunc (t token) offset() uint32 { return uint32(t) & offsetMask }\n\nfunc (t token) length() uint32 { return uint32((t 
- matchType) >> lengthShift) }\n\nfunc lengthCode(len uint32) uint32 { return lengthCodes[len] }\n\n// Returns the offset code corresponding to a specific offset\nfunc offsetCode(off uint32) uint32 {\n\tif off < uint32(len(offsetCodes)) {\n\t\treturn offsetCodes[off]\n\t} else if off>>7 < uint32(len(offsetCodes)) {\n\t\treturn offsetCodes[off>>7] + 14\n\t} else {\n\t\treturn offsetCodes[off>>14] + 28\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/cpuid/LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2015 Klaus Post\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n"
  },
  {
    "path": "vendor/github.com/klauspost/cpuid/cpuid.go",
    "content": "// Copyright (c) 2015 Klaus Post, released under MIT License. See LICENSE file.\n\n// Package cpuid provides information about the CPU running the current program.\n//\n// CPU features are detected on startup, and kept for fast access through the life of the application.\n// Currently x86 / x64 (AMD64) is supported.\n//\n// You can access the CPU information by accessing the shared CPU variable of the cpuid library.\n//\n// Package home: https://github.com/klauspost/cpuid\npackage cpuid\n\nimport \"strings\"\n\n// Vendor is a representation of a CPU vendor.\ntype Vendor int\n\nconst (\n\tOther Vendor = iota\n\tIntel\n\tAMD\n\tVIA\n\tTransmeta\n\tNSC\n\tKVM  // Kernel-based Virtual Machine\n\tMSVM // Microsoft Hyper-V or Windows Virtual PC\n\tVMware\n\tXenHVM\n)\n\nconst (\n\tCMOV        = 1 << iota // i686 CMOV\n\tNX                      // NX (No-Execute) bit\n\tAMD3DNOW                // AMD 3DNOW\n\tAMD3DNOWEXT             // AMD 3DNowExt\n\tMMX                     // standard MMX\n\tMMXEXT                  // SSE integer functions or AMD MMX ext\n\tSSE                     // SSE functions\n\tSSE2                    // P4 SSE functions\n\tSSE3                    // Prescott SSE3 functions\n\tSSSE3                   // Conroe SSSE3 functions\n\tSSE4                    // Penryn SSE4.1 functions\n\tSSE4A                   // AMD Barcelona microarchitecture SSE4a instructions\n\tSSE42                   // Nehalem SSE4.2 functions\n\tAVX                     // AVX functions\n\tAVX2                    // AVX2 functions\n\tFMA3                    // Intel FMA 3\n\tFMA4                    // Bulldozer FMA4 functions\n\tXOP                     // Bulldozer XOP functions\n\tF16C                    // Half-precision floating-point conversion\n\tBMI1                    // Bit Manipulation Instruction Set 1\n\tBMI2                    // Bit Manipulation Instruction Set 2\n\tTBM                     // AMD Trailing Bit Manipulation\n\tLZCNT                   
// LZCNT instruction\n\tPOPCNT                  // POPCNT instruction\n\tAESNI                   // Advanced Encryption Standard New Instructions\n\tCLMUL                   // Carry-less Multiplication\n\tHTT                     // Hyperthreading (enabled)\n\tHLE                     // Hardware Lock Elision\n\tRTM                     // Restricted Transactional Memory\n\tRDRAND                  // RDRAND instruction is available\n\tRDSEED                  // RDSEED instruction is available\n\tADX                     // Intel ADX (Multi-Precision Add-Carry Instruction Extensions)\n\tSHA                     // Intel SHA Extensions\n\tAVX512F                 // AVX-512 Foundation\n\tAVX512DQ                // AVX-512 Doubleword and Quadword Instructions\n\tAVX512IFMA              // AVX-512 Integer Fused Multiply-Add Instructions\n\tAVX512PF                // AVX-512 Prefetch Instructions\n\tAVX512ER                // AVX-512 Exponential and Reciprocal Instructions\n\tAVX512CD                // AVX-512 Conflict Detection Instructions\n\tAVX512BW                // AVX-512 Byte and Word Instructions\n\tAVX512VL                // AVX-512 Vector Length Extensions\n\tAVX512VBMI              // AVX-512 Vector Bit Manipulation Instructions\n\tMPX                     // Intel MPX (Memory Protection Extensions)\n\tERMS                    // Enhanced REP MOVSB/STOSB\n\tRDTSCP                  // RDTSCP Instruction\n\tCX16                    // CMPXCHG16B Instruction\n\tSGX                     // Software Guard Extensions\n\n\t// Performance indicators\n\tSSE2SLOW // SSE2 is supported, but usually not faster\n\tSSE3SLOW // SSE3 is supported, but usually not faster\n\tATOM     // Atom processor, some SSSE3 instructions are slower\n)\n\nvar flagNames = map[Flags]string{\n\tCMOV:        \"CMOV\",        // i686 CMOV\n\tNX:          \"NX\",          // NX (No-Execute) bit\n\tAMD3DNOW:    \"AMD3DNOW\",    // AMD 3DNOW\n\tAMD3DNOWEXT: \"AMD3DNOWEXT\", // AMD 3DNowExt\n\tMMX:         
\"MMX\",         // Standard MMX\n\tMMXEXT:      \"MMXEXT\",      // SSE integer functions or AMD MMX ext\n\tSSE:         \"SSE\",         // SSE functions\n\tSSE2:        \"SSE2\",        // P4 SSE2 functions\n\tSSE3:        \"SSE3\",        // Prescott SSE3 functions\n\tSSSE3:       \"SSSE3\",       // Conroe SSSE3 functions\n\tSSE4:        \"SSE4.1\",      // Penryn SSE4.1 functions\n\tSSE4A:       \"SSE4A\",       // AMD Barcelona microarchitecture SSE4a instructions\n\tSSE42:       \"SSE4.2\",      // Nehalem SSE4.2 functions\n\tAVX:         \"AVX\",         // AVX functions\n\tAVX2:        \"AVX2\",        // AVX functions\n\tFMA3:        \"FMA3\",        // Intel FMA 3\n\tFMA4:        \"FMA4\",        // Bulldozer FMA4 functions\n\tXOP:         \"XOP\",         // Bulldozer XOP functions\n\tF16C:        \"F16C\",        // Half-precision floating-point conversion\n\tBMI1:        \"BMI1\",        // Bit Manipulation Instruction Set 1\n\tBMI2:        \"BMI2\",        // Bit Manipulation Instruction Set 2\n\tTBM:         \"TBM\",         // AMD Trailing Bit Manipulation\n\tLZCNT:       \"LZCNT\",       // LZCNT instruction\n\tPOPCNT:      \"POPCNT\",      // POPCNT instruction\n\tAESNI:       \"AESNI\",       // Advanced Encryption Standard New Instructions\n\tCLMUL:       \"CLMUL\",       // Carry-less Multiplication\n\tHTT:         \"HTT\",         // Hyperthreading (enabled)\n\tHLE:         \"HLE\",         // Hardware Lock Elision\n\tRTM:         \"RTM\",         // Restricted Transactional Memory\n\tRDRAND:      \"RDRAND\",      // RDRAND instruction is available\n\tRDSEED:      \"RDSEED\",      // RDSEED instruction is available\n\tADX:         \"ADX\",         // Intel ADX (Multi-Precision Add-Carry Instruction Extensions)\n\tSHA:         \"SHA\",         // Intel SHA Extensions\n\tAVX512F:     \"AVX512F\",     // AVX-512 Foundation\n\tAVX512DQ:    \"AVX512DQ\",    // AVX-512 Doubleword and Quadword Instructions\n\tAVX512IFMA:  \"AVX512IFMA\",  // 
AVX-512 Integer Fused Multiply-Add Instructions\n\tAVX512PF:    \"AVX512PF\",    // AVX-512 Prefetch Instructions\n\tAVX512ER:    \"AVX512ER\",    // AVX-512 Exponential and Reciprocal Instructions\n\tAVX512CD:    \"AVX512CD\",    // AVX-512 Conflict Detection Instructions\n\tAVX512BW:    \"AVX512BW\",    // AVX-512 Byte and Word Instructions\n\tAVX512VL:    \"AVX512VL\",    // AVX-512 Vector Length Extensions\n\tAVX512VBMI:  \"AVX512VBMI\",  // AVX-512 Vector Bit Manipulation Instructions\n\tMPX:         \"MPX\",         // Intel MPX (Memory Protection Extensions)\n\tERMS:        \"ERMS\",        // Enhanced REP MOVSB/STOSB\n\tRDTSCP:      \"RDTSCP\",      // RDTSCP Instruction\n\tCX16:        \"CX16\",        // CMPXCHG16B Instruction\n\tSGX:         \"SGX\",         // Software Guard Extensions\n\n\t// Performance indicators\n\tSSE2SLOW: \"SSE2SLOW\", // SSE2 supported, but usually not faster\n\tSSE3SLOW: \"SSE3SLOW\", // SSE3 supported, but usually not faster\n\tATOM:     \"ATOM\",     // Atom processor, some SSSE3 instructions are slower\n\n}\n\n// CPUInfo contains information about the detected system CPU.\ntype CPUInfo struct {\n\tBrandName      string // Brand name reported by the CPU\n\tVendorID       Vendor // Comparable CPU vendor ID\n\tFeatures       Flags  // Features of the CPU\n\tPhysicalCores  int    // Number of physical processor cores in your CPU. Will be 0 if undetectable.\n\tThreadsPerCore int    // Number of threads per physical core. Will be 1 if undetectable.\n\tLogicalCores   int    // Number of physical cores times threads that can run on each core through the use of hyperthreading. Will be 0 if undetectable.\n\tFamily         int    // CPU family number\n\tModel          int    // CPU model number\n\tCacheLine      int    // Cache line size in bytes. Will be 0 if undetectable.\n\tCache          struct {\n\t\tL1I int // L1 Instruction Cache (per core or shared). Will be -1 if undetected\n\t\tL1D int // L1 Data Cache (per core or shared). 
Will be -1 if undetected\n\tL2  int // L2 Cache (per core or shared). Will be -1 if undetected\n\tL3  int // L3 Cache (per core or shared). Will be -1 if undetected\n\t}\n\tSGX       SGXSupport\n\tmaxFunc   uint32\n\tmaxExFunc uint32\n}\n\nvar cpuid func(op uint32) (eax, ebx, ecx, edx uint32)\nvar cpuidex func(op, op2 uint32) (eax, ebx, ecx, edx uint32)\nvar xgetbv func(index uint32) (eax, edx uint32)\nvar rdtscpAsm func() (eax, ebx, ecx, edx uint32)\n\n// CPU contains information about the CPU as detected on startup,\n// or when Detect last was called.\n//\n// Use this as the primary entry point to your data;\n// this way queries are fast.\nvar CPU CPUInfo\n\nfunc init() {\n\tinitCPU()\n\tDetect()\n}\n\n// Detect will re-detect current CPU info.\n// This will replace the content of the exported CPU variable.\n//\n// Unless you expect the CPU to change while you are running your program\n// you should not need to call this function.\n// If you call this, you must ensure that no other goroutine is accessing the\n// exported CPU variable.\nfunc Detect() {\n\tCPU.maxFunc = maxFunctionID()\n\tCPU.maxExFunc = maxExtendedFunction()\n\tCPU.BrandName = brandName()\n\tCPU.CacheLine = cacheLine()\n\tCPU.Family, CPU.Model = familyModel()\n\tCPU.Features = support()\n\tCPU.SGX = sgx(CPU.Features&SGX != 0)\n\tCPU.ThreadsPerCore = threadsPerCore()\n\tCPU.LogicalCores = logicalCores()\n\tCPU.PhysicalCores = physicalCores()\n\tCPU.VendorID = vendorID()\n\tCPU.cacheSize()\n}\n\n// Generated here: http://play.golang.org/p/BxFH2Gdc0G\n\n// Cmov indicates support of CMOV instructions\nfunc (c CPUInfo) Cmov() bool {\n\treturn c.Features&CMOV != 0\n}\n\n// Amd3dnow indicates support of AMD 3DNOW! instructions\nfunc (c CPUInfo) Amd3dnow() bool {\n\treturn c.Features&AMD3DNOW != 0\n}\n\n// Amd3dnowExt indicates support of AMD 3DNOW! 
Extended instructions\nfunc (c CPUInfo) Amd3dnowExt() bool {\n\treturn c.Features&AMD3DNOWEXT != 0\n}\n\n// MMX indicates support of MMX instructions\nfunc (c CPUInfo) MMX() bool {\n\treturn c.Features&MMX != 0\n}\n\n// MMXExt indicates support of MMXEXT instructions\n// (SSE integer functions or AMD MMX ext)\nfunc (c CPUInfo) MMXExt() bool {\n\treturn c.Features&MMXEXT != 0\n}\n\n// SSE indicates support of SSE instructions\nfunc (c CPUInfo) SSE() bool {\n\treturn c.Features&SSE != 0\n}\n\n// SSE2 indicates support of SSE 2 instructions\nfunc (c CPUInfo) SSE2() bool {\n\treturn c.Features&SSE2 != 0\n}\n\n// SSE3 indicates support of SSE 3 instructions\nfunc (c CPUInfo) SSE3() bool {\n\treturn c.Features&SSE3 != 0\n}\n\n// SSSE3 indicates support of SSSE 3 instructions\nfunc (c CPUInfo) SSSE3() bool {\n\treturn c.Features&SSSE3 != 0\n}\n\n// SSE4 indicates support of SSE 4 (also called SSE 4.1) instructions\nfunc (c CPUInfo) SSE4() bool {\n\treturn c.Features&SSE4 != 0\n}\n\n// SSE42 indicates support of SSE4.2 instructions\nfunc (c CPUInfo) SSE42() bool {\n\treturn c.Features&SSE42 != 0\n}\n\n// AVX indicates support of AVX instructions\n// and operating system support of AVX instructions\nfunc (c CPUInfo) AVX() bool {\n\treturn c.Features&AVX != 0\n}\n\n// AVX2 indicates support of AVX2 instructions\nfunc (c CPUInfo) AVX2() bool {\n\treturn c.Features&AVX2 != 0\n}\n\n// FMA3 indicates support of FMA3 instructions\nfunc (c CPUInfo) FMA3() bool {\n\treturn c.Features&FMA3 != 0\n}\n\n// FMA4 indicates support of FMA4 instructions\nfunc (c CPUInfo) FMA4() bool {\n\treturn c.Features&FMA4 != 0\n}\n\n// XOP indicates support of XOP instructions\nfunc (c CPUInfo) XOP() bool {\n\treturn c.Features&XOP != 0\n}\n\n// F16C indicates support of F16C instructions\nfunc (c CPUInfo) F16C() bool {\n\treturn c.Features&F16C != 0\n}\n\n// BMI1 indicates support of BMI1 instructions\nfunc (c CPUInfo) BMI1() bool {\n\treturn c.Features&BMI1 != 0\n}\n\n// BMI2 indicates support of 
BMI2 instructions\nfunc (c CPUInfo) BMI2() bool {\n\treturn c.Features&BMI2 != 0\n}\n\n// TBM indicates support of TBM instructions\n// (AMD Trailing Bit Manipulation)\nfunc (c CPUInfo) TBM() bool {\n\treturn c.Features&TBM != 0\n}\n\n// Lzcnt indicates support of LZCNT instruction\nfunc (c CPUInfo) Lzcnt() bool {\n\treturn c.Features&LZCNT != 0\n}\n\n// Popcnt indicates support of POPCNT instruction\nfunc (c CPUInfo) Popcnt() bool {\n\treturn c.Features&POPCNT != 0\n}\n\n// HTT indicates the processor has Hyperthreading enabled\nfunc (c CPUInfo) HTT() bool {\n\treturn c.Features&HTT != 0\n}\n\n// SSE2Slow indicates that SSE2 may be slow on this processor\nfunc (c CPUInfo) SSE2Slow() bool {\n\treturn c.Features&SSE2SLOW != 0\n}\n\n// SSE3Slow indicates that SSE3 may be slow on this processor\nfunc (c CPUInfo) SSE3Slow() bool {\n\treturn c.Features&SSE3SLOW != 0\n}\n\n// AesNi indicates support of AES-NI instructions\n// (Advanced Encryption Standard New Instructions)\nfunc (c CPUInfo) AesNi() bool {\n\treturn c.Features&AESNI != 0\n}\n\n// Clmul indicates support of CLMUL instructions\n// (Carry-less Multiplication)\nfunc (c CPUInfo) Clmul() bool {\n\treturn c.Features&CLMUL != 0\n}\n\n// NX indicates support of NX (No-Execute) bit\nfunc (c CPUInfo) NX() bool {\n\treturn c.Features&NX != 0\n}\n\n// SSE4A indicates support of AMD Barcelona microarchitecture SSE4a instructions\nfunc (c CPUInfo) SSE4A() bool {\n\treturn c.Features&SSE4A != 0\n}\n\n// HLE indicates support of Hardware Lock Elision\nfunc (c CPUInfo) HLE() bool {\n\treturn c.Features&HLE != 0\n}\n\n// RTM indicates support of Restricted Transactional Memory\nfunc (c CPUInfo) RTM() bool {\n\treturn c.Features&RTM != 0\n}\n\n// Rdrand indicates support of RDRAND instruction is available\nfunc (c CPUInfo) Rdrand() bool {\n\treturn c.Features&RDRAND != 0\n}\n\n// Rdseed indicates support of RDSEED instruction is available\nfunc (c CPUInfo) Rdseed() bool {\n\treturn c.Features&RDSEED != 0\n}\n\n// ADX 
indicates support of Intel ADX (Multi-Precision Add-Carry Instruction Extensions)\nfunc (c CPUInfo) ADX() bool {\n\treturn c.Features&ADX != 0\n}\n\n// SHA indicates support of Intel SHA Extensions\nfunc (c CPUInfo) SHA() bool {\n\treturn c.Features&SHA != 0\n}\n\n// AVX512F indicates support of AVX-512 Foundation\nfunc (c CPUInfo) AVX512F() bool {\n\treturn c.Features&AVX512F != 0\n}\n\n// AVX512DQ indicates support of AVX-512 Doubleword and Quadword Instructions\nfunc (c CPUInfo) AVX512DQ() bool {\n\treturn c.Features&AVX512DQ != 0\n}\n\n// AVX512IFMA indicates support of AVX-512 Integer Fused Multiply-Add Instructions\nfunc (c CPUInfo) AVX512IFMA() bool {\n\treturn c.Features&AVX512IFMA != 0\n}\n\n// AVX512PF indicates support of AVX-512 Prefetch Instructions\nfunc (c CPUInfo) AVX512PF() bool {\n\treturn c.Features&AVX512PF != 0\n}\n\n// AVX512ER indicates support of AVX-512 Exponential and Reciprocal Instructions\nfunc (c CPUInfo) AVX512ER() bool {\n\treturn c.Features&AVX512ER != 0\n}\n\n// AVX512CD indicates support of AVX-512 Conflict Detection Instructions\nfunc (c CPUInfo) AVX512CD() bool {\n\treturn c.Features&AVX512CD != 0\n}\n\n// AVX512BW indicates support of AVX-512 Byte and Word Instructions\nfunc (c CPUInfo) AVX512BW() bool {\n\treturn c.Features&AVX512BW != 0\n}\n\n// AVX512VL indicates support of AVX-512 Vector Length Extensions\nfunc (c CPUInfo) AVX512VL() bool {\n\treturn c.Features&AVX512VL != 0\n}\n\n// AVX512VBMI indicates support of AVX-512 Vector Bit Manipulation Instructions\nfunc (c CPUInfo) AVX512VBMI() bool {\n\treturn c.Features&AVX512VBMI != 0\n}\n\n// MPX indicates support of Intel MPX (Memory Protection Extensions)\nfunc (c CPUInfo) MPX() bool {\n\treturn c.Features&MPX != 0\n}\n\n// ERMS indicates support of Enhanced REP MOVSB/STOSB\nfunc (c CPUInfo) ERMS() bool {\n\treturn c.Features&ERMS != 0\n}\n\nfunc (c CPUInfo) RDTSCP() bool {\n\treturn c.Features&RDTSCP != 0\n}\n\nfunc (c CPUInfo) CX16() bool {\n\treturn c.Features&CX16 != 
0\n}\n\n// Atom indicates an Atom processor\nfunc (c CPUInfo) Atom() bool {\n\treturn c.Features&ATOM != 0\n}\n\n// Intel returns true if vendor is recognized as Intel\nfunc (c CPUInfo) Intel() bool {\n\treturn c.VendorID == Intel\n}\n\n// AMD returns true if vendor is recognized as AMD\nfunc (c CPUInfo) AMD() bool {\n\treturn c.VendorID == AMD\n}\n\n// Transmeta returns true if vendor is recognized as Transmeta\nfunc (c CPUInfo) Transmeta() bool {\n\treturn c.VendorID == Transmeta\n}\n\n// NSC returns true if vendor is recognized as National Semiconductor\nfunc (c CPUInfo) NSC() bool {\n\treturn c.VendorID == NSC\n}\n\n// VIA returns true if vendor is recognized as VIA\nfunc (c CPUInfo) VIA() bool {\n\treturn c.VendorID == VIA\n}\n\n// RTCounter returns the 64-bit time-stamp counter\n// Uses the RDTSCP instruction. The value 0 is returned\n// if the CPU does not support the instruction.\nfunc (c CPUInfo) RTCounter() uint64 {\n\tif !c.RDTSCP() {\n\t\treturn 0\n\t}\n\ta, _, _, d := rdtscpAsm()\n\treturn uint64(a) | (uint64(d) << 32)\n}\n\n// Ia32TscAux returns the IA32_TSC_AUX part of the RDTSCP.\n// This variable is OS dependent, but on Linux contains information\n// about the current cpu/core the code is running on.\n// If the RDTSCP instruction isn't supported on the CPU, the value 0 is returned.\nfunc (c CPUInfo) Ia32TscAux() uint32 {\n\tif !c.RDTSCP() {\n\t\treturn 0\n\t}\n\t_, _, ecx, _ := rdtscpAsm()\n\treturn ecx\n}\n\n// LogicalCPU will return the Logical CPU the code is currently executing on.\n// This is likely to change when the OS re-schedules the running thread\n// to another CPU.\n// If the current core cannot be detected, -1 will be returned.\nfunc (c CPUInfo) LogicalCPU() int {\n\tif c.maxFunc < 1 {\n\t\treturn -1\n\t}\n\t_, ebx, _, _ := cpuid(1)\n\treturn int(ebx >> 24)\n}\n\n// VM Will return true if the cpu id indicates we are in\n// a virtual machine. 
This is only a hint, and will very likely\n// have many false negatives.\nfunc (c CPUInfo) VM() bool {\n\tswitch c.VendorID {\n\tcase MSVM, KVM, VMware, XenHVM:\n\t\treturn true\n\t}\n\treturn false\n}\n\n// Flags contains detected cpu features and characteristics\ntype Flags uint64\n\n// String returns a string representation of the detected\n// CPU features.\nfunc (f Flags) String() string {\n\treturn strings.Join(f.Strings(), \",\")\n}\n\n// Strings returns an array of the detected features.\nfunc (f Flags) Strings() []string {\n\ts := support()\n\tr := make([]string, 0, 20)\n\tfor i := uint(0); i < 64; i++ {\n\t\tkey := Flags(1 << i)\n\t\tval := flagNames[key]\n\t\tif s&key != 0 {\n\t\t\tr = append(r, val)\n\t\t}\n\t}\n\treturn r\n}\n\nfunc maxExtendedFunction() uint32 {\n\teax, _, _, _ := cpuid(0x80000000)\n\treturn eax\n}\n\nfunc maxFunctionID() uint32 {\n\ta, _, _, _ := cpuid(0)\n\treturn a\n}\n\nfunc brandName() string {\n\tif maxExtendedFunction() >= 0x80000004 {\n\t\tv := make([]uint32, 0, 48)\n\t\tfor i := uint32(0); i < 3; i++ {\n\t\t\ta, b, c, d := cpuid(0x80000002 + i)\n\t\t\tv = append(v, a, b, c, d)\n\t\t}\n\t\treturn strings.Trim(string(valAsString(v...)), \" \")\n\t}\n\treturn \"unknown\"\n}\n\nfunc threadsPerCore() int {\n\tmfi := maxFunctionID()\n\tif mfi < 0x4 || vendorID() != Intel {\n\t\treturn 1\n\t}\n\n\tif mfi < 0xb {\n\t\t_, b, _, d := cpuid(1)\n\t\tif (d & (1 << 28)) != 0 {\n\t\t\t// v will contain logical core count\n\t\t\tv := (b >> 16) & 255\n\t\t\tif v > 1 {\n\t\t\t\ta4, _, _, _ := cpuid(4)\n\t\t\t\t// physical cores\n\t\t\t\tv2 := (a4 >> 26) + 1\n\t\t\t\tif v2 > 0 {\n\t\t\t\t\treturn int(v) / int(v2)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn 1\n\t}\n\t_, b, _, _ := cpuidex(0xb, 0)\n\tif b&0xffff == 0 {\n\t\treturn 1\n\t}\n\treturn int(b & 0xffff)\n}\n\nfunc logicalCores() int {\n\tmfi := maxFunctionID()\n\tswitch vendorID() {\n\tcase Intel:\n\t\t// Use this on old Intel processors\n\t\tif mfi < 0xb {\n\t\t\tif mfi < 1 {\n\t\t\t\treturn 
0\n\t\t\t}\n\t\t\t// CPUID.1:EBX[23:16] represents the maximum number of addressable IDs (initial APIC ID)\n\t\t\t// that can be assigned to logical processors in a physical package.\n\t\t\t// The value may not be the same as the number of logical processors that are present in the hardware of a physical package.\n\t\t\t_, ebx, _, _ := cpuid(1)\n\t\t\tlogical := (ebx >> 16) & 0xff\n\t\t\treturn int(logical)\n\t\t}\n\t\t_, b, _, _ := cpuidex(0xb, 1)\n\t\treturn int(b & 0xffff)\n\tcase AMD:\n\t\t_, b, _, _ := cpuid(1)\n\t\treturn int((b >> 16) & 0xff)\n\tdefault:\n\t\treturn 0\n\t}\n}\n\nfunc familyModel() (int, int) {\n\tif maxFunctionID() < 0x1 {\n\t\treturn 0, 0\n\t}\n\teax, _, _, _ := cpuid(1)\n\tfamily := ((eax >> 8) & 0xf) + ((eax >> 20) & 0xff)\n\tmodel := ((eax >> 4) & 0xf) + ((eax >> 12) & 0xf0)\n\treturn int(family), int(model)\n}\n\nfunc physicalCores() int {\n\tswitch vendorID() {\n\tcase Intel:\n\t\treturn logicalCores() / threadsPerCore()\n\tcase AMD:\n\t\tif maxExtendedFunction() >= 0x80000008 {\n\t\t\t_, _, c, _ := cpuid(0x80000008)\n\t\t\treturn int(c&0xff) + 1\n\t\t}\n\t}\n\treturn 0\n}\n\n// Excerpt from http://en.wikipedia.org/wiki/CPUID#EAX.3D0:_Get_vendor_ID\nvar vendorMapping = map[string]Vendor{\n\t\"AMDisbetter!\": AMD,\n\t\"AuthenticAMD\": AMD,\n\t\"CentaurHauls\": VIA,\n\t\"GenuineIntel\": Intel,\n\t\"TransmetaCPU\": Transmeta,\n\t\"GenuineTMx86\": Transmeta,\n\t\"Geode by NSC\": NSC,\n\t\"VIA VIA VIA \": VIA,\n\t\"KVMKVMKVMKVM\": KVM,\n\t\"Microsoft Hv\": MSVM,\n\t\"VMwareVMware\": VMware,\n\t\"XenVMMXenVMM\": XenHVM,\n}\n\nfunc vendorID() Vendor {\n\t_, b, c, d := cpuid(0)\n\tv := valAsString(b, d, c)\n\tvend, ok := vendorMapping[string(v)]\n\tif !ok {\n\t\treturn Other\n\t}\n\treturn vend\n}\n\nfunc cacheLine() int {\n\tif maxFunctionID() < 0x1 {\n\t\treturn 0\n\t}\n\n\t_, ebx, _, _ := cpuid(1)\n\tcache := (ebx & 0xff00) >> 5 // cflush size\n\tif cache == 0 && maxExtendedFunction() >= 0x80000006 {\n\t\t_, _, ecx, _ := 
cpuid(0x80000006)\n\t\tcache = ecx & 0xff // cacheline size\n\t}\n\t// TODO: Read from Cache and TLB Information\n\treturn int(cache)\n}\n\nfunc (c *CPUInfo) cacheSize() {\n\tc.Cache.L1D = -1\n\tc.Cache.L1I = -1\n\tc.Cache.L2 = -1\n\tc.Cache.L3 = -1\n\tvendor := vendorID()\n\tswitch vendor {\n\tcase Intel:\n\t\tif maxFunctionID() < 4 {\n\t\t\treturn\n\t\t}\n\t\tfor i := uint32(0); ; i++ {\n\t\t\teax, ebx, ecx, _ := cpuidex(4, i)\n\t\t\tcacheType := eax & 15\n\t\t\tif cacheType == 0 {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tcacheLevel := (eax >> 5) & 7\n\t\t\tcoherency := int(ebx&0xfff) + 1\n\t\t\tpartitions := int((ebx>>12)&0x3ff) + 1\n\t\t\tassociativity := int((ebx>>22)&0x3ff) + 1\n\t\t\tsets := int(ecx) + 1\n\t\t\tsize := associativity * partitions * coherency * sets\n\t\t\tswitch cacheLevel {\n\t\t\tcase 1:\n\t\t\t\tif cacheType == 1 {\n\t\t\t\t\t// 1 = Data Cache\n\t\t\t\t\tc.Cache.L1D = size\n\t\t\t\t} else if cacheType == 2 {\n\t\t\t\t\t// 2 = Instruction Cache\n\t\t\t\t\tc.Cache.L1I = size\n\t\t\t\t} else {\n\t\t\t\t\tif c.Cache.L1D < 0 {\n\t\t\t\t\t\tc.Cache.L1I = size\n\t\t\t\t\t}\n\t\t\t\t\tif c.Cache.L1I < 0 {\n\t\t\t\t\t\tc.Cache.L1I = size\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\tcase 2:\n\t\t\t\tc.Cache.L2 = size\n\t\t\tcase 3:\n\t\t\t\tc.Cache.L3 = size\n\t\t\t}\n\t\t}\n\tcase AMD:\n\t\t// Untested.\n\t\tif maxExtendedFunction() < 0x80000005 {\n\t\t\treturn\n\t\t}\n\t\t_, _, ecx, edx := cpuid(0x80000005)\n\t\tc.Cache.L1D = int(((ecx >> 24) & 0xFF) * 1024)\n\t\tc.Cache.L1I = int(((edx >> 24) & 0xFF) * 1024)\n\n\t\tif maxExtendedFunction() < 0x80000006 {\n\t\t\treturn\n\t\t}\n\t\t_, _, ecx, _ = cpuid(0x80000006)\n\t\tc.Cache.L2 = int(((ecx >> 16) & 0xFFFF) * 1024)\n\t}\n\n\treturn\n}\n\ntype SGXSupport struct {\n\tAvailable           bool\n\tSGX1Supported       bool\n\tSGX2Supported       bool\n\tMaxEnclaveSizeNot64 int64\n\tMaxEnclaveSize64    int64\n}\n\nfunc sgx(available bool) (rval SGXSupport) {\n\trval.Available = available\n\n\tif !available 
{\n\t\treturn\n\t}\n\n\ta, _, _, d := cpuidex(0x12, 0)\n\trval.SGX1Supported = a&0x01 != 0\n\trval.SGX2Supported = a&0x02 != 0\n\trval.MaxEnclaveSizeNot64 = 1 << (d & 0xFF)     // pow 2\n\trval.MaxEnclaveSize64 = 1 << ((d >> 8) & 0xFF) // pow 2\n\n\treturn\n}\n\nfunc support() Flags {\n\tmfi := maxFunctionID()\n\tvend := vendorID()\n\tif mfi < 0x1 {\n\t\treturn 0\n\t}\n\trval := uint64(0)\n\t_, _, c, d := cpuid(1)\n\tif (d & (1 << 15)) != 0 {\n\t\trval |= CMOV\n\t}\n\tif (d & (1 << 23)) != 0 {\n\t\trval |= MMX\n\t}\n\tif (d & (1 << 25)) != 0 {\n\t\trval |= MMXEXT\n\t}\n\tif (d & (1 << 25)) != 0 {\n\t\trval |= SSE\n\t}\n\tif (d & (1 << 26)) != 0 {\n\t\trval |= SSE2\n\t}\n\tif (c & 1) != 0 {\n\t\trval |= SSE3\n\t}\n\tif (c & 0x00000200) != 0 {\n\t\trval |= SSSE3\n\t}\n\tif (c & 0x00080000) != 0 {\n\t\trval |= SSE4\n\t}\n\tif (c & 0x00100000) != 0 {\n\t\trval |= SSE42\n\t}\n\tif (c & (1 << 25)) != 0 {\n\t\trval |= AESNI\n\t}\n\tif (c & (1 << 1)) != 0 {\n\t\trval |= CLMUL\n\t}\n\tif c&(1<<23) != 0 {\n\t\trval |= POPCNT\n\t}\n\tif c&(1<<30) != 0 {\n\t\trval |= RDRAND\n\t}\n\tif c&(1<<29) != 0 {\n\t\trval |= F16C\n\t}\n\tif c&(1<<13) != 0 {\n\t\trval |= CX16\n\t}\n\tif vend == Intel && (d&(1<<28)) != 0 && mfi >= 4 {\n\t\tif threadsPerCore() > 1 {\n\t\t\trval |= HTT\n\t\t}\n\t}\n\n\t// Check XGETBV, OXSAVE and AVX bits\n\tif c&(1<<26) != 0 && c&(1<<27) != 0 && c&(1<<28) != 0 {\n\t\t// Check for OS support\n\t\teax, _ := xgetbv(0)\n\t\tif (eax & 0x6) == 0x6 {\n\t\t\trval |= AVX\n\t\t\tif (c & 0x00001000) != 0 {\n\t\t\t\trval |= FMA3\n\t\t\t}\n\t\t}\n\t}\n\n\t// Check AVX2, AVX2 requires OS support, but BMI1/2 don't.\n\tif mfi >= 7 {\n\t\t_, ebx, ecx, _ := cpuidex(7, 0)\n\t\tif (rval&AVX) != 0 && (ebx&0x00000020) != 0 {\n\t\t\trval |= AVX2\n\t\t}\n\t\tif (ebx & 0x00000008) != 0 {\n\t\t\trval |= BMI1\n\t\t\tif (ebx & 0x00000100) != 0 {\n\t\t\t\trval |= BMI2\n\t\t\t}\n\t\t}\n\t\tif ebx&(1<<2) != 0 {\n\t\t\trval |= SGX\n\t\t}\n\t\tif ebx&(1<<4) != 0 {\n\t\t\trval |= 
HLE\n\t\t}\n\t\tif ebx&(1<<9) != 0 {\n\t\t\trval |= ERMS\n\t\t}\n\t\tif ebx&(1<<11) != 0 {\n\t\t\trval |= RTM\n\t\t}\n\t\tif ebx&(1<<14) != 0 {\n\t\t\trval |= MPX\n\t\t}\n\t\tif ebx&(1<<18) != 0 {\n\t\t\trval |= RDSEED\n\t\t}\n\t\tif ebx&(1<<19) != 0 {\n\t\t\trval |= ADX\n\t\t}\n\t\tif ebx&(1<<29) != 0 {\n\t\t\trval |= SHA\n\t\t}\n\n\t\t// Only detect AVX-512 features if XGETBV is supported\n\t\tif c&((1<<26)|(1<<27)) == (1<<26)|(1<<27) {\n\t\t\t// Check for OS support\n\t\t\teax, _ := xgetbv(0)\n\n\t\t\t// Verify that XCR0[7:5] = ‘111b’ (OPMASK state, upper 256-bit of ZMM0-ZMM15 and\n\t\t\t// ZMM16-ZMM31 state are enabled by OS)\n\t\t\t/// and that XCR0[2:1] = ‘11b’ (XMM state and YMM state are enabled by OS).\n\t\t\tif (eax>>5)&7 == 7 && (eax>>1)&3 == 3 {\n\t\t\t\tif ebx&(1<<16) != 0 {\n\t\t\t\t\trval |= AVX512F\n\t\t\t\t}\n\t\t\t\tif ebx&(1<<17) != 0 {\n\t\t\t\t\trval |= AVX512DQ\n\t\t\t\t}\n\t\t\t\tif ebx&(1<<21) != 0 {\n\t\t\t\t\trval |= AVX512IFMA\n\t\t\t\t}\n\t\t\t\tif ebx&(1<<26) != 0 {\n\t\t\t\t\trval |= AVX512PF\n\t\t\t\t}\n\t\t\t\tif ebx&(1<<27) != 0 {\n\t\t\t\t\trval |= AVX512ER\n\t\t\t\t}\n\t\t\t\tif ebx&(1<<28) != 0 {\n\t\t\t\t\trval |= AVX512CD\n\t\t\t\t}\n\t\t\t\tif ebx&(1<<30) != 0 {\n\t\t\t\t\trval |= AVX512BW\n\t\t\t\t}\n\t\t\t\tif ebx&(1<<31) != 0 {\n\t\t\t\t\trval |= AVX512VL\n\t\t\t\t}\n\t\t\t\t// ecx\n\t\t\t\tif ecx&(1<<1) != 0 {\n\t\t\t\t\trval |= AVX512VBMI\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif maxExtendedFunction() >= 0x80000001 {\n\t\t_, _, c, d := cpuid(0x80000001)\n\t\tif (c & (1 << 5)) != 0 {\n\t\t\trval |= LZCNT\n\t\t\trval |= POPCNT\n\t\t}\n\t\tif (d & (1 << 31)) != 0 {\n\t\t\trval |= AMD3DNOW\n\t\t}\n\t\tif (d & (1 << 30)) != 0 {\n\t\t\trval |= AMD3DNOWEXT\n\t\t}\n\t\tif (d & (1 << 23)) != 0 {\n\t\t\trval |= MMX\n\t\t}\n\t\tif (d & (1 << 22)) != 0 {\n\t\t\trval |= MMXEXT\n\t\t}\n\t\tif (c & (1 << 6)) != 0 {\n\t\t\trval |= SSE4A\n\t\t}\n\t\tif d&(1<<20) != 0 {\n\t\t\trval |= NX\n\t\t}\n\t\tif d&(1<<27) != 0 {\n\t\t\trval |= 
RDTSCP\n\t\t}\n\n\t\t/* Allow for selectively disabling SSE2 functions on AMD processors\n\t\t   with SSE2 support but not SSE4a. This includes Athlon64, some\n\t\t   Opteron, and some Sempron processors. MMX, SSE, or 3DNow! are faster\n\t\t   than SSE2 often enough to utilize this special-case flag.\n\t\t   AV_CPU_FLAG_SSE2 and AV_CPU_FLAG_SSE2SLOW are both set in this case\n\t\t   so that SSE2 is used unless explicitly disabled by checking\n\t\t   AV_CPU_FLAG_SSE2SLOW. */\n\t\tif vendorID() != Intel &&\n\t\t\trval&SSE2 != 0 && (c&0x00000040) == 0 {\n\t\t\trval |= SSE2SLOW\n\t\t}\n\n\t\t/* XOP and FMA4 use the AVX instruction coding scheme, so they can't be\n\t\t * used unless the OS has AVX support. */\n\t\tif (rval & AVX) != 0 {\n\t\t\tif (c & 0x00000800) != 0 {\n\t\t\t\trval |= XOP\n\t\t\t}\n\t\t\tif (c & 0x00010000) != 0 {\n\t\t\t\trval |= FMA4\n\t\t\t}\n\t\t}\n\n\t\tif vendorID() == Intel {\n\t\t\tfamily, model := familyModel()\n\t\t\tif family == 6 && (model == 9 || model == 13 || model == 14) {\n\t\t\t\t/* 6/9 (pentium-m \"banias\"), 6/13 (pentium-m \"dothan\"), and\n\t\t\t\t * 6/14 (core1 \"yonah\") theoretically support sse2, but it's\n\t\t\t\t * usually slower than mmx. */\n\t\t\t\tif (rval & SSE2) != 0 {\n\t\t\t\t\trval |= SSE2SLOW\n\t\t\t\t}\n\t\t\t\tif (rval & SSE3) != 0 {\n\t\t\t\t\trval |= SSE3SLOW\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* The Atom processor has SSSE3 support, which is useful in many cases,\n\t\t\t * but sometimes the SSSE3 version is slower than the SSE2 equivalent\n\t\t\t * on the Atom, but is generally faster on other processors supporting\n\t\t\t * SSSE3. This flag allows for selectively disabling certain SSSE3\n\t\t\t * functions on the Atom. 
*/\n\t\t\tif family == 6 && model == 28 {\n\t\t\t\trval |= ATOM\n\t\t\t}\n\t\t}\n\t}\n\treturn Flags(rval)\n}\n\nfunc valAsString(values ...uint32) []byte {\n\tr := make([]byte, 4*len(values))\n\tfor i, v := range values {\n\t\tdst := r[i*4:]\n\t\tdst[0] = byte(v & 0xff)\n\t\tdst[1] = byte((v >> 8) & 0xff)\n\t\tdst[2] = byte((v >> 16) & 0xff)\n\t\tdst[3] = byte((v >> 24) & 0xff)\n\t\tswitch {\n\t\tcase dst[0] == 0:\n\t\t\treturn r[:i*4]\n\t\tcase dst[1] == 0:\n\t\t\treturn r[:i*4+1]\n\t\tcase dst[2] == 0:\n\t\t\treturn r[:i*4+2]\n\t\tcase dst[3] == 0:\n\t\t\treturn r[:i*4+3]\n\t\t}\n\t}\n\treturn r\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/cpuid/cpuid_386.s",
    "content": "// Copyright (c) 2015 Klaus Post, released under MIT License. See LICENSE file.\n\n// +build 386,!gccgo\n\n// func asmCpuid(op uint32) (eax, ebx, ecx, edx uint32)\nTEXT ·asmCpuid(SB), 7, $0\n\tXORL CX, CX\n\tMOVL op+0(FP), AX\n\tCPUID\n\tMOVL AX, eax+4(FP)\n\tMOVL BX, ebx+8(FP)\n\tMOVL CX, ecx+12(FP)\n\tMOVL DX, edx+16(FP)\n\tRET\n\n// func asmCpuidex(op, op2 uint32) (eax, ebx, ecx, edx uint32)\nTEXT ·asmCpuidex(SB), 7, $0\n\tMOVL op+0(FP), AX\n\tMOVL op2+4(FP), CX\n\tCPUID\n\tMOVL AX, eax+8(FP)\n\tMOVL BX, ebx+12(FP)\n\tMOVL CX, ecx+16(FP)\n\tMOVL DX, edx+20(FP)\n\tRET\n\n// func xgetbv(index uint32) (eax, edx uint32)\nTEXT ·asmXgetbv(SB), 7, $0\n\tMOVL index+0(FP), CX\n\tBYTE $0x0f; BYTE $0x01; BYTE $0xd0 // XGETBV\n\tMOVL AX, eax+4(FP)\n\tMOVL DX, edx+8(FP)\n\tRET\n\n// func asmRdtscpAsm() (eax, ebx, ecx, edx uint32)\nTEXT ·asmRdtscpAsm(SB), 7, $0\n\tBYTE $0x0F; BYTE $0x01; BYTE $0xF9 // RDTSCP\n\tMOVL AX, eax+0(FP)\n\tMOVL BX, ebx+4(FP)\n\tMOVL CX, ecx+8(FP)\n\tMOVL DX, edx+12(FP)\n\tRET\n"
  },
  {
    "path": "vendor/github.com/klauspost/cpuid/cpuid_amd64.s",
    "content": "// Copyright (c) 2015 Klaus Post, released under MIT License. See LICENSE file.\n\n//+build amd64,!gccgo\n\n// func asmCpuid(op uint32) (eax, ebx, ecx, edx uint32)\nTEXT ·asmCpuid(SB), 7, $0\n\tXORQ CX, CX\n\tMOVL op+0(FP), AX\n\tCPUID\n\tMOVL AX, eax+8(FP)\n\tMOVL BX, ebx+12(FP)\n\tMOVL CX, ecx+16(FP)\n\tMOVL DX, edx+20(FP)\n\tRET\n\n// func asmCpuidex(op, op2 uint32) (eax, ebx, ecx, edx uint32)\nTEXT ·asmCpuidex(SB), 7, $0\n\tMOVL op+0(FP), AX\n\tMOVL op2+4(FP), CX\n\tCPUID\n\tMOVL AX, eax+8(FP)\n\tMOVL BX, ebx+12(FP)\n\tMOVL CX, ecx+16(FP)\n\tMOVL DX, edx+20(FP)\n\tRET\n\n// func asmXgetbv(index uint32) (eax, edx uint32)\nTEXT ·asmXgetbv(SB), 7, $0\n\tMOVL index+0(FP), CX\n\tBYTE $0x0f; BYTE $0x01; BYTE $0xd0 // XGETBV\n\tMOVL AX, eax+8(FP)\n\tMOVL DX, edx+12(FP)\n\tRET\n\n// func asmRdtscpAsm() (eax, ebx, ecx, edx uint32)\nTEXT ·asmRdtscpAsm(SB), 7, $0\n\tBYTE $0x0F; BYTE $0x01; BYTE $0xF9 // RDTSCP\n\tMOVL AX, eax+0(FP)\n\tMOVL BX, ebx+4(FP)\n\tMOVL CX, ecx+8(FP)\n\tMOVL DX, edx+12(FP)\n\tRET\n"
  },
  {
    "path": "vendor/github.com/klauspost/cpuid/detect_intel.go",
    "content": "// Copyright (c) 2015 Klaus Post, released under MIT License. See LICENSE file.\n\n// +build 386,!gccgo amd64,!gccgo\n\npackage cpuid\n\nfunc asmCpuid(op uint32) (eax, ebx, ecx, edx uint32)\nfunc asmCpuidex(op, op2 uint32) (eax, ebx, ecx, edx uint32)\nfunc asmXgetbv(index uint32) (eax, edx uint32)\nfunc asmRdtscpAsm() (eax, ebx, ecx, edx uint32)\n\nfunc initCPU() {\n\tcpuid = asmCpuid\n\tcpuidex = asmCpuidex\n\txgetbv = asmXgetbv\n\trdtscpAsm = asmRdtscpAsm\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/cpuid/detect_ref.go",
    "content": "// Copyright (c) 2015 Klaus Post, released under MIT License. See LICENSE file.\n\n// +build !amd64,!386 gccgo\n\npackage cpuid\n\nfunc initCPU() {\n\tcpuid = func(op uint32) (eax, ebx, ecx, edx uint32) {\n\t\treturn 0, 0, 0, 0\n\t}\n\n\tcpuidex = func(op, op2 uint32) (eax, ebx, ecx, edx uint32) {\n\t\treturn 0, 0, 0, 0\n\t}\n\n\txgetbv = func(index uint32) (eax, edx uint32) {\n\t\treturn 0, 0\n\t}\n\n\trdtscpAsm = func() (eax, ebx, ecx, edx uint32) {\n\t\treturn 0, 0, 0, 0\n\t}\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/cpuid/generate.go",
    "content": "package cpuid\n\n//go:generate go run private-gen.go\n"
  },
  {
    "path": "vendor/github.com/klauspost/cpuid/private-gen.go",
    "content": "// +build ignore\n\npackage main\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"go/ast\"\n\t\"go/parser\"\n\t\"go/printer\"\n\t\"go/token\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"log\"\n\t\"os\"\n\t\"reflect\"\n\t\"strings\"\n\t\"unicode\"\n\t\"unicode/utf8\"\n)\n\nvar inFiles = []string{\"cpuid.go\", \"cpuid_test.go\"}\nvar copyFiles = []string{\"cpuid_amd64.s\", \"cpuid_386.s\", \"detect_ref.go\", \"detect_intel.go\"}\nvar fileSet = token.NewFileSet()\nvar reWrites = []rewrite{\n\tinitRewrite(\"CPUInfo -> cpuInfo\"),\n\tinitRewrite(\"Vendor -> vendor\"),\n\tinitRewrite(\"Flags -> flags\"),\n\tinitRewrite(\"Detect -> detect\"),\n\tinitRewrite(\"CPU -> cpu\"),\n}\nvar excludeNames = map[string]bool{\"string\": true, \"join\": true, \"trim\": true,\n\t// cpuid_test.go\n\t\"t\": true, \"println\": true, \"logf\": true, \"log\": true, \"fatalf\": true, \"fatal\": true,\n}\n\nvar excludePrefixes = []string{\"test\", \"benchmark\"}\n\nfunc main() {\n\tPackage := \"private\"\n\tparserMode := parser.ParseComments\n\texported := make(map[string]rewrite)\n\tfor _, file := range inFiles {\n\t\tin, err := os.Open(file)\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"opening input\", err)\n\t\t}\n\n\t\tsrc, err := ioutil.ReadAll(in)\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"reading input\", err)\n\t\t}\n\n\t\tastfile, err := parser.ParseFile(fileSet, file, src, parserMode)\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"parsing input\", err)\n\t\t}\n\n\t\tfor _, rw := range reWrites {\n\t\t\tastfile = rw(astfile)\n\t\t}\n\n\t\t// Inspect the AST and print all identifiers and literals.\n\t\tvar startDecl token.Pos\n\t\tvar endDecl token.Pos\n\t\tast.Inspect(astfile, func(n ast.Node) bool {\n\t\t\tvar s string\n\t\t\tswitch x := n.(type) {\n\t\t\tcase *ast.Ident:\n\t\t\t\tif x.IsExported() {\n\t\t\t\t\tt := strings.ToLower(x.Name)\n\t\t\t\t\tfor _, pre := range excludePrefixes {\n\t\t\t\t\t\tif strings.HasPrefix(t, pre) {\n\t\t\t\t\t\t\treturn true\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif 
excludeNames[t] != true {\n\t\t\t\t\t\t//if x.Pos() > startDecl && x.Pos() < endDecl {\n\t\t\t\t\t\texported[x.Name] = initRewrite(x.Name + \" -> \" + t)\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\tcase *ast.GenDecl:\n\t\t\t\tif x.Tok == token.CONST && x.Lparen > 0 {\n\t\t\t\t\tstartDecl = x.Lparen\n\t\t\t\t\tendDecl = x.Rparen\n\t\t\t\t\t// fmt.Printf(\"Decl:%s -> %s\\n\", fileSet.Position(startDecl), fileSet.Position(endDecl))\n\t\t\t\t}\n\t\t\t}\n\t\t\tif s != \"\" {\n\t\t\t\tfmt.Printf(\"%s:\\t%s\\n\", fileSet.Position(n.Pos()), s)\n\t\t\t}\n\t\t\treturn true\n\t\t})\n\n\t\tfor _, rw := range exported {\n\t\t\tastfile = rw(astfile)\n\t\t}\n\n\t\tvar buf bytes.Buffer\n\n\t\tprinter.Fprint(&buf, fileSet, astfile)\n\n\t\t// Remove package documentation and insert information\n\t\ts := buf.String()\n\t\tind := strings.Index(buf.String(), \"\\npackage cpuid\")\n\t\ts = s[ind:]\n\t\ts = \"// Generated, DO NOT EDIT,\\n\" +\n\t\t\t\"// but copy it to your own project and rename the package.\\n\" +\n\t\t\t\"// See more at http://github.com/klauspost/cpuid\\n\" +\n\t\t\ts\n\n\t\toutputName := Package + string(os.PathSeparator) + file\n\n\t\terr = ioutil.WriteFile(outputName, []byte(s), 0644)\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"writing output: %s\", err)\n\t\t}\n\t\tlog.Println(\"Generated\", outputName)\n\t}\n\n\tfor _, file := range copyFiles {\n\t\tdst := \"\"\n\t\tif strings.HasPrefix(file, \"cpuid\") {\n\t\t\tdst = Package + string(os.PathSeparator) + file\n\t\t} else {\n\t\t\tdst = Package + string(os.PathSeparator) + \"cpuid_\" + file\n\t\t}\n\t\terr := copyFile(file, dst)\n\t\tif err != nil {\n\t\t\tlog.Fatalf(\"copying file: %s\", err)\n\t\t}\n\t\tlog.Println(\"Copied\", dst)\n\t}\n}\n\n// CopyFile copies a file from src to dst. If src and dst files exist, and are\n// the same, then return success. 
Copy the file contents from src to dst.\nfunc copyFile(src, dst string) (err error) {\n\tsfi, err := os.Stat(src)\n\tif err != nil {\n\t\treturn\n\t}\n\tif !sfi.Mode().IsRegular() {\n\t\t// cannot copy non-regular files (e.g., directories,\n\t\t// symlinks, devices, etc.)\n\t\treturn fmt.Errorf(\"CopyFile: non-regular source file %s (%q)\", sfi.Name(), sfi.Mode().String())\n\t}\n\tdfi, err := os.Stat(dst)\n\tif err != nil {\n\t\tif !os.IsNotExist(err) {\n\t\t\treturn\n\t\t}\n\t} else {\n\t\tif !(dfi.Mode().IsRegular()) {\n\t\t\treturn fmt.Errorf(\"CopyFile: non-regular destination file %s (%q)\", dfi.Name(), dfi.Mode().String())\n\t\t}\n\t\tif os.SameFile(sfi, dfi) {\n\t\t\treturn\n\t\t}\n\t}\n\terr = copyFileContents(src, dst)\n\treturn\n}\n\n// copyFileContents copies the contents of the file named src to the file named\n// by dst. The file will be created if it does not already exist. If the\n// destination file exists, all its contents will be replaced by the contents\n// of the source file.\nfunc copyFileContents(src, dst string) (err error) {\n\tin, err := os.Open(src)\n\tif err != nil {\n\t\treturn\n\t}\n\tdefer in.Close()\n\tout, err := os.Create(dst)\n\tif err != nil {\n\t\treturn\n\t}\n\tdefer func() {\n\t\tcerr := out.Close()\n\t\tif err == nil {\n\t\t\terr = cerr\n\t\t}\n\t}()\n\tif _, err = io.Copy(out, in); err != nil {\n\t\treturn\n\t}\n\terr = out.Sync()\n\treturn\n}\n\ntype rewrite func(*ast.File) *ast.File\n\n// Mostly copied from gofmt\nfunc initRewrite(rewriteRule string) rewrite {\n\tf := strings.Split(rewriteRule, \"->\")\n\tif len(f) != 2 {\n\t\tfmt.Fprintf(os.Stderr, \"rewrite rule must be of the form 'pattern -> replacement'\\n\")\n\t\tos.Exit(2)\n\t}\n\tpattern := parseExpr(f[0], \"pattern\")\n\treplace := parseExpr(f[1], \"replacement\")\n\treturn func(p *ast.File) *ast.File { return rewriteFile(pattern, replace, p) }\n}\n\n// parseExpr parses s as an expression.\n// It might make sense to expand this to allow statement patterns,\n// but 
there are problems with preserving formatting and also\n// with what a wildcard for a statement looks like.\nfunc parseExpr(s, what string) ast.Expr {\n\tx, err := parser.ParseExpr(s)\n\tif err != nil {\n\t\tfmt.Fprintf(os.Stderr, \"parsing %s %s at %s\\n\", what, s, err)\n\t\tos.Exit(2)\n\t}\n\treturn x\n}\n\n// Keep this function for debugging.\n/*\nfunc dump(msg string, val reflect.Value) {\n\tfmt.Printf(\"%s:\\n\", msg)\n\tast.Print(fileSet, val.Interface())\n\tfmt.Println()\n}\n*/\n\n// rewriteFile applies the rewrite rule 'pattern -> replace' to an entire file.\nfunc rewriteFile(pattern, replace ast.Expr, p *ast.File) *ast.File {\n\tcmap := ast.NewCommentMap(fileSet, p, p.Comments)\n\tm := make(map[string]reflect.Value)\n\tpat := reflect.ValueOf(pattern)\n\trepl := reflect.ValueOf(replace)\n\n\tvar rewriteVal func(val reflect.Value) reflect.Value\n\trewriteVal = func(val reflect.Value) reflect.Value {\n\t\t// don't bother if val is invalid to start with\n\t\tif !val.IsValid() {\n\t\t\treturn reflect.Value{}\n\t\t}\n\t\tfor k := range m {\n\t\t\tdelete(m, k)\n\t\t}\n\t\tval = apply(rewriteVal, val)\n\t\tif match(m, pat, val) {\n\t\t\tval = subst(m, repl, reflect.ValueOf(val.Interface().(ast.Node).Pos()))\n\t\t}\n\t\treturn val\n\t}\n\n\tr := apply(rewriteVal, reflect.ValueOf(p)).Interface().(*ast.File)\n\tr.Comments = cmap.Filter(r).Comments() // recreate comments list\n\treturn r\n}\n\n// set is a wrapper for x.Set(y); it protects the caller from panics if x cannot be changed to y.\nfunc set(x, y reflect.Value) {\n\t// don't bother if x cannot be set or y is invalid\n\tif !x.CanSet() || !y.IsValid() {\n\t\treturn\n\t}\n\tdefer func() {\n\t\tif x := recover(); x != nil {\n\t\t\tif s, ok := x.(string); ok &&\n\t\t\t\t(strings.Contains(s, \"type mismatch\") || strings.Contains(s, \"not assignable\")) {\n\t\t\t\t// x cannot be set to y - ignore this rewrite\n\t\t\t\treturn\n\t\t\t}\n\t\t\tpanic(x)\n\t\t}\n\t}()\n\tx.Set(y)\n}\n\n// Values/types for special 
cases.\nvar (\n\tobjectPtrNil = reflect.ValueOf((*ast.Object)(nil))\n\tscopePtrNil  = reflect.ValueOf((*ast.Scope)(nil))\n\n\tidentType     = reflect.TypeOf((*ast.Ident)(nil))\n\tobjectPtrType = reflect.TypeOf((*ast.Object)(nil))\n\tpositionType  = reflect.TypeOf(token.NoPos)\n\tcallExprType  = reflect.TypeOf((*ast.CallExpr)(nil))\n\tscopePtrType  = reflect.TypeOf((*ast.Scope)(nil))\n)\n\n// apply replaces each AST field x in val with f(x), returning val.\n// To avoid extra conversions, f operates on the reflect.Value form.\nfunc apply(f func(reflect.Value) reflect.Value, val reflect.Value) reflect.Value {\n\tif !val.IsValid() {\n\t\treturn reflect.Value{}\n\t}\n\n\t// *ast.Objects introduce cycles and are likely incorrect after\n\t// rewrite; don't follow them but replace with nil instead\n\tif val.Type() == objectPtrType {\n\t\treturn objectPtrNil\n\t}\n\n\t// similarly for scopes: they are likely incorrect after a rewrite;\n\t// replace them with nil\n\tif val.Type() == scopePtrType {\n\t\treturn scopePtrNil\n\t}\n\n\tswitch v := reflect.Indirect(val); v.Kind() {\n\tcase reflect.Slice:\n\t\tfor i := 0; i < v.Len(); i++ {\n\t\t\te := v.Index(i)\n\t\t\tset(e, f(e))\n\t\t}\n\tcase reflect.Struct:\n\t\tfor i := 0; i < v.NumField(); i++ {\n\t\t\te := v.Field(i)\n\t\t\tset(e, f(e))\n\t\t}\n\tcase reflect.Interface:\n\t\te := v.Elem()\n\t\tset(v, f(e))\n\t}\n\treturn val\n}\n\nfunc isWildcard(s string) bool {\n\trune, size := utf8.DecodeRuneInString(s)\n\treturn size == len(s) && unicode.IsLower(rune)\n}\n\n// match returns true if pattern matches val,\n// recording wildcard submatches in m.\n// If m == nil, match checks whether pattern == val.\nfunc match(m map[string]reflect.Value, pattern, val reflect.Value) bool {\n\t// Wildcard matches any expression.  
If it appears multiple\n\t// times in the pattern, it must match the same expression\n\t// each time.\n\tif m != nil && pattern.IsValid() && pattern.Type() == identType {\n\t\tname := pattern.Interface().(*ast.Ident).Name\n\t\tif isWildcard(name) && val.IsValid() {\n\t\t\t// wildcards only match valid (non-nil) expressions.\n\t\t\tif _, ok := val.Interface().(ast.Expr); ok && !val.IsNil() {\n\t\t\t\tif old, ok := m[name]; ok {\n\t\t\t\t\treturn match(nil, old, val)\n\t\t\t\t}\n\t\t\t\tm[name] = val\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\n\t// Otherwise, pattern and val must match recursively.\n\tif !pattern.IsValid() || !val.IsValid() {\n\t\treturn !pattern.IsValid() && !val.IsValid()\n\t}\n\tif pattern.Type() != val.Type() {\n\t\treturn false\n\t}\n\n\t// Special cases.\n\tswitch pattern.Type() {\n\tcase identType:\n\t\t// For identifiers, only the names need to match\n\t\t// (and none of the other *ast.Object information).\n\t\t// This is a common case, handle it all here instead\n\t\t// of recursing down any further via reflection.\n\t\tp := pattern.Interface().(*ast.Ident)\n\t\tv := val.Interface().(*ast.Ident)\n\t\treturn p == nil && v == nil || p != nil && v != nil && p.Name == v.Name\n\tcase objectPtrType, positionType:\n\t\t// object pointers and token positions always match\n\t\treturn true\n\tcase callExprType:\n\t\t// For calls, the Ellipsis fields (token.Position) must\n\t\t// match since that is how f(x) and f(x...) 
are different.\n\t\t// Check them here but fall through for the remaining fields.\n\t\tp := pattern.Interface().(*ast.CallExpr)\n\t\tv := val.Interface().(*ast.CallExpr)\n\t\tif p.Ellipsis.IsValid() != v.Ellipsis.IsValid() {\n\t\t\treturn false\n\t\t}\n\t}\n\n\tp := reflect.Indirect(pattern)\n\tv := reflect.Indirect(val)\n\tif !p.IsValid() || !v.IsValid() {\n\t\treturn !p.IsValid() && !v.IsValid()\n\t}\n\n\tswitch p.Kind() {\n\tcase reflect.Slice:\n\t\tif p.Len() != v.Len() {\n\t\t\treturn false\n\t\t}\n\t\tfor i := 0; i < p.Len(); i++ {\n\t\t\tif !match(m, p.Index(i), v.Index(i)) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\treturn true\n\n\tcase reflect.Struct:\n\t\tfor i := 0; i < p.NumField(); i++ {\n\t\t\tif !match(m, p.Field(i), v.Field(i)) {\n\t\t\t\treturn false\n\t\t\t}\n\t\t}\n\t\treturn true\n\n\tcase reflect.Interface:\n\t\treturn match(m, p.Elem(), v.Elem())\n\t}\n\n\t// Handle token integers, etc.\n\treturn p.Interface() == v.Interface()\n}\n\n// subst returns a copy of pattern with values from m substituted in place\n// of wildcards and pos used as the position of tokens from the pattern.\n// if m == nil, subst returns a copy of pattern and doesn't change the line\n// number information.\nfunc subst(m map[string]reflect.Value, pattern reflect.Value, pos reflect.Value) reflect.Value {\n\tif !pattern.IsValid() {\n\t\treturn reflect.Value{}\n\t}\n\n\t// Wildcard gets replaced with map value.\n\tif m != nil && pattern.Type() == identType {\n\t\tname := pattern.Interface().(*ast.Ident).Name\n\t\tif isWildcard(name) {\n\t\t\tif old, ok := m[name]; ok {\n\t\t\t\treturn subst(nil, old, reflect.Value{})\n\t\t\t}\n\t\t}\n\t}\n\n\tif pos.IsValid() && pattern.Type() == positionType {\n\t\t// use new position only if old position was valid in the first place\n\t\tif old := pattern.Interface().(token.Pos); !old.IsValid() {\n\t\t\treturn pattern\n\t\t}\n\t\treturn pos\n\t}\n\n\t// Otherwise copy.\n\tswitch p := pattern; p.Kind() {\n\tcase reflect.Slice:\n\t\tv := 
reflect.MakeSlice(p.Type(), p.Len(), p.Len())\n\t\tfor i := 0; i < p.Len(); i++ {\n\t\t\tv.Index(i).Set(subst(m, p.Index(i), pos))\n\t\t}\n\t\treturn v\n\n\tcase reflect.Struct:\n\t\tv := reflect.New(p.Type()).Elem()\n\t\tfor i := 0; i < p.NumField(); i++ {\n\t\t\tv.Field(i).Set(subst(m, p.Field(i), pos))\n\t\t}\n\t\treturn v\n\n\tcase reflect.Ptr:\n\t\tv := reflect.New(p.Type()).Elem()\n\t\tif elem := p.Elem(); elem.IsValid() {\n\t\t\tv.Set(subst(m, elem, pos).Addr())\n\t\t}\n\t\treturn v\n\n\tcase reflect.Interface:\n\t\tv := reflect.New(p.Type()).Elem()\n\t\tif elem := p.Elem(); elem.IsValid() {\n\t\t\tv.Set(subst(m, elem, pos))\n\t\t}\n\t\treturn v\n\t}\n\n\treturn pattern\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/crc32/LICENSE",
    "content": "Copyright (c) 2012 The Go Authors. All rights reserved.\nCopyright (c) 2015 Klaus Post\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are\nmet:\n\n   * Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n   * Redistributions in binary form must reproduce the above\ncopyright notice, this list of conditions and the following disclaimer\nin the documentation and/or other materials provided with the\ndistribution.\n   * Neither the name of Google Inc. nor the names of its\ncontributors may be used to endorse or promote products derived from\nthis software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n\"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\nA PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\nOWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\nLIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\nTHEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "vendor/github.com/klauspost/crc32/crc32.go",
    "content": "// Copyright 2009 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// Package crc32 implements the 32-bit cyclic redundancy check, or CRC-32,\n// checksum. See http://en.wikipedia.org/wiki/Cyclic_redundancy_check for\n// information.\n//\n// Polynomials are represented in LSB-first form also known as reversed representation.\n//\n// See http://en.wikipedia.org/wiki/Mathematics_of_cyclic_redundancy_checks#Reversed_representations_and_reciprocal_polynomials\n// for information.\npackage crc32\n\nimport (\n\t\"hash\"\n\t\"sync\"\n)\n\n// The size of a CRC-32 checksum in bytes.\nconst Size = 4\n\n// Predefined polynomials.\nconst (\n\t// IEEE is by far and away the most common CRC-32 polynomial.\n\t// Used by ethernet (IEEE 802.3), v.42, fddi, gzip, zip, png, ...\n\tIEEE = 0xedb88320\n\n\t// Castagnoli's polynomial, used in iSCSI.\n\t// Has better error detection characteristics than IEEE.\n\t// http://dx.doi.org/10.1109/26.231911\n\tCastagnoli = 0x82f63b78\n\n\t// Koopman's polynomial.\n\t// Also has better error detection characteristics than IEEE.\n\t// http://dx.doi.org/10.1109/DSN.2002.1028931\n\tKoopman = 0xeb31d82e\n)\n\n// Table is a 256-word table representing the polynomial for efficient processing.\ntype Table [256]uint32\n\n// castagnoliTable points to a lazily initialized Table for the Castagnoli\n// polynomial. 
MakeTable will always return this value when asked to make a\n// Castagnoli table so we can compare against it to find when the caller is\n// using this polynomial.\nvar castagnoliTable *Table\nvar castagnoliTable8 *slicing8Table\nvar castagnoliOnce sync.Once\n\nfunc castagnoliInit() {\n\tcastagnoliTable = makeTable(Castagnoli)\n\tcastagnoliTable8 = makeTable8(Castagnoli)\n}\n\n// IEEETable is the table for the IEEE polynomial.\nvar IEEETable = makeTable(IEEE)\n\n// slicing8Table is array of 8 Tables\ntype slicing8Table [8]Table\n\n// ieeeTable8 is the slicing8Table for IEEE\nvar ieeeTable8 *slicing8Table\nvar ieeeTable8Once sync.Once\n\n// MakeTable returns a Table constructed from the specified polynomial.\n// The contents of this Table must not be modified.\nfunc MakeTable(poly uint32) *Table {\n\tswitch poly {\n\tcase IEEE:\n\t\treturn IEEETable\n\tcase Castagnoli:\n\t\tcastagnoliOnce.Do(castagnoliInit)\n\t\treturn castagnoliTable\n\t}\n\treturn makeTable(poly)\n}\n\n// makeTable returns the Table constructed from the specified polynomial.\nfunc makeTable(poly uint32) *Table {\n\tt := new(Table)\n\tfor i := 0; i < 256; i++ {\n\t\tcrc := uint32(i)\n\t\tfor j := 0; j < 8; j++ {\n\t\t\tif crc&1 == 1 {\n\t\t\t\tcrc = (crc >> 1) ^ poly\n\t\t\t} else {\n\t\t\t\tcrc >>= 1\n\t\t\t}\n\t\t}\n\t\tt[i] = crc\n\t}\n\treturn t\n}\n\n// makeTable8 returns slicing8Table constructed from the specified polynomial.\nfunc makeTable8(poly uint32) *slicing8Table {\n\tt := new(slicing8Table)\n\tt[0] = *makeTable(poly)\n\tfor i := 0; i < 256; i++ {\n\t\tcrc := t[0][i]\n\t\tfor j := 1; j < 8; j++ {\n\t\t\tcrc = t[0][crc&0xFF] ^ (crc >> 8)\n\t\t\tt[j][i] = crc\n\t\t}\n\t}\n\treturn t\n}\n\n// digest represents the partial evaluation of a checksum.\ntype digest struct {\n\tcrc uint32\n\ttab *Table\n}\n\n// New creates a new hash.Hash32 computing the CRC-32 checksum\n// using the polynomial represented by the Table.\n// Its Sum method will lay the value out in big-endian byte order.\nfunc 
New(tab *Table) hash.Hash32 { return &digest{0, tab} }\n\n// NewIEEE creates a new hash.Hash32 computing the CRC-32 checksum\n// using the IEEE polynomial.\n// Its Sum method will lay the value out in big-endian byte order.\nfunc NewIEEE() hash.Hash32 { return New(IEEETable) }\n\nfunc (d *digest) Size() int { return Size }\n\nfunc (d *digest) BlockSize() int { return 1 }\n\nfunc (d *digest) Reset() { d.crc = 0 }\n\nfunc update(crc uint32, tab *Table, p []byte) uint32 {\n\tcrc = ^crc\n\tfor _, v := range p {\n\t\tcrc = tab[byte(crc)^v] ^ (crc >> 8)\n\t}\n\treturn ^crc\n}\n\n// updateSlicingBy8 updates CRC using Slicing-by-8\nfunc updateSlicingBy8(crc uint32, tab *slicing8Table, p []byte) uint32 {\n\tcrc = ^crc\n\tfor len(p) > 8 {\n\t\tcrc ^= uint32(p[0]) | uint32(p[1])<<8 | uint32(p[2])<<16 | uint32(p[3])<<24\n\t\tcrc = tab[0][p[7]] ^ tab[1][p[6]] ^ tab[2][p[5]] ^ tab[3][p[4]] ^\n\t\t\ttab[4][crc>>24] ^ tab[5][(crc>>16)&0xFF] ^\n\t\t\ttab[6][(crc>>8)&0xFF] ^ tab[7][crc&0xFF]\n\t\tp = p[8:]\n\t}\n\tcrc = ^crc\n\tif len(p) == 0 {\n\t\treturn crc\n\t}\n\treturn update(crc, &tab[0], p)\n}\n\n// Update returns the result of adding the bytes in p to the crc.\nfunc Update(crc uint32, tab *Table, p []byte) uint32 {\n\tif tab == castagnoliTable {\n\t\treturn updateCastagnoli(crc, p)\n\t}\n\tif tab == IEEETable {\n\t\treturn updateIEEE(crc, p)\n\t}\n\treturn update(crc, tab, p)\n}\n\nfunc (d *digest) Write(p []byte) (n int, err error) {\n\td.crc = Update(d.crc, d.tab, p)\n\treturn len(p), nil\n}\n\nfunc (d *digest) Sum32() uint32 { return d.crc }\n\nfunc (d *digest) Sum(in []byte) []byte {\n\ts := d.Sum32()\n\treturn append(in, byte(s>>24), byte(s>>16), byte(s>>8), byte(s))\n}\n\n// Checksum returns the CRC-32 checksum of data\n// using the polynomial represented by the Table.\nfunc Checksum(data []byte, tab *Table) uint32 { return Update(0, tab, data) }\n\n// ChecksumIEEE returns the CRC-32 checksum of data\n// using the IEEE polynomial.\nfunc ChecksumIEEE(data []byte) 
uint32 { return updateIEEE(0, data) }\n"
  },
  {
    "path": "vendor/github.com/klauspost/crc32/crc32_amd64.go",
    "content": "// Copyright 2011 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// +build !appengine,!gccgo\n\npackage crc32\n\n// This file contains the code to call the SSE 4.2 version of the Castagnoli\n// and IEEE CRC.\n\n// haveSSE41/haveSSE42/haveCLMUL are defined in crc_amd64.s and use\n// CPUID to test for SSE 4.1, 4.2 and CLMUL support.\nfunc haveSSE41() bool\nfunc haveSSE42() bool\nfunc haveCLMUL() bool\n\n// castagnoliSSE42 is defined in crc_amd64.s and uses the SSE4.2 CRC32\n// instruction.\n//go:noescape\nfunc castagnoliSSE42(crc uint32, p []byte) uint32\n\n// ieeeCLMUL is defined in crc_amd64.s and uses the PCLMULQDQ\n// instruction as well as SSE 4.1.\n//go:noescape\nfunc ieeeCLMUL(crc uint32, p []byte) uint32\n\nvar sse42 = haveSSE42()\nvar useFastIEEE = haveCLMUL() && haveSSE41()\n\nfunc updateCastagnoli(crc uint32, p []byte) uint32 {\n\tif sse42 {\n\t\treturn castagnoliSSE42(crc, p)\n\t}\n\t// only use slicing-by-8 when input is >= 16 Bytes\n\tif len(p) >= 16 {\n\t\treturn updateSlicingBy8(crc, castagnoliTable8, p)\n\t}\n\treturn update(crc, castagnoliTable, p)\n}\n\nfunc updateIEEE(crc uint32, p []byte) uint32 {\n\tif useFastIEEE && len(p) >= 64 {\n\t\tleft := len(p) & 15\n\t\tdo := len(p) - left\n\t\tcrc = ^ieeeCLMUL(^crc, p[:do])\n\t\tif left > 0 {\n\t\t\tcrc = update(crc, IEEETable, p[do:])\n\t\t}\n\t\treturn crc\n\t}\n\n\t// only use slicing-by-8 when input is >= 16 Bytes\n\tif len(p) >= 16 {\n\t\tieeeTable8Once.Do(func() {\n\t\t\tieeeTable8 = makeTable8(IEEE)\n\t\t})\n\t\treturn updateSlicingBy8(crc, ieeeTable8, p)\n\t}\n\n\treturn update(crc, IEEETable, p)\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/crc32/crc32_amd64.s",
    "content": "// Copyright 2011 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// +build gc\n\n#define NOSPLIT 4\n#define RODATA 8\n\n// func castagnoliSSE42(crc uint32, p []byte) uint32\nTEXT ·castagnoliSSE42(SB), NOSPLIT, $0\n\tMOVL crc+0(FP), AX    // CRC value\n\tMOVQ p+8(FP), SI      // data pointer\n\tMOVQ p_len+16(FP), CX // len(p)\n\n\tNOTL AX\n\n\t// If there's less than 8 bytes to process, we do it byte-by-byte.\n\tCMPQ CX, $8\n\tJL   cleanup\n\n\t// Process individual bytes until the input is 8-byte aligned.\nstartup:\n\tMOVQ SI, BX\n\tANDQ $7, BX\n\tJZ   aligned\n\n\tCRC32B (SI), AX\n\tDECQ   CX\n\tINCQ   SI\n\tJMP    startup\n\naligned:\n\t// The input is now 8-byte aligned and we can process 8-byte chunks.\n\tCMPQ CX, $8\n\tJL   cleanup\n\n\tCRC32Q (SI), AX\n\tADDQ   $8, SI\n\tSUBQ   $8, CX\n\tJMP    aligned\n\ncleanup:\n\t// We may have some bytes left over that we process one at a time.\n\tCMPQ CX, $0\n\tJE   done\n\n\tCRC32B (SI), AX\n\tINCQ   SI\n\tDECQ   CX\n\tJMP    cleanup\n\ndone:\n\tNOTL AX\n\tMOVL AX, ret+32(FP)\n\tRET\n\n// func haveSSE42() bool\nTEXT ·haveSSE42(SB), NOSPLIT, $0\n\tXORQ AX, AX\n\tINCL AX\n\tCPUID\n\tSHRQ $20, CX\n\tANDQ $1, CX\n\tMOVB CX, ret+0(FP)\n\tRET\n\n// func haveCLMUL() bool\nTEXT ·haveCLMUL(SB), NOSPLIT, $0\n\tXORQ AX, AX\n\tINCL AX\n\tCPUID\n\tSHRQ $1, CX\n\tANDQ $1, CX\n\tMOVB CX, ret+0(FP)\n\tRET\n\n// func haveSSE41() bool\nTEXT ·haveSSE41(SB), NOSPLIT, $0\n\tXORQ AX, AX\n\tINCL AX\n\tCPUID\n\tSHRQ $19, CX\n\tANDQ $1, CX\n\tMOVB CX, ret+0(FP)\n\tRET\n\n// CRC32 polynomial data\n//\n// These constants are lifted from the\n// Linux kernel, since they avoid the costly\n// PSHUFB 16 byte reversal proposed in the\n// original Intel paper.\nDATA r2r1kp<>+0(SB)/8, $0x154442bd4\nDATA r2r1kp<>+8(SB)/8, $0x1c6e41596\nDATA r4r3kp<>+0(SB)/8, $0x1751997d0\nDATA r4r3kp<>+8(SB)/8, $0x0ccaa009e\nDATA rupolykp<>+0(SB)/8, 
$0x1db710641\nDATA rupolykp<>+8(SB)/8, $0x1f7011641\nDATA r5kp<>+0(SB)/8, $0x163cd6124\n\nGLOBL r2r1kp<>(SB), RODATA, $16\nGLOBL r4r3kp<>(SB), RODATA, $16\nGLOBL rupolykp<>(SB), RODATA, $16\nGLOBL r5kp<>(SB), RODATA, $8\n\n// Based on http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf\n// len(p) must be at least 64, and must be a multiple of 16.\n\n// func ieeeCLMUL(crc uint32, p []byte) uint32\nTEXT ·ieeeCLMUL(SB), NOSPLIT, $0\n\tMOVL crc+0(FP), X0    // Initial CRC value\n\tMOVQ p+8(FP), SI      // data pointer\n\tMOVQ p_len+16(FP), CX // len(p)\n\n\tMOVOU (SI), X1\n\tMOVOU 16(SI), X2\n\tMOVOU 32(SI), X3\n\tMOVOU 48(SI), X4\n\tPXOR  X0, X1\n\tADDQ  $64, SI    // buf+=64\n\tSUBQ  $64, CX    // len-=64\n\tCMPQ  CX, $64    // Less than 64 bytes left\n\tJB    remain64\n\n\tMOVOA r2r1kp<>+0(SB), X0\n\nloopback64:\n\tMOVOA X1, X5\n\tMOVOA X2, X6\n\tMOVOA X3, X7\n\tMOVOA X4, X8\n\n\tPCLMULQDQ $0, X0, X1\n\tPCLMULQDQ $0, X0, X2\n\tPCLMULQDQ $0, X0, X3\n\tPCLMULQDQ $0, X0, X4\n\n\t// Load next early\n\tMOVOU (SI), X11\n\tMOVOU 16(SI), X12\n\tMOVOU 32(SI), X13\n\tMOVOU 48(SI), X14\n\n\tPCLMULQDQ $0x11, X0, X5\n\tPCLMULQDQ $0x11, X0, X6\n\tPCLMULQDQ $0x11, X0, X7\n\tPCLMULQDQ $0x11, X0, X8\n\n\tPXOR X5, X1\n\tPXOR X6, X2\n\tPXOR X7, X3\n\tPXOR X8, X4\n\n\tPXOR X11, X1\n\tPXOR X12, X2\n\tPXOR X13, X3\n\tPXOR X14, X4\n\n\tADDQ $0x40, DI\n\tADDQ $64, SI    // buf+=64\n\tSUBQ $64, CX    // len-=64\n\tCMPQ CX, $64    // Less than 64 bytes left?\n\tJGE  loopback64\n\n\t// Fold result into a single register (X1)\nremain64:\n\tMOVOA r4r3kp<>+0(SB), X0\n\n\tMOVOA     X1, X5\n\tPCLMULQDQ $0, X0, X1\n\tPCLMULQDQ $0x11, X0, X5\n\tPXOR      X5, X1\n\tPXOR      X2, X1\n\n\tMOVOA     X1, X5\n\tPCLMULQDQ $0, X0, X1\n\tPCLMULQDQ $0x11, X0, X5\n\tPXOR      X5, X1\n\tPXOR      X3, X1\n\n\tMOVOA     X1, X5\n\tPCLMULQDQ $0, X0, X1\n\tPCLMULQDQ $0x11, X0, X5\n\tPXOR      X5, X1\n\tPXOR      X4, X1\n\n\t// More 
than 16 bytes left?\n\tCMPQ CX, $16\n\tJB   finish\n\n\t// Encode 16 bytes\nremain16:\n\tMOVOU     (SI), X10\n\tMOVOA     X1, X5\n\tPCLMULQDQ $0, X0, X1\n\tPCLMULQDQ $0x11, X0, X5\n\tPXOR      X5, X1\n\tPXOR      X10, X1\n\tSUBQ      $16, CX\n\tADDQ      $16, SI\n\tCMPQ      CX, $16\n\tJGE       remain16\n\nfinish:\n\t// Fold final result into 32 bits and return it\n\tPCMPEQB   X3, X3\n\tPCLMULQDQ $1, X1, X0\n\tPSRLDQ    $8, X1\n\tPXOR      X0, X1\n\n\tMOVOA X1, X2\n\tMOVQ  r5kp<>+0(SB), X0\n\n\t// Creates 32 bit mask. Note that we don't care about upper half.\n\tPSRLQ $32, X3\n\n\tPSRLDQ    $4, X2\n\tPAND      X3, X1\n\tPCLMULQDQ $0, X0, X1\n\tPXOR      X2, X1\n\n\tMOVOA rupolykp<>+0(SB), X0\n\n\tMOVOA     X1, X2\n\tPAND      X3, X1\n\tPCLMULQDQ $0x10, X0, X1\n\tPAND      X3, X1\n\tPCLMULQDQ $0, X0, X1\n\tPXOR      X2, X1\n\n\t// PEXTRD   $1, X1, AX  (SSE 4.1)\n\tBYTE $0x66; BYTE $0x0f; BYTE $0x3a\n\tBYTE $0x16; BYTE $0xc8; BYTE $0x01\n\tMOVL AX, ret+32(FP)\n\n\tRET\n"
  },
  {
    "path": "vendor/github.com/klauspost/crc32/crc32_amd64p32.go",
    "content": "// Copyright 2011 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// +build !appengine,!gccgo\n\npackage crc32\n\n// This file contains the code to call the SSE 4.2 version of the Castagnoli\n// CRC.\n\n// haveSSE42 is defined in crc_amd64p32.s and uses CPUID to test for SSE 4.2\n// support.\nfunc haveSSE42() bool\n\n// castagnoliSSE42 is defined in crc_amd64.s and uses the SSE4.2 CRC32\n// instruction.\n//go:noescape\nfunc castagnoliSSE42(crc uint32, p []byte) uint32\n\nvar sse42 = haveSSE42()\n\nfunc updateCastagnoli(crc uint32, p []byte) uint32 {\n\tif sse42 {\n\t\treturn castagnoliSSE42(crc, p)\n\t}\n\treturn update(crc, castagnoliTable, p)\n}\n\nfunc updateIEEE(crc uint32, p []byte) uint32 {\n\t// only use slicing-by-8 when input is >= 4KB\n\tif len(p) >= 4096 {\n\t\tieeeTable8Once.Do(func() {\n\t\t\tieeeTable8 = makeTable8(IEEE)\n\t\t})\n\t\treturn updateSlicingBy8(crc, ieeeTable8, p)\n\t}\n\n\treturn update(crc, IEEETable, p)\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/crc32/crc32_amd64p32.s",
    "content": "// Copyright 2011 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// +build gc\n\n#define NOSPLIT 4\n#define RODATA 8\n\n// func castagnoliSSE42(crc uint32, p []byte) uint32\nTEXT ·castagnoliSSE42(SB), NOSPLIT, $0\n\tMOVL crc+0(FP), AX   // CRC value\n\tMOVL p+4(FP), SI     // data pointer\n\tMOVL p_len+8(FP), CX // len(p)\n\n\tNOTL AX\n\n\t// If there's less than 8 bytes to process, we do it byte-by-byte.\n\tCMPQ CX, $8\n\tJL   cleanup\n\n\t// Process individual bytes until the input is 8-byte aligned.\nstartup:\n\tMOVQ SI, BX\n\tANDQ $7, BX\n\tJZ   aligned\n\n\tCRC32B (SI), AX\n\tDECQ   CX\n\tINCQ   SI\n\tJMP    startup\n\naligned:\n\t// The input is now 8-byte aligned and we can process 8-byte chunks.\n\tCMPQ CX, $8\n\tJL   cleanup\n\n\tCRC32Q (SI), AX\n\tADDQ   $8, SI\n\tSUBQ   $8, CX\n\tJMP    aligned\n\ncleanup:\n\t// We may have some bytes left over that we process one at a time.\n\tCMPQ CX, $0\n\tJE   done\n\n\tCRC32B (SI), AX\n\tINCQ   SI\n\tDECQ   CX\n\tJMP    cleanup\n\ndone:\n\tNOTL AX\n\tMOVL AX, ret+16(FP)\n\tRET\n\n// func haveSSE42() bool\nTEXT ·haveSSE42(SB), NOSPLIT, $0\n\tXORQ AX, AX\n\tINCL AX\n\tCPUID\n\tSHRQ $20, CX\n\tANDQ $1, CX\n\tMOVB CX, ret+0(FP)\n\tRET\n\n"
  },
  {
    "path": "vendor/github.com/klauspost/crc32/crc32_generic.go",
    "content": "// Copyright 2011 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// +build !amd64,!amd64p32 appengine gccgo\n\npackage crc32\n\n// This file contains the generic version of updateCastagnoli which does\n// slicing-by-8, or uses the fallback for very small sizes.\n\nfunc updateCastagnoli(crc uint32, p []byte) uint32 {\n\t// only use slicing-by-8 when input is >= 16 Bytes\n\tif len(p) >= 16 {\n\t\treturn updateSlicingBy8(crc, castagnoliTable8, p)\n\t}\n\treturn update(crc, castagnoliTable, p)\n}\n\nfunc updateIEEE(crc uint32, p []byte) uint32 {\n\t// only use slicing-by-8 when input is >= 16 Bytes\n\tif len(p) >= 16 {\n\t\tieeeTable8Once.Do(func() {\n\t\t\tieeeTable8 = makeTable8(IEEE)\n\t\t})\n\t\treturn updateSlicingBy8(crc, ieeeTable8, p)\n\t}\n\treturn update(crc, IEEETable, p)\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/pgzip/LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2014 Klaus Post\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n"
  },
  {
    "path": "vendor/github.com/klauspost/pgzip/gunzip.go",
    "content": "// Copyright 2009 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// Package pgzip implements reading and writing of gzip format compressed files,\n// as specified in RFC 1952.\n//\n// This is a drop in replacement for \"compress/gzip\".\n// This will split compression into blocks that are compressed in parallel.\n// This can be useful for compressing big amounts of data.\n// The gzip decompression has not been modified, but remains in the package,\n// so you can use it as a complete replacement for \"compress/gzip\".\n//\n// See more at https://github.com/klauspost/pgzip\npackage pgzip\n\nimport (\n\t\"bufio\"\n\t\"errors\"\n\t\"hash\"\n\t\"io\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/klauspost/compress/flate\"\n\t\"github.com/klauspost/crc32\"\n)\n\nconst (\n\tgzipID1     = 0x1f\n\tgzipID2     = 0x8b\n\tgzipDeflate = 8\n\tflagText    = 1 << 0\n\tflagHdrCrc  = 1 << 1\n\tflagExtra   = 1 << 2\n\tflagName    = 1 << 3\n\tflagComment = 1 << 4\n)\n\nfunc makeReader(r io.Reader) flate.Reader {\n\tif rr, ok := r.(flate.Reader); ok {\n\t\treturn rr\n\t}\n\treturn bufio.NewReader(r)\n}\n\nvar (\n\t// ErrChecksum is returned when reading GZIP data that has an invalid checksum.\n\tErrChecksum = errors.New(\"gzip: invalid checksum\")\n\t// ErrHeader is returned when reading GZIP data that has an invalid header.\n\tErrHeader = errors.New(\"gzip: invalid header\")\n)\n\n// The gzip file stores a header giving metadata about the compressed file.\n// That header is exposed as the fields of the Writer and Reader structs.\ntype Header struct {\n\tComment string    // comment\n\tExtra   []byte    // \"extra data\"\n\tModTime time.Time // modification time\n\tName    string    // file name\n\tOS      byte      // operating system type\n}\n\n// A Reader is an io.Reader that can be read to retrieve\n// uncompressed data from a gzip-format compressed file.\n//\n// In 
general, a gzip file can be a concatenation of gzip files,\n// each with its own header.  Reads from the Reader\n// return the concatenation of the uncompressed data of each.\n// Only the first header is recorded in the Reader fields.\n//\n// Gzip files store a length and checksum of the uncompressed data.\n// The Reader will return a ErrChecksum when Read\n// reaches the end of the uncompressed data if it does not\n// have the expected length or checksum.  Clients should treat data\n// returned by Read as tentative until they receive the io.EOF\n// marking the end of the data.\ntype Reader struct {\n\tHeader\n\tr            flate.Reader\n\tdecompressor io.ReadCloser\n\tdigest       hash.Hash32\n\tsize         uint32\n\tflg          byte\n\tbuf          [512]byte\n\terr          error\n\tcloseErr     chan error\n\tmultistream  bool\n\n\treadAhead   chan read\n\troff        int // read offset\n\tcurrent     []byte\n\tcloseReader chan struct{}\n\tlastBlock   bool\n\tblockSize   int\n\tblocks      int\n\n\tactiveRA bool       // Indication if readahead is active\n\tmu       sync.Mutex // Lock for above\n\n\tblockPool chan []byte\n}\n\ntype read struct {\n\tb   []byte\n\terr error\n}\n\n// NewReader creates a new Reader reading the given reader.\n// The implementation buffers input and may read more data than necessary from r.\n// It is the caller's responsibility to call Close on the Reader when done.\nfunc NewReader(r io.Reader) (*Reader, error) {\n\tz := new(Reader)\n\tz.blocks = defaultBlocks\n\tz.blockSize = defaultBlockSize\n\tz.r = makeReader(r)\n\tz.digest = crc32.NewIEEE()\n\tz.multistream = true\n\tz.blockPool = make(chan []byte, z.blocks)\n\tfor i := 0; i < z.blocks; i++ {\n\t\tz.blockPool <- make([]byte, z.blockSize)\n\t}\n\tif err := z.readHeader(true); err != nil {\n\t\treturn nil, err\n\t}\n\treturn z, nil\n}\n\n// NewReaderN creates a new Reader reading the given reader.\n// The implementation buffers input and may read more data than necessary from 
r.\n// It is the caller's responsibility to call Close on the Reader when done.\n//\n// With this you can control the approximate size of your blocks,\n// as well as how many blocks you want to have prefetched.\n//\n// Default values for this is blockSize = 250000, blocks = 16,\n// meaning up to 16 blocks of maximum 250000 bytes will be\n// prefetched.\nfunc NewReaderN(r io.Reader, blockSize, blocks int) (*Reader, error) {\n\tz := new(Reader)\n\tz.blocks = blocks\n\tz.blockSize = blockSize\n\tz.r = makeReader(r)\n\tz.digest = crc32.NewIEEE()\n\tz.multistream = true\n\n\t// Account for too small values\n\tif z.blocks <= 0 {\n\t\tz.blocks = defaultBlocks\n\t}\n\tif z.blockSize <= 512 {\n\t\tz.blockSize = defaultBlockSize\n\t}\n\tz.blockPool = make(chan []byte, z.blocks)\n\tfor i := 0; i < z.blocks; i++ {\n\t\tz.blockPool <- make([]byte, z.blockSize)\n\t}\n\tif err := z.readHeader(true); err != nil {\n\t\treturn nil, err\n\t}\n\treturn z, nil\n}\n\n// Reset discards the Reader z's state and makes it equivalent to the\n// result of its original state from NewReader, but reading from r instead.\n// This permits reusing a Reader rather than allocating a new one.\nfunc (z *Reader) Reset(r io.Reader) error {\n\tz.killReadAhead()\n\tz.r = makeReader(r)\n\tz.digest = crc32.NewIEEE()\n\tz.size = 0\n\tz.err = nil\n\tz.multistream = true\n\n\t// Account for uninitialized values\n\tif z.blocks <= 0 {\n\t\tz.blocks = defaultBlocks\n\t}\n\tif z.blockSize <= 512 {\n\t\tz.blockSize = defaultBlockSize\n\t}\n\n\tif z.blockPool == nil {\n\t\tz.blockPool = make(chan []byte, z.blocks)\n\t\tfor i := 0; i < z.blocks; i++ {\n\t\t\tz.blockPool <- make([]byte, z.blockSize)\n\t\t}\n\t}\n\n\treturn z.readHeader(true)\n}\n\n// Multistream controls whether the reader supports multistream files.\n//\n// If enabled (the default), the Reader expects the input to be a sequence\n// of individually gzipped data streams, each with its own header and\n// trailer, ending at EOF. 
The effect is that the concatenation of a sequence\n// of gzipped files is treated as equivalent to the gzip of the concatenation\n// of the sequence. This is standard behavior for gzip readers.\n//\n// Calling Multistream(false) disables this behavior; disabling the behavior\n// can be useful when reading file formats that distinguish individual gzip\n// data streams or mix gzip data streams with other data streams.\n// In this mode, when the Reader reaches the end of the data stream,\n// Read returns io.EOF. If the underlying reader implements io.ByteReader,\n// it will be left positioned just after the gzip stream.\n// To start the next stream, call z.Reset(r) followed by z.Multistream(false).\n// If there is no next stream, z.Reset(r) will return io.EOF.\nfunc (z *Reader) Multistream(ok bool) {\n\tz.multistream = ok\n}\n\n// GZIP (RFC 1952) is little-endian, unlike ZLIB (RFC 1950).\nfunc get4(p []byte) uint32 {\n\treturn uint32(p[0]) | uint32(p[1])<<8 | uint32(p[2])<<16 | uint32(p[3])<<24\n}\n\nfunc (z *Reader) readString() (string, error) {\n\tvar err error\n\tneedconv := false\n\tfor i := 0; ; i++ {\n\t\tif i >= len(z.buf) {\n\t\t\treturn \"\", ErrHeader\n\t\t}\n\t\tz.buf[i], err = z.r.ReadByte()\n\t\tif err != nil {\n\t\t\treturn \"\", err\n\t\t}\n\t\tif z.buf[i] > 0x7f {\n\t\t\tneedconv = true\n\t\t}\n\t\tif z.buf[i] == 0 {\n\t\t\t// GZIP (RFC 1952) specifies that strings are NUL-terminated ISO 8859-1 (Latin-1).\n\t\t\tif needconv {\n\t\t\t\ts := make([]rune, 0, i)\n\t\t\t\tfor _, v := range z.buf[0:i] {\n\t\t\t\t\ts = append(s, rune(v))\n\t\t\t\t}\n\t\t\t\treturn string(s), nil\n\t\t\t}\n\t\t\treturn string(z.buf[0:i]), nil\n\t\t}\n\t}\n}\n\nfunc (z *Reader) read2() (uint32, error) {\n\t_, err := io.ReadFull(z.r, z.buf[0:2])\n\tif err != nil {\n\t\treturn 0, err\n\t}\n\treturn uint32(z.buf[0]) | uint32(z.buf[1])<<8, nil\n}\n\nfunc (z *Reader) readHeader(save bool) error {\n\tz.killReadAhead()\n\n\t_, err := io.ReadFull(z.r, z.buf[0:10])\n\tif err != nil 
{\n\t\treturn err\n\t}\n\tif z.buf[0] != gzipID1 || z.buf[1] != gzipID2 || z.buf[2] != gzipDeflate {\n\t\treturn ErrHeader\n\t}\n\tz.flg = z.buf[3]\n\tif save {\n\t\tz.ModTime = time.Unix(int64(get4(z.buf[4:8])), 0)\n\t\t// z.buf[8] is xfl, ignored\n\t\tz.OS = z.buf[9]\n\t}\n\tz.digest.Reset()\n\tz.digest.Write(z.buf[0:10])\n\n\tif z.flg&flagExtra != 0 {\n\t\tn, err := z.read2()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdata := make([]byte, n)\n\t\tif _, err = io.ReadFull(z.r, data); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif save {\n\t\t\tz.Extra = data\n\t\t}\n\t}\n\n\tvar s string\n\tif z.flg&flagName != 0 {\n\t\tif s, err = z.readString(); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif save {\n\t\t\tz.Name = s\n\t\t}\n\t}\n\n\tif z.flg&flagComment != 0 {\n\t\tif s, err = z.readString(); err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif save {\n\t\t\tz.Comment = s\n\t\t}\n\t}\n\n\tif z.flg&flagHdrCrc != 0 {\n\t\tn, err := z.read2()\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsum := z.digest.Sum32() & 0xFFFF\n\t\tif n != sum {\n\t\t\treturn ErrHeader\n\t\t}\n\t}\n\n\tz.digest.Reset()\n\tz.decompressor = flate.NewReader(z.r)\n\tz.doReadAhead()\n\treturn nil\n}\n\nfunc (z *Reader) killReadAhead() error {\n\tz.mu.Lock()\n\tdefer z.mu.Unlock()\n\tif z.activeRA {\n\t\tif z.closeReader != nil {\n\t\t\tclose(z.closeReader)\n\t\t}\n\n\t\t// Wait for decompressor to be closed and return error, if any.\n\t\te, ok := <-z.closeErr\n\t\tz.activeRA = false\n\t\tif !ok {\n\t\t\t// Channel is closed, so if there was any error it has already been returned.\n\t\t\treturn nil\n\t\t}\n\t\treturn e\n\t}\n\treturn nil\n}\n\n// Starts readahead.\n// Will return on error (including io.EOF)\n// or when z.closeReader is closed.\nfunc (z *Reader) doReadAhead() {\n\tz.mu.Lock()\n\tdefer z.mu.Unlock()\n\tz.activeRA = true\n\n\tif z.blocks <= 0 {\n\t\tz.blocks = defaultBlocks\n\t}\n\tif z.blockSize <= 512 {\n\t\tz.blockSize = defaultBlockSize\n\t}\n\tra := make(chan read, 
z.blocks)\n\tz.readAhead = ra\n\tcloseReader := make(chan struct{}, 0)\n\tz.closeReader = closeReader\n\tz.lastBlock = false\n\tcloseErr := make(chan error, 1)\n\tz.closeErr = closeErr\n\tz.size = 0\n\tz.roff = 0\n\tz.current = nil\n\tdecomp := z.decompressor\n\n\tgo func() {\n\t\tdefer func() {\n\t\t\tcloseErr <- decomp.Close()\n\t\t\tclose(closeErr)\n\t\t\tclose(ra)\n\t\t}()\n\n\t\t// We hold a local reference to digest, since\n\t\t// it may be changed by reset.\n\t\tdigest := z.digest\n\t\tvar wg sync.WaitGroup\n\t\tfor {\n\t\t\tvar buf []byte\n\t\t\tselect {\n\t\t\tcase buf = <-z.blockPool:\n\t\t\tcase <-closeReader:\n\t\t\t\treturn\n\t\t\t}\n\t\t\tbuf = buf[0:z.blockSize]\n\t\t\t// Try to fill the buffer\n\t\t\tn, err := io.ReadFull(decomp, buf)\n\t\t\tif err == io.ErrUnexpectedEOF {\n\t\t\t\terr = nil\n\t\t\t}\n\t\t\tif n < len(buf) {\n\t\t\t\tbuf = buf[0:n]\n\t\t\t}\n\t\t\twg.Wait()\n\t\t\twg.Add(1)\n\t\t\tgo func() {\n\t\t\t\tdigest.Write(buf)\n\t\t\t\twg.Done()\n\t\t\t}()\n\t\t\tz.size += uint32(n)\n\n\t\t\t// If we return any error, our digest must be ready\n\t\t\tif err != nil {\n\t\t\t\twg.Wait()\n\t\t\t}\n\t\t\tselect {\n\t\t\tcase z.readAhead <- read{b: buf, err: err}:\n\t\t\tcase <-closeReader:\n\t\t\t\t// Sent on close, we don't care about the next results\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif err != nil {\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}()\n}\n\nfunc (z *Reader) Read(p []byte) (n int, err error) {\n\tif z.err != nil {\n\t\treturn 0, z.err\n\t}\n\tif len(p) == 0 {\n\t\treturn 0, nil\n\t}\n\n\tfor {\n\t\tif len(z.current) == 0 && !z.lastBlock {\n\t\t\tread := <-z.readAhead\n\n\t\t\tif read.err != nil {\n\t\t\t\t// If not nil, the reader will have exited\n\t\t\t\tz.closeReader = nil\n\n\t\t\t\tif read.err != io.EOF {\n\t\t\t\t\tz.err = read.err\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tif read.err == io.EOF {\n\t\t\t\t\tz.lastBlock = true\n\t\t\t\t\terr = nil\n\t\t\t\t}\n\t\t\t}\n\t\t\tz.current = read.b\n\t\t\tz.roff = 0\n\t\t}\n\t\tavail := 
z.current[z.roff:]\n\t\tif len(p) >= len(avail) {\n\t\t\t// If len(p) >= len(current), return all content of current\n\t\t\tn = copy(p, avail)\n\t\t\tz.blockPool <- z.current\n\t\t\tz.current = nil\n\t\t\tif z.lastBlock {\n\t\t\t\terr = io.EOF\n\t\t\t\tbreak\n\t\t\t}\n\t\t} else {\n\t\t\t// We copy as much as there is space for\n\t\t\tn = copy(p, avail)\n\t\t\tz.roff += n\n\t\t}\n\t\treturn\n\t}\n\n\t// Finished file; check checksum + size.\n\tif _, err := io.ReadFull(z.r, z.buf[0:8]); err != nil {\n\t\tz.err = err\n\t\treturn 0, err\n\t}\n\tcrc32, isize := get4(z.buf[0:4]), get4(z.buf[4:8])\n\tsum := z.digest.Sum32()\n\tif sum != crc32 || isize != z.size {\n\t\tz.err = ErrChecksum\n\t\treturn 0, z.err\n\t}\n\n\t// File is ok; should we attempt reading one more?\n\tif !z.multistream {\n\t\treturn 0, io.EOF\n\t}\n\n\t// Is there another?\n\tif err = z.readHeader(false); err != nil {\n\t\tz.err = err\n\t\treturn\n\t}\n\n\t// Yes.  Reset and read from it.\n\treturn z.Read(p)\n}\n\nfunc (z *Reader) WriteTo(w io.Writer) (n int64, err error) {\n\ttotal := int64(0)\n\tfor {\n\t\tif z.err != nil {\n\t\t\treturn total, z.err\n\t\t}\n\t\t// We write both to output and digest.\n\t\tfor {\n\t\t\t// Read from input\n\t\t\tread := <-z.readAhead\n\t\t\tif read.err != nil {\n\t\t\t\t// If not nil, the reader will have exited\n\t\t\t\tz.closeReader = nil\n\n\t\t\t\tif read.err != io.EOF {\n\t\t\t\t\tz.err = read.err\n\t\t\t\t\treturn total, z.err\n\t\t\t\t}\n\t\t\t\tif read.err == io.EOF {\n\t\t\t\t\tz.lastBlock = true\n\t\t\t\t\terr = nil\n\t\t\t\t}\n\t\t\t}\n\t\t\t// Write what we got\n\t\t\tn, err := w.Write(read.b)\n\t\t\tif n != len(read.b) {\n\t\t\t\treturn total, io.ErrShortWrite\n\t\t\t}\n\t\t\ttotal += int64(n)\n\t\t\tif err != nil {\n\t\t\t\treturn total, err\n\t\t\t}\n\t\t\t// Put block back\n\t\t\tz.blockPool <- read.b\n\t\t\tif z.lastBlock {\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\n\t\t// Finished file; check checksum + size.\n\t\tif _, err := io.ReadFull(z.r, z.buf[0:8]); 
err != nil {\n\t\t\tz.err = err\n\t\t\treturn total, err\n\t\t}\n\t\tcrc32, isize := get4(z.buf[0:4]), get4(z.buf[4:8])\n\t\tsum := z.digest.Sum32()\n\t\tif sum != crc32 || isize != z.size {\n\t\t\tz.err = ErrChecksum\n\t\t\treturn total, z.err\n\t\t}\n\t\t// File is ok; should we attempt reading one more?\n\t\tif !z.multistream {\n\t\t\treturn total, nil\n\t\t}\n\n\t\t// Is there another?\n\t\terr = z.readHeader(false)\n\t\tif err == io.EOF {\n\t\t\treturn total, nil\n\t\t}\n\t\tif err != nil {\n\t\t\tz.err = err\n\t\t\treturn total, err\n\t\t}\n\t}\n}\n\n// Close closes the Reader. It does not close the underlying io.Reader.\nfunc (z *Reader) Close() error {\n\treturn z.killReadAhead()\n}\n"
  },
  {
    "path": "vendor/github.com/klauspost/pgzip/gzip.go",
    "content": "// Copyright 2010 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage pgzip\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"hash\"\n\t\"io\"\n\t\"sync\"\n\n\t\"github.com/klauspost/compress/flate\"\n\t\"github.com/klauspost/crc32\"\n)\n\nconst (\n\tdefaultBlockSize = 250000\n\ttailSize         = 16384\n\tdefaultBlocks    = 16\n)\n\n// These constants are copied from the flate package, so that code that imports\n// \"compress/gzip\" does not also have to import \"compress/flate\".\nconst (\n\tNoCompression       = flate.NoCompression\n\tBestSpeed           = flate.BestSpeed\n\tBestCompression     = flate.BestCompression\n\tDefaultCompression  = flate.DefaultCompression\n\tConstantCompression = flate.ConstantCompression\n)\n\n// A Writer is an io.WriteCloser.\n// Writes to a Writer are compressed and written to w.\ntype Writer struct {\n\tHeader\n\tw             io.Writer\n\tlevel         int\n\twroteHeader   bool\n\tblockSize     int\n\tblocks        int\n\tcurrentBuffer []byte\n\tprevTail      []byte\n\tdigest        hash.Hash32\n\tsize          int\n\tclosed        bool\n\tbuf           [10]byte\n\terr           error\n\tpushedErr     chan error\n\tresults       chan result\n\tdictFlatePool *sync.Pool\n\tdstPool       *sync.Pool\n}\n\ntype result struct {\n\tresult        chan []byte\n\tnotifyWritten chan struct{}\n}\n\n// Use SetConcurrency to finetune the concurrency level if needed.\n//\n// With this you can control the approximate size of your blocks,\n// as well as how many you want to be processing in parallel.\n//\n// Default values for this is SetConcurrency(250000, 16),\n// meaning blocks are split at 250000 bytes and up to 16 blocks\n// can be processing at once before the writer blocks.\nfunc (z *Writer) SetConcurrency(blockSize, blocks int) error {\n\tif blockSize <= tailSize {\n\t\treturn fmt.Errorf(\"gzip: block size cannot be 
less than or equal to %d\", tailSize)\n\t}\n\tif blocks <= 0 {\n\t\treturn errors.New(\"gzip: blocks cannot be zero or less\")\n\t}\n\tz.blockSize = blockSize\n\tz.results = make(chan result, blocks)\n\tz.blocks = blocks\n\treturn nil\n}\n\n// NewWriter returns a new Writer.\n// Writes to the returned writer are compressed and written to w.\n//\n// It is the caller's responsibility to call Close on the WriteCloser when done.\n// Writes may be buffered and not flushed until Close.\n//\n// Callers that wish to set the fields in Writer.Header must do so before\n// the first call to Write or Close. The Comment and Name header fields are\n// UTF-8 strings in Go, but the underlying format requires NUL-terminated ISO\n// 8859-1 (Latin-1). NUL or non-Latin-1 runes in those strings will lead to an\n// error on Write.\nfunc NewWriter(w io.Writer) *Writer {\n\tz, _ := NewWriterLevel(w, DefaultCompression)\n\treturn z\n}\n\n// NewWriterLevel is like NewWriter but specifies the compression level instead\n// of assuming DefaultCompression.\n//\n// The compression level can be DefaultCompression, NoCompression, or any\n// integer value between BestSpeed and BestCompression inclusive. 
The error\n// returned will be nil if the level is valid.\nfunc NewWriterLevel(w io.Writer, level int) (*Writer, error) {\n\tif level < ConstantCompression || level > BestCompression {\n\t\treturn nil, fmt.Errorf(\"gzip: invalid compression level: %d\", level)\n\t}\n\tz := new(Writer)\n\tz.SetConcurrency(defaultBlockSize, defaultBlocks)\n\tz.init(w, level)\n\treturn z, nil\n}\n\n// This function must be used by goroutines to set an\n// error condition, since z.err access is restricted\n// to the caller's goroutine.\nfunc (z *Writer) pushError(err error) {\n\tz.pushedErr <- err\n\tclose(z.pushedErr)\n}\n\nfunc (z *Writer) init(w io.Writer, level int) {\n\tdigest := z.digest\n\tif digest != nil {\n\t\tdigest.Reset()\n\t} else {\n\t\tdigest = crc32.NewIEEE()\n\t}\n\n\t*z = Writer{\n\t\tHeader: Header{\n\t\t\tOS: 255, // unknown\n\t\t},\n\t\tw:         w,\n\t\tlevel:     level,\n\t\tdigest:    digest,\n\t\tpushedErr: make(chan error, 1),\n\t\tresults:   make(chan result, z.blocks),\n\t\tblockSize: z.blockSize,\n\t\tblocks:    z.blocks,\n\t}\n\tz.dictFlatePool = &sync.Pool{\n\t\tNew: func() interface{} {\n\t\t\tf, _ := flate.NewWriterDict(w, level, nil)\n\t\t\treturn f\n\t\t},\n\t}\n\tz.dstPool = &sync.Pool{New: func() interface{} { return make([]byte, 0, z.blockSize) }}\n\n}\n\n// Reset discards the Writer z's state and makes it equivalent to the\n// result of its original state from NewWriter or NewWriterLevel, but\n// writing to w instead. 
This permits reusing a Writer rather than\n// allocating a new one.\nfunc (z *Writer) Reset(w io.Writer) {\n\tif z.results != nil && !z.closed {\n\t\tclose(z.results)\n\t}\n\tz.SetConcurrency(defaultBlockSize, defaultBlocks)\n\tz.init(w, z.level)\n}\n\n// GZIP (RFC 1952) is little-endian, unlike ZLIB (RFC 1950).\nfunc put2(p []byte, v uint16) {\n\tp[0] = uint8(v >> 0)\n\tp[1] = uint8(v >> 8)\n}\n\nfunc put4(p []byte, v uint32) {\n\tp[0] = uint8(v >> 0)\n\tp[1] = uint8(v >> 8)\n\tp[2] = uint8(v >> 16)\n\tp[3] = uint8(v >> 24)\n}\n\n// writeBytes writes a length-prefixed byte slice to z.w.\nfunc (z *Writer) writeBytes(b []byte) error {\n\tif len(b) > 0xffff {\n\t\treturn errors.New(\"gzip.Write: Extra data is too large\")\n\t}\n\tput2(z.buf[0:2], uint16(len(b)))\n\t_, err := z.w.Write(z.buf[0:2])\n\tif err != nil {\n\t\treturn err\n\t}\n\t_, err = z.w.Write(b)\n\treturn err\n}\n\n// writeString writes a UTF-8 string s in GZIP's format to z.w.\n// GZIP (RFC 1952) specifies that strings are NUL-terminated ISO 8859-1 (Latin-1).\nfunc (z *Writer) writeString(s string) (err error) {\n\t// GZIP stores Latin-1 strings; error if non-Latin-1; convert if non-ASCII.\n\tneedconv := false\n\tfor _, v := range s {\n\t\tif v == 0 || v > 0xff {\n\t\t\treturn errors.New(\"gzip.Write: non-Latin-1 header string\")\n\t\t}\n\t\tif v > 0x7f {\n\t\t\tneedconv = true\n\t\t}\n\t}\n\tif needconv {\n\t\tb := make([]byte, 0, len(s))\n\t\tfor _, v := range s {\n\t\t\tb = append(b, byte(v))\n\t\t}\n\t\t_, err = z.w.Write(b)\n\t} else {\n\t\t_, err = io.WriteString(z.w, s)\n\t}\n\tif err != nil {\n\t\treturn err\n\t}\n\t// GZIP strings are NUL-terminated.\n\tz.buf[0] = 0\n\t_, err = z.w.Write(z.buf[0:1])\n\treturn err\n}\n\n// compressCurrent will compress the data currently buffered\n// This should only be called from the main writer/flush/closer\nfunc (z *Writer) compressCurrent(flush bool) {\n\tr := result{}\n\tr.result = make(chan []byte, 1)\n\tr.notifyWritten = make(chan struct{}, 
0)\n\tz.results <- r\n\n\t// If block given is more than twice the block size, split it.\n\tc := z.currentBuffer\n\tif len(c) > z.blockSize*2 {\n\t\tc = c[:z.blockSize]\n\t\tgo compressBlock(c, z.prevTail, *z, r)\n\t\tz.prevTail = c[len(c)-tailSize:]\n\t\tz.currentBuffer = z.currentBuffer[z.blockSize:]\n\t\tz.compressCurrent(flush)\n\t\t// Last one flushes if needed\n\t\treturn\n\t}\n\n\tgo compressBlock(c, z.prevTail, *z, r)\n\tif len(c) > tailSize {\n\t\tz.prevTail = c[len(c)-tailSize:]\n\t} else {\n\t\tz.prevTail = nil\n\t}\n\tz.currentBuffer = make([]byte, 0, z.blockSize+(z.blockSize/4))\n\n\t// Wait if flushing\n\tif flush {\n\t\t_ = <-r.notifyWritten\n\t}\n}\n\n// Returns an error if it has been set.\n// Cannot be used by functions that are from internal goroutines.\nfunc (z *Writer) checkError() error {\n\tif z.err != nil {\n\t\treturn z.err\n\t}\n\tselect {\n\tcase err := <-z.pushedErr:\n\t\tz.err = err\n\tdefault:\n\t}\n\treturn z.err\n}\n\n// Write writes a compressed form of p to the underlying io.Writer. 
The\n// compressed bytes are not necessarily flushed to output until\n// the Writer is closed or Flush() is called.\n//\n// The function will return quickly, if there are unused buffers.\n// The sent slice (p) is copied, and the caller is free to re-use the buffer\n// when the function returns.\n//\n// Errors that occur during compression will be reported later, and a nil error\n// does not signify that the compression succeeded (since it is most likely still running)\n// That means that the call that returns an error may not be the call that caused it.\n// Only Flush and Close functions are guaranteed to return any errors up to that point.\nfunc (z *Writer) Write(p []byte) (int, error) {\n\tif z.checkError() != nil {\n\t\treturn 0, z.err\n\t}\n\t// Write the GZIP header lazily.\n\tif !z.wroteHeader {\n\t\tz.wroteHeader = true\n\t\tz.buf[0] = gzipID1\n\t\tz.buf[1] = gzipID2\n\t\tz.buf[2] = gzipDeflate\n\t\tz.buf[3] = 0\n\t\tif z.Extra != nil {\n\t\t\tz.buf[3] |= 0x04\n\t\t}\n\t\tif z.Name != \"\" {\n\t\t\tz.buf[3] |= 0x08\n\t\t}\n\t\tif z.Comment != \"\" {\n\t\t\tz.buf[3] |= 0x10\n\t\t}\n\t\tput4(z.buf[4:8], uint32(z.ModTime.Unix()))\n\t\tif z.level == BestCompression {\n\t\t\tz.buf[8] = 2\n\t\t} else if z.level == BestSpeed {\n\t\t\tz.buf[8] = 4\n\t\t} else {\n\t\t\tz.buf[8] = 0\n\t\t}\n\t\tz.buf[9] = z.OS\n\t\tvar n int\n\t\tn, z.err = z.w.Write(z.buf[0:10])\n\t\tif z.err != nil {\n\t\t\treturn n, z.err\n\t\t}\n\t\tif z.Extra != nil {\n\t\t\tz.err = z.writeBytes(z.Extra)\n\t\t\tif z.err != nil {\n\t\t\t\treturn n, z.err\n\t\t\t}\n\t\t}\n\t\tif z.Name != \"\" {\n\t\t\tz.err = z.writeString(z.Name)\n\t\t\tif z.err != nil {\n\t\t\t\treturn n, z.err\n\t\t\t}\n\t\t}\n\t\tif z.Comment != \"\" {\n\t\t\tz.err = z.writeString(z.Comment)\n\t\t\tif z.err != nil {\n\t\t\t\treturn n, z.err\n\t\t\t}\n\t\t}\n\t\t// Start receiving data from compressors\n\t\tgo func() {\n\t\t\tlisten := z.results\n\t\t\tfor {\n\t\t\t\tr, ok := <-listen\n\t\t\t\t// If closed, we are 
finished.\n\t\t\t\tif !ok {\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tbuf := <-r.result\n\t\t\t\tn, err := z.w.Write(buf)\n\t\t\t\tif err != nil {\n\t\t\t\t\tz.pushError(err)\n\t\t\t\t\tclose(r.notifyWritten)\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tif n != len(buf) {\n\t\t\t\t\tz.pushError(fmt.Errorf(\"gzip: short write %d should be %d\", n, len(buf)))\n\t\t\t\t\tclose(r.notifyWritten)\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tz.dstPool.Put(buf)\n\t\t\t\tclose(r.notifyWritten)\n\t\t\t}\n\t\t}()\n\t\tz.currentBuffer = make([]byte, 0, z.blockSize+(z.blockSize/4))\n\t}\n\t// Handle very large writes in a loop\n\tif len(p) > z.blockSize*z.blocks {\n\t\tq := p\n\t\tfor len(q) > 0 {\n\t\t\tlength := len(q)\n\t\t\tif length > z.blockSize {\n\t\t\t\tlength = z.blockSize\n\t\t\t}\n\t\t\tz.digest.Write(q[:length])\n\t\t\tz.currentBuffer = append(z.currentBuffer, q[:length]...)\n\t\t\tif len(z.currentBuffer) >= z.blockSize {\n\t\t\t\tz.compressCurrent(false)\n\t\t\t\tif z.err != nil {\n\t\t\t\t\treturn len(p) - len(q) - length, z.err\n\t\t\t\t}\n\t\t\t}\n\t\t\tz.size += length\n\t\t\tq = q[length:]\n\t\t}\n\t\treturn len(p), z.err\n\t} else {\n\t\tz.size += len(p)\n\t\tz.digest.Write(p)\n\t\tz.currentBuffer = append(z.currentBuffer, p...)\n\t\tif len(z.currentBuffer) >= z.blockSize {\n\t\t\tz.compressCurrent(false)\n\t\t}\n\t\treturn len(p), z.err\n\t}\n}\n\n// Step 1: compresses buffer to buffer\n// Step 2: send writer to channel\n// Step 3: Close result channel to indicate we are done\nfunc compressBlock(p, prevTail []byte, z Writer, r result) {\n\tdefer close(r.result)\n\tbuf := z.dstPool.Get().([]byte)\n\tdest := bytes.NewBuffer(buf[:0])\n\n\tcompressor := z.dictFlatePool.Get().(*flate.Writer)\n\tcompressor.ResetDict(dest, prevTail)\n\tcompressor.Write(p)\n\n\terr := compressor.Flush()\n\tif err != nil {\n\t\tz.pushError(err)\n\t\treturn\n\t}\n\tif z.closed {\n\t\terr = compressor.Close()\n\t\tif err != nil 
{\n\t\t\tz.pushError(err)\n\t\t\treturn\n\t\t}\n\t}\n\tz.dictFlatePool.Put(compressor)\n\t// Read back buffer\n\tbuf = dest.Bytes()\n\tr.result <- buf\n}\n\n// Flush flushes any pending compressed data to the underlying writer.\n//\n// It is useful mainly in compressed network protocols, to ensure that\n// a remote reader has enough data to reconstruct a packet. Flush does\n// not return until the data has been written. If the underlying\n// writer returns an error, Flush returns that error.\n//\n// In the terminology of the zlib library, Flush is equivalent to Z_SYNC_FLUSH.\nfunc (z *Writer) Flush() error {\n\tif z.checkError() != nil {\n\t\treturn z.err\n\t}\n\tif z.closed {\n\t\treturn nil\n\t}\n\tif !z.wroteHeader {\n\t\t_, err := z.Write(nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\t// We send current block to compression\n\tz.compressCurrent(true)\n\tif z.checkError() != nil {\n\t\treturn z.err\n\t}\n\n\treturn nil\n}\n\n// UncompressedSize will return the number of bytes written.\n// pgzip only, not a function in the official gzip package.\nfunc (z Writer) UncompressedSize() int {\n\treturn z.size\n}\n\n// Close closes the Writer, flushing any unwritten data to the underlying\n// io.Writer, but does not close the underlying io.Writer.\nfunc (z *Writer) Close() error {\n\tif z.checkError() != nil {\n\t\treturn z.err\n\t}\n\tif z.closed {\n\t\treturn nil\n\t}\n\n\tz.closed = true\n\tif !z.wroteHeader {\n\t\tz.Write(nil)\n\t\tif z.err != nil {\n\t\t\treturn z.err\n\t\t}\n\t}\n\tz.compressCurrent(true)\n\tif z.checkError() != nil {\n\t\treturn z.err\n\t}\n\tclose(z.results)\n\tput4(z.buf[0:4], z.digest.Sum32())\n\tput4(z.buf[4:8], uint32(z.size))\n\t_, z.err = z.w.Write(z.buf[0:8])\n\treturn z.err\n}\n"
  },
  {
    "path": "vendor/github.com/opencontainers/go-digest/LICENSE.code",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        https://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. 
However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   Copyright 2016 Docker, Inc.\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       https://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "vendor/github.com/opencontainers/go-digest/LICENSE.docs",
    "content": "Attribution-ShareAlike 4.0 International\n\n=======================================================================\n\nCreative Commons Corporation (\"Creative Commons\") is not a law firm and\ndoes not provide legal services or legal advice. Distribution of\nCreative Commons public licenses does not create a lawyer-client or\nother relationship. Creative Commons makes its licenses and related\ninformation available on an \"as-is\" basis. Creative Commons gives no\nwarranties regarding its licenses, any material licensed under their\nterms and conditions, or any related information. Creative Commons\ndisclaims all liability for damages resulting from their use to the\nfullest extent possible.\n\nUsing Creative Commons Public Licenses\n\nCreative Commons public licenses provide a standard set of terms and\nconditions that creators and other rights holders may use to share\noriginal works of authorship and other material subject to copyright\nand certain other rights specified in the public license below. The\nfollowing considerations are for informational purposes only, are not\nexhaustive, and do not form part of our licenses.\n\n     Considerations for licensors: Our public licenses are\n     intended for use by those authorized to give the public\n     permission to use material in ways otherwise restricted by\n     copyright and certain other rights. Our licenses are\n     irrevocable. Licensors should read and understand the terms\n     and conditions of the license they choose before applying it.\n     Licensors should also secure all rights necessary before\n     applying our licenses so that the public can reuse the\n     material as expected. Licensors should clearly mark any\n     material not subject to the license. This includes other CC-\n     licensed material, or material used under an exception or\n     limitation to copyright. 
More considerations for licensors:\n\twiki.creativecommons.org/Considerations_for_licensors\n\n     Considerations for the public: By using one of our public\n     licenses, a licensor grants the public permission to use the\n     licensed material under specified terms and conditions. If\n     the licensor's permission is not necessary for any reason--for\n     example, because of any applicable exception or limitation to\n     copyright--then that use is not regulated by the license. Our\n     licenses grant only permissions under copyright and certain\n     other rights that a licensor has authority to grant. Use of\n     the licensed material may still be restricted for other\n     reasons, including because others have copyright or other\n     rights in the material. A licensor may make special requests,\n     such as asking that all changes be marked or described.\n     Although not required by our licenses, you are encouraged to\n     respect those requests where reasonable. More_considerations\n     for the public:\n\twiki.creativecommons.org/Considerations_for_licensees\n\n=======================================================================\n\nCreative Commons Attribution-ShareAlike 4.0 International Public\nLicense\n\nBy exercising the Licensed Rights (defined below), You accept and agree\nto be bound by the terms and conditions of this Creative Commons\nAttribution-ShareAlike 4.0 International Public License (\"Public\nLicense\"). To the extent this Public License may be interpreted as a\ncontract, You are granted the Licensed Rights in consideration of Your\nacceptance of these terms and conditions, and the Licensor grants You\nsuch rights in consideration of benefits the Licensor receives from\nmaking the Licensed Material available under these terms and\nconditions.\n\n\nSection 1 -- Definitions.\n\n  a. 
Adapted Material means material subject to Copyright and Similar\n     Rights that is derived from or based upon the Licensed Material\n     and in which the Licensed Material is translated, altered,\n     arranged, transformed, or otherwise modified in a manner requiring\n     permission under the Copyright and Similar Rights held by the\n     Licensor. For purposes of this Public License, where the Licensed\n     Material is a musical work, performance, or sound recording,\n     Adapted Material is always produced where the Licensed Material is\n     synched in timed relation with a moving image.\n\n  b. Adapter's License means the license You apply to Your Copyright\n     and Similar Rights in Your contributions to Adapted Material in\n     accordance with the terms and conditions of this Public License.\n\n  c. BY-SA Compatible License means a license listed at\n     creativecommons.org/compatiblelicenses, approved by Creative\n     Commons as essentially the equivalent of this Public License.\n\n  d. Copyright and Similar Rights means copyright and/or similar rights\n     closely related to copyright including, without limitation,\n     performance, broadcast, sound recording, and Sui Generis Database\n     Rights, without regard to how the rights are labeled or\n     categorized. For purposes of this Public License, the rights\n     specified in Section 2(b)(1)-(2) are not Copyright and Similar\n     Rights.\n\n  e. Effective Technological Measures means those measures that, in the\n     absence of proper authority, may not be circumvented under laws\n     fulfilling obligations under Article 11 of the WIPO Copyright\n     Treaty adopted on December 20, 1996, and/or similar international\n     agreements.\n\n  f. Exceptions and Limitations means fair use, fair dealing, and/or\n     any other exception or limitation to Copyright and Similar Rights\n     that applies to Your use of the Licensed Material.\n\n  g. 
License Elements means the license attributes listed in the name\n     of a Creative Commons Public License. The License Elements of this\n     Public License are Attribution and ShareAlike.\n\n  h. Licensed Material means the artistic or literary work, database,\n     or other material to which the Licensor applied this Public\n     License.\n\n  i. Licensed Rights means the rights granted to You subject to the\n     terms and conditions of this Public License, which are limited to\n     all Copyright and Similar Rights that apply to Your use of the\n     Licensed Material and that the Licensor has authority to license.\n\n  j. Licensor means the individual(s) or entity(ies) granting rights\n     under this Public License.\n\n  k. Share means to provide material to the public by any means or\n     process that requires permission under the Licensed Rights, such\n     as reproduction, public display, public performance, distribution,\n     dissemination, communication, or importation, and to make material\n     available to the public including in ways that members of the\n     public may access the material from a place and at a time\n     individually chosen by them.\n\n  l. Sui Generis Database Rights means rights other than copyright\n     resulting from Directive 96/9/EC of the European Parliament and of\n     the Council of 11 March 1996 on the legal protection of databases,\n     as amended and/or succeeded, as well as other essentially\n     equivalent rights anywhere in the world.\n\n  m. You means the individual or entity exercising the Licensed Rights\n     under this Public License. Your has a corresponding meaning.\n\n\nSection 2 -- Scope.\n\n  a. License grant.\n\n       1. 
Subject to the terms and conditions of this Public License,\n          the Licensor hereby grants You a worldwide, royalty-free,\n          non-sublicensable, non-exclusive, irrevocable license to\n          exercise the Licensed Rights in the Licensed Material to:\n\n            a. reproduce and Share the Licensed Material, in whole or\n               in part; and\n\n            b. produce, reproduce, and Share Adapted Material.\n\n       2. Exceptions and Limitations. For the avoidance of doubt, where\n          Exceptions and Limitations apply to Your use, this Public\n          License does not apply, and You do not need to comply with\n          its terms and conditions.\n\n       3. Term. The term of this Public License is specified in Section\n          6(a).\n\n       4. Media and formats; technical modifications allowed. The\n          Licensor authorizes You to exercise the Licensed Rights in\n          all media and formats whether now known or hereafter created,\n          and to make technical modifications necessary to do so. The\n          Licensor waives and/or agrees not to assert any right or\n          authority to forbid You from making technical modifications\n          necessary to exercise the Licensed Rights, including\n          technical modifications necessary to circumvent Effective\n          Technological Measures. For purposes of this Public License,\n          simply making modifications authorized by this Section 2(a)\n          (4) never produces Adapted Material.\n\n       5. Downstream recipients.\n\n            a. Offer from the Licensor -- Licensed Material. Every\n               recipient of the Licensed Material automatically\n               receives an offer from the Licensor to exercise the\n               Licensed Rights under the terms and conditions of this\n               Public License.\n\n            b. 
Additional offer from the Licensor -- Adapted Material.\n               Every recipient of Adapted Material from You\n               automatically receives an offer from the Licensor to\n               exercise the Licensed Rights in the Adapted Material\n               under the conditions of the Adapter's License You apply.\n\n            c. No downstream restrictions. You may not offer or impose\n               any additional or different terms or conditions on, or\n               apply any Effective Technological Measures to, the\n               Licensed Material if doing so restricts exercise of the\n               Licensed Rights by any recipient of the Licensed\n               Material.\n\n       6. No endorsement. Nothing in this Public License constitutes or\n          may be construed as permission to assert or imply that You\n          are, or that Your use of the Licensed Material is, connected\n          with, or sponsored, endorsed, or granted official status by,\n          the Licensor or others designated to receive attribution as\n          provided in Section 3(a)(1)(A)(i).\n\n  b. Other rights.\n\n       1. Moral rights, such as the right of integrity, are not\n          licensed under this Public License, nor are publicity,\n          privacy, and/or other similar personality rights; however, to\n          the extent possible, the Licensor waives and/or agrees not to\n          assert any such rights held by the Licensor to the limited\n          extent necessary to allow You to exercise the Licensed\n          Rights, but not otherwise.\n\n       2. Patent and trademark rights are not licensed under this\n          Public License.\n\n       3. To the extent possible, the Licensor waives any right to\n          collect royalties from You for the exercise of the Licensed\n          Rights, whether directly or through a collecting society\n          under any voluntary or waivable statutory or compulsory\n          licensing scheme. 
In all other cases the Licensor expressly\n          reserves any right to collect such royalties.\n\n\nSection 3 -- License Conditions.\n\nYour exercise of the Licensed Rights is expressly made subject to the\nfollowing conditions.\n\n  a. Attribution.\n\n       1. If You Share the Licensed Material (including in modified\n          form), You must:\n\n            a. retain the following if it is supplied by the Licensor\n               with the Licensed Material:\n\n                 i. identification of the creator(s) of the Licensed\n                    Material and any others designated to receive\n                    attribution, in any reasonable manner requested by\n                    the Licensor (including by pseudonym if\n                    designated);\n\n                ii. a copyright notice;\n\n               iii. a notice that refers to this Public License;\n\n                iv. a notice that refers to the disclaimer of\n                    warranties;\n\n                 v. a URI or hyperlink to the Licensed Material to the\n                    extent reasonably practicable;\n\n            b. indicate if You modified the Licensed Material and\n               retain an indication of any previous modifications; and\n\n            c. indicate the Licensed Material is licensed under this\n               Public License, and include the text of, or the URI or\n               hyperlink to, this Public License.\n\n       2. You may satisfy the conditions in Section 3(a)(1) in any\n          reasonable manner based on the medium, means, and context in\n          which You Share the Licensed Material. For example, it may be\n          reasonable to satisfy the conditions by providing a URI or\n          hyperlink to a resource that includes the required\n          information.\n\n       3. If requested by the Licensor, You must remove any of the\n          information required by Section 3(a)(1)(A) to the extent\n          reasonably practicable.\n\n  b. 
ShareAlike.\n\n     In addition to the conditions in Section 3(a), if You Share\n     Adapted Material You produce, the following conditions also apply.\n\n       1. The Adapter's License You apply must be a Creative Commons\n          license with the same License Elements, this version or\n          later, or a BY-SA Compatible License.\n\n       2. You must include the text of, or the URI or hyperlink to, the\n          Adapter's License You apply. You may satisfy this condition\n          in any reasonable manner based on the medium, means, and\n          context in which You Share Adapted Material.\n\n       3. You may not offer or impose any additional or different terms\n          or conditions on, or apply any Effective Technological\n          Measures to, Adapted Material that restrict exercise of the\n          rights granted under the Adapter's License You apply.\n\n\nSection 4 -- Sui Generis Database Rights.\n\nWhere the Licensed Rights include Sui Generis Database Rights that\napply to Your use of the Licensed Material:\n\n  a. for the avoidance of doubt, Section 2(a)(1) grants You the right\n     to extract, reuse, reproduce, and Share all or a substantial\n     portion of the contents of the database;\n\n  b. if You include all or a substantial portion of the database\n     contents in a database in which You have Sui Generis Database\n     Rights, then the database in which You have Sui Generis Database\n     Rights (but not its individual contents) is Adapted Material,\n\n     including for purposes of Section 3(b); and\n  c. You must comply with the conditions in Section 3(a) if You Share\n     all or a substantial portion of the contents of the database.\n\nFor the avoidance of doubt, this Section 4 supplements and does not\nreplace Your obligations under this Public License where the Licensed\nRights include other Copyright and Similar Rights.\n\n\nSection 5 -- Disclaimer of Warranties and Limitation of Liability.\n\n  a. 
UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE\n     EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS\n     AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF\n     ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,\n     IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,\n     WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR\n     PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,\n     ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT\n     KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT\n     ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.\n\n  b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE\n     TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,\n     NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,\n     INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,\n     COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR\n     USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN\n     ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR\n     DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR\n     IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.\n\n  c. The disclaimer of warranties and limitation of liability provided\n     above shall be interpreted in a manner that, to the extent\n     possible, most closely approximates an absolute disclaimer and\n     waiver of all liability.\n\n\nSection 6 -- Term and Termination.\n\n  a. This Public License applies for the term of the Copyright and\n     Similar Rights licensed here. However, if You fail to comply with\n     this Public License, then Your rights under this Public License\n     terminate automatically.\n\n  b. Where Your right to use the Licensed Material has terminated under\n     Section 6(a), it reinstates:\n\n       1. 
automatically as of the date the violation is cured, provided\n          it is cured within 30 days of Your discovery of the\n          violation; or\n\n       2. upon express reinstatement by the Licensor.\n\n     For the avoidance of doubt, this Section 6(b) does not affect any\n     right the Licensor may have to seek remedies for Your violations\n     of this Public License.\n\n  c. For the avoidance of doubt, the Licensor may also offer the\n     Licensed Material under separate terms or conditions or stop\n     distributing the Licensed Material at any time; however, doing so\n     will not terminate this Public License.\n\n  d. Sections 1, 5, 6, 7, and 8 survive termination of this Public\n     License.\n\n\nSection 7 -- Other Terms and Conditions.\n\n  a. The Licensor shall not be bound by any additional or different\n     terms or conditions communicated by You unless expressly agreed.\n\n  b. Any arrangements, understandings, or agreements regarding the\n     Licensed Material not stated herein are separate from and\n     independent of the terms and conditions of this Public License.\n\n\nSection 8 -- Interpretation.\n\n  a. For the avoidance of doubt, this Public License does not, and\n     shall not be interpreted to, reduce, limit, restrict, or impose\n     conditions on any use of the Licensed Material that could lawfully\n     be made without permission under this Public License.\n\n  b. To the extent possible, if any provision of this Public License is\n     deemed unenforceable, it shall be automatically reformed to the\n     minimum extent necessary to make it enforceable. If the provision\n     cannot be reformed, it shall be severed from this Public License\n     without affecting the enforceability of the remaining terms and\n     conditions.\n\n  c. No term or condition of this Public License will be waived and no\n     failure to comply consented to unless expressly agreed to by the\n     Licensor.\n\n  d. 
Nothing in this Public License constitutes or may be interpreted\n     as a limitation upon, or waiver of, any privileges and immunities\n     that apply to the Licensor or You, including from the legal\n     processes of any jurisdiction or authority.\n\n\n=======================================================================\n\nCreative Commons is not a party to its public licenses.\nNotwithstanding, Creative Commons may elect to apply one of its public\nlicenses to material it publishes and in those instances will be\nconsidered the \"Licensor.\" Except for the limited purpose of indicating\nthat material is shared under a Creative Commons public license or as\notherwise permitted by the Creative Commons policies published at\ncreativecommons.org/policies, Creative Commons does not authorize the\nuse of the trademark \"Creative Commons\" or any other trademark or logo\nof Creative Commons without its prior written consent including,\nwithout limitation, in connection with any unauthorized modifications\nto any of its public licenses or any other arrangements,\nunderstandings, or agreements concerning use of licensed material. For\nthe avoidance of doubt, this paragraph does not form part of the public\nlicenses.\n\nCreative Commons may be contacted at creativecommons.org.\n"
  },
  {
    "path": "vendor/github.com/opencontainers/go-digest/algorithm.go",
    "content": "package digest\n\nimport (\n\t\"crypto\"\n\t\"fmt\"\n\t\"hash\"\n\t\"io\"\n)\n\n// Algorithm identifies and implementation of a digester by an identifier.\n// Note the that this defines both the hash algorithm used and the string\n// encoding.\ntype Algorithm string\n\n// supported digest types\nconst (\n\tSHA256 Algorithm = \"sha256\" // sha256 with hex encoding\n\tSHA384 Algorithm = \"sha384\" // sha384 with hex encoding\n\tSHA512 Algorithm = \"sha512\" // sha512 with hex encoding\n\n\t// Canonical is the primary digest algorithm used with the distribution\n\t// project. Other digests may be used but this one is the primary storage\n\t// digest.\n\tCanonical = SHA256\n)\n\nvar (\n\t// TODO(stevvooe): Follow the pattern of the standard crypto package for\n\t// registration of digests. Effectively, we are a registerable set and\n\t// common symbol access.\n\n\t// algorithms maps values to hash.Hash implementations. Other algorithms\n\t// may be available but they cannot be calculated by the digest package.\n\talgorithms = map[Algorithm]crypto.Hash{\n\t\tSHA256: crypto.SHA256,\n\t\tSHA384: crypto.SHA384,\n\t\tSHA512: crypto.SHA512,\n\t}\n)\n\n// Available returns true if the digest type is available for use. 
If this\n// returns false, Digester and Hash will return nil.\nfunc (a Algorithm) Available() bool {\n\th, ok := algorithms[a]\n\tif !ok {\n\t\treturn false\n\t}\n\n\t// check availability of the hash, as well\n\treturn h.Available()\n}\n\nfunc (a Algorithm) String() string {\n\treturn string(a)\n}\n\n// Size returns number of bytes returned by the hash.\nfunc (a Algorithm) Size() int {\n\th, ok := algorithms[a]\n\tif !ok {\n\t\treturn 0\n\t}\n\treturn h.Size()\n}\n\n// Set implemented to allow use of Algorithm as a command line flag.\nfunc (a *Algorithm) Set(value string) error {\n\tif value == \"\" {\n\t\t*a = Canonical\n\t} else {\n\t\t// just do a type conversion, support is queried with Available.\n\t\t*a = Algorithm(value)\n\t}\n\n\tif !a.Available() {\n\t\treturn ErrDigestUnsupported\n\t}\n\n\treturn nil\n}\n\n// Digester returns a new digester for the specified algorithm. If the algorithm\n// does not have a digester implementation, nil will be returned. This can be\n// checked by calling Available before calling Digester.\nfunc (a Algorithm) Digester() Digester {\n\treturn &digester{\n\t\talg:  a,\n\t\thash: a.Hash(),\n\t}\n}\n\n// Hash returns a new hash as used by the algorithm. If not available, the\n// method will panic. Check Algorithm.Available() before calling.\nfunc (a Algorithm) Hash() hash.Hash {\n\tif !a.Available() {\n\t\t// Empty algorithm string is invalid\n\t\tif a == \"\" {\n\t\t\tpanic(fmt.Sprintf(\"empty digest algorithm, validate before calling Algorithm.Hash()\"))\n\t\t}\n\n\t\t// NOTE(stevvooe): A missing hash is usually a programming error that\n\t\t// must be resolved at compile time. 
We don't import in the digest\n\t\t// package to allow users to choose their hash implementation (such as\n\t\t// when using stevvooe/resumable or a hardware accelerated package).\n\t\t//\n\t\t// Applications that may want to resolve the hash at runtime should\n\t\t// call Algorithm.Available before calling Algorithm.Hash().\n\t\tpanic(fmt.Sprintf(\"%v not available (make sure it is imported)\", a))\n\t}\n\n\treturn algorithms[a].New()\n}\n\n// FromReader returns the digest of the reader using the algorithm.\nfunc (a Algorithm) FromReader(rd io.Reader) (Digest, error) {\n\tdigester := a.Digester()\n\n\tif _, err := io.Copy(digester.Hash(), rd); err != nil {\n\t\treturn \"\", err\n\t}\n\n\treturn digester.Digest(), nil\n}\n\n// FromBytes digests the input and returns a Digest.\nfunc (a Algorithm) FromBytes(p []byte) Digest {\n\tdigester := a.Digester()\n\n\tif _, err := digester.Hash().Write(p); err != nil {\n\t\t// Writes to a Hash should never fail. None of the existing\n\t\t// hash implementations in the stdlib or hashes vendored\n\t\t// here can return errors from Write. Having a panic in this\n\t\t// condition instead of having FromBytes return an error value\n\t\t// avoids unnecessary error handling paths in all callers.\n\t\tpanic(\"write to hash function returned error: \" + err.Error())\n\t}\n\n\treturn digester.Digest()\n}\n\n// FromString digests the string input and returns a Digest.\nfunc (a Algorithm) FromString(s string) Digest {\n\treturn a.FromBytes([]byte(s))\n}\n"
  },
  {
    "path": "vendor/github.com/opencontainers/go-digest/digest.go",
"content": "package digest\n\nimport (\n\t\"fmt\"\n\t\"hash\"\n\t\"io\"\n\t\"regexp\"\n\t\"strings\"\n)\n\n// Digest allows simple protection of hex-formatted digest strings, prefixed\n// by their algorithm. Strings of type Digest have some guarantee of being in\n// the correct format, and the type provides quick access to the components of\n// a digest string.\n//\n// The following is an example of the contents of Digest types:\n//\n// \tsha256:7173b809ca12ec5dee4506cd86be934c4596dd234ee82c0662eac04a8c2c71dc\n//\n// This allows the digest to be abstracted behind this type so that code can\n// work purely in those terms.\ntype Digest string\n\n// NewDigest returns a Digest from alg and a hash.Hash object.\nfunc NewDigest(alg Algorithm, h hash.Hash) Digest {\n\treturn NewDigestFromBytes(alg, h.Sum(nil))\n}\n\n// NewDigestFromBytes returns a new digest from the byte contents of p.\n// Typically, this can come from hash.Hash.Sum(...) or xxx.SumXXX(...)\n// functions. This is also useful for rebuilding digests from binary\n// serializations.\nfunc NewDigestFromBytes(alg Algorithm, p []byte) Digest {\n\treturn Digest(fmt.Sprintf(\"%s:%x\", alg, p))\n}\n\n// NewDigestFromHex returns a Digest from alg and the hex-encoded digest.\nfunc NewDigestFromHex(alg, hex string) Digest {\n\treturn Digest(fmt.Sprintf(\"%s:%s\", alg, hex))\n}\n\n// DigestRegexp matches valid digest types.\nvar DigestRegexp = regexp.MustCompile(`[a-zA-Z0-9-_+.]+:[a-fA-F0-9]+`)\n\n// DigestRegexpAnchored matches valid digest types, anchored to the start and end of the match.\nvar DigestRegexpAnchored = regexp.MustCompile(`^` + DigestRegexp.String() + `$`)\n\nvar (\n\t// ErrDigestInvalidFormat is returned when the digest format is invalid.\n\tErrDigestInvalidFormat = fmt.Errorf(\"invalid checksum digest format\")\n\n\t// ErrDigestInvalidLength is returned when the digest has an invalid length.\n\tErrDigestInvalidLength = fmt.Errorf(\"invalid checksum digest length\")\n\n\t// ErrDigestUnsupported is returned when the digest algorithm is unsupported.\n
\tErrDigestUnsupported = fmt.Errorf(\"unsupported digest algorithm\")\n)\n\n// Parse parses s and returns the validated digest object. An error will\n// be returned if the format is invalid.\nfunc Parse(s string) (Digest, error) {\n\td := Digest(s)\n\treturn d, d.Validate()\n}\n\n// FromReader consumes the content of rd until io.EOF, returning the canonical digest.\nfunc FromReader(rd io.Reader) (Digest, error) {\n\treturn Canonical.FromReader(rd)\n}\n\n// FromBytes digests the input and returns a Digest.\nfunc FromBytes(p []byte) Digest {\n\treturn Canonical.FromBytes(p)\n}\n\n// FromString digests the input and returns a Digest.\nfunc FromString(s string) Digest {\n\treturn Canonical.FromString(s)\n}\n\n// Validate checks that the content of d is a valid digest, returning an\n// error if not.\nfunc (d Digest) Validate() error {\n\ts := string(d)\n\n\ti := strings.Index(s, \":\")\n\n\t// validate i then run through regexp\n\tif i < 0 || i+1 == len(s) || !DigestRegexpAnchored.MatchString(s) {\n\t\treturn ErrDigestInvalidFormat\n\t}\n\n\talgorithm := Algorithm(s[:i])\n\tif !algorithm.Available() {\n\t\treturn ErrDigestUnsupported\n\t}\n\n\t// Digests must always be hex-encoded, ensuring that their hex portion will\n\t// always be size*2\n\tif algorithm.Size()*2 != len(s[i+1:]) {\n\t\treturn ErrDigestInvalidLength\n\t}\n\n\treturn nil\n}\n\n// Algorithm returns the algorithm portion of the digest. This will panic if\n// the underlying digest is not in a valid format.\nfunc (d Digest) Algorithm() Algorithm {\n\treturn Algorithm(d[:d.sepIndex()])\n}\n\n// Verifier returns a writer object that can be used to verify a stream of\n// content against the digest. If the digest is invalid, the method will panic.\nfunc (d Digest) Verifier() Verifier {\n\treturn hashVerifier{\n\t\thash:   d.Algorithm().Hash(),\n\t\tdigest: d,\n\t}\n}\n\n// Hex returns the hex digest portion of the digest. This will panic if the\n
// underlying digest is not in a valid format.\nfunc (d Digest) Hex() string {\n\treturn string(d[d.sepIndex()+1:])\n}\n\nfunc (d Digest) String() string {\n\treturn string(d)\n}\n\nfunc (d Digest) sepIndex() int {\n\ti := strings.Index(string(d), \":\")\n\n\tif i < 0 {\n\t\tpanic(fmt.Sprintf(\"no ':' separator in digest %q\", d))\n\t}\n\n\treturn i\n}\n"
  },
  {
    "path": "vendor/github.com/opencontainers/go-digest/digester.go",
    "content": "package digest\n\nimport \"hash\"\n\n// Digester calculates the digest of written data. Writes should go directly\n// to the return value of Hash, while calling Digest will return the current\n// value of the digest.\ntype Digester interface {\n\tHash() hash.Hash // provides direct access to underlying hash instance.\n\tDigest() Digest\n}\n\n// digester provides a simple digester definition that embeds a hasher.\ntype digester struct {\n\talg  Algorithm\n\thash hash.Hash\n}\n\nfunc (d *digester) Hash() hash.Hash {\n\treturn d.hash\n}\n\nfunc (d *digester) Digest() Digest {\n\treturn NewDigest(d.alg, d.hash)\n}\n"
  },
  {
    "path": "vendor/github.com/opencontainers/go-digest/doc.go",
    "content": "// Package digest provides a generalized type to opaquely represent message\n// digests and their operations within the registry. The Digest type is\n// designed to serve as a flexible identifier in a content-addressable system.\n// More importantly, it provides tools and wrappers to work with\n// hash.Hash-based digests with little effort.\n//\n// Basics\n//\n// The format of a digest is simply a string with two parts, dubbed the\n// \"algorithm\" and the \"digest\", separated by a colon:\n//\n// \t<algorithm>:<digest>\n//\n// An example of a sha256 digest representation follows:\n//\n// \tsha256:7173b809ca12ec5dee4506cd86be934c4596dd234ee82c0662eac04a8c2c71dc\n//\n// In this case, the string \"sha256\" is the algorithm and the hex bytes are\n// the \"digest\".\n//\n// Because the Digest type is simply a string, once a valid Digest is\n// obtained, comparisons are cheap, quick and simple to express with the\n// standard equality operator.\n//\n// Verification\n//\n// The main benefit of using the Digest type is simple verification against a\n// given digest. The Verifier interface, modeled after the stdlib hash.Hash\n// interface, provides a common write sink for digest verification. After\n// writing is complete, calling the Verifier.Verified method will indicate\n// whether or not the stream of bytes matches the target digest.\n//\n// Missing Features\n//\n// In addition to the above, we intend to add the following features to this\n// package:\n//\n// 1. A Digester type that supports write sink digest calculation.\n//\n// 2. Suspend and resume of ongoing digest calculations to support efficient digest verification in the registry.\n//\npackage digest\n"
  },
  {
    "path": "vendor/github.com/opencontainers/go-digest/verifiers.go",
    "content": "package digest\n\nimport (\n\t\"hash\"\n\t\"io\"\n)\n\n// Verifier presents a general verification interface to be used with message\n// digests and other byte stream verifications. Users instantiate a Verifier\n// from one of the various methods, write the data under test to it then check\n// the result with the Verified method.\ntype Verifier interface {\n\tio.Writer\n\n\t// Verified will return true if the content written to Verifier matches\n\t// the digest.\n\tVerified() bool\n}\n\ntype hashVerifier struct {\n\tdigest Digest\n\thash   hash.Hash\n}\n\nfunc (hv hashVerifier) Write(p []byte) (n int, err error) {\n\treturn hv.hash.Write(p)\n}\n\nfunc (hv hashVerifier) Verified() bool {\n\treturn hv.digest == NewDigest(hv.digest.Algorithm(), hv.hash)\n}\n"
  },
  {
    "path": "vendor/github.com/opencontainers/image-spec/LICENSE",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. 
However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   Copyright 2016 The Linux Foundation.\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "vendor/github.com/opencontainers/image-spec/specs-go/v1/config.go",
    "content": "// Copyright 2016 The Linux Foundation\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage v1\n\n// ImageConfig defines the execution parameters which should be used as a base when running a container using an image.\ntype ImageConfig struct {\n\t// User defines the username or UID which the process in the container should run as.\n\tUser string `json:\"User\"`\n\n\t// Memory defines the memory limit.\n\tMemory int64 `json:\"Memory\"`\n\n\t// MemorySwap defines the total memory usage limit (memory + swap).\n\tMemorySwap int64 `json:\"MemorySwap\"`\n\n\t// CPUShares is the CPU shares (relative weight vs. 
other containers).\n\tCPUShares int64 `json:\"CpuShares\"`\n\n\t// ExposedPorts is a set of ports to expose from a container running this image.\n\tExposedPorts map[string]struct{} `json:\"ExposedPorts\"`\n\n\t// Env is a list of environment variables to be used in a container.\n\tEnv []string `json:\"Env\"`\n\n\t// Entrypoint defines a list of arguments to use as the command to execute when the container starts.\n\tEntrypoint []string `json:\"Entrypoint\"`\n\n\t// Cmd defines the default arguments to the entrypoint of the container.\n\tCmd []string `json:\"Cmd\"`\n\n\t// Volumes is a set of directories which should be created as data volumes in a container running this image.\n\tVolumes map[string]struct{} `json:\"Volumes\"`\n\n\t// WorkingDir sets the current working directory of the entrypoint process in the container.\n\tWorkingDir string `json:\"WorkingDir\"`\n}\n\n// RootFS describes the layer content addresses.\ntype RootFS struct {\n\t// Type is the type of the rootfs.\n\tType string `json:\"type\"`\n\n\t// DiffIDs is an array of layer content hashes (DiffIDs), in order from bottom-most to top-most.\n\tDiffIDs []string `json:\"diff_ids\"`\n}\n\n// History describes the history of a layer.\ntype History struct {\n\t// Created is the creation time.\n\tCreated string `json:\"created\"`\n\n\t// CreatedBy is the command which created the layer.\n\tCreatedBy string `json:\"created_by\"`\n\n\t// Author is the author of the build point.\n\tAuthor string `json:\"author\"`\n\n\t// Comment is a custom message set when creating the layer.\n\tComment string `json:\"comment\"`\n\n\t// EmptyLayer is used to mark if the history item created a filesystem diff.\n\tEmptyLayer bool `json:\"empty_layer\"`\n}\n\n// Image is the JSON structure which describes some basic information about the image.\ntype Image struct {\n\t// Created defines an ISO-8601 formatted combined date and time at which the image was created.\n\tCreated string `json:\"created\"`\n\n\t// Author defines the name and/or email address of the person or entity which created and is responsible for maintaining the image.\n
\tAuthor string `json:\"author\"`\n\n\t// Architecture is the CPU architecture which the binaries in this image are built to run on.\n\tArchitecture string `json:\"architecture\"`\n\n\t// OS is the name of the operating system which the image is built to run on.\n\tOS string `json:\"os\"`\n\n\t// Config defines the execution parameters which should be used as a base when running a container using the image.\n\tConfig ImageConfig `json:\"config\"`\n\n\t// RootFS references the layer content addresses used by the image.\n\tRootFS RootFS `json:\"rootfs\"`\n\n\t// History describes the history of each layer.\n\tHistory []History `json:\"history\"`\n}\n"
  },
  {
    "path": "vendor/github.com/opencontainers/image-spec/specs-go/v1/descriptor.go",
    "content": "// Copyright 2016 The Linux Foundation\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage v1\n\n// Descriptor describes the disposition of targeted content.\ntype Descriptor struct {\n\t// MediaType contains the MIME type of the referenced object.\n\tMediaType string `json:\"mediaType\"`\n\n\t// Digest is the digest of the targeted content.\n\tDigest string `json:\"digest\"`\n\n\t// Size specifies the size in bytes of the blob.\n\tSize int64 `json:\"size\"`\n}\n"
  },
  {
    "path": "vendor/github.com/opencontainers/image-spec/specs-go/v1/manifest.go",
"content": "// Copyright 2016 The Linux Foundation\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage v1\n\nimport \"github.com/opencontainers/image-spec/specs-go\"\n\n// Manifest defines a schema2 manifest\ntype Manifest struct {\n\tspecs.Versioned\n\n\t// Config references a configuration object for a container, by digest.\n\t// The referenced configuration object is a JSON blob that the runtime uses to set up the container.\n\tConfig Descriptor `json:\"config\"`\n\n\t// Layers is an indexed list of layers referenced by the manifest.\n\tLayers []Descriptor `json:\"layers\"`\n\n\t// Annotations contains arbitrary metadata for the image manifest.\n\tAnnotations map[string]string `json:\"annotations\"`\n}\n"
  },
  {
    "path": "vendor/github.com/opencontainers/image-spec/specs-go/v1/manifest_list.go",
    "content": "// Copyright 2016 The Linux Foundation\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage v1\n\nimport \"github.com/opencontainers/image-spec/specs-go\"\n\n// Platform describes the platform which the image in the manifest runs on.\ntype Platform struct {\n\t// Architecture field specifies the CPU architecture, for example\n\t// `amd64` or `ppc64`.\n\tArchitecture string `json:\"architecture\"`\n\n\t// OS specifies the operating system, for example `linux` or `windows`.\n\tOS string `json:\"os\"`\n\n\t// OSVersion is an optional field specifying the operating system\n\t// version, for example `10.0.10586`.\n\tOSVersion string `json:\"os.version,omitempty\"`\n\n\t// OSFeatures is an optional field specifying an array of strings,\n\t// each listing a required OS feature (for example on Windows `win32k`).\n\tOSFeatures []string `json:\"os.features,omitempty\"`\n\n\t// Variant is an optional field specifying a variant of the CPU, for\n\t// example `ppc64le` to specify a little-endian version of a PowerPC CPU.\n\tVariant string `json:\"variant,omitempty\"`\n\n\t// Features is an optional field specifying an array of strings, each\n\t// listing a required CPU feature (for example `sse4` or `aes`).\n\tFeatures []string `json:\"features,omitempty\"`\n}\n\n// ManifestDescriptor describes a platform specific manifest.\ntype ManifestDescriptor struct {\n\tDescriptor\n\n\t// Platform describes the platform which the image in the manifest 
runs on.\n\tPlatform Platform `json:\"platform\"`\n}\n\n// ManifestList references manifests for various platforms.\ntype ManifestList struct {\n\tspecs.Versioned\n\n\t// Manifests references platform specific manifests.\n\tManifests []ManifestDescriptor `json:\"manifests\"`\n\n\t// Annotations contains arbitrary metadata for the manifest list.\n\tAnnotations map[string]string `json:\"annotations\"`\n}\n"
  },
  {
    "path": "vendor/github.com/opencontainers/image-spec/specs-go/v1/mediatype.go",
    "content": "// Copyright 2016 The Linux Foundation\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage v1\n\nconst (\n\t// MediaTypeDescriptor specifies the media type for a content descriptor.\n\tMediaTypeDescriptor = \"application/vnd.oci.descriptor.v1+json\"\n\n\t// MediaTypeImageManifest specifies the media type for an image manifest.\n\tMediaTypeImageManifest = \"application/vnd.oci.image.manifest.v1+json\"\n\n\t// MediaTypeImageManifestList specifies the media type for an image manifest list.\n\tMediaTypeImageManifestList = \"application/vnd.oci.image.manifest.list.v1+json\"\n\n\t// MediaTypeImageLayer is the media type used for layers referenced by the manifest.\n\tMediaTypeImageLayer = \"application/vnd.oci.image.layer.v1.tar+gzip\"\n\n\t// MediaTypeImageLayerNonDistributable is the media type for layers referenced by\n\t// the manifest but with distribution restrictions.\n\tMediaTypeImageLayerNonDistributable = \"application/vnd.oci.image.layer.nondistributable.v1.tar+gzip\"\n\n\t// MediaTypeImageConfig specifies the media type for the image configuration.\n\tMediaTypeImageConfig = \"application/vnd.oci.image.config.v1+json\"\n)\n"
  },
  {
    "path": "vendor/github.com/opencontainers/image-spec/specs-go/version.go",
"content": "// Copyright 2016 The Linux Foundation\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage specs\n\nimport \"fmt\"\n\nconst (\n\t// VersionMajor is for API-incompatible changes\n\tVersionMajor = 1\n\t// VersionMinor is for functionality in a backwards-compatible manner\n\tVersionMinor = 0\n\t// VersionPatch is for backwards-compatible bug fixes\n\tVersionPatch = 0\n\n\t// VersionDev indicates development branch. Releases will be empty string.\n\tVersionDev = \"-rc2\"\n)\n\n// Version is the specification version that the package types support.\nvar Version = fmt.Sprintf(\"%d.%d.%d%s\", VersionMajor, VersionMinor, VersionPatch, VersionDev)\n"
  },
  {
    "path": "vendor/github.com/opencontainers/image-spec/specs-go/versioned.go",
    "content": "// Copyright 2016 The Linux Foundation\n//\n// Licensed under the Apache License, Version 2.0 (the \"License\");\n// you may not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n//     http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing, software\n// distributed under the License is distributed on an \"AS IS\" BASIS,\n// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n// See the License for the specific language governing permissions and\n// limitations under the License.\n\npackage specs\n\n// Versioned provides a struct with the manifest schemaVersion and mediaType.\n// Incoming content with unknown schema version can be decoded against this\n// struct to check the version.\ntype Versioned struct {\n\t// SchemaVersion is the image manifest schema that this image follows\n\tSchemaVersion int `json:\"schemaVersion\"`\n\n\t// MediaType is the media type of this schema.\n\tMediaType string `json:\"mediaType,omitempty\"`\n}\n"
  },
  {
    "path": "vendor/github.com/spf13/pflag/LICENSE",
    "content": "Copyright (c) 2012 Alex Ogier. All rights reserved.\nCopyright (c) 2012 The Go Authors. All rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are\nmet:\n\n   * Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n   * Redistributions in binary form must reproduce the above\ncopyright notice, this list of conditions and the following disclaimer\nin the documentation and/or other materials provided with the\ndistribution.\n   * Neither the name of Google Inc. nor the names of its\ncontributors may be used to endorse or promote products derived from\nthis software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n\"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\nA PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\nOWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\nLIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\nTHEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "vendor/github.com/spf13/pflag/flag.go",
    "content": "// Copyright 2009 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n/*\n\tpflag is a drop-in replacement for Go's flag package, implementing\n\tPOSIX/GNU-style --flags.\n\n\tpflag is compatible with the GNU extensions to the POSIX recommendations\n\tfor command-line options. See\n\thttp://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html\n\n\tUsage:\n\n\tpflag is a drop-in replacement of Go's native flag package. If you import\n\tpflag under the name \"flag\" then all code should continue to function\n\twith no changes.\n\n\t\timport flag \"github.com/ogier/pflag\"\n\n\tThere is one exception to this: if you directly instantiate the Flag struct\n\tthere is one more field \"Shorthand\" that you will need to set.\n\tMost code never instantiates this struct directly, and instead uses\n\tfunctions such as String(), BoolVar(), and Var(), and is therefore\n\tunaffected.\n\n\tDefine flags using flag.String(), Bool(), Int(), etc.\n\n\tThis declares an integer flag, -flagname, stored in the pointer ip, with type *int.\n\t\tvar ip = flag.Int(\"flagname\", 1234, \"help message for flagname\")\n\tIf you like, you can bind the flag to a variable using the Var() functions.\n\t\tvar flagvar int\n\t\tfunc init() {\n\t\t\tflag.IntVar(&flagvar, \"flagname\", 1234, \"help message for flagname\")\n\t\t}\n\tOr you can create custom flags that satisfy the Value interface (with\n\tpointer receivers) and couple them to flag parsing by\n\t\tflag.Var(&flagVal, \"name\", \"help message for flagname\")\n\tFor such flags, the default value is just the initial value of the variable.\n\n\tAfter all flags are defined, call\n\t\tflag.Parse()\n\tto parse the command line into the defined flags.\n\n\tFlags may then be used directly. 
If you're using the flags themselves,\n\tthey are all pointers; if you bind to variables, they're values.\n\t\tfmt.Println(\"ip has value \", *ip)\n\t\tfmt.Println(\"flagvar has value \", flagvar)\n\n\tAfter parsing, the arguments after the flag are available as the\n\tslice flag.Args() or individually as flag.Arg(i).\n\tThe arguments are indexed from 0 through flag.NArg()-1.\n\n\tThe pflag package also defines some new functions that are not in flag,\n\tthat give one-letter shorthands for flags. You can use these by appending\n\t'P' to the name of any function that defines a flag.\n\t\tvar ip = flag.IntP(\"flagname\", \"f\", 1234, \"help message\")\n\t\tvar flagvar bool\n\t\tfunc init() {\n\t\t\tflag.BoolVarP(&flagvar, \"boolname\", \"b\", true, \"help message\")\n\t\t}\n\t\tflag.VarP(&flagVar, \"varname\", \"v\", \"help message\")\n\tShorthand letters can be used with single dashes on the command line.\n\tBoolean shorthand flags can be combined with other shorthand flags.\n\n\tCommand line flag syntax:\n\t\t--flag    // boolean flags only\n\t\t--flag=x\n\n\tUnlike the flag package, a single dash before an option means something\n\tdifferent than a double dash. Single dashes signify a series of shorthand\n\tletters for flags. All but the last shorthand letter must be boolean flags.\n\t\t// boolean flags\n\t\t-f\n\t\t-abc\n\t\t// non-boolean flags\n\t\t-n 1234\n\t\t-Ifile\n\t\t// mixed\n\t\t-abcs \"hello\"\n\t\t-abcn1234\n\n\tFlag parsing stops after the terminator \"--\". Unlike the flag package,\n\tflags can be interspersed with arguments anywhere on the command line\n\tbefore this terminator.\n\n\tInteger flags accept 1234, 0664, 0x1234 and may be negative.\n\tBoolean flags (in their long form) accept 1, 0, t, f, true, false,\n\tTRUE, FALSE, True, False.\n\tDuration flags accept any input valid for time.ParseDuration.\n\n\tThe default set of command-line flags is controlled by\n\ttop-level functions.  
The FlagSet type allows one to define\n\tindependent sets of flags, such as to implement subcommands\n\tin a command-line interface. The methods of FlagSet are\n\tanalogous to the top-level functions for the command-line\n\tflag set.\n*/\npackage pflag\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"sort\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n)\n\n// ErrHelp is the error returned if the flag -help is invoked but no such flag is defined.\nvar ErrHelp = errors.New(\"pflag: help requested\")\n\n// -- bool Value\ntype boolValue bool\n\nfunc newBoolValue(val bool, p *bool) *boolValue {\n\t*p = val\n\treturn (*boolValue)(p)\n}\n\nfunc (b *boolValue) Set(s string) error {\n\tv, err := strconv.ParseBool(s)\n\t*b = boolValue(v)\n\treturn err\n}\n\nfunc (b *boolValue) String() string { return fmt.Sprintf(\"%v\", *b) }\n\n// -- int Value\ntype intValue int\n\nfunc newIntValue(val int, p *int) *intValue {\n\t*p = val\n\treturn (*intValue)(p)\n}\n\nfunc (i *intValue) Set(s string) error {\n\tv, err := strconv.ParseInt(s, 0, 64)\n\t*i = intValue(v)\n\treturn err\n}\n\nfunc (i *intValue) String() string { return fmt.Sprintf(\"%v\", *i) }\n\n// -- int64 Value\ntype int64Value int64\n\nfunc newInt64Value(val int64, p *int64) *int64Value {\n\t*p = val\n\treturn (*int64Value)(p)\n}\n\nfunc (i *int64Value) Set(s string) error {\n\tv, err := strconv.ParseInt(s, 0, 64)\n\t*i = int64Value(v)\n\treturn err\n}\n\nfunc (i *int64Value) String() string { return fmt.Sprintf(\"%v\", *i) }\n\n// -- uint Value\ntype uintValue uint\n\nfunc newUintValue(val uint, p *uint) *uintValue {\n\t*p = val\n\treturn (*uintValue)(p)\n}\n\nfunc (i *uintValue) Set(s string) error {\n\tv, err := strconv.ParseUint(s, 0, 64)\n\t*i = uintValue(v)\n\treturn err\n}\n\nfunc (i *uintValue) String() string { return fmt.Sprintf(\"%v\", *i) }\n\n// -- uint64 Value\ntype uint64Value uint64\n\nfunc newUint64Value(val uint64, p *uint64) *uint64Value {\n\t*p = val\n\treturn 
(*uint64Value)(p)\n}\n\nfunc (i *uint64Value) Set(s string) error {\n\tv, err := strconv.ParseUint(s, 0, 64)\n\t*i = uint64Value(v)\n\treturn err\n}\n\nfunc (i *uint64Value) String() string { return fmt.Sprintf(\"%v\", *i) }\n\n// -- string Value\ntype stringValue string\n\nfunc newStringValue(val string, p *string) *stringValue {\n\t*p = val\n\treturn (*stringValue)(p)\n}\n\nfunc (s *stringValue) Set(val string) error {\n\t*s = stringValue(val)\n\treturn nil\n}\n\nfunc (s *stringValue) String() string { return fmt.Sprintf(\"%s\", *s) }\n\n// -- float64 Value\ntype float64Value float64\n\nfunc newFloat64Value(val float64, p *float64) *float64Value {\n\t*p = val\n\treturn (*float64Value)(p)\n}\n\nfunc (f *float64Value) Set(s string) error {\n\tv, err := strconv.ParseFloat(s, 64)\n\t*f = float64Value(v)\n\treturn err\n}\n\nfunc (f *float64Value) String() string { return fmt.Sprintf(\"%v\", *f) }\n\n// -- time.Duration Value\ntype durationValue time.Duration\n\nfunc newDurationValue(val time.Duration, p *time.Duration) *durationValue {\n\t*p = val\n\treturn (*durationValue)(p)\n}\n\nfunc (d *durationValue) Set(s string) error {\n\tv, err := time.ParseDuration(s)\n\t*d = durationValue(v)\n\treturn err\n}\n\nfunc (d *durationValue) String() string { return (*time.Duration)(d).String() }\n\n// Value is the interface to the dynamic value stored in a flag.\n// (The default value is represented as a string.)\ntype Value interface {\n\tString() string\n\tSet(string) error\n}\n\n// ErrorHandling defines how to handle flag parsing errors.\ntype ErrorHandling int\n\nconst (\n\tContinueOnError ErrorHandling = iota\n\tExitOnError\n\tPanicOnError\n)\n\n// A FlagSet represents a set of defined flags.\ntype FlagSet struct {\n\t// Usage is the function called when an error occurs while parsing flags.\n\t// The field is a function (not a method) that may be changed to point to\n\t// a custom error handler.\n\tUsage func()\n\n\tname          string\n\tparsed        bool\n\tactual       
 map[string]*Flag\n\tformal        map[string]*Flag\n\tshorthands    map[byte]*Flag\n\targs          []string // arguments after flags\n\texitOnError   bool     // does the program exit if there's an error?\n\terrorHandling ErrorHandling\n\toutput        io.Writer // nil means stderr; use out() accessor\n\tinterspersed  bool      // allow interspersed option/non-option args\n}\n\n// A Flag represents the state of a flag.\ntype Flag struct {\n\tName      string // name as it appears on command line\n\tShorthand string // one-letter abbreviated flag\n\tUsage     string // help message\n\tValue     Value  // value as set\n\tDefValue  string // default value (as text); for usage message\n\tChanged   bool   // If the user set the value (or if left to default)\n}\n\n// sortFlags returns the flags as a slice in lexicographical sorted order.\nfunc sortFlags(flags map[string]*Flag) []*Flag {\n\tlist := make(sort.StringSlice, len(flags))\n\ti := 0\n\tfor _, f := range flags {\n\t\tlist[i] = f.Name\n\t\ti++\n\t}\n\tlist.Sort()\n\tresult := make([]*Flag, len(list))\n\tfor i, name := range list {\n\t\tresult[i] = flags[name]\n\t}\n\treturn result\n}\n\nfunc (f *FlagSet) out() io.Writer {\n\tif f.output == nil {\n\t\treturn os.Stderr\n\t}\n\treturn f.output\n}\n\n// SetOutput sets the destination for usage and error messages.\n// If output is nil, os.Stderr is used.\nfunc (f *FlagSet) SetOutput(output io.Writer) {\n\tf.output = output\n}\n\n// VisitAll visits the flags in lexicographical order, calling fn for each.\n// It visits all flags, even those not set.\nfunc (f *FlagSet) VisitAll(fn func(*Flag)) {\n\tfor _, flag := range sortFlags(f.formal) {\n\t\tfn(flag)\n\t}\n}\n\nfunc (f *FlagSet) HasFlags() bool {\n\treturn len(f.formal) > 0\n}\n\n// VisitAll visits the command-line flags in lexicographical order, calling\n// fn for each.  
It visits all flags, even those not set.\nfunc VisitAll(fn func(*Flag)) {\n\tcommandLine.VisitAll(fn)\n}\n\n// Visit visits the flags in lexicographical order, calling fn for each.\n// It visits only those flags that have been set.\nfunc (f *FlagSet) Visit(fn func(*Flag)) {\n\tfor _, flag := range sortFlags(f.actual) {\n\t\tfn(flag)\n\t}\n}\n\n// Visit visits the command-line flags in lexicographical order, calling fn\n// for each.  It visits only those flags that have been set.\nfunc Visit(fn func(*Flag)) {\n\tcommandLine.Visit(fn)\n}\n\n// Lookup returns the Flag structure of the named flag, returning nil if none exists.\nfunc (f *FlagSet) Lookup(name string) *Flag {\n\treturn f.formal[name]\n}\n\n// Lookup returns the Flag structure of the named command-line flag,\n// returning nil if none exists.\nfunc Lookup(name string) *Flag {\n\treturn commandLine.formal[name]\n}\n\n// Set sets the value of the named flag.\nfunc (f *FlagSet) Set(name, value string) error {\n\tflag, ok := f.formal[name]\n\tif !ok {\n\t\treturn fmt.Errorf(\"no such flag -%v\", name)\n\t}\n\terr := flag.Value.Set(value)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif f.actual == nil {\n\t\tf.actual = make(map[string]*Flag)\n\t}\n\tf.actual[name] = flag\n\tf.Lookup(name).Changed = true\n\treturn nil\n}\n\n// Set sets the value of the named command-line flag.\nfunc Set(name, value string) error {\n\treturn commandLine.Set(name, value)\n}\n\n// PrintDefaults prints, to standard error unless configured\n// otherwise, the default values of all defined flags in the set.\nfunc (f *FlagSet) PrintDefaults() {\n\tf.VisitAll(func(flag *Flag) {\n\t\tformat := \"--%s=%s: %s\\n\"\n\t\tif _, ok := flag.Value.(*stringValue); ok {\n\t\t\t// put quotes on the value\n\t\t\tformat = \"--%s=%q: %s\\n\"\n\t\t}\n\t\tif len(flag.Shorthand) > 0 {\n\t\t\tformat = \"  -%s, \" + format\n\t\t} else {\n\t\t\tformat = \"   %s   \" + format\n\t\t}\n\t\tfmt.Fprintf(f.out(), format, flag.Shorthand, flag.Name, flag.DefValue, 
flag.Usage)\n\t})\n}\n\nfunc (f *FlagSet) FlagUsages() string {\n\tx := new(bytes.Buffer)\n\n\tf.VisitAll(func(flag *Flag) {\n\t\tformat := \"--%s=%s: %s\\n\"\n\t\tif _, ok := flag.Value.(*stringValue); ok {\n\t\t\t// put quotes on the value\n\t\t\tformat = \"--%s=%q: %s\\n\"\n\t\t}\n\t\tif len(flag.Shorthand) > 0 {\n\t\t\tformat = \"  -%s, \" + format\n\t\t} else {\n\t\t\tformat = \"   %s   \" + format\n\t\t}\n\t\tfmt.Fprintf(x, format, flag.Shorthand, flag.Name, flag.DefValue, flag.Usage)\n\t})\n\n\treturn x.String()\n}\n\n// PrintDefaults prints to standard error the default values of all defined command-line flags.\nfunc PrintDefaults() {\n\tcommandLine.PrintDefaults()\n}\n\n// defaultUsage is the default function to print a usage message.\nfunc defaultUsage(f *FlagSet) {\n\tfmt.Fprintf(f.out(), \"Usage of %s:\\n\", f.name)\n\tf.PrintDefaults()\n}\n\n// NOTE: Usage is not just defaultUsage(commandLine)\n// because it serves (via godoc flag Usage) as the example\n// for how to write your own usage function.\n\n// Usage prints to standard error a usage message documenting all defined command-line flags.\n// The function is a variable that may be changed to point to a custom function.\nvar Usage = func() {\n\tfmt.Fprintf(os.Stderr, \"Usage of %s:\\n\", os.Args[0])\n\tPrintDefaults()\n}\n\n// NFlag returns the number of flags that have been set.\nfunc (f *FlagSet) NFlag() int { return len(f.actual) }\n\n// NFlag returns the number of command-line flags that have been set.\nfunc NFlag() int { return len(commandLine.actual) }\n\n// Arg returns the i'th argument.  Arg(0) is the first remaining argument\n// after flags have been processed.\nfunc (f *FlagSet) Arg(i int) string {\n\tif i < 0 || i >= len(f.args) {\n\t\treturn \"\"\n\t}\n\treturn f.args[i]\n}\n\n// Arg returns the i'th command-line argument.  
Arg(0) is the first remaining argument\n// after flags have been processed.\nfunc Arg(i int) string {\n\treturn commandLine.Arg(i)\n}\n\n// NArg is the number of arguments remaining after flags have been processed.\nfunc (f *FlagSet) NArg() int { return len(f.args) }\n\n// NArg is the number of arguments remaining after flags have been processed.\nfunc NArg() int { return len(commandLine.args) }\n\n// Args returns the non-flag arguments.\nfunc (f *FlagSet) Args() []string { return f.args }\n\n// Args returns the non-flag command-line arguments.\nfunc Args() []string { return commandLine.args }\n\n// BoolVar defines a bool flag with specified name, default value, and usage string.\n// The argument p points to a bool variable in which to store the value of the flag.\nfunc (f *FlagSet) BoolVar(p *bool, name string, value bool, usage string) {\n\tf.VarP(newBoolValue(value, p), name, \"\", usage)\n}\n\n// Like BoolVar, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) BoolVarP(p *bool, name, shorthand string, value bool, usage string) {\n\tf.VarP(newBoolValue(value, p), name, shorthand, usage)\n}\n\n// BoolVar defines a bool flag with specified name, default value, and usage string.\n// The argument p points to a bool variable in which to store the value of the flag.\nfunc BoolVar(p *bool, name string, value bool, usage string) {\n\tcommandLine.VarP(newBoolValue(value, p), name, \"\", usage)\n}\n\n// Like BoolVar, but accepts a shorthand letter that can be used after a single dash.\nfunc BoolVarP(p *bool, name, shorthand string, value bool, usage string) {\n\tcommandLine.VarP(newBoolValue(value, p), name, shorthand, usage)\n}\n\n// Bool defines a bool flag with specified name, default value, and usage string.\n// The return value is the address of a bool variable that stores the value of the flag.\nfunc (f *FlagSet) Bool(name string, value bool, usage string) *bool {\n\tp := new(bool)\n\tf.BoolVarP(p, name, \"\", value, 
usage)\n\treturn p\n}\n\n// Like Bool, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) BoolP(name, shorthand string, value bool, usage string) *bool {\n\tp := new(bool)\n\tf.BoolVarP(p, name, shorthand, value, usage)\n\treturn p\n}\n\n// Bool defines a bool flag with specified name, default value, and usage string.\n// The return value is the address of a bool variable that stores the value of the flag.\nfunc Bool(name string, value bool, usage string) *bool {\n\treturn commandLine.BoolP(name, \"\", value, usage)\n}\n\n// Like Bool, but accepts a shorthand letter that can be used after a single dash.\nfunc BoolP(name, shorthand string, value bool, usage string) *bool {\n\treturn commandLine.BoolP(name, shorthand, value, usage)\n}\n\n// IntVar defines an int flag with specified name, default value, and usage string.\n// The argument p points to an int variable in which to store the value of the flag.\nfunc (f *FlagSet) IntVar(p *int, name string, value int, usage string) {\n\tf.VarP(newIntValue(value, p), name, \"\", usage)\n}\n\n// Like IntVar, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) IntVarP(p *int, name, shorthand string, value int, usage string) {\n\tf.VarP(newIntValue(value, p), name, shorthand, usage)\n}\n\n// IntVar defines an int flag with specified name, default value, and usage string.\n// The argument p points to an int variable in which to store the value of the flag.\nfunc IntVar(p *int, name string, value int, usage string) {\n\tcommandLine.VarP(newIntValue(value, p), name, \"\", usage)\n}\n\n// Like IntVar, but accepts a shorthand letter that can be used after a single dash.\nfunc IntVarP(p *int, name, shorthand string, value int, usage string) {\n\tcommandLine.VarP(newIntValue(value, p), name, shorthand, usage)\n}\n\n// Int defines an int flag with specified name, default value, and usage string.\n// The return value is the address of an int variable that stores 
the value of the flag.\nfunc (f *FlagSet) Int(name string, value int, usage string) *int {\n\tp := new(int)\n\tf.IntVarP(p, name, \"\", value, usage)\n\treturn p\n}\n\n// Like Int, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) IntP(name, shorthand string, value int, usage string) *int {\n\tp := new(int)\n\tf.IntVarP(p, name, shorthand, value, usage)\n\treturn p\n}\n\n// Int defines an int flag with specified name, default value, and usage string.\n// The return value is the address of an int variable that stores the value of the flag.\nfunc Int(name string, value int, usage string) *int {\n\treturn commandLine.IntP(name, \"\", value, usage)\n}\n\n// Like Int, but accepts a shorthand letter that can be used after a single dash.\nfunc IntP(name, shorthand string, value int, usage string) *int {\n\treturn commandLine.IntP(name, shorthand, value, usage)\n}\n\n// Int64Var defines an int64 flag with specified name, default value, and usage string.\n// The argument p points to an int64 variable in which to store the value of the flag.\nfunc (f *FlagSet) Int64Var(p *int64, name string, value int64, usage string) {\n\tf.VarP(newInt64Value(value, p), name, \"\", usage)\n}\n\n// Like Int64Var, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) Int64VarP(p *int64, name, shorthand string, value int64, usage string) {\n\tf.VarP(newInt64Value(value, p), name, shorthand, usage)\n}\n\n// Int64Var defines an int64 flag with specified name, default value, and usage string.\n// The argument p points to an int64 variable in which to store the value of the flag.\nfunc Int64Var(p *int64, name string, value int64, usage string) {\n\tcommandLine.VarP(newInt64Value(value, p), name, \"\", usage)\n}\n\n// Like Int64Var, but accepts a shorthand letter that can be used after a single dash.\nfunc Int64VarP(p *int64, name, shorthand string, value int64, usage string) {\n\tcommandLine.VarP(newInt64Value(value, p), name, 
shorthand, usage)\n}\n\n// Int64 defines an int64 flag with specified name, default value, and usage string.\n// The return value is the address of an int64 variable that stores the value of the flag.\nfunc (f *FlagSet) Int64(name string, value int64, usage string) *int64 {\n\tp := new(int64)\n\tf.Int64VarP(p, name, \"\", value, usage)\n\treturn p\n}\n\n// Like Int64, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) Int64P(name, shorthand string, value int64, usage string) *int64 {\n\tp := new(int64)\n\tf.Int64VarP(p, name, shorthand, value, usage)\n\treturn p\n}\n\n// Int64 defines an int64 flag with specified name, default value, and usage string.\n// The return value is the address of an int64 variable that stores the value of the flag.\nfunc Int64(name string, value int64, usage string) *int64 {\n\treturn commandLine.Int64P(name, \"\", value, usage)\n}\n\n// Like Int64, but accepts a shorthand letter that can be used after a single dash.\nfunc Int64P(name, shorthand string, value int64, usage string) *int64 {\n\treturn commandLine.Int64P(name, shorthand, value, usage)\n}\n\n// UintVar defines a uint flag with specified name, default value, and usage string.\n// The argument p points to a uint variable in which to store the value of the flag.\nfunc (f *FlagSet) UintVar(p *uint, name string, value uint, usage string) {\n\tf.VarP(newUintValue(value, p), name, \"\", usage)\n}\n\n// Like UintVar, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) UintVarP(p *uint, name, shorthand string, value uint, usage string) {\n\tf.VarP(newUintValue(value, p), name, shorthand, usage)\n}\n\n// UintVar defines a uint flag with specified name, default value, and usage string.\n// The argument p points to a uint  variable in which to store the value of the flag.\nfunc UintVar(p *uint, name string, value uint, usage string) {\n\tcommandLine.VarP(newUintValue(value, p), name, \"\", usage)\n}\n\n// Like 
UintVar, but accepts a shorthand letter that can be used after a single dash.\nfunc UintVarP(p *uint, name, shorthand string, value uint, usage string) {\n\tcommandLine.VarP(newUintValue(value, p), name, shorthand, usage)\n}\n\n// Uint defines a uint flag with specified name, default value, and usage string.\n// The return value is the address of a uint  variable that stores the value of the flag.\nfunc (f *FlagSet) Uint(name string, value uint, usage string) *uint {\n\tp := new(uint)\n\tf.UintVarP(p, name, \"\", value, usage)\n\treturn p\n}\n\n// Like Uint, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) UintP(name, shorthand string, value uint, usage string) *uint {\n\tp := new(uint)\n\tf.UintVarP(p, name, shorthand, value, usage)\n\treturn p\n}\n\n// Uint defines a uint flag with specified name, default value, and usage string.\n// The return value is the address of a uint  variable that stores the value of the flag.\nfunc Uint(name string, value uint, usage string) *uint {\n\treturn commandLine.UintP(name, \"\", value, usage)\n}\n\n// Like Uint, but accepts a shorthand letter that can be used after a single dash.\nfunc UintP(name, shorthand string, value uint, usage string) *uint {\n\treturn commandLine.UintP(name, shorthand, value, usage)\n}\n\n// Uint64Var defines a uint64 flag with specified name, default value, and usage string.\n// The argument p points to a uint64 variable in which to store the value of the flag.\nfunc (f *FlagSet) Uint64Var(p *uint64, name string, value uint64, usage string) {\n\tf.VarP(newUint64Value(value, p), name, \"\", usage)\n}\n\n// Like Uint64Var, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) Uint64VarP(p *uint64, name, shorthand string, value uint64, usage string) {\n\tf.VarP(newUint64Value(value, p), name, shorthand, usage)\n}\n\n// Uint64Var defines a uint64 flag with specified name, default value, and usage string.\n// The argument p points to a 
uint64 variable in which to store the value of the flag.\nfunc Uint64Var(p *uint64, name string, value uint64, usage string) {\n\tcommandLine.VarP(newUint64Value(value, p), name, \"\", usage)\n}\n\n// Like Uint64Var, but accepts a shorthand letter that can be used after a single dash.\nfunc Uint64VarP(p *uint64, name, shorthand string, value uint64, usage string) {\n\tcommandLine.VarP(newUint64Value(value, p), name, shorthand, usage)\n}\n\n// Uint64 defines a uint64 flag with specified name, default value, and usage string.\n// The return value is the address of a uint64 variable that stores the value of the flag.\nfunc (f *FlagSet) Uint64(name string, value uint64, usage string) *uint64 {\n\tp := new(uint64)\n\tf.Uint64VarP(p, name, \"\", value, usage)\n\treturn p\n}\n\n// Like Uint64, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) Uint64P(name, shorthand string, value uint64, usage string) *uint64 {\n\tp := new(uint64)\n\tf.Uint64VarP(p, name, shorthand, value, usage)\n\treturn p\n}\n\n// Uint64 defines a uint64 flag with specified name, default value, and usage string.\n// The return value is the address of a uint64 variable that stores the value of the flag.\nfunc Uint64(name string, value uint64, usage string) *uint64 {\n\treturn commandLine.Uint64P(name, \"\", value, usage)\n}\n\n// Like Uint64, but accepts a shorthand letter that can be used after a single dash.\nfunc Uint64P(name, shorthand string, value uint64, usage string) *uint64 {\n\treturn commandLine.Uint64P(name, shorthand, value, usage)\n}\n\n// StringVar defines a string flag with specified name, default value, and usage string.\n// The argument p points to a string variable in which to store the value of the flag.\nfunc (f *FlagSet) StringVar(p *string, name string, value string, usage string) {\n\tf.VarP(newStringValue(value, p), name, \"\", usage)\n}\n\n// Like StringVar, but accepts a shorthand letter that can be used after a single dash.\nfunc (f 
*FlagSet) StringVarP(p *string, name, shorthand string, value string, usage string) {\n\tf.VarP(newStringValue(value, p), name, shorthand, usage)\n}\n\n// StringVar defines a string flag with specified name, default value, and usage string.\n// The argument p points to a string variable in which to store the value of the flag.\nfunc StringVar(p *string, name string, value string, usage string) {\n\tcommandLine.VarP(newStringValue(value, p), name, \"\", usage)\n}\n\n// Like StringVar, but accepts a shorthand letter that can be used after a single dash.\nfunc StringVarP(p *string, name, shorthand string, value string, usage string) {\n\tcommandLine.VarP(newStringValue(value, p), name, shorthand, usage)\n}\n\n// String defines a string flag with specified name, default value, and usage string.\n// The return value is the address of a string variable that stores the value of the flag.\nfunc (f *FlagSet) String(name string, value string, usage string) *string {\n\tp := new(string)\n\tf.StringVarP(p, name, \"\", value, usage)\n\treturn p\n}\n\n// Like String, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) StringP(name, shorthand string, value string, usage string) *string {\n\tp := new(string)\n\tf.StringVarP(p, name, shorthand, value, usage)\n\treturn p\n}\n\n// String defines a string flag with specified name, default value, and usage string.\n// The return value is the address of a string variable that stores the value of the flag.\nfunc String(name string, value string, usage string) *string {\n\treturn commandLine.StringP(name, \"\", value, usage)\n}\n\n// Like String, but accepts a shorthand letter that can be used after a single dash.\nfunc StringP(name, shorthand string, value string, usage string) *string {\n\treturn commandLine.StringP(name, shorthand, value, usage)\n}\n\n// Float64Var defines a float64 flag with specified name, default value, and usage string.\n// The argument p points to a float64 variable in which to 
store the value of the flag.\nfunc (f *FlagSet) Float64Var(p *float64, name string, value float64, usage string) {\n\tf.VarP(newFloat64Value(value, p), name, \"\", usage)\n}\n\n// Like Float64Var, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) Float64VarP(p *float64, name, shorthand string, value float64, usage string) {\n\tf.VarP(newFloat64Value(value, p), name, shorthand, usage)\n}\n\n// Float64Var defines a float64 flag with specified name, default value, and usage string.\n// The argument p points to a float64 variable in which to store the value of the flag.\nfunc Float64Var(p *float64, name string, value float64, usage string) {\n\tcommandLine.VarP(newFloat64Value(value, p), name, \"\", usage)\n}\n\n// Like Float64Var, but accepts a shorthand letter that can be used after a single dash.\nfunc Float64VarP(p *float64, name, shorthand string, value float64, usage string) {\n\tcommandLine.VarP(newFloat64Value(value, p), name, shorthand, usage)\n}\n\n// Float64 defines a float64 flag with specified name, default value, and usage string.\n// The return value is the address of a float64 variable that stores the value of the flag.\nfunc (f *FlagSet) Float64(name string, value float64, usage string) *float64 {\n\tp := new(float64)\n\tf.Float64VarP(p, name, \"\", value, usage)\n\treturn p\n}\n\n// Like Float64, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) Float64P(name, shorthand string, value float64, usage string) *float64 {\n\tp := new(float64)\n\tf.Float64VarP(p, name, shorthand, value, usage)\n\treturn p\n}\n\n// Float64 defines a float64 flag with specified name, default value, and usage string.\n// The return value is the address of a float64 variable that stores the value of the flag.\nfunc Float64(name string, value float64, usage string) *float64 {\n\treturn commandLine.Float64P(name, \"\", value, usage)\n}\n\n// Like Float64, but accepts a shorthand letter that can be used 
after a single dash.\nfunc Float64P(name, shorthand string, value float64, usage string) *float64 {\n\treturn commandLine.Float64P(name, shorthand, value, usage)\n}\n\n// DurationVar defines a time.Duration flag with specified name, default value, and usage string.\n// The argument p points to a time.Duration variable in which to store the value of the flag.\nfunc (f *FlagSet) DurationVar(p *time.Duration, name string, value time.Duration, usage string) {\n\tf.VarP(newDurationValue(value, p), name, \"\", usage)\n}\n\n// Like DurationVar, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) DurationVarP(p *time.Duration, name, shorthand string, value time.Duration, usage string) {\n\tf.VarP(newDurationValue(value, p), name, shorthand, usage)\n}\n\n// DurationVar defines a time.Duration flag with specified name, default value, and usage string.\n// The argument p points to a time.Duration variable in which to store the value of the flag.\nfunc DurationVar(p *time.Duration, name string, value time.Duration, usage string) {\n\tcommandLine.VarP(newDurationValue(value, p), name, \"\", usage)\n}\n\n// Like DurationVar, but accepts a shorthand letter that can be used after a single dash.\nfunc DurationVarP(p *time.Duration, name, shorthand string, value time.Duration, usage string) {\n\tcommandLine.VarP(newDurationValue(value, p), name, shorthand, usage)\n}\n\n// Duration defines a time.Duration flag with specified name, default value, and usage string.\n// The return value is the address of a time.Duration variable that stores the value of the flag.\nfunc (f *FlagSet) Duration(name string, value time.Duration, usage string) *time.Duration {\n\tp := new(time.Duration)\n\tf.DurationVarP(p, name, \"\", value, usage)\n\treturn p\n}\n\n// Like Duration, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) DurationP(name, shorthand string, value time.Duration, usage string) *time.Duration {\n\tp := 
new(time.Duration)\n\tf.DurationVarP(p, name, shorthand, value, usage)\n\treturn p\n}\n\n// Duration defines a time.Duration flag with specified name, default value, and usage string.\n// The return value is the address of a time.Duration variable that stores the value of the flag.\nfunc Duration(name string, value time.Duration, usage string) *time.Duration {\n\treturn commandLine.DurationP(name, \"\", value, usage)\n}\n\n// Like Duration, but accepts a shorthand letter that can be used after a single dash.\nfunc DurationP(name, shorthand string, value time.Duration, usage string) *time.Duration {\n\treturn commandLine.DurationP(name, shorthand, value, usage)\n}\n\n// Var defines a flag with the specified name and usage string. The type and\n// value of the flag are represented by the first argument, of type Value, which\n// typically holds a user-defined implementation of Value. For instance, the\n// caller could create a flag that turns a comma-separated string into a slice\n// of strings by giving the slice the methods of Value; in particular, Set would\n// decompose the comma-separated string into the slice.\nfunc (f *FlagSet) Var(value Value, name string, usage string) {\n\tf.VarP(value, name, \"\", usage)\n}\n\n// Like Var, but accepts a shorthand letter that can be used after a single dash.\nfunc (f *FlagSet) VarP(value Value, name, shorthand, usage string) {\n\t// Remember the default value as a string; it won't change.\n\tflag := &Flag{name, shorthand, usage, value, value.String(), false}\n\tf.AddFlag(flag)\n}\n\nfunc (f *FlagSet) AddFlag(flag *Flag) {\n\t_, alreadythere := f.formal[flag.Name]\n\tif alreadythere {\n\t\tmsg := fmt.Sprintf(\"%s flag redefined: %s\", f.name, flag.Name)\n\t\tfmt.Fprintln(f.out(), msg)\n\t\tpanic(msg) // Happens only if flags are declared with identical names\n\t}\n\tif f.formal == nil {\n\t\tf.formal = make(map[string]*Flag)\n\t}\n\tf.formal[flag.Name] = flag\n\n\tif len(flag.Shorthand) == 0 {\n\t\treturn\n\t}\n\tif 
len(flag.Shorthand) > 1 {\n\t\tfmt.Fprintf(f.out(), \"%s shorthand more than ASCII character: %s\\n\", f.name, flag.Shorthand)\n\t\tpanic(\"shorthand is more than one character\")\n\t}\n\tif f.shorthands == nil {\n\t\tf.shorthands = make(map[byte]*Flag)\n\t}\n\tc := flag.Shorthand[0]\n\told, alreadythere := f.shorthands[c]\n\tif alreadythere {\n\t\tfmt.Fprintf(f.out(), \"%s shorthand reused: %q for %s and %s\\n\", f.name, c, flag.Name, old.Name)\n\t\tpanic(\"shorthand redefinition\")\n\t}\n\tf.shorthands[c] = flag\n}\n\n// Var defines a flag with the specified name and usage string. The type and\n// value of the flag are represented by the first argument, of type Value, which\n// typically holds a user-defined implementation of Value. For instance, the\n// caller could create a flag that turns a comma-separated string into a slice\n// of strings by giving the slice the methods of Value; in particular, Set would\n// decompose the comma-separated string into the slice.\nfunc Var(value Value, name string, usage string) {\n\tcommandLine.VarP(value, name, \"\", usage)\n}\n\n// Like Var, but accepts a shorthand letter that can be used after a single dash.\nfunc VarP(value Value, name, shorthand, usage string) {\n\tcommandLine.VarP(value, name, shorthand, usage)\n}\n\n// failf prints to standard error a formatted error and usage message and\n// returns the error.\nfunc (f *FlagSet) failf(format string, a ...interface{}) error {\n\terr := fmt.Errorf(format, a...)\n\tfmt.Fprintln(f.out(), err)\n\tf.usage()\n\treturn err\n}\n\n// usage calls the Usage method for the flag set, or the usage function if\n// the flag set is commandLine.\nfunc (f *FlagSet) usage() {\n\tif f == commandLine {\n\t\tUsage()\n\t} else if f.Usage == nil {\n\t\tdefaultUsage(f)\n\t} else {\n\t\tf.Usage()\n\t}\n}\n\nfunc (f *FlagSet) setFlag(flag *Flag, value string, origArg string) error {\n\tif err := flag.Value.Set(value); err != nil {\n\t\treturn f.failf(\"invalid argument %q for %s: %v\", value, 
origArg, err)\n\t}\n\t// mark as visited for Visit()\n\tif f.actual == nil {\n\t\tf.actual = make(map[string]*Flag)\n\t}\n\tf.actual[flag.Name] = flag\n\tflag.Changed = true\n\treturn nil\n}\n\nfunc (f *FlagSet) parseLongArg(s string, args []string) (a []string, err error) {\n\ta = args\n\tif len(s) == 2 { // \"--\" terminates the flags\n\t\tf.args = append(f.args, args...)\n\t\treturn\n\t}\n\tname := s[2:]\n\tif len(name) == 0 || name[0] == '-' || name[0] == '=' {\n\t\terr = f.failf(\"bad flag syntax: %s\", s)\n\t\treturn\n\t}\n\tsplit := strings.SplitN(name, \"=\", 2)\n\tname = split[0]\n\tm := f.formal\n\tflag, alreadythere := m[name] // BUG\n\tif !alreadythere {\n\t\tif name == \"help\" { // special case for nice help message.\n\t\t\tf.usage()\n\t\t\treturn args, ErrHelp\n\t\t}\n\t\terr = f.failf(\"unknown flag: --%s\", name)\n\t\treturn\n\t}\n\tif len(split) == 1 {\n\t\tif _, ok := flag.Value.(*boolValue); !ok {\n\t\t\terr = f.failf(\"flag needs an argument: %s\", s)\n\t\t\treturn\n\t\t}\n\t\tf.setFlag(flag, \"true\", s)\n\t} else {\n\t\tif e := f.setFlag(flag, split[1], s); e != nil {\n\t\t\terr = e\n\t\t\treturn\n\t\t}\n\t}\n\treturn args, nil\n}\n\nfunc (f *FlagSet) parseShortArg(s string, args []string) (a []string, err error) {\n\ta = args\n\tshorthands := s[1:]\n\n\tfor i := 0; i < len(shorthands); i++ {\n\t\tc := shorthands[i]\n\t\tflag, alreadythere := f.shorthands[c]\n\t\tif !alreadythere {\n\t\t\tif c == 'h' { // special case for nice help message.\n\t\t\t\tf.usage()\n\t\t\t\terr = ErrHelp\n\t\t\t\treturn\n\t\t\t}\n\t\t\t//TODO continue on error\n\t\t\terr = f.failf(\"unknown shorthand flag: %q in -%s\", c, shorthands)\n\t\t\tif len(args) == 0 {\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t\tif alreadythere {\n\t\t\tif _, ok := flag.Value.(*boolValue); ok {\n\t\t\t\tf.setFlag(flag, \"true\", s)\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif i < len(shorthands)-1 {\n\t\t\t\tif e := f.setFlag(flag, shorthands[i+1:], s); e != nil {\n\t\t\t\t\terr = 
e\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tif len(args) == 0 {\n\t\t\t\terr = f.failf(\"flag needs an argument: %q in -%s\", c, shorthands)\n\t\t\t\treturn\n\t\t\t}\n\t\t\tif e := f.setFlag(flag, args[0], s); e != nil {\n\t\t\t\terr = e\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t\ta = args[1:]\n\t\tbreak // should be unnecessary\n\t}\n\n\treturn\n}\n\nfunc (f *FlagSet) parseArgs(args []string) (err error) {\n\tfor len(args) > 0 {\n\t\ts := args[0]\n\t\targs = args[1:]\n\t\tif len(s) == 0 || s[0] != '-' || len(s) == 1 {\n\t\t\tif !f.interspersed {\n\t\t\t\tf.args = append(f.args, s)\n\t\t\t\tf.args = append(f.args, args...)\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tf.args = append(f.args, s)\n\t\t\tcontinue\n\t\t}\n\n\t\tif s[1] == '-' {\n\t\t\targs, err = f.parseLongArg(s, args)\n\t\t} else {\n\t\t\targs, err = f.parseShortArg(s, args)\n\t\t}\n\t}\n\treturn\n}\n\n// Parse parses flag definitions from the argument list, which should not\n// include the command name.  Must be called after all flags in the FlagSet\n// are defined and before flags are accessed by the program.\n// The return value will be ErrHelp if -help was set but not defined.\nfunc (f *FlagSet) Parse(arguments []string) error {\n\tf.parsed = true\n\tf.args = make([]string, 0, len(arguments))\n\terr := f.parseArgs(arguments)\n\tif err != nil {\n\t\tswitch f.errorHandling {\n\t\tcase ContinueOnError:\n\t\t\treturn err\n\t\tcase ExitOnError:\n\t\t\tos.Exit(2)\n\t\tcase PanicOnError:\n\t\t\tpanic(err)\n\t\t}\n\t}\n\treturn nil\n}\n\n// Parsed reports whether f.Parse has been called.\nfunc (f *FlagSet) Parsed() bool {\n\treturn f.parsed\n}\n\n// Parse parses the command-line flags from os.Args[1:].  
Must be called\n// after all flags are defined and before flags are accessed by the program.\nfunc Parse() {\n\t// Ignore errors; commandLine is set for ExitOnError.\n\tcommandLine.Parse(os.Args[1:])\n}\n\n// Whether to support interspersed option/non-option arguments.\nfunc SetInterspersed(interspersed bool) {\n\tcommandLine.SetInterspersed(interspersed)\n}\n\n// Parsed returns true if the command-line flags have been parsed.\nfunc Parsed() bool {\n\treturn commandLine.Parsed()\n}\n\n// The default set of command-line flags, parsed from os.Args.\nvar commandLine = NewFlagSet(os.Args[0], ExitOnError)\n\n// NewFlagSet returns a new, empty flag set with the specified name and\n// error handling property.\nfunc NewFlagSet(name string, errorHandling ErrorHandling) *FlagSet {\n\tf := &FlagSet{\n\t\tname:          name,\n\t\terrorHandling: errorHandling,\n\t\tinterspersed:  true,\n\t}\n\treturn f\n}\n\n// Whether to support interspersed option/non-option arguments.\nfunc (f *FlagSet) SetInterspersed(interspersed bool) {\n\tf.interspersed = interspersed\n}\n\n// Init sets the name and error handling property for a flag set.\n// By default, the zero FlagSet uses an empty name and the\n// ContinueOnError error handling policy.\nfunc (f *FlagSet) Init(name string, errorHandling ErrorHandling) {\n\tf.name = name\n\tf.errorHandling = errorHandling\n}\n"
  },
  {
    "path": "vendor/go4.org/LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright {yyyy} {name of copyright owner}\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n\n"
  },
  {
    "path": "vendor/go4.org/errorutil/highlight.go",
    "content": "/*\nCopyright 2011 Google Inc.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n     http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n*/\n\n// Package errorutil helps make better error messages.\npackage errorutil // import \"go4.org/errorutil\"\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"strings\"\n)\n\n// HighlightBytePosition takes a reader and the location in bytes of a parse\n// error (for instance, from json.SyntaxError.Offset) and returns the line, column,\n// and pretty-printed context around the error with an arrow indicating the exact\n// position of the syntax error.\nfunc HighlightBytePosition(f io.Reader, pos int64) (line, col int, highlight string) {\n\tline = 1\n\tbr := bufio.NewReader(f)\n\tlastLine := \"\"\n\tthisLine := new(bytes.Buffer)\n\tfor n := int64(0); n < pos; n++ {\n\t\tb, err := br.ReadByte()\n\t\tif err != nil {\n\t\t\tbreak\n\t\t}\n\t\tif b == '\\n' {\n\t\t\tlastLine = thisLine.String()\n\t\t\tthisLine.Reset()\n\t\t\tline++\n\t\t\tcol = 1\n\t\t} else {\n\t\t\tcol++\n\t\t\tthisLine.WriteByte(b)\n\t\t}\n\t}\n\tif line > 1 {\n\t\thighlight += fmt.Sprintf(\"%5d: %s\\n\", line-1, lastLine)\n\t}\n\thighlight += fmt.Sprintf(\"%5d: %s\\n\", line, thisLine.String())\n\thighlight += fmt.Sprintf(\"%s^\\n\", strings.Repeat(\" \", col+5))\n\treturn\n}\n"
  },
  {
    "path": "vendor/golang.org/x/crypto/LICENSE",
    "content": "Copyright (c) 2009 The Go Authors. All rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are\nmet:\n\n   * Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n   * Redistributions in binary form must reproduce the above\ncopyright notice, this list of conditions and the following disclaimer\nin the documentation and/or other materials provided with the\ndistribution.\n   * Neither the name of Google Inc. nor the names of its\ncontributors may be used to endorse or promote products derived from\nthis software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n\"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\nA PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\nOWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\nLIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\nTHEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "vendor/golang.org/x/crypto/PATENTS",
    "content": "Additional IP Rights Grant (Patents)\n\n\"This implementation\" means the copyrightable works distributed by\nGoogle as part of the Go project.\n\nGoogle hereby grants to You a perpetual, worldwide, non-exclusive,\nno-charge, royalty-free, irrevocable (except as stated in this section)\npatent license to make, have made, use, offer to sell, sell, import,\ntransfer and otherwise run, modify and propagate the contents of this\nimplementation of Go, where such license applies only to those patent\nclaims, both currently owned or controlled by Google and acquired in\nthe future, licensable by Google that are necessarily infringed by this\nimplementation of Go.  This grant does not include claims that would be\ninfringed only as a consequence of further modification of this\nimplementation.  If you or your agent or exclusive licensee institute or\norder or agree to the institution of patent litigation against any\nentity (including a cross-claim or counterclaim in a lawsuit) alleging\nthat this implementation of Go or any code incorporated within this\nimplementation of Go constitutes direct or contributory patent\ninfringement, or inducement of patent infringement, then any patent\nrights granted to you under this License for this implementation of Go\nshall terminate as of the date such litigation is filed.\n"
  },
  {
    "path": "vendor/golang.org/x/crypto/ssh/terminal/terminal.go",
    "content": "// Copyright 2011 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage terminal\n\nimport (\n\t\"bytes\"\n\t\"io\"\n\t\"sync\"\n\t\"unicode/utf8\"\n)\n\n// EscapeCodes contains escape sequences that can be written to the terminal in\n// order to achieve different styles of text.\ntype EscapeCodes struct {\n\t// Foreground colors\n\tBlack, Red, Green, Yellow, Blue, Magenta, Cyan, White []byte\n\n\t// Reset all attributes\n\tReset []byte\n}\n\nvar vt100EscapeCodes = EscapeCodes{\n\tBlack:   []byte{keyEscape, '[', '3', '0', 'm'},\n\tRed:     []byte{keyEscape, '[', '3', '1', 'm'},\n\tGreen:   []byte{keyEscape, '[', '3', '2', 'm'},\n\tYellow:  []byte{keyEscape, '[', '3', '3', 'm'},\n\tBlue:    []byte{keyEscape, '[', '3', '4', 'm'},\n\tMagenta: []byte{keyEscape, '[', '3', '5', 'm'},\n\tCyan:    []byte{keyEscape, '[', '3', '6', 'm'},\n\tWhite:   []byte{keyEscape, '[', '3', '7', 'm'},\n\n\tReset: []byte{keyEscape, '[', '0', 'm'},\n}\n\n// Terminal contains the state for running a VT100 terminal that is capable of\n// reading lines of input.\ntype Terminal struct {\n\t// AutoCompleteCallback, if non-null, is called for each keypress with\n\t// the full input line and the current position of the cursor (in\n\t// bytes, as an index into |line|). If it returns ok=false, the key\n\t// press is processed normally. 
Otherwise it returns a replacement line\n\t// and the new cursor position.\n\tAutoCompleteCallback func(line string, pos int, key rune) (newLine string, newPos int, ok bool)\n\n\t// Escape contains a pointer to the escape codes for this terminal.\n\t// It's always a valid pointer, although the escape codes themselves\n\t// may be empty if the terminal doesn't support them.\n\tEscape *EscapeCodes\n\n\t// lock protects the terminal and the state in this object from\n\t// concurrent processing of a key press and a Write() call.\n\tlock sync.Mutex\n\n\tc      io.ReadWriter\n\tprompt []rune\n\n\t// line is the current line being entered.\n\tline []rune\n\t// pos is the logical position of the cursor in line\n\tpos int\n\t// echo is true if local echo is enabled\n\techo bool\n\t// pasteActive is true iff there is a bracketed paste operation in\n\t// progress.\n\tpasteActive bool\n\n\t// cursorX contains the current X value of the cursor where the left\n\t// edge is 0. cursorY contains the row number where the first row of\n\t// the current line is 0.\n\tcursorX, cursorY int\n\t// maxLine is the greatest value of cursorY so far.\n\tmaxLine int\n\n\ttermWidth, termHeight int\n\n\t// outBuf contains the terminal data to be sent.\n\toutBuf []byte\n\t// remainder contains the remainder of any partial key sequences after\n\t// a read. It aliases into inBuf.\n\tremainder []byte\n\tinBuf     [256]byte\n\n\t// history contains previously entered commands so that they can be\n\t// accessed with the up and down keys.\n\thistory stRingBuffer\n\t// historyIndex stores the currently accessed history entry, where zero\n\t// means the immediately previous entry.\n\thistoryIndex int\n\t// When navigating up and down the history it's possible to return to\n\t// the incomplete, initial line. That value is stored in\n\t// historyPending.\n\thistoryPending string\n}\n\n// NewTerminal runs a VT100 terminal on the given ReadWriter. 
If the ReadWriter is\n// a local terminal, that terminal must first have been put into raw mode.\n// prompt is a string that is written at the start of each input line (i.e.\n// \"> \").\nfunc NewTerminal(c io.ReadWriter, prompt string) *Terminal {\n\treturn &Terminal{\n\t\tEscape:       &vt100EscapeCodes,\n\t\tc:            c,\n\t\tprompt:       []rune(prompt),\n\t\ttermWidth:    80,\n\t\ttermHeight:   24,\n\t\techo:         true,\n\t\thistoryIndex: -1,\n\t}\n}\n\nconst (\n\tkeyCtrlD     = 4\n\tkeyCtrlU     = 21\n\tkeyEnter     = '\\r'\n\tkeyEscape    = 27\n\tkeyBackspace = 127\n\tkeyUnknown   = 0xd800 /* UTF-16 surrogate area */ + iota\n\tkeyUp\n\tkeyDown\n\tkeyLeft\n\tkeyRight\n\tkeyAltLeft\n\tkeyAltRight\n\tkeyHome\n\tkeyEnd\n\tkeyDeleteWord\n\tkeyDeleteLine\n\tkeyClearScreen\n\tkeyPasteStart\n\tkeyPasteEnd\n)\n\nvar pasteStart = []byte{keyEscape, '[', '2', '0', '0', '~'}\nvar pasteEnd = []byte{keyEscape, '[', '2', '0', '1', '~'}\n\n// bytesToKey tries to parse a key sequence from b. If successful, it returns\n// the key and the remainder of the input. 
Otherwise it returns utf8.RuneError.\nfunc bytesToKey(b []byte, pasteActive bool) (rune, []byte) {\n\tif len(b) == 0 {\n\t\treturn utf8.RuneError, nil\n\t}\n\n\tif !pasteActive {\n\t\tswitch b[0] {\n\t\tcase 1: // ^A\n\t\t\treturn keyHome, b[1:]\n\t\tcase 5: // ^E\n\t\t\treturn keyEnd, b[1:]\n\t\tcase 8: // ^H\n\t\t\treturn keyBackspace, b[1:]\n\t\tcase 11: // ^K\n\t\t\treturn keyDeleteLine, b[1:]\n\t\tcase 12: // ^L\n\t\t\treturn keyClearScreen, b[1:]\n\t\tcase 23: // ^W\n\t\t\treturn keyDeleteWord, b[1:]\n\t\t}\n\t}\n\n\tif b[0] != keyEscape {\n\t\tif !utf8.FullRune(b) {\n\t\t\treturn utf8.RuneError, b\n\t\t}\n\t\tr, l := utf8.DecodeRune(b)\n\t\treturn r, b[l:]\n\t}\n\n\tif !pasteActive && len(b) >= 3 && b[0] == keyEscape && b[1] == '[' {\n\t\tswitch b[2] {\n\t\tcase 'A':\n\t\t\treturn keyUp, b[3:]\n\t\tcase 'B':\n\t\t\treturn keyDown, b[3:]\n\t\tcase 'C':\n\t\t\treturn keyRight, b[3:]\n\t\tcase 'D':\n\t\t\treturn keyLeft, b[3:]\n\t\tcase 'H':\n\t\t\treturn keyHome, b[3:]\n\t\tcase 'F':\n\t\t\treturn keyEnd, b[3:]\n\t\t}\n\t}\n\n\tif !pasteActive && len(b) >= 6 && b[0] == keyEscape && b[1] == '[' && b[2] == '1' && b[3] == ';' && b[4] == '3' {\n\t\tswitch b[5] {\n\t\tcase 'C':\n\t\t\treturn keyAltRight, b[6:]\n\t\tcase 'D':\n\t\t\treturn keyAltLeft, b[6:]\n\t\t}\n\t}\n\n\tif !pasteActive && len(b) >= 6 && bytes.Equal(b[:6], pasteStart) {\n\t\treturn keyPasteStart, b[6:]\n\t}\n\n\tif pasteActive && len(b) >= 6 && bytes.Equal(b[:6], pasteEnd) {\n\t\treturn keyPasteEnd, b[6:]\n\t}\n\n\t// If we get here then we have a key that we don't recognise, or a\n\t// partial sequence. 
It's not clear how one should find the end of a\n\t// sequence without knowing them all, but it seems that [a-zA-Z~] only\n\t// appears at the end of a sequence.\n\tfor i, c := range b[0:] {\n\t\tif c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' || c == '~' {\n\t\t\treturn keyUnknown, b[i+1:]\n\t\t}\n\t}\n\n\treturn utf8.RuneError, b\n}\n\n// queue appends data to the end of t.outBuf\nfunc (t *Terminal) queue(data []rune) {\n\tt.outBuf = append(t.outBuf, []byte(string(data))...)\n}\n\nvar eraseUnderCursor = []rune{' ', keyEscape, '[', 'D'}\nvar space = []rune{' '}\n\nfunc isPrintable(key rune) bool {\n\tisInSurrogateArea := key >= 0xd800 && key <= 0xdbff\n\treturn key >= 32 && !isInSurrogateArea\n}\n\n// moveCursorToPos appends data to t.outBuf which will move the cursor to the\n// given, logical position in the text.\nfunc (t *Terminal) moveCursorToPos(pos int) {\n\tif !t.echo {\n\t\treturn\n\t}\n\n\tx := visualLength(t.prompt) + pos\n\ty := x / t.termWidth\n\tx = x % t.termWidth\n\n\tup := 0\n\tif y < t.cursorY {\n\t\tup = t.cursorY - y\n\t}\n\n\tdown := 0\n\tif y > t.cursorY {\n\t\tdown = y - t.cursorY\n\t}\n\n\tleft := 0\n\tif x < t.cursorX {\n\t\tleft = t.cursorX - x\n\t}\n\n\tright := 0\n\tif x > t.cursorX {\n\t\tright = x - t.cursorX\n\t}\n\n\tt.cursorX = x\n\tt.cursorY = y\n\tt.move(up, down, left, right)\n}\n\nfunc (t *Terminal) move(up, down, left, right int) {\n\tmovement := make([]rune, 3*(up+down+left+right))\n\tm := movement\n\tfor i := 0; i < up; i++ {\n\t\tm[0] = keyEscape\n\t\tm[1] = '['\n\t\tm[2] = 'A'\n\t\tm = m[3:]\n\t}\n\tfor i := 0; i < down; i++ {\n\t\tm[0] = keyEscape\n\t\tm[1] = '['\n\t\tm[2] = 'B'\n\t\tm = m[3:]\n\t}\n\tfor i := 0; i < left; i++ {\n\t\tm[0] = keyEscape\n\t\tm[1] = '['\n\t\tm[2] = 'D'\n\t\tm = m[3:]\n\t}\n\tfor i := 0; i < right; i++ {\n\t\tm[0] = keyEscape\n\t\tm[1] = '['\n\t\tm[2] = 'C'\n\t\tm = m[3:]\n\t}\n\n\tt.queue(movement)\n}\n\nfunc (t *Terminal) clearLineToRight() {\n\top := []rune{keyEscape, '[', 
'K'}\n\tt.queue(op)\n}\n\nconst maxLineLength = 4096\n\nfunc (t *Terminal) setLine(newLine []rune, newPos int) {\n\tif t.echo {\n\t\tt.moveCursorToPos(0)\n\t\tt.writeLine(newLine)\n\t\tfor i := len(newLine); i < len(t.line); i++ {\n\t\t\tt.writeLine(space)\n\t\t}\n\t\tt.moveCursorToPos(newPos)\n\t}\n\tt.line = newLine\n\tt.pos = newPos\n}\n\nfunc (t *Terminal) advanceCursor(places int) {\n\tt.cursorX += places\n\tt.cursorY += t.cursorX / t.termWidth\n\tif t.cursorY > t.maxLine {\n\t\tt.maxLine = t.cursorY\n\t}\n\tt.cursorX = t.cursorX % t.termWidth\n\n\tif places > 0 && t.cursorX == 0 {\n\t\t// Normally terminals will advance the current position\n\t\t// when writing a character. But that doesn't happen\n\t\t// for the last character in a line. However, when\n\t\t// writing a character (except a new line) that causes\n\t\t// a line wrap, the position will be advanced two\n\t\t// places.\n\t\t//\n\t\t// So, if we are stopping at the end of a line, we\n\t\t// need to write a newline so that our cursor can be\n\t\t// advanced to the next line.\n\t\tt.outBuf = append(t.outBuf, '\\n')\n\t}\n}\n\nfunc (t *Terminal) eraseNPreviousChars(n int) {\n\tif n == 0 {\n\t\treturn\n\t}\n\n\tif t.pos < n {\n\t\tn = t.pos\n\t}\n\tt.pos -= n\n\tt.moveCursorToPos(t.pos)\n\n\tcopy(t.line[t.pos:], t.line[n+t.pos:])\n\tt.line = t.line[:len(t.line)-n]\n\tif t.echo {\n\t\tt.writeLine(t.line[t.pos:])\n\t\tfor i := 0; i < n; i++ {\n\t\t\tt.queue(space)\n\t\t}\n\t\tt.advanceCursor(n)\n\t\tt.moveCursorToPos(t.pos)\n\t}\n}\n\n// countToLeftWord returns the number of characters from the cursor to the\n// start of the previous word.\nfunc (t *Terminal) countToLeftWord() int {\n\tif t.pos == 0 {\n\t\treturn 0\n\t}\n\n\tpos := t.pos - 1\n\tfor pos > 0 {\n\t\tif t.line[pos] != ' ' {\n\t\t\tbreak\n\t\t}\n\t\tpos--\n\t}\n\tfor pos > 0 {\n\t\tif t.line[pos] == ' ' {\n\t\t\tpos++\n\t\t\tbreak\n\t\t}\n\t\tpos--\n\t}\n\n\treturn t.pos - pos\n}\n\n// countToRightWord returns the number of characters from 
the cursor to the\n// start of the next word.\nfunc (t *Terminal) countToRightWord() int {\n\tpos := t.pos\n\tfor pos < len(t.line) {\n\t\tif t.line[pos] == ' ' {\n\t\t\tbreak\n\t\t}\n\t\tpos++\n\t}\n\tfor pos < len(t.line) {\n\t\tif t.line[pos] != ' ' {\n\t\t\tbreak\n\t\t}\n\t\tpos++\n\t}\n\treturn pos - t.pos\n}\n\n// visualLength returns the number of visible glyphs in runes.\nfunc visualLength(runes []rune) int {\n\tinEscapeSeq := false\n\tlength := 0\n\n\tfor _, r := range runes {\n\t\tswitch {\n\t\tcase inEscapeSeq:\n\t\t\tif (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') {\n\t\t\t\tinEscapeSeq = false\n\t\t\t}\n\t\tcase r == '\\x1b':\n\t\t\tinEscapeSeq = true\n\t\tdefault:\n\t\t\tlength++\n\t\t}\n\t}\n\n\treturn length\n}\n\n// handleKey processes the given key and, optionally, returns a line of text\n// that the user has entered.\nfunc (t *Terminal) handleKey(key rune) (line string, ok bool) {\n\tif t.pasteActive && key != keyEnter {\n\t\tt.addKeyToLine(key)\n\t\treturn\n\t}\n\n\tswitch key {\n\tcase keyBackspace:\n\t\tif t.pos == 0 {\n\t\t\treturn\n\t\t}\n\t\tt.eraseNPreviousChars(1)\n\tcase keyAltLeft:\n\t\t// move left by a word.\n\t\tt.pos -= t.countToLeftWord()\n\t\tt.moveCursorToPos(t.pos)\n\tcase keyAltRight:\n\t\t// move right by a word.\n\t\tt.pos += t.countToRightWord()\n\t\tt.moveCursorToPos(t.pos)\n\tcase keyLeft:\n\t\tif t.pos == 0 {\n\t\t\treturn\n\t\t}\n\t\tt.pos--\n\t\tt.moveCursorToPos(t.pos)\n\tcase keyRight:\n\t\tif t.pos == len(t.line) {\n\t\t\treturn\n\t\t}\n\t\tt.pos++\n\t\tt.moveCursorToPos(t.pos)\n\tcase keyHome:\n\t\tif t.pos == 0 {\n\t\t\treturn\n\t\t}\n\t\tt.pos = 0\n\t\tt.moveCursorToPos(t.pos)\n\tcase keyEnd:\n\t\tif t.pos == len(t.line) {\n\t\t\treturn\n\t\t}\n\t\tt.pos = len(t.line)\n\t\tt.moveCursorToPos(t.pos)\n\tcase keyUp:\n\t\tentry, ok := t.history.NthPreviousEntry(t.historyIndex + 1)\n\t\tif !ok {\n\t\t\treturn \"\", false\n\t\t}\n\t\tif t.historyIndex == -1 {\n\t\t\tt.historyPending = 
string(t.line)\n\t\t}\n\t\tt.historyIndex++\n\t\trunes := []rune(entry)\n\t\tt.setLine(runes, len(runes))\n\tcase keyDown:\n\t\tswitch t.historyIndex {\n\t\tcase -1:\n\t\t\treturn\n\t\tcase 0:\n\t\t\trunes := []rune(t.historyPending)\n\t\t\tt.setLine(runes, len(runes))\n\t\t\tt.historyIndex--\n\t\tdefault:\n\t\t\tentry, ok := t.history.NthPreviousEntry(t.historyIndex - 1)\n\t\t\tif ok {\n\t\t\t\tt.historyIndex--\n\t\t\t\trunes := []rune(entry)\n\t\t\t\tt.setLine(runes, len(runes))\n\t\t\t}\n\t\t}\n\tcase keyEnter:\n\t\tt.moveCursorToPos(len(t.line))\n\t\tt.queue([]rune(\"\\r\\n\"))\n\t\tline = string(t.line)\n\t\tok = true\n\t\tt.line = t.line[:0]\n\t\tt.pos = 0\n\t\tt.cursorX = 0\n\t\tt.cursorY = 0\n\t\tt.maxLine = 0\n\tcase keyDeleteWord:\n\t\t// Delete zero or more spaces and then one or more characters.\n\t\tt.eraseNPreviousChars(t.countToLeftWord())\n\tcase keyDeleteLine:\n\t\t// Delete everything from the current cursor position to the\n\t\t// end of line.\n\t\tfor i := t.pos; i < len(t.line); i++ {\n\t\t\tt.queue(space)\n\t\t\tt.advanceCursor(1)\n\t\t}\n\t\tt.line = t.line[:t.pos]\n\t\tt.moveCursorToPos(t.pos)\n\tcase keyCtrlD:\n\t\t// Erase the character under the current position.\n\t\t// The EOF case when the line is empty is handled in\n\t\t// readLine().\n\t\tif t.pos < len(t.line) {\n\t\t\tt.pos++\n\t\t\tt.eraseNPreviousChars(1)\n\t\t}\n\tcase keyCtrlU:\n\t\tt.eraseNPreviousChars(t.pos)\n\tcase keyClearScreen:\n\t\t// Erases the screen and moves the cursor to the home position.\n\t\tt.queue([]rune(\"\\x1b[2J\\x1b[H\"))\n\t\tt.queue(t.prompt)\n\t\tt.cursorX, t.cursorY = 0, 0\n\t\tt.advanceCursor(visualLength(t.prompt))\n\t\tt.setLine(t.line, t.pos)\n\tdefault:\n\t\tif t.AutoCompleteCallback != nil {\n\t\t\tprefix := string(t.line[:t.pos])\n\t\t\tsuffix := string(t.line[t.pos:])\n\n\t\t\tt.lock.Unlock()\n\t\t\tnewLine, newPos, completeOk := t.AutoCompleteCallback(prefix+suffix, len(prefix), key)\n\t\t\tt.lock.Lock()\n\n\t\t\tif completeOk 
{\n\t\t\t\tt.setLine([]rune(newLine), utf8.RuneCount([]byte(newLine)[:newPos]))\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t\tif !isPrintable(key) {\n\t\t\treturn\n\t\t}\n\t\tif len(t.line) == maxLineLength {\n\t\t\treturn\n\t\t}\n\t\tt.addKeyToLine(key)\n\t}\n\treturn\n}\n\n// addKeyToLine inserts the given key at the current position in the current\n// line.\nfunc (t *Terminal) addKeyToLine(key rune) {\n\tif len(t.line) == cap(t.line) {\n\t\tnewLine := make([]rune, len(t.line), 2*(1+len(t.line)))\n\t\tcopy(newLine, t.line)\n\t\tt.line = newLine\n\t}\n\tt.line = t.line[:len(t.line)+1]\n\tcopy(t.line[t.pos+1:], t.line[t.pos:])\n\tt.line[t.pos] = key\n\tif t.echo {\n\t\tt.writeLine(t.line[t.pos:])\n\t}\n\tt.pos++\n\tt.moveCursorToPos(t.pos)\n}\n\nfunc (t *Terminal) writeLine(line []rune) {\n\tfor len(line) != 0 {\n\t\tremainingOnLine := t.termWidth - t.cursorX\n\t\ttodo := len(line)\n\t\tif todo > remainingOnLine {\n\t\t\ttodo = remainingOnLine\n\t\t}\n\t\tt.queue(line[:todo])\n\t\tt.advanceCursor(visualLength(line[:todo]))\n\t\tline = line[todo:]\n\t}\n}\n\nfunc (t *Terminal) Write(buf []byte) (n int, err error) {\n\tt.lock.Lock()\n\tdefer t.lock.Unlock()\n\n\tif t.cursorX == 0 && t.cursorY == 0 {\n\t\t// This is the easy case: there's nothing on the screen that we\n\t\t// have to move out of the way.\n\t\treturn t.c.Write(buf)\n\t}\n\n\t// We have a prompt and possibly user input on the screen. 
We\n\t// have to clear it first.\n\tt.move(0 /* up */, 0 /* down */, t.cursorX /* left */, 0 /* right */)\n\tt.cursorX = 0\n\tt.clearLineToRight()\n\n\tfor t.cursorY > 0 {\n\t\tt.move(1 /* up */, 0, 0, 0)\n\t\tt.cursorY--\n\t\tt.clearLineToRight()\n\t}\n\n\tif _, err = t.c.Write(t.outBuf); err != nil {\n\t\treturn\n\t}\n\tt.outBuf = t.outBuf[:0]\n\n\tif n, err = t.c.Write(buf); err != nil {\n\t\treturn\n\t}\n\n\tt.writeLine(t.prompt)\n\tif t.echo {\n\t\tt.writeLine(t.line)\n\t}\n\n\tt.moveCursorToPos(t.pos)\n\n\tif _, err = t.c.Write(t.outBuf); err != nil {\n\t\treturn\n\t}\n\tt.outBuf = t.outBuf[:0]\n\treturn\n}\n\n// ReadPassword temporarily changes the prompt and reads a password, without\n// echo, from the terminal.\nfunc (t *Terminal) ReadPassword(prompt string) (line string, err error) {\n\tt.lock.Lock()\n\tdefer t.lock.Unlock()\n\n\toldPrompt := t.prompt\n\tt.prompt = []rune(prompt)\n\tt.echo = false\n\n\tline, err = t.readLine()\n\n\tt.prompt = oldPrompt\n\tt.echo = true\n\n\treturn\n}\n\n// ReadLine returns a line of input from the terminal.\nfunc (t *Terminal) ReadLine() (line string, err error) {\n\tt.lock.Lock()\n\tdefer t.lock.Unlock()\n\n\treturn t.readLine()\n}\n\nfunc (t *Terminal) readLine() (line string, err error) {\n\t// t.lock must be held at this point\n\n\tif t.cursorX == 0 && t.cursorY == 0 {\n\t\tt.writeLine(t.prompt)\n\t\tt.c.Write(t.outBuf)\n\t\tt.outBuf = t.outBuf[:0]\n\t}\n\n\tlineIsPasted := t.pasteActive\n\n\tfor {\n\t\trest := t.remainder\n\t\tlineOk := false\n\t\tfor !lineOk {\n\t\t\tvar key rune\n\t\t\tkey, rest = bytesToKey(rest, t.pasteActive)\n\t\t\tif key == utf8.RuneError {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tif !t.pasteActive {\n\t\t\t\tif key == keyCtrlD {\n\t\t\t\t\tif len(t.line) == 0 {\n\t\t\t\t\t\treturn \"\", io.EOF\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif key == keyPasteStart {\n\t\t\t\t\tt.pasteActive = true\n\t\t\t\t\tif len(t.line) == 0 {\n\t\t\t\t\t\tlineIsPasted = true\n\t\t\t\t\t}\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t} 
else if key == keyPasteEnd {\n\t\t\t\tt.pasteActive = false\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif !t.pasteActive {\n\t\t\t\tlineIsPasted = false\n\t\t\t}\n\t\t\tline, lineOk = t.handleKey(key)\n\t\t}\n\t\tif len(rest) > 0 {\n\t\t\tn := copy(t.inBuf[:], rest)\n\t\t\tt.remainder = t.inBuf[:n]\n\t\t} else {\n\t\t\tt.remainder = nil\n\t\t}\n\t\tt.c.Write(t.outBuf)\n\t\tt.outBuf = t.outBuf[:0]\n\t\tif lineOk {\n\t\t\tif t.echo {\n\t\t\t\tt.historyIndex = -1\n\t\t\t\tt.history.Add(line)\n\t\t\t}\n\t\t\tif lineIsPasted {\n\t\t\t\terr = ErrPasteIndicator\n\t\t\t}\n\t\t\treturn\n\t\t}\n\n\t\t// t.remainder is a slice at the beginning of t.inBuf\n\t\t// containing a partial key sequence\n\t\treadBuf := t.inBuf[len(t.remainder):]\n\t\tvar n int\n\n\t\tt.lock.Unlock()\n\t\tn, err = t.c.Read(readBuf)\n\t\tt.lock.Lock()\n\n\t\tif err != nil {\n\t\t\treturn\n\t\t}\n\n\t\tt.remainder = t.inBuf[:n+len(t.remainder)]\n\t}\n\n\tpanic(\"unreachable\") // for Go 1.0.\n}\n\n// SetPrompt sets the prompt to be used when reading subsequent lines.\nfunc (t *Terminal) SetPrompt(prompt string) {\n\tt.lock.Lock()\n\tdefer t.lock.Unlock()\n\n\tt.prompt = []rune(prompt)\n}\n\nfunc (t *Terminal) clearAndRepaintLinePlusNPrevious(numPrevLines int) {\n\t// Move cursor to column zero at the start of the line.\n\tt.move(t.cursorY, 0, t.cursorX, 0)\n\tt.cursorX, t.cursorY = 0, 0\n\tt.clearLineToRight()\n\tfor t.cursorY < numPrevLines {\n\t\t// Move down a line\n\t\tt.move(0, 1, 0, 0)\n\t\tt.cursorY++\n\t\tt.clearLineToRight()\n\t}\n\t// Move back to beginning.\n\tt.move(t.cursorY, 0, 0, 0)\n\tt.cursorX, t.cursorY = 0, 0\n\n\tt.queue(t.prompt)\n\tt.advanceCursor(visualLength(t.prompt))\n\tt.writeLine(t.line)\n\tt.moveCursorToPos(t.pos)\n}\n\nfunc (t *Terminal) SetSize(width, height int) error {\n\tt.lock.Lock()\n\tdefer t.lock.Unlock()\n\n\tif width == 0 {\n\t\twidth = 1\n\t}\n\n\toldWidth := t.termWidth\n\tt.termWidth, t.termHeight = width, height\n\n\tswitch {\n\tcase width == oldWidth:\n\t\t// If the 
width didn't change then nothing else needs to be\n\t\t// done.\n\t\treturn nil\n\tcase len(t.line) == 0 && t.cursorX == 0 && t.cursorY == 0:\n\t\t// If there is nothing on the current line and no prompt printed,\n\t\t// just do nothing.\n\t\treturn nil\n\tcase width < oldWidth:\n\t\t// Some terminals (e.g. xterm) will truncate lines that were\n\t\t// too long when shrinking. Others (e.g. gnome-terminal) will\n\t\t// attempt to wrap them. For the former, repainting t.maxLine\n\t\t// works great, but that behaviour goes badly wrong in the case\n\t\t// of the latter because they have doubled every full line.\n\n\t\t// We assume that we are working on a terminal that wraps lines\n\t\t// and adjust the cursor position based on every previous line\n\t\t// wrapping and turning into two. This causes the prompt on\n\t\t// xterms to move upwards, which isn't great, but it avoids a\n\t\t// huge mess with gnome-terminal.\n\t\tif t.cursorX >= t.termWidth {\n\t\t\tt.cursorX = t.termWidth - 1\n\t\t}\n\t\tt.cursorY *= 2\n\t\tt.clearAndRepaintLinePlusNPrevious(t.maxLine * 2)\n\tcase width > oldWidth:\n\t\t// If the terminal expands then our position calculations will\n\t\t// be wrong in the future because we think the cursor is\n\t\t// |t.pos| chars into the string, but there will be a gap at\n\t\t// the end of any wrapped line.\n\t\t//\n\t\t// But the position will actually be correct until we move, so\n\t\t// we can move back to the beginning and repaint everything.\n\t\tt.clearAndRepaintLinePlusNPrevious(t.maxLine)\n\t}\n\n\t_, err := t.c.Write(t.outBuf)\n\tt.outBuf = t.outBuf[:0]\n\treturn err\n}\n\ntype pasteIndicatorError struct{}\n\nfunc (pasteIndicatorError) Error() string {\n\treturn \"terminal: ErrPasteIndicator not correctly handled\"\n}\n\n// ErrPasteIndicator may be returned from ReadLine as the error, in addition\n// to valid line data. It indicates that bracketed paste mode is enabled and\n// that the returned line consists only of pasted data. 
Programs may wish to\n// interpret pasted data more literally than typed data.\nvar ErrPasteIndicator = pasteIndicatorError{}\n\n// SetBracketedPasteMode requests that the terminal bracket paste operations\n// with markers. Not all terminals support this but, if it is supported, then\n// enabling this mode will stop any autocomplete callback from running due to\n// pastes. Additionally, any lines that are completely pasted will be returned\n// from ReadLine with the error set to ErrPasteIndicator.\nfunc (t *Terminal) SetBracketedPasteMode(on bool) {\n\tif on {\n\t\tio.WriteString(t.c, \"\\x1b[?2004h\")\n\t} else {\n\t\tio.WriteString(t.c, \"\\x1b[?2004l\")\n\t}\n}\n\n// stRingBuffer is a ring buffer of strings.\ntype stRingBuffer struct {\n\t// entries contains max elements.\n\tentries []string\n\tmax     int\n\t// head contains the index of the element most recently added to the ring.\n\thead int\n\t// size contains the number of elements in the ring.\n\tsize int\n}\n\nfunc (s *stRingBuffer) Add(a string) {\n\tif s.entries == nil {\n\t\tconst defaultNumEntries = 100\n\t\ts.entries = make([]string, defaultNumEntries)\n\t\ts.max = defaultNumEntries\n\t}\n\n\ts.head = (s.head + 1) % s.max\n\ts.entries[s.head] = a\n\tif s.size < s.max {\n\t\ts.size++\n\t}\n}\n\n// NthPreviousEntry returns the value passed to the nth previous call to Add.\n// If n is zero then the immediately prior value is returned, if one, then the\n// next most recent, and so on. If such an element doesn't exist then ok is\n// false.\nfunc (s *stRingBuffer) NthPreviousEntry(n int) (value string, ok bool) {\n\tif n >= s.size {\n\t\treturn \"\", false\n\t}\n\tindex := s.head - n\n\tif index < 0 {\n\t\tindex += s.max\n\t}\n\treturn s.entries[index], true\n}\n"
  },
  {
    "path": "vendor/golang.org/x/crypto/ssh/terminal/util.go",
    "content": "// Copyright 2011 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// +build darwin dragonfly freebsd linux,!appengine netbsd openbsd\n\n// Package terminal provides support functions for dealing with terminals, as\n// commonly found on UNIX systems.\n//\n// Putting a terminal into raw mode is the most common requirement:\n//\n// \toldState, err := terminal.MakeRaw(0)\n// \tif err != nil {\n// \t        panic(err)\n// \t}\n// \tdefer terminal.Restore(0, oldState)\npackage terminal // import \"golang.org/x/crypto/ssh/terminal\"\n\nimport (\n\t\"io\"\n\t\"syscall\"\n\t\"unsafe\"\n)\n\n// State contains the state of a terminal.\ntype State struct {\n\ttermios syscall.Termios\n}\n\n// IsTerminal returns true if the given file descriptor is a terminal.\nfunc IsTerminal(fd int) bool {\n\tvar termios syscall.Termios\n\t_, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlReadTermios, uintptr(unsafe.Pointer(&termios)), 0, 0, 0)\n\treturn err == 0\n}\n\n// MakeRaw puts the terminal connected to the given file descriptor into raw\n// mode and returns the previous state of the terminal so that it can be\n// restored.\nfunc MakeRaw(fd int) (*State, error) {\n\tvar oldState State\n\tif _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlReadTermios, uintptr(unsafe.Pointer(&oldState.termios)), 0, 0, 0); err != 0 {\n\t\treturn nil, err\n\t}\n\n\tnewState := oldState.termios\n\tnewState.Iflag &^= syscall.ISTRIP | syscall.INLCR | syscall.ICRNL | syscall.IGNCR | syscall.IXON | syscall.IXOFF\n\tnewState.Lflag &^= syscall.ECHO | syscall.ICANON | syscall.ISIG\n\tif _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlWriteTermios, uintptr(unsafe.Pointer(&newState)), 0, 0, 0); err != 0 {\n\t\treturn nil, err\n\t}\n\n\treturn &oldState, nil\n}\n\n// GetState returns the current state of a terminal which may be useful to\n// restore the 
terminal after a signal.\nfunc GetState(fd int) (*State, error) {\n\tvar oldState State\n\tif _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlReadTermios, uintptr(unsafe.Pointer(&oldState.termios)), 0, 0, 0); err != 0 {\n\t\treturn nil, err\n\t}\n\n\treturn &oldState, nil\n}\n\n// Restore restores the terminal connected to the given file descriptor to a\n// previous state.\nfunc Restore(fd int, state *State) error {\n\t_, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlWriteTermios, uintptr(unsafe.Pointer(&state.termios)), 0, 0, 0)\n\treturn err\n}\n\n// GetSize returns the dimensions of the given terminal.\nfunc GetSize(fd int) (width, height int, err error) {\n\tvar dimensions [4]uint16\n\n\tif _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), uintptr(syscall.TIOCGWINSZ), uintptr(unsafe.Pointer(&dimensions)), 0, 0, 0); err != 0 {\n\t\treturn -1, -1, err\n\t}\n\treturn int(dimensions[1]), int(dimensions[0]), nil\n}\n\n// ReadPassword reads a line of input from a terminal without local echo.  This\n// is commonly used for inputting passwords and other sensitive data. 
The slice\n// returned does not include the \\n.\nfunc ReadPassword(fd int) ([]byte, error) {\n\tvar oldState syscall.Termios\n\tif _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlReadTermios, uintptr(unsafe.Pointer(&oldState)), 0, 0, 0); err != 0 {\n\t\treturn nil, err\n\t}\n\n\tnewState := oldState\n\tnewState.Lflag &^= syscall.ECHO\n\tnewState.Lflag |= syscall.ICANON | syscall.ISIG\n\tnewState.Iflag |= syscall.ICRNL\n\tif _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlWriteTermios, uintptr(unsafe.Pointer(&newState)), 0, 0, 0); err != 0 {\n\t\treturn nil, err\n\t}\n\n\tdefer func() {\n\t\tsyscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlWriteTermios, uintptr(unsafe.Pointer(&oldState)), 0, 0, 0)\n\t}()\n\n\tvar buf [16]byte\n\tvar ret []byte\n\tfor {\n\t\tn, err := syscall.Read(fd, buf[:])\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif n == 0 {\n\t\t\tif len(ret) == 0 {\n\t\t\t\treturn nil, io.EOF\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\t\tif buf[n-1] == '\\n' {\n\t\t\tn--\n\t\t}\n\t\tret = append(ret, buf[:n]...)\n\t\tif n < len(buf) {\n\t\t\tbreak\n\t\t}\n\t}\n\n\treturn ret, nil\n}\n"
  },
  {
    "path": "vendor/golang.org/x/crypto/ssh/terminal/util_bsd.go",
    "content": "// Copyright 2013 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// +build darwin dragonfly freebsd netbsd openbsd\n\npackage terminal\n\nimport \"syscall\"\n\nconst ioctlReadTermios = syscall.TIOCGETA\nconst ioctlWriteTermios = syscall.TIOCSETA\n"
  },
  {
    "path": "vendor/golang.org/x/crypto/ssh/terminal/util_linux.go",
    "content": "// Copyright 2013 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage terminal\n\n// These constants are declared here, rather than importing\n// them from the syscall package as some syscall packages, even\n// on linux, for example gccgo, do not declare them.\nconst ioctlReadTermios = 0x5401  // syscall.TCGETS\nconst ioctlWriteTermios = 0x5402 // syscall.TCSETS\n"
  },
  {
    "path": "vendor/golang.org/x/crypto/ssh/terminal/util_windows.go",
    "content": "// Copyright 2011 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\n// +build windows\n\n// Package terminal provides support functions for dealing with terminals, as\n// commonly found on UNIX systems.\n//\n// Putting a terminal into raw mode is the most common requirement:\n//\n// \toldState, err := terminal.MakeRaw(0)\n// \tif err != nil {\n// \t        panic(err)\n// \t}\n// \tdefer terminal.Restore(0, oldState)\npackage terminal\n\nimport (\n\t\"io\"\n\t\"syscall\"\n\t\"unsafe\"\n)\n\nconst (\n\tenableLineInput       = 2\n\tenableEchoInput       = 4\n\tenableProcessedInput  = 1\n\tenableWindowInput     = 8\n\tenableMouseInput      = 16\n\tenableInsertMode      = 32\n\tenableQuickEditMode   = 64\n\tenableExtendedFlags   = 128\n\tenableAutoPosition    = 256\n\tenableProcessedOutput = 1\n\tenableWrapAtEolOutput = 2\n)\n\nvar kernel32 = syscall.NewLazyDLL(\"kernel32.dll\")\n\nvar (\n\tprocGetConsoleMode             = kernel32.NewProc(\"GetConsoleMode\")\n\tprocSetConsoleMode             = kernel32.NewProc(\"SetConsoleMode\")\n\tprocGetConsoleScreenBufferInfo = kernel32.NewProc(\"GetConsoleScreenBufferInfo\")\n)\n\ntype (\n\tshort int16\n\tword  uint16\n\n\tcoord struct {\n\t\tx short\n\t\ty short\n\t}\n\tsmallRect struct {\n\t\tleft   short\n\t\ttop    short\n\t\tright  short\n\t\tbottom short\n\t}\n\tconsoleScreenBufferInfo struct {\n\t\tsize              coord\n\t\tcursorPosition    coord\n\t\tattributes        word\n\t\twindow            smallRect\n\t\tmaximumWindowSize coord\n\t}\n)\n\ntype State struct {\n\tmode uint32\n}\n\n// IsTerminal returns true if the given file descriptor is a terminal.\nfunc IsTerminal(fd int) bool {\n\tvar st uint32\n\tr, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)\n\treturn r != 0 && e == 0\n}\n\n// MakeRaw puts the terminal connected to the given file descriptor into 
raw\n// mode and returns the previous state of the terminal so that it can be\n// restored.\nfunc MakeRaw(fd int) (*State, error) {\n\tvar st uint32\n\t_, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)\n\tif e != 0 {\n\t\treturn nil, error(e)\n\t}\n\tst &^= (enableEchoInput | enableProcessedInput | enableLineInput | enableProcessedOutput)\n\t_, _, e = syscall.Syscall(procSetConsoleMode.Addr(), 2, uintptr(fd), uintptr(st), 0)\n\tif e != 0 {\n\t\treturn nil, error(e)\n\t}\n\treturn &State{st}, nil\n}\n\n// GetState returns the current state of a terminal which may be useful to\n// restore the terminal after a signal.\nfunc GetState(fd int) (*State, error) {\n\tvar st uint32\n\t_, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)\n\tif e != 0 {\n\t\treturn nil, error(e)\n\t}\n\treturn &State{st}, nil\n}\n\n// Restore restores the terminal connected to the given file descriptor to a\n// previous state.\nfunc Restore(fd int, state *State) error {\n\t_, _, err := syscall.Syscall(procSetConsoleMode.Addr(), 2, uintptr(fd), uintptr(state.mode), 0)\n\treturn err\n}\n\n// GetSize returns the dimensions of the given terminal.\nfunc GetSize(fd int) (width, height int, err error) {\n\tvar info consoleScreenBufferInfo\n\t_, _, e := syscall.Syscall(procGetConsoleScreenBufferInfo.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&info)), 0)\n\tif e != 0 {\n\t\treturn 0, 0, error(e)\n\t}\n\treturn int(info.size.x), int(info.size.y), nil\n}\n\n// ReadPassword reads a line of input from a terminal without local echo.  This\n// is commonly used for inputting passwords and other sensitive data. 
The slice\n// returned does not include the \\n.\nfunc ReadPassword(fd int) ([]byte, error) {\n\tvar st uint32\n\t_, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)\n\tif e != 0 {\n\t\treturn nil, error(e)\n\t}\n\told := st\n\n\tst &^= (enableEchoInput)\n\tst |= (enableProcessedInput | enableLineInput | enableProcessedOutput)\n\t_, _, e = syscall.Syscall(procSetConsoleMode.Addr(), 2, uintptr(fd), uintptr(st), 0)\n\tif e != 0 {\n\t\treturn nil, error(e)\n\t}\n\n\tdefer func() {\n\t\tsyscall.Syscall(procSetConsoleMode.Addr(), 2, uintptr(fd), uintptr(old), 0)\n\t}()\n\n\tvar buf [16]byte\n\tvar ret []byte\n\tfor {\n\t\tn, err := syscall.Read(syscall.Handle(fd), buf[:])\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tif n == 0 {\n\t\t\tif len(ret) == 0 {\n\t\t\t\treturn nil, io.EOF\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\t\tif buf[n-1] == '\\n' {\n\t\t\tn--\n\t\t}\n\t\tif n > 0 && buf[n-1] == '\\r' {\n\t\t\tn--\n\t\t}\n\t\tret = append(ret, buf[:n]...)\n\t\tif n < len(buf) {\n\t\t\tbreak\n\t\t}\n\t}\n\n\treturn ret, nil\n}\n"
  },
  {
    "path": "vendor/gopkg.in/inf.v0/LICENSE",
    "content": "Copyright (c) 2012 Péter Surányi. Portions Copyright (c) 2009 The Go\nAuthors. All rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are\nmet:\n\n   * Redistributions of source code must retain the above copyright\nnotice, this list of conditions and the following disclaimer.\n   * Redistributions in binary form must reproduce the above\ncopyright notice, this list of conditions and the following disclaimer\nin the documentation and/or other materials provided with the\ndistribution.\n   * Neither the name of Google Inc. nor the names of its\ncontributors may be used to endorse or promote products derived from\nthis software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n\"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\nLIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\nA PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\nOWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\nSPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\nLIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\nTHEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "vendor/gopkg.in/inf.v0/dec.go",
    "content": "// Package inf (type inf.Dec) implements \"infinite-precision\" decimal\n// arithmetic.\n// \"Infinite precision\" describes two characteristics: practically unlimited\n// precision for decimal number representation and no support for calculating\n// with any specific fixed precision.\n// (Although there is no practical limit on precision, inf.Dec can only\n// represent finite decimals.)\n//\n// This package is currently in experimental stage and the API may change.\n//\n// This package does NOT support:\n//  - rounding to specific precisions (as opposed to specific decimal positions)\n//  - the notion of context (each rounding must be explicit)\n//  - NaN and Inf values, and distinguishing between positive and negative zero\n//  - conversions to and from float32/64 types\n//\n// Features considered for possible addition:\n//  + formatting options\n//  + Exp method\n//  + combined operations such as AddRound/MulAdd etc\n//  + exchanging data in decimal32/64/128 formats\n//\npackage inf // import \"gopkg.in/inf.v0\"\n\n// TODO:\n//  - avoid excessive deep copying (quo and rounders)\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"math/big\"\n\t\"strings\"\n)\n\n// A Dec represents a signed arbitrary-precision decimal.\n// It is a combination of a sign, an arbitrary-precision integer coefficient\n// value, and a signed fixed-precision exponent value.\n// The sign and the coefficient value are handled together as a signed value\n// and referred to as the unscaled value.\n// (Positive and negative zero values are not distinguished.)\n// Since the exponent is most commonly non-positive, it is handled in negated\n// form and referred to as scale.\n//\n// The mathematical value of a Dec equals:\n//\n//  unscaled * 10**(-scale)\n//\n// Note that different Dec representations may have equal mathematical values.\n//\n//  unscaled  scale  String()\n//  -------------------------\n//         0      0    \"0\"\n//         0      2    \"0.00\"\n//         0     -2    
\"0\"\n//         1      0    \"1\"\n//       100      2    \"1.00\"\n//        10      0   \"10\"\n//         1     -1   \"10\"\n//\n// The zero value for a Dec represents the value 0 with scale 0.\n//\n// Operations are typically performed through the *Dec type.\n// The semantics of the assignment operation \"=\" for \"bare\" Dec values is\n// undefined and should not be relied on.\n//\n// Methods are typically of the form:\n//\n//\tfunc (z *Dec) Op(x, y *Dec) *Dec\n//\n// and implement operations z = x Op y with the result as receiver; if it\n// is one of the operands it may be overwritten (and its memory reused).\n// To enable chaining of operations, the result is also returned. Methods\n// returning a result other than *Dec take one of the operands as the receiver.\n//\n// A \"bare\" Quo method (quotient / division operation) is not provided, as the\n// result is not always a finite decimal and thus in general cannot be\n// represented as a Dec.\n// Instead, in the common case when rounding is (potentially) necessary,\n// QuoRound should be used with a Scale and a Rounder.\n// QuoExact or QuoRound with RoundExact can be used in the special cases when it\n// is known that the result is always a finite decimal.\n//\ntype Dec struct {\n\tunscaled big.Int\n\tscale    Scale\n}\n\n// Scale represents the type used for the scale of a Dec.\ntype Scale int32\n\nconst scaleSize = 4 // bytes in a Scale value\n\n// Scaler represents a method for obtaining the scale to use for the result of\n// an operation on x and y.\ntype scaler interface {\n\tScale(x *Dec, y *Dec) Scale\n}\n\nvar bigInt = [...]*big.Int{\n\tbig.NewInt(0), big.NewInt(1), big.NewInt(2), big.NewInt(3), big.NewInt(4),\n\tbig.NewInt(5), big.NewInt(6), big.NewInt(7), big.NewInt(8), big.NewInt(9),\n\tbig.NewInt(10),\n}\n\nvar exp10cache [64]big.Int = func() [64]big.Int {\n\te10, e10i := [64]big.Int{}, bigInt[1]\n\tfor i, _ := range e10 {\n\t\te10[i].Set(e10i)\n\t\te10i = new(big.Int).Mul(e10i, 
bigInt[10])\n\t}\n\treturn e10\n}()\n\n// NewDec allocates and returns a new Dec set to the given int64 unscaled value\n// and scale.\nfunc NewDec(unscaled int64, scale Scale) *Dec {\n\treturn new(Dec).SetUnscaled(unscaled).SetScale(scale)\n}\n\n// NewDecBig allocates and returns a new Dec set to the given *big.Int unscaled\n// value and scale.\nfunc NewDecBig(unscaled *big.Int, scale Scale) *Dec {\n\treturn new(Dec).SetUnscaledBig(unscaled).SetScale(scale)\n}\n\n// Scale returns the scale of x.\nfunc (x *Dec) Scale() Scale {\n\treturn x.scale\n}\n\n// Unscaled returns the unscaled value of x for u and true for ok when the\n// unscaled value can be represented as int64; otherwise it returns an undefined\n// int64 value for u and false for ok. Use x.UnscaledBig().Int64() to avoid\n// checking the validity of the value when the check is known to be redundant.\nfunc (x *Dec) Unscaled() (u int64, ok bool) {\n\tu = x.unscaled.Int64()\n\tvar i big.Int\n\tok = i.SetInt64(u).Cmp(&x.unscaled) == 0\n\treturn\n}\n\n// UnscaledBig returns the unscaled value of x as *big.Int.\nfunc (x *Dec) UnscaledBig() *big.Int {\n\treturn &x.unscaled\n}\n\n// SetScale sets the scale of z, with the unscaled value unchanged, and returns\n// z.\n// The mathematical value of the Dec changes as if it was multiplied by\n// 10**(oldscale-scale).\nfunc (z *Dec) SetScale(scale Scale) *Dec {\n\tz.scale = scale\n\treturn z\n}\n\n// SetUnscaled sets the unscaled value of z, with the scale unchanged, and\n// returns z.\nfunc (z *Dec) SetUnscaled(unscaled int64) *Dec {\n\tz.unscaled.SetInt64(unscaled)\n\treturn z\n}\n\n// SetUnscaledBig sets the unscaled value of z, with the scale unchanged, and\n// returns z.\nfunc (z *Dec) SetUnscaledBig(unscaled *big.Int) *Dec {\n\tz.unscaled.Set(unscaled)\n\treturn z\n}\n\n// Set sets z to the value of x and returns z.\n// It does nothing if z == x.\nfunc (z *Dec) Set(x *Dec) *Dec {\n\tif z != x 
{\n\t\tz.SetUnscaledBig(x.UnscaledBig())\n\t\tz.SetScale(x.Scale())\n\t}\n\treturn z\n}\n\n// Sign returns:\n//\n//\t-1 if x <  0\n//\t 0 if x == 0\n//\t+1 if x >  0\n//\nfunc (x *Dec) Sign() int {\n\treturn x.UnscaledBig().Sign()\n}\n\n// Neg sets z to -x and returns z.\nfunc (z *Dec) Neg(x *Dec) *Dec {\n\tz.SetScale(x.Scale())\n\tz.UnscaledBig().Neg(x.UnscaledBig())\n\treturn z\n}\n\n// Cmp compares x and y and returns:\n//\n//   -1 if x <  y\n//    0 if x == y\n//   +1 if x >  y\n//\nfunc (x *Dec) Cmp(y *Dec) int {\n\txx, yy := upscale(x, y)\n\treturn xx.UnscaledBig().Cmp(yy.UnscaledBig())\n}\n\n// Abs sets z to |x| (the absolute value of x) and returns z.\nfunc (z *Dec) Abs(x *Dec) *Dec {\n\tz.SetScale(x.Scale())\n\tz.UnscaledBig().Abs(x.UnscaledBig())\n\treturn z\n}\n\n// Add sets z to the sum x+y and returns z.\n// The scale of z is the greater of the scales of x and y.\nfunc (z *Dec) Add(x, y *Dec) *Dec {\n\txx, yy := upscale(x, y)\n\tz.SetScale(xx.Scale())\n\tz.UnscaledBig().Add(xx.UnscaledBig(), yy.UnscaledBig())\n\treturn z\n}\n\n// Sub sets z to the difference x-y and returns z.\n// The scale of z is the greater of the scales of x and y.\nfunc (z *Dec) Sub(x, y *Dec) *Dec {\n\txx, yy := upscale(x, y)\n\tz.SetScale(xx.Scale())\n\tz.UnscaledBig().Sub(xx.UnscaledBig(), yy.UnscaledBig())\n\treturn z\n}\n\n// Mul sets z to the product x*y and returns z.\n// The scale of z is the sum of the scales of x and y.\nfunc (z *Dec) Mul(x, y *Dec) *Dec {\n\tz.SetScale(x.Scale() + y.Scale())\n\tz.UnscaledBig().Mul(x.UnscaledBig(), y.UnscaledBig())\n\treturn z\n}\n\n// Round sets z to the value of x rounded to Scale s using Rounder r, and\n// returns z.\nfunc (z *Dec) Round(x *Dec, s Scale, r Rounder) *Dec {\n\treturn z.QuoRound(x, NewDec(1, 0), s, r)\n}\n\n// QuoRound sets z to the quotient x/y, rounded using the given Rounder to the\n// specified scale.\n//\n// If the rounder is RoundExact but the result can not be expressed exactly at\n// the specified scale, QuoRound 
returns nil, and the value of z is undefined.\n//\n// There is no corresponding Div method; the equivalent can be achieved through\n// the choice of Rounder used.\n//\nfunc (z *Dec) QuoRound(x, y *Dec, s Scale, r Rounder) *Dec {\n\treturn z.quo(x, y, sclr{s}, r)\n}\n\nfunc (z *Dec) quo(x, y *Dec, s scaler, r Rounder) *Dec {\n\tscl := s.Scale(x, y)\n\tvar zzz *Dec\n\tif r.UseRemainder() {\n\t\tzz, rA, rB := new(Dec).quoRem(x, y, scl, true, new(big.Int), new(big.Int))\n\t\tzzz = r.Round(new(Dec), zz, rA, rB)\n\t} else {\n\t\tzz, _, _ := new(Dec).quoRem(x, y, scl, false, nil, nil)\n\t\tzzz = r.Round(new(Dec), zz, nil, nil)\n\t}\n\tif zzz == nil {\n\t\treturn nil\n\t}\n\treturn z.Set(zzz)\n}\n\n// QuoExact sets z to the quotient x/y and returns z when x/y is a finite\n// decimal. Otherwise it returns nil and the value of z is undefined.\n//\n// The scale of a non-nil result is \"x.Scale() - y.Scale()\" or greater; it is\n// calculated so that the remainder will be zero whenever x/y is a finite\n// decimal.\nfunc (z *Dec) QuoExact(x, y *Dec) *Dec {\n\treturn z.quo(x, y, scaleQuoExact{}, RoundExact)\n}\n\n// quoRem sets z to the quotient x/y with the scale s, and if useRem is true,\n// it sets remNum and remDen to the numerator and denominator of the remainder.\n// It returns z, remNum and remDen.\n//\n// The remainder is normalized to the range -1 < r < 1 to simplify rounding;\n// that is, the results satisfy the following equation:\n//\n//  x / y = z + (remNum/remDen) * 10**(-z.Scale())\n//\n// See Rounder for more details about rounding.\n//\nfunc (z *Dec) quoRem(x, y *Dec, s Scale, useRem bool,\n\tremNum, remDen *big.Int) (*Dec, *big.Int, *big.Int) {\n\t// difference (required adjustment) compared to \"canonical\" result scale\n\tshift := s - (x.Scale() - y.Scale())\n\t// pointers to adjusted unscaled dividend and divisor\n\tvar ix, iy *big.Int\n\tswitch {\n\tcase shift > 0:\n\t\t// increased scale: decimal-shift dividend left\n\t\tix = 
new(big.Int).Mul(x.UnscaledBig(), exp10(shift))\n\t\tiy = y.UnscaledBig()\n\tcase shift < 0:\n\t\t// decreased scale: decimal-shift divisor left\n\t\tix = x.UnscaledBig()\n\t\tiy = new(big.Int).Mul(y.UnscaledBig(), exp10(-shift))\n\tdefault:\n\t\tix = x.UnscaledBig()\n\t\tiy = y.UnscaledBig()\n\t}\n\t// save a copy of iy in case it is overwritten with the result\n\tiy2 := iy\n\tif iy == z.UnscaledBig() {\n\t\tiy2 = new(big.Int).Set(iy)\n\t}\n\t// set scale\n\tz.SetScale(s)\n\t// set unscaled\n\tif useRem {\n\t\t// Int division\n\t\t_, intr := z.UnscaledBig().QuoRem(ix, iy, new(big.Int))\n\t\t// set remainder\n\t\tremNum.Set(intr)\n\t\tremDen.Set(iy2)\n\t} else {\n\t\tz.UnscaledBig().Quo(ix, iy)\n\t}\n\treturn z, remNum, remDen\n}\n\ntype sclr struct{ s Scale }\n\nfunc (s sclr) Scale(x, y *Dec) Scale {\n\treturn s.s\n}\n\ntype scaleQuoExact struct{}\n\nfunc (sqe scaleQuoExact) Scale(x, y *Dec) Scale {\n\trem := new(big.Rat).SetFrac(x.UnscaledBig(), y.UnscaledBig())\n\tf2, f5 := factor2(rem.Denom()), factor(rem.Denom(), bigInt[5])\n\tvar f10 Scale\n\tif f2 > f5 {\n\t\tf10 = Scale(f2)\n\t} else {\n\t\tf10 = Scale(f5)\n\t}\n\treturn x.Scale() - y.Scale() + f10\n}\n\nfunc factor(n *big.Int, p *big.Int) int {\n\t// could be improved for large factors\n\td, f := n, 0\n\tfor {\n\t\tdd, dm := new(big.Int).DivMod(d, p, new(big.Int))\n\t\tif dm.Sign() == 0 {\n\t\t\tf++\n\t\t\td = dd\n\t\t} else {\n\t\t\tbreak\n\t\t}\n\t}\n\treturn f\n}\n\nfunc factor2(n *big.Int) int {\n\t// could be improved for large factors\n\tf := 0\n\tfor ; n.Bit(f) == 0; f++ {\n\t}\n\treturn f\n}\n\nfunc upscale(a, b *Dec) (*Dec, *Dec) {\n\tif a.Scale() == b.Scale() {\n\t\treturn a, b\n\t}\n\tif a.Scale() > b.Scale() {\n\t\tbb := b.rescale(a.Scale())\n\t\treturn a, bb\n\t}\n\taa := a.rescale(b.Scale())\n\treturn aa, b\n}\n\nfunc exp10(x Scale) *big.Int {\n\tif int(x) < len(exp10cache) {\n\t\treturn &exp10cache[int(x)]\n\t}\n\treturn new(big.Int).Exp(bigInt[10], big.NewInt(int64(x)), nil)\n}\n\nfunc 
(x *Dec) rescale(newScale Scale) *Dec {\n\tshift := newScale - x.Scale()\n\tswitch {\n\tcase shift < 0:\n\t\te := exp10(-shift)\n\t\treturn NewDecBig(new(big.Int).Quo(x.UnscaledBig(), e), newScale)\n\tcase shift > 0:\n\t\te := exp10(shift)\n\t\treturn NewDecBig(new(big.Int).Mul(x.UnscaledBig(), e), newScale)\n\t}\n\treturn x\n}\n\nvar zeros = []byte(\"00000000000000000000000000000000\" +\n\t\"00000000000000000000000000000000\")\nvar lzeros = Scale(len(zeros))\n\nfunc appendZeros(s []byte, n Scale) []byte {\n\tfor i := Scale(0); i < n; i += lzeros {\n\t\tif n > i+lzeros {\n\t\t\ts = append(s, zeros...)\n\t\t} else {\n\t\t\ts = append(s, zeros[0:n-i]...)\n\t\t}\n\t}\n\treturn s\n}\n\nfunc (x *Dec) String() string {\n\tif x == nil {\n\t\treturn \"<nil>\"\n\t}\n\tscale := x.Scale()\n\ts := []byte(x.UnscaledBig().String())\n\tif scale <= 0 {\n\t\tif scale != 0 && x.unscaled.Sign() != 0 {\n\t\t\ts = appendZeros(s, -scale)\n\t\t}\n\t\treturn string(s)\n\t}\n\tnegbit := Scale(-((x.Sign() - 1) / 2))\n\t// scale > 0\n\tlens := Scale(len(s))\n\tif lens-negbit <= scale {\n\t\tss := make([]byte, 0, scale+2)\n\t\tif negbit == 1 {\n\t\t\tss = append(ss, '-')\n\t\t}\n\t\tss = append(ss, '0', '.')\n\t\tss = appendZeros(ss, scale-lens+negbit)\n\t\tss = append(ss, s[negbit:]...)\n\t\treturn string(ss)\n\t}\n\t// lens > scale\n\tss := make([]byte, 0, lens+1)\n\tss = append(ss, s[:lens-scale]...)\n\tss = append(ss, '.')\n\tss = append(ss, s[lens-scale:]...)\n\treturn string(ss)\n}\n\n// Format is a support routine for fmt.Formatter. 
It accepts the decimal\n// formats 'd' and 'f', and handles both equivalently.\n// Width, precision, flags and bases 2, 8, 16 are not supported.\nfunc (x *Dec) Format(s fmt.State, ch rune) {\n\tif ch != 'd' && ch != 'f' && ch != 'v' && ch != 's' {\n\t\tfmt.Fprintf(s, \"%%!%c(dec.Dec=%s)\", ch, x.String())\n\t\treturn\n\t}\n\tfmt.Fprint(s, x.String())\n}\n\nfunc (z *Dec) scan(r io.RuneScanner) (*Dec, error) {\n\tunscaled := make([]byte, 0, 256) // collects chars of unscaled as bytes\n\tdp, dg := -1, -1                 // indexes of decimal point, first digit\nloop:\n\tfor {\n\t\tch, _, err := r.ReadRune()\n\t\tif err == io.EOF {\n\t\t\tbreak loop\n\t\t}\n\t\tif err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t\tswitch {\n\t\tcase ch == '+' || ch == '-':\n\t\t\tif len(unscaled) > 0 || dp >= 0 { // must be first character\n\t\t\t\tr.UnreadRune()\n\t\t\t\tbreak loop\n\t\t\t}\n\t\tcase ch == '.':\n\t\t\tif dp >= 0 {\n\t\t\t\tr.UnreadRune()\n\t\t\t\tbreak loop\n\t\t\t}\n\t\t\tdp = len(unscaled)\n\t\t\tcontinue // don't add to unscaled\n\t\tcase ch >= '0' && ch <= '9':\n\t\t\tif dg == -1 {\n\t\t\t\tdg = len(unscaled)\n\t\t\t}\n\t\tdefault:\n\t\t\tr.UnreadRune()\n\t\t\tbreak loop\n\t\t}\n\t\tunscaled = append(unscaled, byte(ch))\n\t}\n\tif dg == -1 {\n\t\treturn nil, fmt.Errorf(\"no digits read\")\n\t}\n\tif dp >= 0 {\n\t\tz.SetScale(Scale(len(unscaled) - dp))\n\t} else {\n\t\tz.SetScale(0)\n\t}\n\t_, ok := z.UnscaledBig().SetString(string(unscaled), 10)\n\tif !ok {\n\t\treturn nil, fmt.Errorf(\"invalid decimal: %s\", string(unscaled))\n\t}\n\treturn z, nil\n}\n\n// SetString sets z to the value of s, interpreted as a decimal (base 10),\n// and returns z and a boolean indicating success. The scale of z is the\n// number of digits after the decimal point (including any trailing 0s),\n// or 0 if there is no decimal point. 
If SetString fails, the value of z\n// is undefined but the returned value is nil.\nfunc (z *Dec) SetString(s string) (*Dec, bool) {\n\tr := strings.NewReader(s)\n\t_, err := z.scan(r)\n\tif err != nil {\n\t\treturn nil, false\n\t}\n\t_, _, err = r.ReadRune()\n\tif err != io.EOF {\n\t\treturn nil, false\n\t}\n\t// err == io.EOF => scan consumed all of s\n\treturn z, true\n}\n\n// Scan is a support routine for fmt.Scanner; it sets z to the value of\n// the scanned number. It accepts the decimal formats 'd' and 'f', and\n// handles both equivalently. Bases 2, 8, 16 are not supported.\n// The scale of z is the number of digits after the decimal point\n// (including any trailing 0s), or 0 if there is no decimal point.\nfunc (z *Dec) Scan(s fmt.ScanState, ch rune) error {\n\tif ch != 'd' && ch != 'f' && ch != 's' && ch != 'v' {\n\t\treturn fmt.Errorf(\"Dec.Scan: invalid verb '%c'\", ch)\n\t}\n\ts.SkipSpace()\n\t_, err := z.scan(s)\n\treturn err\n}\n\n// Gob encoding version\nconst decGobVersion byte = 1\n\nfunc scaleBytes(s Scale) []byte {\n\tbuf := make([]byte, scaleSize)\n\ti := scaleSize\n\tfor j := 0; j < scaleSize; j++ {\n\t\ti--\n\t\tbuf[i] = byte(s)\n\t\ts >>= 8\n\t}\n\treturn buf\n}\n\nfunc scale(b []byte) (s Scale) {\n\tfor j := 0; j < scaleSize; j++ {\n\t\ts <<= 8\n\t\ts |= Scale(b[j])\n\t}\n\treturn\n}\n\n// GobEncode implements the gob.GobEncoder interface.\nfunc (x *Dec) GobEncode() ([]byte, error) {\n\tbuf, err := x.UnscaledBig().GobEncode()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tbuf = append(append(buf, scaleBytes(x.Scale())...), decGobVersion)\n\treturn buf, nil\n}\n\n// GobDecode implements the gob.GobDecoder interface.\nfunc (z *Dec) GobDecode(buf []byte) error {\n\tif len(buf) == 0 {\n\t\treturn fmt.Errorf(\"Dec.GobDecode: no data\")\n\t}\n\tb := buf[len(buf)-1]\n\tif b != decGobVersion {\n\t\treturn fmt.Errorf(\"Dec.GobDecode: encoding version %d not supported\", b)\n\t}\n\tl := len(buf) - scaleSize - 1\n\terr := 
z.UnscaledBig().GobDecode(buf[:l])\n\tif err != nil {\n\t\treturn err\n\t}\n\tz.SetScale(scale(buf[l : l+scaleSize]))\n\treturn nil\n}\n\n// MarshalText implements the encoding.TextMarshaler interface.\nfunc (x *Dec) MarshalText() ([]byte, error) {\n\treturn []byte(x.String()), nil\n}\n\n// UnmarshalText implements the encoding.TextUnmarshaler interface.\nfunc (z *Dec) UnmarshalText(data []byte) error {\n\t_, ok := z.SetString(string(data))\n\tif !ok {\n\t\treturn fmt.Errorf(\"invalid inf.Dec\")\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "vendor/gopkg.in/inf.v0/rounder.go",
    "content": "package inf\n\nimport (\n\t\"math/big\"\n)\n\n// Rounder represents a method for rounding the (possibly infinite decimal)\n// result of a division to a finite Dec. It is used by Dec.Round() and\n// Dec.Quo().\n//\n// See the Example for results of using each Rounder with some sample values.\n//\ntype Rounder rounder\n\n// See http://speleotrove.com/decimal/damodel.html#refround for more detailed\n// definitions of these rounding modes.\nvar (\n\tRoundDown     Rounder // towards 0\n\tRoundUp       Rounder // away from 0\n\tRoundFloor    Rounder // towards -infinity\n\tRoundCeil     Rounder // towards +infinity\n\tRoundHalfDown Rounder // to nearest; towards 0 if same distance\n\tRoundHalfUp   Rounder // to nearest; away from 0 if same distance\n\tRoundHalfEven Rounder // to nearest; even last digit if same distance\n)\n\n// RoundExact is to be used in the case when rounding is not necessary.\n// When used with Quo or Round, it returns the result verbatim when it can be\n// expressed exactly with the given precision, and it returns nil otherwise.\n// QuoExact is a shorthand for using Quo with RoundExact.\nvar RoundExact Rounder\n\ntype rounder interface {\n\n\t// When UseRemainder() returns true, the Round() method is passed the\n\t// remainder of the division, expressed as the numerator and denominator of\n\t// a rational.\n\tUseRemainder() bool\n\n\t// Round sets the rounded value of a quotient to z, and returns z.\n\t// quo is rounded down (truncated towards zero) to the scale obtained from\n\t// the Scaler in Quo().\n\t//\n\t// When the remainder is not used, remNum and remDen are nil.\n\t// When used, the remainder is normalized between -1 and 1; that is:\n\t//\n\t//  -|remDen| < remNum < |remDen|\n\t//\n\t// remDen has the same sign as y, and remNum is zero or has the same sign\n\t// as x.\n\tRound(z, quo *Dec, remNum, remDen *big.Int) *Dec\n}\n\ntype rndr struct {\n\tuseRem bool\n\tround  func(z, quo *Dec, remNum, remDen *big.Int) 
*Dec\n}\n\nfunc (r rndr) UseRemainder() bool {\n\treturn r.useRem\n}\n\nfunc (r rndr) Round(z, quo *Dec, remNum, remDen *big.Int) *Dec {\n\treturn r.round(z, quo, remNum, remDen)\n}\n\nvar intSign = []*big.Int{big.NewInt(-1), big.NewInt(0), big.NewInt(1)}\n\nfunc roundHalf(f func(c int, odd uint) (roundUp bool)) func(z, q *Dec, rA, rB *big.Int) *Dec {\n\treturn func(z, q *Dec, rA, rB *big.Int) *Dec {\n\t\tz.Set(q)\n\t\tbrA, brB := rA.BitLen(), rB.BitLen()\n\t\tif brA < brB-1 {\n\t\t\t// brA < brB-1 => |rA| < |rB/2|\n\t\t\treturn z\n\t\t}\n\t\troundUp := false\n\t\tsrA, srB := rA.Sign(), rB.Sign()\n\t\ts := srA * srB\n\t\tif brA == brB-1 {\n\t\t\trA2 := new(big.Int).Lsh(rA, 1)\n\t\t\tif s < 0 {\n\t\t\t\trA2.Neg(rA2)\n\t\t\t}\n\t\t\troundUp = f(rA2.Cmp(rB)*srB, z.UnscaledBig().Bit(0))\n\t\t} else {\n\t\t\t// brA > brB-1 => |rA| > |rB/2|\n\t\t\troundUp = true\n\t\t}\n\t\tif roundUp {\n\t\t\tz.UnscaledBig().Add(z.UnscaledBig(), intSign[s+1])\n\t\t}\n\t\treturn z\n\t}\n}\n\nfunc init() {\n\tRoundExact = rndr{true,\n\t\tfunc(z, q *Dec, rA, rB *big.Int) *Dec {\n\t\t\tif rA.Sign() != 0 {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\treturn z.Set(q)\n\t\t}}\n\tRoundDown = rndr{false,\n\t\tfunc(z, q *Dec, rA, rB *big.Int) *Dec {\n\t\t\treturn z.Set(q)\n\t\t}}\n\tRoundUp = rndr{true,\n\t\tfunc(z, q *Dec, rA, rB *big.Int) *Dec {\n\t\t\tz.Set(q)\n\t\t\tif rA.Sign() != 0 {\n\t\t\t\tz.UnscaledBig().Add(z.UnscaledBig(), intSign[rA.Sign()*rB.Sign()+1])\n\t\t\t}\n\t\t\treturn z\n\t\t}}\n\tRoundFloor = rndr{true,\n\t\tfunc(z, q *Dec, rA, rB *big.Int) *Dec {\n\t\t\tz.Set(q)\n\t\t\tif rA.Sign()*rB.Sign() < 0 {\n\t\t\t\tz.UnscaledBig().Add(z.UnscaledBig(), intSign[0])\n\t\t\t}\n\t\t\treturn z\n\t\t}}\n\tRoundCeil = rndr{true,\n\t\tfunc(z, q *Dec, rA, rB *big.Int) *Dec {\n\t\t\tz.Set(q)\n\t\t\tif rA.Sign()*rB.Sign() > 0 {\n\t\t\t\tz.UnscaledBig().Add(z.UnscaledBig(), intSign[2])\n\t\t\t}\n\t\t\treturn z\n\t\t}}\n\tRoundHalfDown = rndr{true, roundHalf(\n\t\tfunc(c int, odd uint) bool 
{\n\t\t\treturn c > 0\n\t\t})}\n\tRoundHalfUp = rndr{true, roundHalf(\n\t\tfunc(c int, odd uint) bool {\n\t\t\treturn c >= 0\n\t\t})}\n\tRoundHalfEven = rndr{true, roundHalf(\n\t\tfunc(c int, odd uint) bool {\n\t\t\treturn c > 0 || c == 0 && odd == 1\n\t\t})}\n}\n"
  }
]